Test-Driven Development (eBook)

An Empirical Evaluation of Agile Practice

(Author)

eBook Download: PDF
2009 | 2010
XX, 245 pages
Springer Berlin (publisher)
978-3-642-04288-1 (ISBN)

53.49 incl. VAT
  • Available for immediate download

Agile methods are attracting growing interest in both industry and research. Many organizations are transforming their way of working from traditional, long-running waterfall projects to more incremental, iterative, and agile practices. At the same time, the need to evaluate different processes, methods, and tools, and to obtain evidence about them, has been increasingly emphasized.

Lech Madeyski offers the first in-depth evaluation of agile methods. He presents in detail the results of three experiments, including concrete examples of how to conduct statistical analysis with meta-analysis or the SPSS package, using as evaluation indicators the number of acceptance tests passed (overall and per hour) and design complexity metrics.
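Chapter 9 of the book combines effect sizes across the three experiments using fixed- and random-effects models. The core of the fixed-effects step is inverse-variance weighting, which can be sketched as follows; the effect sizes and variances below are hypothetical placeholders for illustration, not figures from the book:

```python
import math

def fixed_effects_meta(effects, variances):
    """Combine per-study effect sizes under a fixed-effects model.

    Each study is weighted by the inverse of its variance, so more
    precise studies contribute more to the pooled estimate.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled effect
    z = pooled / se                     # z-statistic for significance testing
    return pooled, se, z

# Hypothetical standardized effect sizes (e.g. Cohen's d) and their
# variances from three experiments.
effects = [0.30, 0.10, 0.45]
variances = [0.04, 0.09, 0.05]
pooled, se, z = fixed_effects_meta(effects, variances)
print(f"pooled d = {pooled:.3f}, SE = {se:.3f}, z = {z:.2f}")
```

A random-effects model extends this by adding a between-study variance component to each weight, which the book discusses alongside homogeneity analysis.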

The book is appropriate for graduate students, researchers and advanced professionals in software engineering. It proves the real benefits of agile software development, provides readers with in-depth insights into experimental methods in the context of agile development, and discusses various validity threats in empirical studies.



Lech Madeyski is Assistant Professor in the Software Engineering Department, Institute of Informatics, Wroclaw University of Technology, Poland. His current research interests include: experimentation in software engineering, software metrics and models, software quality and testing, software products and process improvement, and agile software development methodologies (e.g., eXtreme Programming).

He has published research papers in refereed software engineering journals (e.g., IET Software, Journal of Software Process: Improvement and Practice) and conferences (e.g., PROFES, XP, EuroSPI, CEE-SET). He has been a member of the program, steering, or organization committee for several software engineering conferences such as PROFES (International Conference on Product Focused Software Process Improvement), ENASE (International Working Conference on Evaluation of Novel Approaches to Software Engineering), CEE-SET (Central and East-European Conference on Software Engineering Techniques), and BPSC (International Working Conference on Business Process and Services Computing).

His paper at PROFES 2007 received the Best Paper Award.


Foreword 5
Preface 6
Acknowledgements 10
Contents 12
Acronyms 16
1 Introduction 18
1.1 Test-First Programming 18
1.1.1 Mechanisms Behind Test-First Programming that Motivate Research 19
1.2 Research Methodology 21
1.2.1 Empirical Software Engineering 21
1.2.2 Empirical Methods 22
1.2.2.1 Qualitative and Quantitative Research Paradigms 22
1.2.2.2 Fixed and Flexible Research Designs 22
1.2.2.3 Empirical Strategies 23
1.2.2.4 Between-Groups and Repeated Measures Experimental Designs 24
1.3 Software Measurement 25
1.3.1 Measurement Levels 25
1.3.2 Software Product Quality 26
1.3.2.1 ISO/IEC 9126 26
1.3.2.2 Test Code Metrics 28
1.3.2.3 Validity of Software Quality Standards 28
1.3.3 Software Development Productivity 29
1.4 Research Questions 30
1.5 Book Organization 30
1.6 Claimed Contributions 31
2 Related Work in Industrial and Academic Environments 32
2.1 Test-First Programming 32
2.2 Pair Programming 37
2.3 Summary 40
3 Research Goals, Conceptual Model and Variables Selection 42
3.1 Goals Definition 42
3.2 Conceptual Model 43
3.3 Variables Selection 45
3.3.1 Independent Variable (IV) 45
3.3.1.1 Test-First and Test-Last Programming 45
3.3.1.2 Pair Programming and Solo Programming 47
3.3.2 Dependent Variables (DVs) --- From Goals to Dependent Variables 49
3.3.2.1 External Code Quality 49
3.3.2.2 Internal Code Quality 50
3.3.2.3 Development Speed 51
3.3.2.4 Thoroughness and Fault Detection Effectiveness of Unit Tests 51
3.3.3 Confounding Variables 53
4 Experiments Planning, Execution and Analysis Procedure 55
4.1 Context Information 55
4.2 Hypotheses 57
4.3 Measurement Tools 58
4.3.1 Aopmetrics 58
4.3.2 ActivitySensor and SmartSensor Plugins 59
4.3.3 Judy 59
4.4 Experiment Accounting 60
4.4.1 Goals 60
4.4.2 Subjects 60
4.4.3 Experimental Materials 60
4.4.4 Experimental Task 61
4.4.5 Hypotheses and Variables 61
4.4.6 Design of the Experiment 61
4.4.7 Experiment Operation 62
4.4.7.1 Preparation Phase 62
4.4.7.2 Execution Phase 62
4.5 Experiment Submission 62
4.5.1 Goals 63
4.5.2 Subjects 63
4.5.3 Experimental Materials 63
4.5.4 Experimental Task 64
4.5.5 Hypotheses and Variables 64
4.5.6 Design of the Experiment 64
4.5.7 Experiment Operation 64
4.5.7.1 Pre-study 65
4.5.7.2 Preparation Phase 65
4.5.7.3 Execution Phase 65
4.6 Experiment Smells & Library
4.6.1 Goals 66
4.6.2 Subjects 66
4.6.3 Experimental Materials 67
4.6.4 Experimental Tasks 67
4.6.5 Hypotheses and Variables 67
4.6.6 Design of the Experiment 68
4.6.7 Experiment Operation 68
4.6.7.1 Preparation Phase 68
4.6.7.2 Execution Phase 68
4.7 Analysis Procedure 69
4.7.1 Descriptive Statistics 69
4.7.2 Assumptions of Parametric Tests 69
4.7.3 Carry-Over Effect 70
4.7.4 Hypotheses Testing 71
4.7.5 Effect Sizes 71
4.7.6 Analysis of Covariance 72
4.7.6.1 Non-Parametric Analysis of Covariance 73
4.7.7 Process Conformance and Selective Analysis 73
4.7.8 Combining Empirical Evidence 76
5 Effect on the Percentage of Acceptance Tests Passed 77
5.1 Analysis of Experiment Accounting 77
5.1.1 Preliminary Analysis 77
5.1.1.1 Descriptive Statistics 78
5.1.1.2 Assumption Testing 80
5.1.1.3 Non-Parametric Analysis 81
5.1.1.4 Parametric Analysis 89
5.1.2 Selective Analysis 101
5.1.2.1 Descriptive Statistics 101
5.1.2.2 Assumption Testing 103
5.1.2.3 Analysis using Kruskal--Wallis and Mann--Whitney Tests 103
5.1.2.4 Rank-Transformed Analysis of Covariance 108
5.2 Analysis of Experiment Submission 117
5.2.1 Preliminary Analysis 117
5.2.1.1 Descriptive Statistics 117
5.2.1.2 Assumption Testing 118
5.2.1.3 Independent t-Test 119
5.2.1.4 Analysis of Variance 121
5.2.1.5 Analysis of Covariance 122
5.2.2 Selective Analysis 126
5.2.2.1 Descriptive Statistics 126
5.2.2.2 Assumption Testing 127
5.2.2.3 Analysis of Variance 128
5.2.2.4 Analysis of Covariance 129
5.3 Analysis of Experiment Smells & Library
5.3.1 Preliminary Analysis 133
5.3.1.1 Descriptive Statistics 133
5.3.1.2 Assumption Testing 135
5.3.1.3 Wilcoxon Signed-Rank Test 135
5.3.2 Selective Analysis 137
5.3.2.1 Descriptive Statistics 137
5.3.2.2 Assumption Testing 137
5.3.2.3 Wilcoxon Signed-Rank Test 139
5.4 Instead of Summary 141
6 Effect on the Number of Acceptance Tests Passed per Hour 142
6.1 Analysis of Experiment Accounting 142
6.1.1 Descriptive Statistics 143
6.1.2 Non-Parametric Analysis 143
6.2 Analysis of Experiment Submission 144
6.2.1 Descriptive Statistics 144
6.2.2 Assumption Testing 146
6.2.3 Non-Parametric Analysis 146
6.2.3.1 Mann--Whitney Test 146
6.2.3.2 Rank-Transformed Analysis of Covariance 147
6.3 Analysis of Experiment Smells & Library
6.3.1 Descriptive Statistics 151
6.3.2 Assumption Testing 153
6.3.3 Non-Parametric Analysis 154
6.3.3.1 Wilcoxon Signed-Rank Test 154
6.4 Instead of Summary 155
7 Effect on Internal Quality Indicators 156
7.1 Confounding Effect of Class Size on the Validity of Object-Oriented Metrics 156
7.2 Analysis of Experiment Accounting 157
7.2.1 Descriptive Statistics 157
7.2.2 Assumption Testing 160
7.2.3 Mann--Whitney Tests 160
7.2.3.1 Calculating Effect Size 161
7.2.3.2 Summary 162
7.3 Analysis of Experiment Submission 162
7.3.1 Descriptive Statistics 162
7.3.2 Assumption Testing 165
7.3.3 Independent t-Test 165
7.3.3.1 Calculating Effect Size 166
7.3.3.2 Summary 167
7.4 Analysis of Experiment Smells & Library
7.4.1 Descriptive Statistics 167
7.4.2 Assumption Testing 168
7.4.3 Dependent t-Test 170
7.4.3.1 Calculating Effect Size 171
7.4.3.2 Summary 172
7.5 Instead of Summary 173
8 Effects on Unit Tests -- Preliminary Analysis 174
8.1 Analysis of Experiment Submission 175
8.1.1 Descriptive Statistics 175
8.1.2 Assumption Testing 177
8.1.3 Mann--Whitney Test 178
8.1.3.1 Calculating Effect Size 178
8.1.3.2 Summary 179
9 Meta-Analysis 180
9.1 Introduction to Meta-Analysis 181
9.1.1 Combining p-Values Across Experiments 181
9.1.2 Combining Effect Sizes Across Experiments 182
9.1.2.1 Fixed Effects Model 183
9.1.2.2 Homogeneity Analysis 184
9.1.2.3 Random Effects Model 185
9.2 Preliminary Meta-Analysis 186
9.2.1 Combining Effects on the Percentage of Acceptance Tests Passed (PATP) 186
9.2.1.1 Combining p-Values Across Experiments 186
9.2.1.2 Combining Effect Sizes Across Experiments -- Fixed Effects Model 187
9.2.1.3 Combining Effect Sizes Across Experiments -- Random Effects Model 189
9.2.1.4 Summary 189
9.2.2 Combining Effects on the Number of Acceptance Tests Passed Per Development Hour (NATPPH) 190
9.2.2.1 Combining p-Values Across Experiments 190
9.2.2.2 Combining Effect Sizes Across Experiments -- Fixed Effects Model 190
9.2.2.3 Combining Effect Sizes Across Experiments -- Random Effects Model 191
9.2.2.4 Summary 192
9.2.3 Combining Effects on Design Complexity 192
9.2.3.1 Combining p-Values Across Experiments 192
9.2.3.2 Combining Effect Sizes Across Experiments -- Fixed Effects Model 194
9.2.3.3 Combining Effect Sizes Across Experiments -- Random Effects Model 196
9.2.3.4 Summary 198
9.3 Selective Meta-Analysis 199
9.3.1 Combining Effects on the Percentage of Acceptance Tests Passed (PATP) 200
9.3.1.1 Combining p-Values Across Experiments 200
9.3.1.2 Combining Effect Sizes Across Experiments -- Fixed Effects Model 200
9.3.1.3 Combining Effect Sizes Across Experiments -- Random Effects Model 201
9.3.1.4 Summary 201
9.3.2 Combining Effects on the Number of Acceptance Tests Passed Per Hour (NATPPH) 202
9.3.2.1 Combining p-Values Across Experiments 202
9.3.2.2 Combining Effect Sizes Across Experiments -- Fixed Effects Model 202
9.3.2.3 Combining Effect Sizes Across Experiments -- Random Effects Model 203
9.3.2.4 Summary 203
9.3.3 Combining Effects on Design Complexity 203
9.3.3.1 Combining p-Values Across Experiments 204
9.3.3.2 Combining Effect Sizes Across Experiments -- Fixed Effects Model 205
9.3.3.3 Combining Effect Sizes Across Experiments -- Random Effects Model 208
9.3.3.4 Summary 210
10 Discussion, Conclusions and Future Work 211
10.1 Overview of Results 211
10.2 Rules of Thumb for Industry Practitioners 214
10.3 Explaining Plausible Mechanisms Behind the Results 216
10.4 Contributions 219
10.5 Threats to Validity 220
10.5.1 Statistical Conclusion Validity 220
10.5.1.1 Low Statistical Power 221
10.5.1.2 Violated Assumptions of Statistical Tests 221
10.5.1.3 Fishing and the Error Rate Problem 221
10.5.1.4 Reliability of Measures 222
10.5.1.5 Restriction of Range 222
10.5.1.6 Reliability of Treatment Implementation 222
10.5.1.7 Random Irrelevancies in Experimental Setting 222
10.5.1.8 Random Heterogeneity of Subjects 222
10.5.1.9 Inaccurate Effect Size Estimation 223
10.5.2 Internal Validity 223
10.5.2.1 Ambiguous Temporal Precedence 223
10.5.2.2 Selection 223
10.5.2.3 History 223
10.5.2.4 Maturity 223
10.5.2.5 Regression Artefacts 224
10.5.2.6 Attrition 224
10.5.2.7 Testing 224
10.5.2.8 Instrumentation 224
10.5.2.9 Additive and Interactive Effects of Threats 224
10.5.3 Construct Validity 225
10.5.3.1 Mono-Operation Bias 225
10.5.3.2 Mono-method Bias 225
10.5.3.3 Construct Confounding 225
10.5.3.4 Confounding Constructs with Levels of Constructs 226
10.5.3.5 Reactivity to the Experimental Situation and Hypothesis Guessing 226
10.5.3.6 Experimenter Expectancies 226
10.5.3.7 Compensatory Equalization 226
10.5.3.8 Compensatory Rivalry and Resentful Demoralization 226
10.5.3.9 Treatment Diffusion 227
10.5.4 External Validity 227
10.5.4.1 Generalization to Industrial Setting 227
10.5.4.2 Relevance to Industry 229
10.5.5 Threats to Validity of Meta-Analysis 230
10.5.5.1 Inadequate Conceptualization of the Problem 230
10.5.5.2 Inadequate Assessment of Study Quality 231
10.5.5.3 Publication Bias 231
10.5.5.4 Dissemination Bias 231
10.6 Conclusions and Future Work 231
Appendix 233
Glossary 237
References 241
Index 256

Publication date (per publisher) 5 Dec 2009
Additional info XX, 245 p.
Place of publication Berlin
Language English
Subject areas Mathematics / Computer Science, Computer Science, Software Development
Business, Business Administration / Management, Business Information Systems
Keywords Agile method • agile programming • Complexity • Design • Development • empirical software engineering • Extreme Programming • Meta Analysis • pair programming • Software • Software engineering • software measurement • Test-Driven Development • Test-First Programming
ISBN-10 3-642-04288-0 / 3642042880
ISBN-13 978-3-642-04288-1 / 9783642042881
PDF (watermarked)
Size: 5.0 MB

DRM: digital watermark
This eBook contains a digital watermark and is therefore personalized for you. If the eBook is improperly passed on to third parties, it can be traced back to its source.

File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly suited to technical books with columns, tables, and figures. A PDF can be displayed on almost all devices, but is only of limited use on small displays (smartphone, eReader).

System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You need a PDF viewer, e.g. Adobe Reader or Adobe Digital Editions.
eReader: This eBook can be read on (almost) all eBook readers. It is not, however, compatible with the Amazon Kindle.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You need a PDF viewer, e.g. the free Adobe Digital Editions app.

Buying eBooks from abroad
For tax law reasons, we can sell eBooks only within Germany and Switzerland. Regrettably, we cannot fulfill eBook orders from other countries.
