Measuring the User Experience (eBook)
336 pages
Elsevier Science (publisher)
978-0-08-055826-4 (ISBN)
Measuring the User Experience provides the first single source of practical information to enable usability professionals and product developers to effectively measure the usability of any product by choosing the right metric, applying it, and effectively using the information it reveals.
Authors Tullis and Albert organize dozens of metrics into six categories: performance, issues-based, self-reported, web navigation, derived, and behavioral/physiological. They explore each metric, considering best methods for collecting, analyzing, and presenting the data. They provide step-by-step guidance for measuring the usability of any type of product using any type of technology.
This book is recommended for usability professionals, developers, programmers, information architects, interaction designers, market researchers, and students in an HCI or HFE program.
• Presents criteria for selecting the most appropriate metric for every case
• Takes a product- and technology-neutral approach
• Presents in-depth case studies to show how organizations have successfully used the metrics and the information they revealed
Tom Tullis is Vice President of Usability and User Insight at Fidelity Investments and Adjunct Professor at Bentley University in the Human Factors in Information Design program. He joined Fidelity in 1993 and was instrumental in the development of the company's usability department, including a state-of-the-art Usability Lab. Prior to joining Fidelity, he held positions at Canon Information Systems, McDonnell Douglas, Unisys Corporation, and Bell Laboratories. He and Fidelity's usability team have been featured in a number of publications, including Newsweek, Business 2.0, Money, The Boston Globe, The Wall Street Journal, and The New York Times.
Front Cover 1
Measuring the User Experience 4
Copyright Page 5
Table of Contents 8
Preface 16
Acknowledgments 18
CHAPTER 1 Introduction 20
1.1 Organization of This Book 21
1.2 What Is Usability? 23
1.3 Why Does Usability Matter? 24
1.4 What Are Usability Metrics? 26
1.5 The Value of Usability Metrics 27
1.6 Ten Common Myths about Usability Metrics 29
CHAPTER 2 Background 34
2.1 Designing a Usability Study 34
2.1.1 Selecting Participants 35
2.1.2 Sample Size 36
2.1.3 Within-Subjects or Between-Subjects Study 37
2.1.4 Counterbalancing 38
2.1.5 Independent and Dependent Variables 39
2.2 Types of Data 39
2.2.1 Nominal Data 39
2.2.2 Ordinal Data 40
2.2.3 Interval Data 41
2.2.4 Ratio Data 42
2.3 Metrics and Data 42
2.4 Descriptive Statistics 43
2.4.1 Measures of Central Tendency 44
2.4.2 Measures of Variability 45
2.4.3 Confidence Intervals 46
2.5 Comparing Means 47
2.5.1 Independent Samples 47
2.5.2 Paired Samples 48
2.5.3 Comparing More Than Two Samples 49
2.6 Relationships between Variables 50
2.6.1 Correlations 51
2.7 Nonparametric Tests 52
2.7.1 The Chi-Square Test 52
2.8 Presenting Your Data Graphically 54
2.8.1 Column or Bar Graphs 55
2.8.2 Line Graphs 57
2.8.3 Scatterplots 59
2.8.4 Pie Charts 61
2.8.5 Stacked Bar Graphs 61
2.9 Summary 63
CHAPTER 3 Planning a Usability Study 64
3.1 Study Goals 64
3.1.1 Formative Usability 64
3.1.2 Summative Usability 65
3.2 User Goals 66
3.2.1 Performance 66
3.2.2 Satisfaction 66
3.3 Choosing the Right Metrics: Ten Types of Usability Studies 67
3.3.1 Completing a Transaction 67
3.3.2 Comparing Products 69
3.3.3 Evaluating Frequent Use of the Same Product 69
3.3.4 Evaluating Navigation and/or Information Architecture 70
3.3.5 Increasing Awareness 71
3.3.6 Problem Discovery 71
3.3.7 Maximizing Usability for a Critical Product 72
3.3.8 Creating an Overall Positive User Experience 73
3.3.9 Evaluating the Impact of Subtle Changes 73
3.3.10 Comparing Alternative Designs 74
3.4 Other Study Details 74
3.4.1 Budgets and Timelines 74
3.4.2 Evaluation Methods 76
3.4.3 Participants 77
3.4.4 Data Collection 78
3.4.5 Data Cleanup 79
3.5 Summary 80
CHAPTER 4 Performance Metrics 82
4.1 Task Success 83
4.1.1 Collecting Any Type of Success Metric 84
4.1.2 Binary Success 85
4.1.3 Levels of Success 88
4.1.4 Issues in Measuring Success 92
4.2 Time-on-Task 93
4.2.1 Importance of Measuring Time-on-Task 93
4.2.2 How to Collect and Measure Time-on-Task 93
4.2.3 Analyzing and Presenting Time-on-Task Data 96
4.2.4 Issues to Consider When Using Time Data 98
4.3 Errors 100
4.3.1 When to Measure Errors 100
4.3.2 What Constitutes an Error? 101
4.3.3 Collecting and Measuring Errors 102
4.3.4 Analyzing and Presenting Errors 103
4.3.5 Issues to Consider When Using Error Metrics 105
4.4 Efficiency 106
4.4.1 Collecting and Measuring Efficiency 106
4.4.2 Analyzing and Presenting Efficiency Data 107
4.4.3 Efficiency as a Combination of Task Success and Time 109
4.5 Learnability 111
4.5.1 Collecting and Measuring Learnability Data 112
4.5.2 Analyzing and Presenting Learnability Data 113
4.5.3 Issues to Consider When Measuring Learnability 115
4.6 Summary 116
CHAPTER 5 Issues-Based Metrics 118
5.1 Identifying Usability Issues 118
5.2 What Is a Usability Issue? 119
5.2.1 Real Issues versus False Issues 120
5.3 How to Identify an Issue 121
5.3.1 In-Person Studies 122
5.3.2 Automated Studies 122
5.3.3 When Issues Begin and End 122
5.3.4 Granularity 123
5.3.5 Multiple Observers 123
5.4 Severity Ratings 124
5.4.1 Severity Ratings Based on the User Experience 124
5.4.2 Severity Ratings Based on a Combination of Factors 125
5.4.3 Using a Severity Rating System 126
5.4.4 Some Caveats about Severity Ratings 127
5.5 Analyzing and Reporting Metrics for Usability Issues 127
5.5.1 Frequency of Unique Issues 128
5.5.2 Frequency of Issues per Participant 130
5.5.3 Frequency of Participants 130
5.5.4 Issues by Category 131
5.5.5 Issues by Task 132
5.5.6 Reporting Positive Issues 133
5.6 Consistency in Identifying Usability Issues 133
5.7 Bias in Identifying Usability Issues 135
5.8 Number of Participants 136
5.8.1 Five Participants Is Enough 137
5.8.2 Five Participants Is Not Enough 138
5.8.3 Our Recommendation 138
5.9 Summary 140
CHAPTER 6 Self-Reported Metrics 142
6.1 Importance of Self-Reported Data 142
6.2 Collecting Self-Reported Data 143
6.2.1 Likert Scales 143
6.2.2 Semantic Differential Scales 144
6.2.3 When to Collect Self-Reported Data 144
6.2.4 How to Collect Self-Reported Data 145
6.2.5 Biases in Collecting Self-Reported Data 145
6.2.6 General Guidelines for Rating Scales 146
6.2.7 Analyzing Self-Reported Data 146
6.3 Post-Task Ratings 147
6.3.1 Ease of Use 147
6.3.2 After-Scenario Questionnaire 148
6.3.3 Expectation Measure 148
6.3.4 Usability Magnitude Estimation 151
6.3.5 Comparison of Post-Task Self-Reported Metrics 152
6.4 Post-Session Ratings 154
6.4.1 Aggregating Individual Task Ratings 156
6.4.2 System Usability Scale 157
6.4.3 Computer System Usability Questionnaire 158
6.4.4 Questionnaire for User Interface Satisfaction 158
6.4.5 Usefulness, Satisfaction, and Ease of Use Questionnaire 161
6.4.6 Product Reaction Cards 161
6.4.7 Comparison of Post-Session Self-Reported Metrics 163
6.5 Using SUS to Compare Designs 166
6.5.1 Comparison of "Senior-Friendly" Websites 166
6.5.2 Comparison of Windows ME and Windows XP 166
6.5.3 Comparison of Paper Ballots 167
6.6 Online Services 169
6.6.1 Website Analysis and Measurement Inventory 169
6.6.2 American Customer Satisfaction Index 170
6.6.3 OpinionLab 172
6.6.4 Issues with Live-Site Surveys 176
6.7 Other Types of Self-Reported Metrics 177
6.7.1 Assessing Specific Attributes 177
6.7.2 Assessing Specific Elements 180
6.7.3 Open-Ended Questions 181
6.7.4 Awareness and Comprehension 182
6.7.5 Awareness and Usefulness Gaps 184
6.8 Summary 185
CHAPTER 7 Behavioral and Physiological Metrics 186
7.1 Observing and Coding Overt Behaviors 186
7.1.1 Verbal Behaviors 187
7.1.2 Nonverbal Behaviors 188
7.2 Behaviors Requiring Equipment to Capture 190
7.2.1 Facial Expressions 190
7.2.2 Eye-Tracking 194
7.2.3 Pupillary Response 199
7.2.4 Skin Conductance and Heart Rate 202
7.2.5 Other Measures 205
7.3 Summary 207
CHAPTER 8 Combined and Comparative Metrics 210
8.1 Single Usability Scores 210
8.1.1 Combining Metrics Based on Target Goals 211
8.1.2 Combining Metrics Based on Percentages 212
8.1.3 Combining Metrics Based on z-Scores 217
8.1.4 Using SUM: Single Usability Metric 221
8.2 Usability Scorecards 222
8.3 Comparison to Goals and Expert Performance 225
8.3.1 Comparison to Goals 225
8.3.2 Comparison to Expert Performance 227
8.4 Summary 229
CHAPTER 9 Special Topics 230
9.1 Live Website Data 230
9.1.1 Server Logs 230
9.1.2 Click-Through Rates 232
9.1.3 Drop-Off Rates 234
9.1.4 A/B Studies 235
9.2 Card-Sorting Data 236
9.2.1 Analyses of Open Card-Sort Data 237
9.2.2 Analyses of Closed Card-Sort Data 244
9.3 Accessibility Data 246
9.4 Return-on-Investment Data 250
9.5 Six Sigma 253
9.6 Summary 255
CHAPTER 10 Case Studies 256
10.1 Redesigning a Website Cheaply and Quickly 256
10.1.1 Phase 1: Testing Competitor Websites 256
10.1.2 Phase 2: Testing Three Different Design Concepts 258
10.1.3 Phase 3: Testing a Single Design 262
10.1.4 Conclusion 263
10.1.5 Biography 263
10.2 Usability Evaluation of a Speech Recognition IVR 263
10.2.1 Method 263
10.2.2 Results: Task-Level Measurements 264
10.2.3 PSSUQ 265
10.2.4 Participant Comments 265
10.2.5 Usability Problems 266
10.2.6 Adequacy of Sample Size 266
10.2.7 Recommendations Based on Participant Behaviors and Comments 269
10.2.8 Discussion 270
10.2.9 Biography 270
10.2.10 References 271
10.3 Redesign of the CDC.gov Website 271
10.3.1 Usability Testing Levels 272
10.3.2 Baseline Test 272
10.3.3 Task Scenarios 273
10.3.4 Qualitative Findings 274
10.3.5 Wireframing and FirstClick Testing 275
10.3.6 Final Prototype Testing (Prelaunch Test) 277
10.3.7 Conclusions 280
10.3.8 Biographies 281
10.3.9 References 281
10.4 Usability Benchmarking: Mobile Music and Video 282
10.4.1 Project Goals and Methods 282
10.4.2 Qualitative and Quantitative Data 282
10.4.3 Research Domain 282
10.4.4 Comparative Analysis 283
10.4.5 Study Operations: Number of Respondents 283
10.4.6 Respondent Recruiting 284
10.4.7 Data Collection 284
10.4.8 Time to Complete 285
10.4.9 Success or Failure 285
10.4.10 Number of Attempts 285
10.4.11 Perception Metrics 285
10.4.12 Qualitative Findings 286
10.4.13 Quantitative Findings 286
10.4.14 Summary Findings and SUM Metrics 286
10.4.15 Data Manipulation and Visualization 286
10.4.16 Discussion 288
10.4.17 Benchmark Changes and Future Work 289
10.4.18 Biographies 289
10.4.19 References 289
10.5 Measuring the Effects of Drug Label Design and Similarity on Pharmacists’ Performance 290
10.5.1 Participants 291
10.5.2 Apparatus 291
10.5.3 Stimuli 291
10.5.4 Procedure 294
10.5.5 Analysis 295
10.5.6 Results and Discussion 296
10.5.7 Biography 298
10.5.8 References 298
10.6 Making Metrics Matter 299
10.6.1 OneStart: Indiana University’s Enterprise Portal Project 299
10.6.2 Designing and Conducting the Study 300
10.6.3 Analyzing and Interpreting the Results 301
10.6.4 Sharing the Findings and Recommendations 302
10.6.5 Reflecting on the Impact 305
10.6.6 Conclusion 306
10.6.7 Acknowledgment 306
10.6.8 Biography 306
10.6.9 References 306
CHAPTER 11 Moving Forward 308
11.1 Sell Usability and the Power of Metrics 308
11.2 Start Small and Work Your Way Up 309
11.3 Make Sure You Have the Time and Money 310
11.4 Plan Early and Often 311
11.5 Benchmark Your Products 312
11.6 Explore Your Data 313
11.7 Speak the Language of Business 314
11.8 Show Your Confidence 314
11.9 Don’t Misuse Metrics 315
11.10 Simplify Your Presentation 316
References 318
Index 326
| Publication date (per publisher) | 27.7.2010 |
| --- | --- |
| Language | English |
| Subject area | Non-fiction / Guides |
|  | Mathematics / Computer Science ► Computer Science ► Operating Systems / Servers |
|  | Computer Science ► Software Development ► User Interfaces (HCI) |
| ISBN-10 | 0-08-055826-7 / 0080558267 |
| ISBN-13 | 978-0-08-055826-4 / 9780080558264 |
Copy protection: Adobe DRM
Adobe DRM is a copy-protection scheme intended to protect the eBook against misuse. At download, the eBook is authorized to your personal Adobe ID; you can then read it only on devices that are also registered to that Adobe ID.
Details on Adobe DRM
File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly well suited to technical books with columns, tables, and figures. A PDF can be displayed on almost any device, but it is only partially suitable for small displays (smartphones, eReaders).
System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You will need a …
eReader: This eBook can be read with (almost) all eBook readers. It is not, however, compatible with the Amazon Kindle.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You will need a …
Device list and additional notes
Buying eBooks from abroad
For tax-law reasons we can sell eBooks only within Germany and Switzerland. Regrettably, we cannot fulfill eBook orders from other countries.