Performance Evaluation and Benchmarking of Intelligent Systems (eBook)

eBook Download: PDF
2009
XIX, 338 pages
Springer US (publisher)
978-1-4419-0492-8 (ISBN)

149.79 incl. VAT
  • Available for immediate download
To design and develop capable, dependable, and affordable intelligent systems, their performance must be measurable. Scientific methodologies for standardization and benchmarking are crucial for quantitatively evaluating the performance of emerging robotic and intelligent systems technologies. There is currently no accepted standard for quantitatively measuring the performance of these systems against user-defined requirements; furthermore, there is no consensus on what objective evaluation procedures need to be followed to understand their performance. The lack of reproducible and repeatable test methods has precluded researchers working towards a common goal from exchanging and communicating results, inter-comparing system performance, and leveraging previous work that could otherwise avoid duplication and expedite technology transfer. This lack of cohesion in the community currently hinders progress in many domains, such as manufacturing, service, healthcare, and security. By providing the research community with access to standardized tools, reference data sets, and open source libraries of solutions, researchers and consumers will be able to evaluate the costs and benefits associated with intelligent systems and related technologies. In this vein, this edited volume addresses performance evaluation and metrics for intelligent systems in general, while emphasizing the need for, and solutions based on, standardized methods. To the knowledge of the editors, there is no other book on the market solely dedicated to the performance evaluation and benchmarking of intelligent systems.

Preface 5
Contents 12
Contributors 14
1 Metrics for Multiagent Systems 17
1.1 Introduction 17
1.2 Background on Multiagent Systems and Metrics 18
1.2.1 Anatomy of a Multiagent System 19
1.2.2 Types of Metrics 20
1.2.2.1 Effectiveness vs Performance 21
1.2.2.2 Data Classification 21
1.3 Metrics 22
1.3.1 Agents and Frameworks 22
1.3.2 Platform 23
1.3.2.1 Distributed Systems 23
1.3.2.2 Networking 23
1.3.3 Environment/Host Metrics 24
1.3.4 System 25
1.4 Analysis Framework for Multiagent Systems 26
1.4.1 Selection 27
1.4.2 Collection 28
1.4.3 Application 28
1.5 Case Study: DCOP Algorithms and DCOPolis 28
1.5.1 Experimental Setup 30
1.5.2 Results and Analysis 31
1.6 Summary 31
References 32
2 Evaluation Criteria for Human-Automation Performance Metrics 36
2.1 Introduction 36
2.2 Generalizable Metric Classes 37
2.3 Metric Evaluation Criteria 39
2.3.1 Experimental Constraints 40
2.3.2 Comprehensive Understanding 41
2.3.3 Construct Validity 42
2.3.4 Statistical Efficiency 43
2.3.5 Measurement Technique Efficiency 44
2.4 Metric Costs vs. Benefits 44
2.4.1 Example 1: Mental Workload Measures 46
2.4.1.1 Performance Measures 47
2.4.1.2 Subjective Measures 49
2.4.1.3 Physiological Measures 49
2.4.2 Example 2: Attention Allocation Efficiency Measures 50
2.5 Discussion 51
References 53
3 Performance Evaluation Methods for Assistive Robotic Technology 56
3.1 Introduction 56
3.2 Assistive Robotic Technologies 58
3.2.1 Autism Spectrum Disorders (ASD) 58
3.2.1.1 End-User Evaluations 59
3.2.1.2 Discussion 60
3.2.2 Eldercare 60
3.2.2.1 End-User Evaluations 61
3.2.2.2 Discussion 61
3.2.3 Stroke Rehabilitation 62
3.2.3.1 End-User Evaluations 62
3.2.3.2 Discussion 63
3.2.4 Intelligent Wheelchairs 63
3.2.4.1 End-User Evaluations 64
3.2.4.2 Discussion 64
3.2.5 Assistive Robotic Arms 65
3.2.5.1 End-User Evaluations 65
3.2.5.2 Discussion 66
3.2.6 External Limb Prostheses 66
3.2.6.1 End-User Evaluations 67
3.2.6.2 Discussion 68
3.3 Case Studies 68
3.3.1 Designing Evaluations for an Assistive Robotic Arm 68
3.3.2 Designing Evaluations for Socially Assistive Robots 70
3.4 Incorporating Functional Performance Measures 71
3.5 Conclusions 74
References 76
4 Issues in Applying Bio-Inspiration, Cognitive Critical Mass and Developmental-Inspired Principles to Advanced Intelligent Systems 82
4.1 Introduction 82
4.2 The Increased Relevance of Bio-understanding 84
4.3 Cognitive Critical Mass and Cognitive Decathlon 87
4.3.1 Episodic Memory 90
4.3.2 Theory-of-Mind 91
4.3.3 Self-Awareness 92
4.4 Different Directions and Levels for Bio-inspiration 94
4.5 Developmental Robotics Methods 96
4.6 A Scientific Framework for Developmental Robotics 97
4.7 Developmental Principles Within an Embodied, Interactive Model 101
4.8 Summary 102
References 104
5 Evaluating Situation Awareness of Autonomous Systems 108
5.1 Introduction 108
5.2 Autonomous Agents and Situation Awareness 110
5.3 Criteria for Situation Awareness 111
5.3.1 Awareness of Ignorance 112
5.3.2 Model of Perception Abilities 113
5.3.3 Model of Information Relevance 114
5.3.4 Model of Information Dynamics 116
5.3.4.1 Spatial Information Dynamics 117
5.3.4.2 Temporal Information Dynamics 117
5.3.5 Spatio-Temporal Qualification 118
5.3.6 Information Sharing 119
5.4 Maintaining Situation Awareness 120
5.5 Application Scenario 122
5.5.1 Experimental Setting 122
5.5.2 Results 123
5.6 Discussion 123
5.7 Conclusion 124
References 125
6 From Simulation to Real Robots with Predictable Results: Methods and Examples 127
6.1 Introduction 127
6.1.1 Methodology for Algorithm Development 128
6.1.2 A Brief History of USARSim 129
6.2 Robot Platform Validation 130
6.3 Sensor Validation 135
6.3.1 Laser Range Finder 136
6.3.2 Global Positioning System 138
6.4 Algorithm Development 141
6.4.1 Criticisms and Advantages 143
6.5 Competitions 147
6.6 Conclusion 148
References 149
7 Cognitive Systems Platforms using Open Source 152
7.1 Introduction 152
7.1.1 The Rat's Life Benchmark 153
7.1.2 The iCub Platform 153
7.1.3 The Swarm Platform of the Replicator and SYMBRION Projects 153
7.2 The Rat's Life Benchmark: Competing Cognitive Robots 154
7.2.1 Motivation 154
7.2.2 Existing Robot Competitions and Benchmarks 154
7.2.3 Rat's Life Benchmark: Standard Components 155
7.2.3.1 The e-puck mobile robot 155
7.2.3.2 LEGO bricks 156
7.2.3.3 The Webots robot simulation software 156
7.2.4 Rat's Life Benchmark Description 156
7.2.4.1 Software-only Benchmark 156
7.2.4.2 Configuration of the Maze 157
7.2.4.3 Virtual Ecosystem 157
7.2.4.4 Robotics and AI Challenges 158
7.2.5 Evolution of the Competition over Time 158
7.2.6 Discussion 160
7.3 The Open Source Humanoid Robot Platform iCub 161
7.3.1 The iCub 161
7.3.2 Mechanics 163
7.3.3 The Software: YARP 164
7.3.4 Research with the iCub 165
7.4 The iCub Simulator 167
7.4.1 Physics Engine 167
7.4.2 Rendering Engine 168
7.4.3 YARP Protocol for Simulated iCub 168
7.4.4 iCub Body Model 168
7.4.5 Simulator Testing and Further Developments 169
7.5 Symbiotic Robot Organisms: Replicator and SYMBRION Projects 170
7.5.1 Introduction 170
7.5.2 New Paradigm in Collective Robotic Systems 171
7.5.3 Example: Energy Foraging Scenario 173
7.5.4 Hardware and Software Challenges 174
7.5.5 Towards Evolve-Ability and Benchmarking of Robot Organisms 175
7.5.5.1 Bio-inspired/Bio-mimicking Approach 176
7.5.5.2 Engineering-Based Approach 176
7.5.6 Discussion 177
7.6 Other Projects and Future Work 177
References 179
8 Assessing Coordination Demand in Cooperating Robots 182
8.1 Introduction 182
8.1.1 Coordination Demand 183
8.2 Coordination Demand 184
8.2.1 Experimental Plan 185
8.3 Simulation Environment 186
8.3.1 MrCS - The Multirobot Control System 186
8.4 Experiment 1 187
8.4.1 Results 188
8.4.1.1 Human Interactions 189
8.5 Experiment 2 190
8.5.1 Procedure 191
8.5.2 Results 191
8.6 Experiment 3 192
8.6.1 Experimental Design 193
8.6.2 Results 193
8.6.2.1 Overall Performance 194
8.6.2.2 Coordination Effort 195
8.6.2.3 Analyzing Performance 196
8.7 Conclusions 196
References 198
9 Measurements to Support Performance Evaluation of Wireless Communications in Tunnels for Urban Search and Rescue Robots 200
9.1 Performance Requirements for Urban Search and Rescue Robot Communications 200
9.2 Performance Evaluation Procedures 204
9.3 Measurement of Signal Impairments in a Tunnel Environment 207
9.3.1 The Test Environment 209
9.3.2 Measurements 210
9.3.2.1 Narrowband Received Power 210
9.3.2.2 Excess Path Loss and RMS Delay Spread 215
9.3.2.3 Tests of Robot Communications 219
9.4 Modeled Results 221
9.4.1 Single-Frequency Path Gain Models 221
9.4.2 Channel Capacity Model 224
9.5 Evaluating the Performance of a Robot in a Representative Tunnel Environment 227
9.6 Conclusion 230
References 231
10 Quantitative Assessment of Robot-Generated Maps 233
10.1 Introduction 233
10.2 Developing Test Scenarios for Robotic Mapping 236
10.2.1 Performance Singularity Identification and Testing 237
10.2.1.1 The Maze: Scenarios with Distinct Features 238
10.2.1.2 The Tube Maze: Scenarios with Occluded Features 239
10.2.1.3 The Tunnel: Scenarios with Minimal Features 240
10.3 Assessing Objective Performance Using Theoretical Analysis 240
10.3.1 The Case for Statistical Bounds 242
10.3.2 The CRB for Range-Finder Localization 243
10.3.3 The CRB for One-Shot Pose Tracking 243
10.3.4 The CRB for Pose Tracking Over a Trajectory 244
10.3.5 The CRB for Mapping and SLAM 245
10.4 Evaluating Local Metric Consistency of Robot-Generated Maps Using Force Field Simulation and Virtual Scans 246
10.4.1 Scan Alignment using Force Field Simulation 248
10.4.2 Augmenting Data Using Virtual Scans 248
10.4.3 Map Evaluation Using Virtual Scans 250
10.5 Evaluating Global Metric Consistency of Robot-Generated Maps 251
10.5.1 Harris-Based Algorithm 252
10.5.1.1 Closest Point Matching 252
10.5.1.2 Vectorial Space 253
10.5.2 Hough-Based Algorithm 253
10.5.3 Scale Invariant Feature Transform 254
10.5.4 Quality Measure 254
10.5.5 Limitations 257
10.6 Conclusion 257
References 258
11 Mobile Robotic Surveying Performance for Planetary Surface Site Characterization 261
11.1 Introduction 261
11.2 Local Sensor-Based Surveying 262
11.3 Remote Sensor-Based Surveying 264
11.3.1 Single-Site Remote Sensing Surveys 264
11.3.2 Single-Site Remote Survey Performance 267
11.3.3 Multiple-Site Remote Sensing Surveys 268
11.3.4 Multi-Site Remote Survey Performance 269
11.4 Characteristic Performance of Mobile Surveys 270
11.5 Enriching Metrics for Surveys on Planetary Surfaces 271
11.5.1 Consolidated Metric for Human-Supervised Robotic Prospecting 273
11.5.2 Metrics for Real-Time Assessment of Robot Performance 276
11.5.2.1 Considerations for Real-Time Robot Performance Assessment 277
11.6 Summary and Conclusions 278
References 279
12 Performance Evaluation and Metrics for Perception in Intelligent Manufacturing 281
12.1 Introduction 281
12.2 Preliminary Analysis of Conveyor Dynamic Motion 283
12.2.1 Conveyor Motion Data Collection Method 284
12.2.2 Raw Motion Data Collected 285
12.2.3 Statistical Analysis of Linear Accelerations 287
12.2.4 Fast Fourier Transformation Analysis 287
12.2.5 Computed Speed and Position Data 289
12.2.6 Conclusion and Summary 291
12.3 Calibration of a System of a Gray Value Camera and MDSI Range Camera 291
12.3.1 Cross-Calibration Procedure 292
12.3.2 Experimental Results 294
12.3.3 Conclusion 296
12.4 Performance Evaluation of Laser Trackers 296
12.4.1 Introduction 296
12.4.2 The ASME B89.4.19 Standard 297
12.4.3 Large-Scale Metrology at NIST 297
12.4.4 Tracker Calibration Examples 298
12.4.5 Sensitivity Analysis 300
12.4.6 Summary 301
12.4.7 Conclusions 302
12.5 Performance of Super-Resolution Enhancement for LIDAR Camera Data 302
12.5.1 Methodology 304
12.5.1.1 Preprocessing Stage 304
12.5.1.2 Triangle Orientation Discrimination (TOD) Methodology 306
12.5.2 LIDAR Camera 307
12.5.3 Data Collection 307
12.5.4 Data Processing 307
12.5.5 Perception Experiment 307
12.5.6 Results and Discussion 308
12.5.6.1 Assessment of Registration Accuracy 308
12.5.6.2 Triangle Orientation Discrimination Perception Experiment 309
12.5.7 Conclusion 311
12.6 Dynamic 6DOF Metrology for Evaluating a Visual Servoing Algorithm 311
12.6.1 Purdue Line Tracking System 312
12.6.2 Experimental Set-up and Results 313
12.6.2.1 Stationary Tests 314
12.6.2.2 Linear Motion Tests 315
12.6.2.3 Shaking Motion Tests 317
12.6.3 Conclusions 318
12.7 Summary 319
References 320
13 Quantification of Line Tracking Solutions for Automotive Applications 323
13.1 Introduction 323
13.2 Quantifying Line Tracking Solutions 324
13.3 Experimental Setup and Performance Data Collection Method 325
13.4 Quantification Test Cases 326
13.5 Quantification Results of the Encoder Based Line Tracking Solution 327
13.5.1 Rail Tracking Results 327
13.5.2 Arm Tracking Results 332
13.6 Quantification Results of the Encoder Plus Static Vision Line Tracking Solution 332
13.7 Quantification Results of the Analog Laser Based Line Tracking Solution 337
13.7.1 Analog Sensor Based Rail Tracking Results 339
13.7.2 Analog Sensor Based Arm Tracking Results 343
13.8 Conclusion and Automotive Assembly Applications 347
References 350

Publication date (per publisher): September 18, 2009
Additional information: XIX, 338 p.
Place of publication: New York
Language: English
Subject areas: Computer Science > Theory / Studies > Artificial Intelligence / Robotics
Mathematics / Computer Science > Mathematics
Engineering > Electrical Engineering / Energy Technology
Keywords: Automation • Benchmarks • Communication • Intelligent Systems • linear optimization • Measurement • perception • Performance • Performance Evaluation • Performance Metrics • quantitative performance evaluation • robot • Robotics • Simulation • standardized test methods • Tracking
ISBN-10: 1-4419-0492-1 / 1441904921
ISBN-13: 978-1-4419-0492-8 / 9781441904928
PDF (watermarked)
Size: 11.9 MB

DRM: Digital watermark
This eBook contains a digital watermark and is therefore personalized for you. If the eBook is passed on to third parties without authorization, it can be traced back to its source.

File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly well suited to technical books with columns, tables, and figures. A PDF can be displayed on almost all devices, but it is only of limited use on small displays (smartphone, eReader).

System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You need a PDF viewer, e.g. Adobe Reader or Adobe Digital Editions.
eReader: This eBook can be read with (almost) all eBook readers. However, it is not compatible with the Amazon Kindle.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You need a PDF viewer, e.g. the free Adobe Digital Editions app.

Buying eBooks from abroad
For tax law reasons, we can only sell eBooks within Germany and Switzerland. Regrettably, we cannot fulfill eBook orders from other countries.
