Explainable and Interpretable Models in Computer Vision and Machine Learning (eBook)

eBook Download: PDF
2018
XVII, 299 pages
Springer International Publishing (Publisher)
978-3-319-98131-4 (ISBN)

149.79 incl. VAT
  • Download available immediately

This book compiles leading research on the development of explainable and interpretable machine learning methods in the context of computer vision and machine learning.

Research progress in computer vision and pattern recognition has led to a variety of modeling techniques with almost human-like performance. Although these models have obtained astounding results, they are limited in their explainability and interpretability: What is the rationale behind a given decision? What in the model's structure explains its functioning? Hence, while good performance is a critical requirement for learning machines, explainability and interpretability are needed to take learning machines to the next step and include them in decision support systems involving human supervision.

This book, written by leading international researchers, addresses key topics of explainability and interpretability, including the following:

• Evaluation and Generalization in Interpretable Machine Learning
• Explanation Methods in Deep Learning
• Learning Functional Causal Models with Generative Neural Networks
• Learning Interpretable Rules for Multi-Label Classification
• Structuring Neural Networks for More Explainable Predictions
• Generating Post-Hoc Rationales of Deep Visual Classification Decisions
• Ensembling Visual Explanations
• Explainable Deep Driving by Visualizing Causal Attention
• Interdisciplinary Perspectives on Algorithmic Job Candidate Screening
• Multimodal Personality Trait Analysis for Explainable Modeling of Job Interview Decisions
• Inherent Explainability of Pattern Theory-Based Video Event Interpretations


Foreword 6
Preface 8
Acknowledgements 11
Contents 12
Contributors 14
Part I Notions and Concepts on Explainability and Interpretability 17
Considerations for Evaluation and Generalization in Interpretable Machine Learning 18
1 Introduction 18
2 Defining Interpretability 20
3 Defining the Interpretability Need 21
4 Evaluation 23
5 Considerations for Generalization 26
6 Conclusion: Recommendations for Researchers 28
References 29
Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges 33
1 Introduction 33
1.1 The Components of Explainability 34
1.2 Users and Laws 34
1.3 Explanation and DNNs 35
2 Users and Their Concerns 36
2.1 Case Study: Autonomous Driving 37
3 Laws and Regulations 38
4 Explanation 38
5 Explanation Methods 39
5.1 Desirable Properties of Explainers 40
5.2 A Taxonomy for Explanation Methods 40
5.2.1 Rule-Extraction Methods 41
5.2.2 Attribution Methods 42
5.2.3 Intrinsic Methods 43
6 Addressing General Concerns 44
7 Discussion 46
References 47
Part II Explainability and Interpretability in Machine Learning 51
Learning Functional Causal Models with Generative Neural Networks 52
1 Introduction 53
2 Problem Setting 55
2.1 Notations 55
2.2 Assumptions and Properties 56
3 State of the Art 57
3.1 Learning the CPDAG 57
3.1.1 Constraint-Based Methods 58
3.1.2 Score-Based Methods 58
3.1.3 Hybrid Algorithms 59
3.2 Exploiting Asymmetry Between Cause and Effect 59
3.2.1 The Intuition 60
3.2.2 Restriction on the Class of Causal Mechanisms Considered 60
3.2.3 Pairwise Methods 61
3.3 Discussion 62
4 Causal Generative Neural Networks 64
4.1 Modeling Continuous FCMs with Generative Neural Networks 64
4.1.1 Generative Model and Interventions 65
4.2 Model Evaluation 66
4.2.1 Scoring Metric 66
4.2.2 Representational Power of CGNN 67
4.3 Model Optimization 68
4.3.1 Parametric (Weight) Optimization 69
4.3.2 Non-parametric (Structure) Optimization 69
4.3.3 Identifiability of CGNN up to Markov Equivalence Classes 70
5 Experiments 71
5.1 Experimental Setting 71
5.2 Learning Bivariate Causal Structures 72
5.2.1 Benchmarks 73
5.2.2 Baseline Approaches 73
5.2.3 Hyper-Parameter Selection 74
5.2.4 Empirical Results 74
5.3 Identifying v-structures 75
5.4 Multivariate Causal Modeling Under Causal Sufficiency Assumption 76
5.4.1 Results on Artificial Graphs with Additive and Multiplicative Noises 76
5.4.2 Result on Biological Data 78
5.4.3 Results on Biological Real-World Data 79
6 Towards Predicting Confounding Effects 80
6.1 Principle 81
6.2 Experimental Validation 83
6.2.1 Benchmarks 83
6.2.2 Baselines 83
6.2.3 Results 84
7 Discussion and Perspectives 84
Appendix 85
The Maximum Mean Discrepancy (MMD) Statistic 85
Proofs 86
Table of Scores for the Experiments on Cause-Effect Pairs 89
Table of Scores for the Experiments on Graphs 89
References 90
Learning Interpretable Rules for Multi-Label Classification 94
1 Introduction 94
2 Multi-Label Classification 95
2.1 Problem Definition 96
2.2 Dependencies in Multi-Label Classification 97
2.3 Evaluation of Multi-Label Predictions 98
2.3.1 Bipartition Evaluation Functions 98
2.3.2 Multi-Label Evaluation Functions 99
2.3.3 Aggregation and Averaging 99
3 Multi-Label Rule Learning 100
3.1 Rule Learning 100
3.1.1 Predictive Rule Learning 100
3.1.2 Descriptive Rule Learning 101
3.2 Multi-Label Rules 101
3.3 Challenges for Multi-Label Rule Learning 103
4 Discovery of Multi-Label Rules 105
4.1 Association Rule-Based Algorithms 105
4.2 Choosing Loss-Minimizing Rule Heads 106
4.2.1 Anti-Monotonicity and Decomposability 107
4.2.2 Efficient Generation of Multi-Label Heads 107
5 Learning Predictive Rule-Based Multi-Label Models 109
5.1 Layered Multi-Label Learning 110
5.1.1 Stacked Binary Relevance 110
5.2 Multi-Label Separate-and-Conquer 112
5.2.1 A Multi-Label Covering Algorithm 113
6 Case Studies 114
6.1 Case Study 1: Single-Label Head Rules 115
6.1.1 Exemplary Rule Models 116
6.1.2 Visualization of Dependencies 118
6.1.3 Discussion 119
6.2 Case Study 2: Multi-Label Heads 120
6.2.1 Exemplary Rule Models 120
6.2.2 Predictive Performance 121
6.2.3 Computational Cost 121
7 Conclusion 122
References 123
Structuring Neural Networks for More Explainable Predictions 127
1 Introduction 128
2 Explanation Techniques 128
2.1 Sensitivity Analysis 129
2.2 Deep Taylor Decomposition 130
2.3 Theoretical Limitations 131
3 Convolutional Neural Networks 131
3.1 Experiments 133
4 Recurrent Neural Networks 136
4.1 Experiments 137
5 Conclusion 141
References 141
Part III Explainability and Interpretability in Computer Vision 144
Generating Post-Hoc Rationales of Deep Visual Classification Decisions 145
1 Introduction 145
2 Related Work 148
3 Generating Visual Explanations (GVE) Model 150
3.1 Relevance Loss 150
3.2 Discriminative Loss 151
4 Experimental Setup 153
5 Results 155
5.1 Quantitative Results 155
5.2 Qualitative Results 157
6 Conclusion 161
References 162
Ensembling Visual Explanations 165
1 Introduction 165
2 Background and Related Work 167
3 Algorithms for Ensembling Visual Explanations 170
3.1 Weighted Average Ensemble Explanation 171
3.2 Penalized Weighted Average Ensemble Explanation 172
3.3 Agreement with N Systems 173
4 Evaluating Explanations 174
4.1 Comparison Metric 174
4.2 Uncovering Metric 174
4.3 Crowd-Sourced Hyper-Parameter Tuning 176
5 Experimental Results and Discussion 177
6 Conclusions and Future Directions 180
References 181
Explainable Deep Driving by Visualizing Causal Attention 183
1 Introduction 184
2 Related Work 186
2.1 End-to-End Learning for Self-Driving Cars 186
2.2 Visual Explanations 187
3 Attention-Based Explainable Deep Driving Model 187
3.1 Preprocessing 187
3.2 Encoder: Convolutional Feature Extraction 188
3.3 Coarse-Grained Decoder: Visual (Spatial) Attention 189
3.4 Fine-Grained Decoder: Causality Test 191
4 Result 194
4.1 Datasets 194
4.2 Training and Evaluation Details 194
4.3 Effect of Choosing Penalty Coefficient λ 195
4.4 Effect of Varying Smoothing Factors 195
4.5 Quantitative Analysis 197
4.6 Effect of Causal Visual Saliencies 197
5 Discussion 198
6 Conclusion 200
References 201
Part IV Explainability and Interpretability in First Impressions Analysis 204
Psychology Meets Machine Learning: Interdisciplinary Perspectives on Algorithmic Job Candidate Screening 205
1 Introduction: Algorithmic Opportunities for Job Candidate Screening 206
1.1 The Need for Explainability 206
1.2 Purpose and Outline of the Chapter 208
2 Common Methodological Focus Areas 209
2.1 Psychology 209
2.1.1 Psychometrics 209
2.1.2 Reliability 210
2.1.3 Validity 211
2.1.4 Experimentation and the Nomological Network 213
2.2 Computer Science and Machine Learning 214
2.2.1 The Abstract Machine Learning Perspective 215
2.2.2 Machine Learning in Applied Domains 217
2.3 Contrasting Focus Areas in Psychology and Machine Learning 219
2.4 Conclusion 224
3 The Personnel Selection Problem 224
3.1 How to Identify Which KSAOs Are Needed? 225
3.2 How to Measure KSAOs? 226
3.3 Dealing with Judgment 228
3.4 What Is Job Performance? 230
3.5 Conclusion 231
4 Use Case: An Explainable Solution for Multimodal Job Candidate Screening 231
4.1 The ChaLearn Looking at People Job Candidate Screening Challenge 231
4.2 Dataset 233
4.3 General Framework of a Potential Explainable Solution 235
4.3.1 Chosen Features 236
4.3.2 Regression Model 239
4.3.3 Quantitative Performance 240
4.4 Opportunities for Explanation 241
4.5 Reflection 242
5 Acceptability 244
5.1 Applicants 245
5.2 Hiring Managers 248
6 Recommendations 250
6.1 Better Understanding of Methodology and Evaluation 251
6.1.1 Stronger Focus on Criterion Validity 251
6.1.2 Combining Methodological Focus Points 251
6.2 Philosophical and Ethical Awareness 253
6.3 Explicit Decision Support 253
6.4 The Goal of Explanation 254
6.5 Conclusion 255
References 255
Multimodal Personality Trait Analysis for Explainable Modeling of Job Interview Decisions 262
1 Introduction 262
2 Related Work 264
3 Job Candidate Screening Challenge 265
4 Proposed Method 266
4.1 Visual Feature Extraction 267
4.2 Acoustic Features 268
4.3 Classification 269
5 Experimental Results 270
5.1 Experimental Results Using Regression 271
5.2 Experimental Results Using Classification 273
6 Explainability Analysis 274
6.1 The Effect of Ethnicity, Age, and Sex 277
7 Discussion and Conclusions 279
References 280
On the Inherent Explainability of Pattern Theory-Based Video Event Interpretations 283
1 Introduction 284
2 Explainable Model for Video Interpretation 287
2.1 Symbolic Representation of Concepts 287
2.2 Constructing Contextualization Cues 288
2.3 Expressing Semantic Relationships 288
2.3.1 Bond Compatibility 289
2.3.2 Types 289
2.3.3 Quantification 290
2.4 Constructing Interpretations 290
2.4.1 Probability 291
2.4.2 Inherent Explainability 292
2.5 Inference 292
3 Generating Explanations 293
3.1 Understanding the Overall Interpretation (Q1) 294
3.2 Understanding Provenance of Concepts (Q3) 296
3.3 Handling What-Ifs 299
3.3.1 Alternatives to Grounded Concept Generators 299
3.3.2 Alternative Activity Interpretations 301
3.3.3 Why Not a Given Interpretation? 302
4 Conclusion and Future Work 303
References 304

Publication date (per publisher): 29.11.2018
Series: The Springer Series on Challenges in Machine Learning
Additional info: XVII, 299 p., 73 illus., 58 illus. in color.
Place of publication: Cham
Language: English
Subject area: Mathematics / Computer Science › Computer Science › Graphics / Design
Keywords: Benchmarking of explainable and interpretable models • Chalearn looking at people challenges • Explainable and interpretable decision support systems • Explainable learning machines • Explainable models in computer vision • Explaining first impressions • Explaining human behavior from data • Explaining Looking at people • Interpretable models • Interpreting human behavior analysis models • Job candidate screening • Multimodal analysis of human behavior
ISBN-10 3-319-98131-5 / 3319981315
ISBN-13 978-3-319-98131-4 / 9783319981314
Format: PDF (watermarked)
Size: 9.3 MB

DRM: Digital watermark
This eBook contains a digital watermark and is thereby personalized for you. If the eBook is improperly passed on to third parties, it can be traced back to its source.

File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly suited to technical books with columns, tables, and figures. A PDF can be displayed on almost any device, but is only of limited use on small displays (smartphones, eReaders).

System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You will need a PDF viewer, e.g. Adobe Reader or Adobe Digital Editions.
eReader: This eBook can be read on (almost) all eBook readers. It is not compatible with the Amazon Kindle, however.
Smartphone/tablet: Whether Apple or Android, you can read this eBook. You will need a PDF viewer, e.g. the free Adobe Digital Editions app.

Additional feature: online reading
In addition to downloading this eBook, you can also read it online in your web browser.

Buying eBooks from abroad
For tax law reasons, we can only sell eBooks within Germany and Switzerland. Regrettably, we cannot fulfill eBook orders from other countries.
