Machine Learning - Yves Kodratoff, Ryszard S. Michalski

Machine Learning (eBook)

An Artificial Intelligence Approach, Volume III
eBook Download: PDF
2014 | 1st edition
825 pages
Elsevier Science (publisher)
978-0-08-051055-2 (ISBN)
System requirements
54.95 incl. VAT
  • Available for immediate download
Machine Learning: An Artificial Intelligence Approach, Volume III presents a sample of machine learning research representative of the period between 1986 and 1989. The book is organized into six parts. Part One introduces some general issues in the field of machine learning. Part Two presents some new developments in the area of empirical learning methods, such as flexible learning concepts, the Protos learning apprentice system, and the WITT system, which implements a form of conceptual clustering. Part Three gives an account of various analytical learning methods and how analytic learning can be applied to various specific problems. Part Four describes efforts to integrate different learning strategies. These include the UNIMEM system, which empirically discovers similarities among examples; and the DISCIPLE multistrategy system, which is capable of learning with imperfect background knowledge. Part Five provides an overview of research in the area of subsymbolic learning methods. Part Six presents two types of formal approaches to machine learning. The first is an improvement over Mitchell's version space method; the second technique deals with the learning problem faced by a robot in an unfamiliar, deterministic, finite-state environment.

Front Cover 1
Machine Learning: An Artificial Intelligence Approach 4
Copyright Page 5
Table of Contents 6
PREFACE 10
PART ONE: GENERAL ISSUES 12
CHAPTER 1. RESEARCH IN MACHINE LEARNING: Recent Progress, Classification of Methods, and Future Directions 14
Abstract 14
1.1 INTRODUCTION 15
1.2 RECENT DEVELOPMENTS 16
1.3 SYNTHETIC VERSUS ANALYTIC LEARNING 22
1.4 A MULTICRITERIA CLASSIFICATION OF LEARNING PROCESSES 26
1.5 A BRIEF REVIEW OF THE CHAPTERS 31
1.6 FRONTIER PROBLEMS 36
ACKNOWLEDGMENTS 38
References 38
CHAPTER 2. EXPLANATIONS, MACHINE LEARNING, AND CREATIVITY 42
Abstract 42
2.1 INTRODUCTION: WHAT IS INTELLIGENCE? 42
2.2 EXPLANATION AND THE UNDERSTANDING PROCESS 43
2.3 EXPLANATION GOALS 44
2.4 EXPLANATION IN ACTION 45
2.5 INDEXING MEMORY: A KEY ISSUE 47
2.6 CREATIVITY: AN ALGORITHMIC PROCESS 49
2.7 EXPLAINING SWALE'S DEATH 51
2.8 CREATIVE PROBLEM SOLVING 55
2.9 CONCLUSION 57
References 58
COMMENTARY 60
References 70
PART TWO: EMPIRICAL LEARNING METHODS 72
CHAPTER 3. LEARNING FLEXIBLE CONCEPTS: Fundamental Ideas and a Method Based on Two-Tiered Representation 74
Abstract 74
3.1 INTRODUCTION 75
3.2 TWO-TIERED CONCEPT REPRESENTATION 79
3.3 EXAMPLES ILLUSTRATING TWO-TIERED REPRESENTATION 82
3.4 TRADING BCR FOR ICI 84
3.5 LEARNING TWO-TIERED REPRESENTATIONS 87
3.6 RELATING INSTANCES TO CONCEPTS: FLEXIBLE MATCHING 93
3.7 EXPERIMENTS WITH AQTT-15 96
3.8 A COMPARISON WITH THE ASSISTANT PROGRAM 100
3.9 CONCLUSION AND TOPICS FOR FUTURE RESEARCH 105
ACKNOWLEDGMENTS 107
References 108
COMMENTARY 114
Abstract 114
1 OVERVIEW OF THE PAPER 114
1.1 Sample Learning Program: AQ15 116
2 ANOTHER VIEW OF MULTITIER LEARNING 117
References 122
CHAPTER 4. PROTOS: AN EXEMPLAR-BASED LEARNING APPRENTICE 123
Abstract 123
4.1 INTRODUCTION 124
4.2 ISSUES IN EXEMPLAR-BASED SYSTEMS AND THEIR SOLUTIONS IN PROTOS 125
4.3 AN EXAMPLE OF CLASSIFYING AND LEARNING 131
4.4 EXPERIMENTAL EVALUATION OF PROTOS 134
4.5 SUMMARY 136
ACKNOWLEDGMENTS 136
References 137
COMMENTARY 139
1 KNOWLEDGE COMPILATION 139
2 KNOWLEDGE-BASED PATTERN MATCHING 143
3 THE LANGUAGE FOR EXPRESSING JUSTIFICATION 147
References 149
CHAPTER 5. PROBABILISTIC DECISION TREES 151
Abstract 151
5.1 INTRODUCTION 151
5.2 GROWING DECISION TREES 152
5.3 IMPERFECT LEAVES 158
5.4 UNKNOWN AND IMPRECISE ATTRIBUTE VALUES 158
5.5 SOFT THRESHOLDS 160
5.6 CONCLUSION 162
ACKNOWLEDGMENTS 162
References 163
CHAPTER 6. INTEGRATING QUANTITATIVE AND QUALITATIVE DISCOVERY IN THE ABACUS SYSTEM 164
Abstract 164
6.1 INTRODUCTION 164
6.2 GOALS FOR QUANTITATIVE DISCOVERY 166
6.3 RELATED WORK 167
6.4 THE ABACUS APPROACH TO QUANTITATIVE DISCOVERY 169
6.5 DISCOVERING EQUATIONS 170
6.6 FORMULATION OF QUALITATIVE PRECONDITIONS 184
6.7 EXPERIMENTS 186
6.8 DISCUSSION OF METHODOLOGY 195
6.9 SUMMARY 197
ACKNOWLEDGMENTS 198
References 198
CHAPTER 7. LEARNING BY EXPERIMENTATION: THE OPERATOR REFINEMENT METHOD 202
Abstract 202
7.1 INTRODUCTION: THE NEED FOR REACTIVE EXPERIMENTATION 202
7.2 THE ROLE OF EXPERIMENTATION IN PRODIGY 205
7.3 RELATED WORK 215
7.4 DISCUSSION AND FURTHER WORK 216
ACKNOWLEDGMENTS 218
APPENDIX I THE PRODIGY ARCHITECTURE 219
References 221
CHAPTER 8. LEARNING FAULT-DIAGNOSIS HEURISTICS FROM DEVICE DESCRIPTIONS 225
Abstract 225
8.1 INTRODUCTION 225
8.2 PREVIOUS WORK 227
8.3 FAILURE-DRIVEN LEARNING OF FAULT DIAGNOSIS HEURISTICS 229
8.4 FAILURE-DRIVEN LEARNING OF DIAGNOSIS HEURISTICS 234
8.5 CONCLUSION 243
ACKNOWLEDGMENTS 243
References 244
CHAPTER 9. CONCEPTUAL CLUSTERING AND CATEGORIZATION: Bridging the Gap between Induction and Causal Models 246
Abstract 246
9.1 INTRODUCTION 247
9.2 LEARNING BY OBSERVATION (INDUCTION) 248
9.3 CONCEPTUAL CLUSTERING VERSUS STATISTICAL CLUSTERING 250
9.4 WHAT IS NORMATIVE? 252
9.5 CONSTRAINTS ON CATEGORIES 254
9.6 INTERCORRELATION HYPOTHESIS AND BASIC LEVELS 255
9.7 SIMILARITY MATTERS 256
9.8 BUT ARE FEATURES INDEPENDENT? 257
9.9 CONNECTIONIST MODELS AND INTERCORRELATION OF FEATURES 259
9.10 WHAT'S WRONG WITH STATISTICS? 260
9.11 COMPREHENSIBILITY MATTERS 261
9.12 AN ALGORITHM USING FEATURE INTERCORRELATIONS: WITT 263
9.13 WITT STRUCTURE 263
9.14 SOME RESULTS: WITT STUDIES 266
9.15 WHERE DO SCRIPTS COME FROM? 268
9.16 SUMMARY: FEATURE INTERCORRELATIONS, CATEGORIZATION, AND SEMANTICS 274
ACKNOWLEDGMENTS 276
References 276
PART THREE: ANALYTICAL LEARNING METHODS 280
CHAPTER 10. LEAP: A LEARNING APPRENTICE FOR VLSI DESIGN 282
Abstract 282
10.1 LEARNING APPRENTICE SYSTEMS 282
10.2 LEAP: A LEARNING APPRENTICE FOR VLSI DESIGN 284
10.3 DISCUSSION 295
10.4 CONCLUSION 298
ACKNOWLEDGMENTS 299
References 299
COMMENTARY 301
1 MAIN FEATURES OF LEAP 301
2 ORGANIZATION OF THE KNOWLEDGE BASE 301
3 KNOWLEDGE-ACQUISITION BOTTLENECK 302
4 CAN SPECIFICATIONS BE MODIFIED? 302
5 HOW FAR SHOULD WE GENERALIZE? 303
6 LEARNING CONTROL KNOWLEDGE 304
7 REVISION OF CONTROL KNOWLEDGE 304
8 PROBLEMS OF COMMUNICATION 305
9 DEALING WITH MULTIPLE EXPLANATIONS 306
10 DEALING WITH IMPERFECT THEORIES 306
11 TRADE-OFFS BETWEEN SIMPLICITY AND COMPLETENESS 307
12 THEORY OF GEOLOGY IS INCOMPLETE 308
13 DEEP AND SHALLOW THEORIES 308
14 RESPONSE TIME AND OFF-LINE PROCESSING 309
15 BACKTRACKING 310
16 WHAT IS AN EXPLANATION? 310
17 RELATION OF EBL TO ANALOGY 310
18 GENERALIZATION AND PROPAGATION OF CONSTRAINTS 311
19 RELATIONSHIP OF EXAMPLES AND WEAK THEORIES 311
20 EXTENDING EBL TO DEAL WITH UNCERTAIN REASONING 312
ACKNOWLEDGMENTS 312
CHAPTER 11. ACQUIRING GENERAL ITERATIVE CONCEPTS BY REFORMULATING EXPLANATIONS OF OBSERVED EXAMPLES 313
Abstract 313
11.1 INTRODUCTION 314
11.2 THE BAGGER SYSTEM 317
11.3 GENERALIZATION IN BAGGER 322
11.4 DETAILS OF THE STACKING EXAMPLE 333
11.5 EMPIRICAL ANALYSIS 338
11.6 RELATED WORK 347
11.7 SOME OPEN RESEARCH ISSUES 349
11.8 CONCLUSION 352
ACKNOWLEDGMENTS 354
APPENDIX: THE INITIAL INFERENCE RULES 355
References 356
CHAPTER 12. DISCOVERING ALGORITHMS FROM WEAK METHODS 362
Abstract 362
12.1 INTRODUCTION AND MOTIVATION 362
12.3 DISCOVERING ITERATIVE MACRO-OPERATORS: THE METHOD 366
12.4 OPEN PROBLEMS 369
12.5 SUMMARY 369
ACKNOWLEDGMENTS 369
References 369
CHAPTER 13. OGUST: A SYSTEM THAT LEARNS USING DOMAIN PROPERTIES EXPRESSED AS THEOREMS 371
Abstract 371
13.1 INTRODUCTION 371
13.2 OGUST 375
13.3 THE ELIMINATION OF DISCRIMINATING OCCURRENCES 379
13.4 CONCLUSION 389
References 392
CHAPTER 14. CONDITIONAL OPERATIONALITY AND EXPLANATION-BASED GENERALIZATION 394
Abstract 394
14.1 INTRODUCTION 394
14.2 EBG 395
14.3 REASONING ABOUT OPERATIONALITY 397
14.4 GENERALIZING OPERATIONALITY 398
14.5 ROE 399
14.6 GENERALITY OF RESULTS 400
14.7 REMOVING CONDITIONAL OPERATIONALITY 401
14.8 LIMITATIONS 402
14.9 RELATED WORK 403
14.10 CONCLUSION 403
ACKNOWLEDGMENTS 404
APPENDIX I PROLOG IMPLEMENTATION OF ROE 404
References 405
PART FOUR: INTEGRATED LEARNING SYSTEMS 408
CHAPTER 15. THE UTILITY OF SIMILARITY-BASED LEARNING IN A WORLD NEEDING EXPLANATION 410
Abstract 410
15.1 INTRODUCTION 411
15.2 EXPLANATION AND SIMILARITY: TWO APPROACHES TO LEARNING 412
15.3 A SIMILARITY-BASED LEARNING PROGRAM: UNIMEM 414
15.4 THE KEY QUESTION 421
15.5 ELEMENTS OF A SOLUTION 422
15.6 CONCLUSION 430
ACKNOWLEDGMENTS 430
References 430
COMMENTARY: SOME ISSUES IN CONCEPT LEARNING 434
1 INTRODUCTION 434
2 THE IMPORTANCE OF SIMILARITY 434
3 THE PROBLEM OF CONCEPT LEARNING 436
4 DEFINITIONS AND CONSEQUENCES OF SIMILARITY 437
5 WHERE IS THE KNOWLEDGE? 438
6 AFTER THE SBL 441
7 SUMMARY 442
References 442
CHAPTER 16. LEARNING EXPERT KNOWLEDGE BY IMPROVING THE EXPLANATIONS PROVIDED BY THE SYSTEM 444
Abstract 444
16.1 INTRODUCTION 444
16.2 NOTATIONS 448
16.3 IMPROVING A GENERALIZATION 449
16.4 IMPROVING THE EXPLANATIONS 467
16.5 CONCLUSIONS 474
References 475
COMMENTARY 477
1 OVERVIEW OF THE PAPER 477
2 THE SBL VIEW 478
3 AN ILLUSTRATION AND BRIEF REVIEW OF EXPLANATION REPAIR 479
4 SBL AS EBL EXPLANATION REPAIR 482
5 STRENGTHS 482
6 POTENTIAL WEAKNESSES AND POSSIBLE EXTENSIONS 483
References 483
CHAPTER 17. GUIDING INDUCTION WITH DOMAIN THEORIES 485
Abstract 485
17.1 INTRODUCTION 485
17.2 A FRAMEWORK FOR INDUCING CONCEPT DESCRIPTIONS 487
17.3 USING DEDUCTION TO DRIVE INDUCTION 493
17.4 AN EXAMPLE 497
17.5 CONCLUSIONS 500
References 501
CHAPTER 18. KNOWLEDGE BASE REFINEMENT AS IMPROVING AN INCORRECT AND INCOMPLETE DOMAIN THEORY 504
Abstract 504
18.1 INTRODUCTION 504
18.2 MINERVA CLASSIFICATION AND DESIGN SHELL 505
18.3 ODYSSEUS'S METHOD FOR EXTENDING AN INCOMPLETE DOMAIN THEORY 508
18.4 ODYSSEUS'S METHOD FOR IMPROVING AN INCORRECT DOMAIN THEORY 514
18.5 ODYSSEUS'S METHOD FOR IMPROVING AN INCONSISTENT DOMAIN THEORY 516
18.6 RELATED RESEARCH 516
18.7 EXPERIMENTAL RESULTS 517
18.8 CONCLUSIONS 521
ACKNOWLEDGMENTS 522
References 522
CHAPTER 19. APPRENTICESHIP LEARNING IN IMPERFECT DOMAIN THEORIES 525
Abstract 525
19.1 INTRODUCTION 526
19.2 DISCIPLE AS AN EXPERT SYSTEM 527
19.3 THE LEARNING PROBLEM 530
19.4 LEARNING IN A COMPLETE THEORY DOMAIN 532
19.5 LEARNING IN A WEAK THEORY DOMAIN 537
19.6 LEARNING IN AN INCOMPLETE THEORY DOMAIN 548
19.7 EXPERIMENTS WITH DISCIPLE 555
19.8 CONCLUSIONS 556
ACKNOWLEDGMENTS 559
References 559
PART FIVE: SUBSYMBOLIC AND HETEROGENEOUS LEARNING SYSTEMS 564
CHAPTER 20. CONNECTIONIST LEARNING PROCEDURES 566
Abstract 566
20.1 INTRODUCTION 566
20.2 CONNECTIONIST MODELS 567
20.3 CONNECTIONIST RESEARCH ISSUES 568
20.4 ASSOCIATIVE MEMORIES WITHOUT HIDDEN UNITS 571
20.5 SIMPLE SUPERVISED LEARNING PROCEDURES 575
20.6 BACK-PROPAGATION: A MULTILAYER LEAST SQUARES PROCEDURE 580
20.7 BOLTZMANN MACHINES 592
20.8 MAXIMIZING MUTUAL INFORMATION: A SEMISUPERVISED LEARNING PROCEDURE 597
20.9 UNSUPERVISED HEBBIAN LEARNING 597
20.10 COMPETITIVE LEARNING 600
20.11 REINFORCEMENT LEARNING PROCEDURES 603
20.12 DISCUSSION 608
ACKNOWLEDGMENTS 613
References 614
CHAPTER 21. GENETIC-ALGORITHM-BASED LEARNING 622
Abstract 622
21.1 INTRODUCTION 622
21.2 AN ADAPTIVE SYSTEMS PERSPECTIVE 623
21.3 A BRIEF OVERVIEW OF GENETIC ALGORITHMS 625
21.4 USING GAs FOR MACHINE LEARNING 631
21.5 AN EXAMPLE: THE LS-1 FAMILY 643
21.6 SUMMARY AND CONCLUSIONS 645
References 646
PART SIX: FORMAL ANALYSIS 650
CHAPTER 22. APPLYING VALIANT'S LEARNING FRAMEWORK TO AI CONCEPT-LEARNING PROBLEMS 652
Abstract 652
22.1 INTRODUCTION 652
22.2 INSTANCE SPACES AND HYPOTHESIS SPACES 653
22.3 VERSION SPACES AND HYPOTHESIS FINDERS 656
22.4 LEARNING ALGORITHMS IN THE SENSE OF VALIANT 659
22.5 PERFORMANCE ANALYSIS 664
22.6 FURTHER RESULTS 669
22.7 CONCLUSION 671
ACKNOWLEDGMENTS 671
APPENDIX: COMPUTING UPPER BOUNDS ON ΠH(m) 672
References 677
CHAPTER 23. A NEW APPROACH TO UNSUPERVISED LEARNING IN DETERMINISTIC ENVIRONMENTS 681
Abstract 681
23.1 INTRODUCTION 682
23.2 OUR INFERENCE PROCEDURES 685
23.3 EXPERIMENTAL RESULTS 691
23.4 CONCLUSIONS AND OPEN PROBLEMS 693
ACKNOWLEDGMENTS 694
References 694
BIBLIOGRAPHY OF RECENT MACHINE LEARNING RESEARCH 1985-1989 696
INTRODUCTION 696
ACKNOWLEDGMENTS 698
EXPLANATION OF THE CATEGORIES 699
CROSS-REFERENCE OF THE CATEGORIES 702
REFERENCES 706
ABOUT THE AUTHORS 801
AUTHOR INDEX 812
SUBJECT INDEX 818

Publication date (per publisher) June 28, 2014
Language: English
Subject area: Computer Science › Theory / Studies › Artificial Intelligence / Robotics
Engineering › Electrical Engineering / Energy Technology
Engineering › Communications Engineering
ISBN-10 0-08-051055-8 / 0080510558
ISBN-13 978-0-08-051055-2 / 9780080510552
PDF (Adobe DRM)
Size: 96.7 MB

Copy protection: Adobe DRM
Adobe DRM is a copy-protection scheme intended to protect the eBook from misuse. The eBook is authorized against your personal Adobe ID at download time; you can then read it only on devices that are also registered to that Adobe ID.
Details on Adobe DRM

File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly well suited to technical books with columns, tables, and figures. A PDF can be displayed on almost any device but is only of limited use on small displays (smartphones, e-readers).

System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You need an Adobe ID and the free Adobe Digital Editions software. We advise against using the OverDrive Media Console, as experience shows it frequently causes problems with Adobe DRM.
eReader: This eBook can be read on (almost) all eBook readers. It is, however, not compatible with the Amazon Kindle.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You need an Adobe ID and a free app.
Device list and additional notes

Buying eBooks from abroad
For tax reasons, we can only sell eBooks within Germany and Switzerland. Unfortunately, we cannot fulfill eBook orders from other countries.
