Computational Intelligence for Engineering and Manufacturing (eBook)

eBook Download: PDF
2007
XIII, 212 pages
Springer US (publisher)
978-0-387-37452-9 (ISBN)

96,29 incl. VAT
  • Download available immediately

Computational Intelligence is tolerant of imprecise information, partial truth and uncertainty. This book presents a selected collection of contributions offering a focused treatment of important elements of CI, centred on its key element: learning. It presents novel, real-world applications in manufacturing and engineering and lays a foundation for understanding the domotics and production methods of the 21st century.


CONTENTS 7
CONTRIBUTING AUTHORS 9
PREFACE 11
ACKNOWLEDGEMENTS 13
1 SOFT COMPUTING AND ITS APPLICATIONS IN ENGINEERING AND MANUFACTURE 14
INTRODUCTION 14
1. KNOWLEDGE-BASED SYSTEMS 14
2. FUZZY LOGIC 18
3. INDUCTIVE LEARNING 22
4. NEURAL NETWORKS 27
5. GENETIC ALGORITHMS 32
5.1 Representation 33
5.2 Creation of Initial Population 33
5.3 Genetic Operators 34
5.4 Control Parameters 36
5.5 Fitness Evaluation Function 37
6. SOME APPLICATIONS IN ENGINEERING AND MANUFACTURE 38
6.1 Expert Statistical Process Control 38
6.2 Fuzzy Modelling of a Vibratory Sensor for Part Location 38
6.3 Induction of Feature Recognition Rules in a Geometric Reasoning System for Analysing 3D Assembly Models 40
6.4 Neural-network-based Automotive Product Inspection 42
6.5 GA-based Conceptual Design 43
7. CONCLUSION 44
8. ACKNOWLEDGEMENTS 45
REFERENCES 45
2 NEURAL NETWORKS HISTORICAL REVIEW 52
INTRODUCTION 52
1. HISTORICAL PERSPECTIVE 53
1.1 First Computational Model of Nervous Activity: The Model of McCulloch and Pitts 53
1.2 Training of Neural Networks: Hebbian Learning 56
1.3 Supervised Learning: Rosenblatt and Widrow 56
1.4 Partial eclipse of Neural Networks: Minsky and Papert 58
1.5 Backpropagation algorithm: Werbos, Rumelhart et al. and Parker 60
2. NEURAL NETWORKS VS CLASSICAL COMPUTERS 61
3. BIOLOGICAL AND ARTIFICIAL NEURONS 62
3.1 The Biological Neuron 62
3.2 The Artificial Neuron 63
4. NEURAL NETWORKS: CHARACTERISTICS AND TAXONOMY 64
5. FEED FORWARD NEURAL NETWORKS: THE PERCEPTRON 65
5.1 One Layer Perceptron 67
6. LMS LEARNING RULE 68
6.1 The Multilayer Perceptron 69
6.2 Acceleration of the training procedure 72
6.3 On-Line and Off-Line training 73
6.4 Selection of the Network size 74
7. KOHONEN NETWORKS 74
7.1 Training 76
8. FUTURE PERSPECTIVES 77
REFERENCES 77
3 ARTIFICIAL NEURAL NETWORKS 80
INTRODUCTION 80
1. TYPES OF NEURAL NETWORKS 80
1.1 Structural Categorisation 80
1.2 Learning Algorithm Categorisation 81
2. NEURAL NETWORKS EXAMPLE 81
2.1 Multi-layer Perceptron (MLP) 82
2.2 Radial Basis Function (RBF) Network 84
2.3 Learning Vector Quantization (LVQ) Network 87
2.4 CMAC Network 88
2.5 Group Method of Data Handling (GMDH) Network 90
2.6 Hopfield Network 92
2.7 Elman and Jordan Nets 93
2.8 Kohonen Network 95
2.9 ART Networks 96
2.10 Spiking Neural Network 101
3. SUMMARY 104
4. ACKNOWLEDGEMENTS 104
REFERENCES 104
4 APPLICATION OF NEURAL NETWORKS 106
INTRODUCTION 106
1. FEASIBILITY STUDY 107
2. APPLICATION OF NNs TO BINARY DETECTION 107
3. THE NEURAL DETECTOR 109
3.1 The Multi-Layer Perceptron (MLP) 109
3.2 The MLP Structure 110
4. THE TRAINING ALGORITHM 111
4.1 The BackPropagation (BP) Algorithm 111
4.2 The Training Sets 113
5. COMPUTER RESULTS 114
5.1 The Criterion Function 115
5.2 Robustness Under Different Target Models 117
APPENDIX: ON BACKPROPAGATION AND THE CRITERION FUNCTIONS 118
1. THE BACKPROPAGATION ALGORITHM 118
2. LEAST MEAN SQUARES (LMS) 119
3. MINIMUM MISCLASSIFICATION ERROR (MME) 119
4. THE JM CRITERION 120
REFERENCES 121
5 RADIAL BASIS FUNCTION NETWORKS AND THEIR APPLICATION IN COMMUNICATION SYSTEMS 122
1. RADIAL BASIS FUNCTION NETWORKS 122
2. ARCHITECTURE 124
3. TRAINING ALGORITHMS 125
3.1 Fixed Centers Selected at Random 126
3.2 Self-Organized Selection of Centers 126
3.3 Supervised Selection of Centers 128
3.4 Orthogonal Least Squares (OLS) 130
4. RELATION WITH SUPPORT VECTOR MACHINES (SVM) 132
5. APPLICATIONS OF RADIAL BASIS FUNCTION NETWORKS TO COMMUNICATION SYSTEMS 132
5.1 Antenna Array Signal Processing 133
5.2 Channel Equalization 135
5.3 Other RBF Networks Applications 140
REFERENCES 141
6 BIOLOGICAL CLUES FOR UP-TO-DATE ARTIFICIAL NEURONS 144
INTRODUCTION 144
1. BIOLOGICAL PROPERTIES 145
1.1 Synaptic Plasticity Properties 145
1.2 Neuron’s Properties 149
1.3 Network Properties 150
2. UPDATING THE McCULLOCH-PITTS MODEL 152
2.1 Up-to-date Synaptic Model 152
2.2 Up-to-date Neuron Model 154
2.3 Up-to-date Network Model 155
3. JOINING THE BLOCKS: A NEURAL NETWORK MODEL OF THE THALAMUS 156
4. CONCLUSIONS 156
REFERENCES 158
7 SUPPORT VECTOR MACHINES 160
INTRODUCTION 160
1. SVM DEFINITION 161
1.1 Structural Risk 161
1.2 Linear SVM for Separable Data 164
1.3 Karush-Kuhn-Tucker Conditions 168
1.4 Optimisation Example 169
1.5 Test Phase 172
1.6 Non-Separable Linear Case 172
1.7 Non-Linear Case 175
1.8 Mapping Function Example 177
1.9 Mercer Conditions 179
1.10 Kernel Examples 180
1.11 Global Solutions and Uniqueness 181
1.12 Generalization Performance Analysis 181
2. SVM MATHEMATICAL APPLICATIONS 184
2.1 Pattern Recognition 184
2.2 Regression 184
2.3 Principal Component Analysis 190
3. SVM VERSUS NEURAL NETWORKS 193
4. SVM OPTIMISATION METHODS 195
4.1 Optimisation Methods Overview 195
4.2 SMO Algorithm 197
5. CONCLUSIONS 203
6. ACKNOWLEDGEMENTS 204
REFERENCES 204
8 FRACTALS AS PRE-PROCESSING TOOL FOR COMPUTATIONAL INTELLIGENCE APPLICATION 206
INTRODUCTION 206
1. STATE OF THE ART 207
2. FRACTAL CALCULATIONS 209
2.1 Box-counting Method 209
2.2 Dilation Method 209
2.3 Random Walk 210
3. CALCULATION OF GENERALIZED FRACTAL DIMENSIONS 212
3.1 Box-counting Method 212
3.2 Gliding Box Method 214
4. IMAGES FOR THE CASE STUDY 215
5. RESULTS OF THE CASE STUDY AND DISCUSSION 215
5.1 Generating Function with the Box-counting Method 215
5.2 Generalized Dimensions Using the Box-counting Method 218
5.3 Generalized Dimensions Using the Gliding Box Method 218
6. CONCLUSIONS 220
7. ACKNOWLEDGEMENTS 223
REFERENCES 223

CHAPTER 2 NEURAL NETWORKS HISTORICAL REVIEW (p. 39)

D. ANDINA, A. VEGA-CORONA, J. I. SEIJAS, J. TORRES-GARCÍA
Abstract:
This chapter begins with a historical summary of the evolution of Neural Networks, from the first models, which were very limited in their application capabilities, to the present ones, which make it possible to consider automating tasks that were formerly reserved for human intelligence. After the historical review, Neural Networks are treated from a computational point of view. This perspective helps to compare Neural Systems with classical computing systems and leads to a formal and common presentation that is used throughout the book.

INTRODUCTION
Present-day computers can perform a great variety of tasks (provided they are well defined) at higher speed and with greater reliability than human beings can achieve. None of us, for example, can solve complex mathematical equations at the speed of a personal computer. Nevertheless, the mental capacity of human beings still exceeds that of machines in a wide variety of tasks.

No artificial image-recognition system can compete with a human being's ability to discriminate between objects of diverse shapes and orientations; in fact, it cannot even compete with the ability of an insect. In the same way, whereas a computer requires an enormous amount of computation and restrictive conditions to recognize, for example, phonemes, an adult human effortlessly recognizes words pronounced by different people, at different speeds, with different accents and intonations, even in the presence of environmental noise.

It is observed that, by means of rules learned from experience, human beings are much more effective than computers at solving imprecise (ambiguous) problems, or problems that require a great amount of information. Our brain achieves this by means of billions of simple cells, called neurons, which are interconnected with each other.

However, it is estimated that operational amplifiers and logic gates can perform operations several orders of magnitude faster than neurons. If the processing technique of these biological elements were implemented with operational amplifiers and logic gates, one could construct relatively cheap machines able to process at least as much information as a biological brain does. Of course, we are far from knowing whether such machines will ever be built.

Therefore, there are strong reasons to consider the viability of tackling certain problems with parallel systems that process information and learn according to principles taken from the brains of living beings. Such systems are called Artificial Neural Networks, connectionist models or parallel distributed processing models. Artificial Neural Networks (ANNs or, simply, NNs) thus arise from the intention to simulate the biological brain in an artificial way.
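
As a concrete illustration of such a connectionist unit, the sketch below implements the classic McCulloch-Pitts neuron discussed in Section 1.1 of this chapter: a unit that sums weighted binary inputs and fires when the sum reaches a threshold. The code is not taken from the book; the weights and threshold are arbitrary values chosen so that the unit behaves as a two-input AND gate.

```python
# Minimal sketch of a McCulloch-Pitts neuron (illustrative only, not code from the book).
# The unit computes a weighted sum of binary inputs and fires (outputs 1)
# when that sum reaches its threshold.

def mcculloch_pitts_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of the binary inputs reaches the threshold, else 0."""
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation >= threshold else 0

# Example: with unit weights and a threshold of 2 (arbitrary choices),
# the neuron realises a two-input AND gate.
if __name__ == "__main__":
    for x1 in (0, 1):
        for x2 in (0, 1):
            y = mcculloch_pitts_neuron((x1, x2), weights=(1, 1), threshold=2)
            print(f"AND({x1}, {x2}) = {y}")
```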

1. HISTORICAL PERSPECTIVE
The science of Artificial Neural Networks made its first significant appearance during the 1940s. Researchers trying to emulate the functions of the human brain developed physical models (and later, program-based simulations) of biological neurons and their interconnections.

Publication date (per publisher): 6 May 2007
Additional information: XIII, 212 p.
Place of publication: New York
Language: English
Subject areas: Computer Science > Theory / Studies > Artificial Intelligence / Robotics
Mathematics / Computer Science > Mathematics > Applied Mathematics
Natural Sciences
Engineering > Electrical Engineering / Energy Technology
Keywords: Artificial Intelligence • Artificial Neural Network • Artificial Neural Networks • Communication • Communication Systems • Computational Intelligence • Engineering • fuzzy • Intelligence • learning • Manufacturing • Network • neural network • Neural networks • Signal Processing • Soft Computing • Support Vector Machine • Uncertain
ISBN-10 0-387-37452-3 / 0387374523
ISBN-13 978-0-387-37452-9 / 9780387374529
PDF (watermarked)
Size: 3.9 MB

DRM: digital watermark
This eBook contains a digital watermark and is therefore personalized to you. If the eBook is improperly passed on to third parties, it can be traced back to its source.

File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly suitable for technical books with columns, tables and figures. A PDF can be displayed on almost any device, but is only of limited use on small displays (smartphone, eReader).

System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You will need a PDF viewer, e.g. Adobe Reader or Adobe Digital Editions.
eReader: This eBook can be read with (almost) all eBook readers. However, it is not compatible with the Amazon Kindle.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You will need a PDF viewer, e.g. the free Adobe Digital Editions app.

Additional feature: online reading
In addition to downloading it, you can also read this eBook online in your web browser.

Buying eBooks from abroad
For tax law reasons we can only sell eBooks within Germany and Switzerland. Regrettably, we cannot fulfil eBook orders from other countries.
