Artificial Neural Networks, 2 (eBook)

Proceedings of the 1992 International Conference on Artificial Neural Networks (ICANN-92) Brighton, United Kingdom, 4-7 September, 1992

I. Aleksander, J. Taylor (Editors)

eBook Download: PDF
2014 | 1st edition
878 pages
Elsevier Science (Publisher)
978-1-4832-9806-1 (ISBN)
325,00 incl. VAT
  • Download available immediately
This two-volume proceedings compilation is a selection of research papers presented at ICANN-92. The scope of the volumes is interdisciplinary, ranging from the minutiae of VLSI hardware, to new discoveries in neurobiology, through to the workings of the human mind. American and European research is well represented, including not only new thoughts from old masters but also a large number of first-time authors who are ensuring the continued development of the field.

Front Cover 1
Artificial neural 4
Copyright Page 5
Table of Contents 6
Part I: Neurobiology 1 18
Chapter 1. Temporal sequence storage 20
Abstract 20
1. INTRODUCTION 20
2. TEMPORAL NEURONS 21
3. PREDICTIVE SELF-LEARNING 22
4. TEMPORAL TOPOGRAPHIC MAPS 23
5. BIOLOGICAL REALISATIONS 24
6. REFERENCES 24
CHAPTER 2. GAMMA-BAND AND BETA-BAND CORTICAL OSCILLATIONS 26
ABSTRACT 26
INTRODUCTION 26
PROJECTION-MEDIATED OSCILLATIONS 26
A REVISION OF OUR EARLIER STUDY 27
THE POSSIBLE ROLE OF BETA-BAND AND GAMMA-BAND OSCILLATIONS 28
CONCLUSION 29
REFERENCES 29
Chapter 3. Synchronization and label-switching in networks of laterally 30
Abstract 30
1. INTRODUCTION 30
2. NETWORK MODEL AND MEASURE OF SYNCHRONIZATION 30
3. GLOBAL HOMOGENEOUS STIMULATION 31
4. SELECTIVE STIMULATION OF SUBPOPULATIONS 32
5. DISCUSSION 33
REFERENCES 33
Chapter 4. A model for the organisation of neocortical maps 34
Abstract 34
1 Introduction 34
2 The model 34
3 Conclusion 37
References 38
Chapter 5. The Neuronal Computation Time 40
Abstract 40
1. INTRODUCTION 40
2. RESULTS 41
3. DISCUSSION 42
4. CONCLUSION 43
5. REFERENCES 43
Chapter 6. Redundancy reduction of a Gabor representation: a possible computational role for feedback from primary visual cortex to 44
Abstract 44
1. PHYSIOLOGICAL BACKGROUND 44
2. ENCODING OF IMAGES WITH GABOR TRANSFORMS 44
4. CORTICO-THALAMIC IMPLEMENTATION 45
5. SIMULATIONS 46
6. IMPLICATIONS OF THE MODEL 47
Acknowledgements 47
7. REFERENCES 47
Chapter 7. Connection weights based on molecular mechanisms 48
Abstract 48
1. Introduction 48
2. Model description 49
3. Results 51
4. Acknowledgements 51
5. References 51
CHAPTER 8. 52
Abstract 52
1 Introduction 52
2 The model 52
3 Simulation results 53
Acknowledgment 55
References 55
CHAPTER 9. 56
Abstract 56
1 Introduction 56
2 The model 56
3 Simulation results 58
4 Discussion 59
Acknowledgment 59
References 59
Chapter 10. A Model of Adaptive Development of Complex Cortical Cells 60
Abstract 60
1. INTRODUCTION 60
2. A BASIC MODEL OF COMPLEX CELL ADAPTATION 60
3. A MORE DETAILED MODEL 61
4. ACKNOWLEDGEMENTS 62
REFERENCES 62
Chapter 11. Activity-Induced "Colour Blob" Formation 64
Abstract 64
1. INTRODUCTION 64
2. PRINCIPAL COMPONENT ANALYSIS 64
3. A NETWORK THAT DEVELOPS "COLOUR BLOBS" 65
4. RESULTS 66
5. ACKNOWLEDGEMENTS 66
REFERENCES 66
Part II: Neurobiology 2 68
Chapter 12. Computational analysis of the operation of a real neuronal network 70
Abstract 70
1. HIPPOCAMPAL FUNCTION 70
2. THE HIPPOCAMPAL SYSTEM 71
3. MEMORY CAPACITY OF THE CA3 NETWORK 73
4. THE ROLES OF THE INPUT SYSTEMS 75
REFERENCES 77
Chapter 13. Architectural Consequences of Mapping 3D Space Representations 78
Abstract 78
1. INTRODUCTION 78
2. SPACE REPRESENTATION IN A MOVING FRAME 78
3. BIOLOGICAL CONSTRAINTS 79
4. EMBEDDING A 3D MECHANISM INTO A 2D ARCHITECTURE 79
5. SIMILARITIES WITH CEREBELLAR STRUCTURES? 81
6. CONCLUSION 81
7. REFERENCES 81
Chapter 14. A Neural Network for Fixation Point Selection based on Spatial Knowledge 82
Abstract 82
1 Introduction 82
2 Model 82
3 Computer simulation 84
4 Discussion 85
References 85
Chapter 15. Phase Transitions, Hysteresis and Overshoot 86
Abstract 86
1 Introduction and Summary 86
2 The Model 87
3 Results 87
4 Conclusions and Discussion 89
5 References 89
Chapter 16. A Distributed Model of the Representational States in Classical Conditioning 90
Abstract 90
1. Introduction 90
2. Classical Conditioning 90
3. Classical Conditioning as Unsupervised Sequence Learning 91
4. Simulations 93
5. Conclusion 94
References 94
Chapter 17. A Neural Model of Cortical Cells Characterized by Gabor-like 96
Abstract 96
1. INTRODUCTION 96
2. MODELING INTRACORTICAL PROPERTIES 96
3. APPLICATION TO TEXTURE SEGMENTATION 98
4. REFERENCES 99
CHAPTER 18. STOCHASTIC AND OSCILLATORY BURST ACTIVITIES 100
Abstract 100
Introduction 100
Simulations and Results 101
References 103
Chapter 19. Reinforcement Learning: On Being Wise During the Event 104
Abstract 104
1. Introduction 104
2. Implementing Backchaining in a Neural Network 105
3. Simulations in a 2D Environment 106
4. Conclusion 107
References 107
Chapter 20. A Simulation of the Gated Thalamo-Cortical 108
Abstract 108
1. INTRODUCTION 108
2. THE MODEL 108
3. RESULTS 109
REFERENCES 109
Chapter 21. The Biomagnetic Inverse Problem 112
Abstract 112
1. INTRODUCTION 112
2. EXPECTATION VALUES FOR THE CURRENT SOURCES 113
3. RECONSTRUCTION OF POINT SOURCES 114
4. REFERENCES 115
Chapter 22. Limitations of Logical Reasoning in Neural Networks 116
Abstract 116
Logic operations in neural networks 116
Biological 119
Conclusion 119
References 119
Part III: Algorithms 1 120
Chapter 23. G-Nets and Learning Recurrent Random Networks 122
Abstract 122
1. INTRODUCTION 122
2. A LEARNING ALGORITHM 124
3. REFERENCES 125
Chapter 24. On a Class of Efficient Learning Algorithms for Multi-Layered Neural Networks 126
Abstract 126
1. BACKPROPAGATING THE DESIRED NET-OUTPUT 126
2. A NEW CLASS OF SUBNET ALGORITHMS 128
3. A TEST-EXAMPLES 129
4. REFERENCES 130
Chapter 25. A hybrid genetic algorithm for training neural networks 132
Abstract 132
1. INTRODUCTION 132
2. THE HYBRID ALGORITHM 133
3. SIMULATION RESULTS 134
4. CONCLUSIONS 134
5. REFERENCES 135
Chapter 26. Matching the topology of a neural net to a particular problem: Preliminary results using correlation analysis as a pruning tool 136
Abstract 136
1. Introduction 136
2. Proposed Methodology 136
3. Neural networks - learning 137
4. Results using correlation analysis 137
5. Conclusions 137
6. References 138
CHAPTER 27. PERFORMANCE COMPARISON OF LEARNING ALGORITHMS 140
Abstract 140
1. LEARNING IN HOPFIELD NETWORKS 140
2. COMPARISON FOR THE FULLY CONNECTED NETWORK 141
3. LEARNING ON LOCALLY CONNECTED ARCHITECTURES 142
4. REFERENCES 143
Chapter 28. A Study of Maximum Matching on Boltzmann Machines 144
Abstract 144
1 Introduction 144
2 The Boltzmann Machine Model 145
3 The Main Results 146
4 Conclusion 147
References 147
Chapter 29. Generation of Inhibitory Connections to Minimize 148
Abstract 148
1 Introduction 148
2 Learning Method with Complexity Term 149
3 Internal and External Entropy 150
4 Results 151
References 151
Chapter 30. The extended quickprop 152
Abstract 152
References 155
Chapter 31. Evolutionary construction algorithms for topology 158
Abstract 158
1 INTRODUCTION 158
2 A COEVOLVING NETWORK WITH LOCAL COMPETITION 158
3 T-NETWORKS 160
4 CONCLUSIONS AND OUTLOOK 161
References 161
Chapter 32. SCAWI: an algorithm for weight initialization of a 162
Abstract 162
1 Description of the method 162
2 Benchmarks Results 163
References 165
Chapter 33. An Approximation-Based Model of Associative Memory 166
Abstract 166
1. INTRODUCTION 166
2. REGULARIZATION THEORY 166
3. REGULARIZATION AND ASSOCIATIVE MEMORY 167
4. DISCUSSION 168
5. REFERENCES 169
Chapter 34. Pruning Neural Nets by Genetic Algorithm 170
Abstract 170
1. Introduction 170
2. Experiments 171
3. Conclusions 172
Acknowledgements 172
References 173
Chapter 35. 174
Abstract 174
1. Introduction 174
2. Presentation of the 175
3. 176
4. Illustration of performance 176
5. Conclusion and further 177
References 177
Chapter 36. Entropy and Generalization in Feedforward Nets 178
Abstract 178
1. Layers as Channels 178
2. Two Forms of Generalisation 180
References 181
Chapter 37. Improving Convergence of Back-Propagation by 182
Abstract 182
1 Introduction 182
2 Eliminating Flat-Spots in the Output Layer 183
3 Implementation Issues 184
4 Simulation Results 184
5 Discussion 186
6 Conclusion 187
Acknowledgement 188
References 188
Chapter 38. Fast Convergence of Neural Networks 190
Abstract 190
1. INTRODUCTION 190
2. DESCRIPTION OF THE PROPOSED ALGORITHM 191
3. EXPERIMENTAL RESULTS 192
4. REFERENCES 192
Part IV: Algorithms 2 194
Chapter 39. The Munificence of High Dimensionality 196
Abstract 196
1 Introduction 196
2 The Geometry of Quasiorthogonal Sets 197
3 The Geometry of Feedforward Neural Network 200
References 209
Chapter 40. A Tripartite Framework for Artificial Neural Networks 210
Abstract 210
1. DESCRIBING AND CLASSIFYING NEURAL NETWORKS 210
2. A FRAMEWORK FOR ARTIFICIAL NEURAL NETWORKS (FANN) 211
3. REFERENCES 213
Acknowledgment 213
Chapter 41. A Backpropagation Algorithm for Neural Networks Using Random Pulse 214
Abstract 214
1. Introduction 214
2. Derivation of a back-propagation algorithm 215
3. Simulation Results 216
4. Conclusions 217
5. References 217
Chapter 42. A collection of constraint design rules for neural optimization problems 218
Abstract 218
1. Introduction 218
2. Theory 218
3. Collection of design rules 219
4. Constraints converted into more than one k-out-of-n-constraint 220
5. Summary 221
6. References 221
Chapter 43. Optimization of the Rectilinear Steiner Tree 222
1. The Steiner Tree Problem 222
2. The Neural Net Model Used 222
3. Results 223
4. Summary 225
5. References 225
Chapter 44. A new learning scheme for dynamic self-adaptation 226
Abstract 226
1 Introduction 226
2 Procedure 227
3 Convergence proof 227
4 Performance 227
5 Conclusion 228
Acknowledgements 229
References 229
Chapter 45. Growing Cell Structures – 230
Abstract 230
1 Introduction 230
2 Network Architecture 231
3 Network Dynamics 231
4 Computational Simplification 234
5 Conclusion 234
Literature 235
Chapter 46. Distributed generation of synchronous, asynchronous, and 236
Abstract 236
1. INTRODUCTION 236
2. A SUMMARY OF THE SER ALGORITHM AND ITS PROPERTIES 236
3. THE SELF-ORGANIZING OSCILLATORY NETWORK (SOON) MODEL 237
4. ASYNCHRONOUS AND PARALLEL UPDATING 238
5. SYNCHRONOUS UPDATING 238
6. PAPS UPDATING 239
7. CONCLUSIONS 240
ACKNOWLEDGEMENTS 240
REFERENCES 240
Chapter 47. Direct Approaches to Improving the Robustness of Multilayer Neural 242
Abstract 242
1 Introduction 242
2 Definitions of robustness 242
3 Approach 1: Modification of the error function 243
4 Approach 2: Pruning and duplication 243
5 Discussion 245
6 Conclusion 245
7 References 245
Chapter 48. A Study on Generalization Properties of Artificial Neural Network Using Fahlman and Lebiere's Learning Algorithm 246
Abstract 246
1. INTRODUCTION 246
2. DESCRIPTION OF THE ALGORITHM 246
3. SIMULATION RESULTS AND DISCUSSIONS 247
4. CONCLUSIONS 249
5. REFERENCES 249
Chapter 49. Domain Independent Testing and Performance Comparisons for Neural Networks 250
Abstract 250
1. INTRODUCTION 250
2. THE DATA GENERATING PROCESSES 251
3. PARAMETER SETTING FOR THE EXAMPLES 252
4. SIGNIFICANCE OF RESULTS 253
5. DOMAIN DEPENDENCE 254
6. REFERENCES 254
Chapter 50. The Optimal Elastic Net: Finding Solutions to the 256
Abstract 256
1 Introduction 256
2 The Optimal Elastic Net Method 258
3 Results 258
Conclusion 259
References 259
Chapter 51. A Bayesian Network for Temporal Segmentation 260
Abstract 260
1. INTRODUCTION 260
2. METHODS 260
3. EXAMPLE 262
4. RESULTS 262
References 263
Chapter 53. Adaptive constrained optimisation for improving the topological maps 264
Abstract 264
1. INTRODUCTION 264
2. ALGORITHMS 264
3. EXPERIMENTS 266
4. DISCUSSION 267
5. REFERENCES 267
Part V: Signal Processing 268
Chapter 54. A Recurrent Neural Network Model 270
Abstract 270
1. Partially recurrent networks 270
2. An illustration example 271
3. Concluding remarks 272
4. References 273
Chapter 55. The Minimum Entropy Neuron- a new building block for clustering transformations 274
Abstract 274
1. INTRODUCTION 274
2. THE MINIMUM ENTROPY PRINCIPLE 275
3. THE MINIMUM ENTROPY NEURON 275
4. EXAMPLE 276
5. DISCUSSION AND CONCLUSION 277
6. REFERENCES 277
Chapter 56. Nonlinear Hebbian Algorithms for Sinusoidal 278
Abstract 278
1 Introduction 278
2 Nonlinear algorithms 279
3 Experimental results 279
References 281
Chapter 57. Systolic implementation of the orthogonal-inverse updating algorithm 282
Abstract 282
1. INTRODUCTION 282
2. ORTHOGONAL-INVERSE UPDATING ALGORITHM [9] 283
3. SYSTOLIC IMPLEMENTATION 283
4. REFERENCES 285
Chapter 58. The Neural Impulse Response Filter 286
Abstract 286
1 Introduction 286
2 The NIR Filter 287
3 Magnetoencephalography 289
4 Conclusion 291
Appendix 291
References 291
Chapter 59. Stabilization Properties of Multilayer Feedforward 292
Abstract 292
1 Introduction 292
2 The IIR Multilayer Perceptron 292
3 Stability Properties of the IIR MLP 293
4 Stabilization Experiment 294
5 Conclusions 295
6 References 295
Chapter 60. A Measure of Nonlinearity in Time Series Using 296
Abstract 296
1. INTRODUCTION 296
2. DEFINITION OF THE MEASURE 296
3. STATISTICAL TEST OF NONLINEARITY 297
4. EXAMPLES OF MEASURING NONLINEARITY 297
5. CONCLUSIONS 299
6. REFERENCES 299
Part VI: Pattern Recognition 1 300
Chapter 61. A neural network approach to fault location in nonlinear dc circuits 302
Abstract 302
1. INTRODUCTION AND BACKGROUND 302
2. SIMULATION RESULTS 303
3. CONCLUSIONS 305
4. REFERENCES 305
Chapter 62. Optimization of the Distance-Based Neural Network 306
Abstract 306
1. INTRODUCTION 306
2. OPTIMIZATION OF THE DISTANCE-BASED NEURAL NETWORK 307
3. EXPERIMENTAL 309
4. APPLICATION TO FACE RECOGNITION 311
5. CONCLUSION 312
Acknowledgements 312
References 312
Chapter 63. Loss Function Based Neural Classifiers 314
Abstract 314
1 Bayesian formulation of classification problem 314
2 Loss function approximations 315
3 Relationship to other work 316
4 Computational experiments 317
References 317
Chapter 64. Image Compression for Neural Networks using Chebyshev Polynomials 318
Abstract 318
1. INTRODUCTION 318
2. CHEBYSHEV APPROXIMATION 318
3. EQUALISING VARIANCES 319
4. HIGHER DIMENSIONS 320
5. EVEN AND ODD COEFFICIENTS 321
6. WEIGHTED COMPRESSION 321
7. CONCLUSION 321
REFERENCES 321
Chapter 65. Training the Gradient Field of a Dynamic Hopfield (Recurrent) 322
1 Introduction 322
2 Algorithm for Design of Basins of Attraction 323
3 Simulation 324
4 Conclusion 324
References 325
Chapter 66. Classification of Epileptic EEG by Using Self-Organizing Maps 326
Abstract 326
1. Introduction 326
2. EEG Analysis Methods 327
3. Feature Classification 327
4. Results and Discussion 328
5. References 329
Chapter 67. Protein Structure Prediction and Neural Networks 330
1. Introduction 330
2. The state of the art 330
3. Current work 331
4. Limitations 331
5. Results 331
6. Conclusions 332
7. References 333
Chapter 68. A Neural Network for Meteor Trail Classification 334
Abstract 334
1. INTRODUCTION 334
2. PREVIOUS METHODS OF METEOR TRAIL CLASSIFICATION 335
3. DATA SOURCE 335
4. NEURAL NETWORK IMPLEMENTATION AND TRAINING 336
5. NETWORK VERIFICATION 336
6. CONCLUSION 337
7. REFERENCES 337
Chapter 69. High-speed Learning in a Supervised, Self-growing 338
Abstract 338
1. THE NET 338
2. RESULTS 341
3. REFERENCES 341
CHAPTER 70. PARALLEL PROCESSING SYSTEM WITH FIXED CONNECTIONS AS A NEW 342
Abstract 342
1. PREPROCESSING PART OF RECOGNITION SYSTEM 342
2. IDENTIFICATION AS A RECOGNITION PROCEDURE 344
3. PRELIMINARY RESULTS AND CONCLUSION 345
4. REFERENCES 345
CHAPTER 71. A MODIFIED HYPERMAP ARCHITECTURE FOR CLASSIFICATION OF BIOLOGICAL SIGNALS 346
Abstract 346
1. Introduction 346
2. The modified Hypermap Architecture 346
3. The learning algorithm 347
4. Results 348
Acknowledgement 348
References 349
Part VII: Applications and Pattern Recognition 2 350
Chapter 72. Texture image classification 352
Abstract 352
1. INTRODUCTION 352
2. STRUCTURE OF THE TEXTURE IMAGE CLASSIFIER 353
3. TRAINING SYSTEM FOR THE CLASSIFICATION NETWORK 354
4. EXPERIMENTAL RESULTS 354
5. Conclusion 355
References 355
Chapter 73. Interframe principal feature extraction 356
Abstract 356
1. INTRODUCTION 356
2. PRINCIPAL COMPONENT TRANSFORMATION 357
4. EXPERIMENTAL RESULTS 358
5. CONCLUSION 359
REFERENCES 359
Chapter 74. A method for analyzing decision regions in Learning Vector Quantization algorithms 360
Abstract 360
1. INTRODUCTION 360
2. TYPES OF CODEBOOK VECTORS 361
3. ANALYSIS OF MULTI-DIMENSIONAL DATA 362
4. CONCLUSIONS 363
5. REFERENCES 363
CHAPTER 75. TRACKING PARTICLES IN A HIGH ENERGY PARTICLE DETECTOR USING A 364
Abstract 364
1. INTRODUCTION 364
2. THE NETWORK 365
3. RESULT OF A SIMULATION 366
4. REFERENCES 366
Chapter 76. Hand-written Japanese Kanji character recognition 368
ABSTRACT 368
1. INTRODUCTION 368
2. A SELF GROWING NEURAL NETWORK MODEL "CombNET-II" 369
3. HAND-WRITTEN KANJI CHARACTER RECOGNITION 370
4. CONCLUSIONS 370
ACKNOWLEDGMENTS 371
REFERENCES 371
Chapter 77. Development of MLP/LVQ hybrid networks for classification of 372
Abstract 372
1. INTRODUCTION 372
2. METHODOLOGY 373
3. RESULTS 374
4. CONCLUSIONS 375
5. REFERENCES 375
Chapter 78. A Deformable Templates Approach for Track 376
Abstract 376
1 Introduction 376
2 The Deformable Templates Method 376
3 Track Finding in High Energy Physics 378
4 Improved Gradient Descent 379
5 Summary 379
References 379
Chapter 79. Pattern Recognition with Artificial Neural Networks 380
Abstract 380
1 Introduction 380
2 The Problem 380
3 Networks 381
4 Theoretical Considerations 381
5 Simulations 382
6 Results 382
7 Conclusions and Discussion 384
References 384
Chapter 80. Reconstruction of Tokamak Density Profiles using Feedforward Networks 386
Abstract 386
1 Introduction 386
2 Neural Network Approach 387
3 Results and Discussion 389
References 389
CHAPTER 81. PATTERN RECOGNITION WITH THE RANDOM NEURAL NETWORK 390
Abstract 390
1 Introduction 390
2 The Random Neural Network as an auto-associative memory 391
3 The results 392
4 References 393
Chapter 82. Designing Modular Network Architectures Using a Genetic Algorithm 394
Abstract 394
1. INTRODUCTION 394
2. NETWORK DESIGN PRINCIPLES 395
3. SIMULATION AND RESULTS 396
4. REFERENCES 397
Chapter 83. On Clustering Properties of Hierarchical Self-Organizing Maps 398
Abstract 398
1 Introduction 398
2 Multilayer self-organizing map 398
3 Experimental Results 399
References 401
Chapter 84. SOC: A Self-Organizing Classifier 402
Abstract 402
1. INTRODUCTION 402
2. CLASSIFICATION AND SELF-ORGANIZATION 403
3. REFERENCES 405
Chapter 85. Cumulant-Based Neural Network Classifiers 406
Abstract 406
1. INTRODUCTION 406
2. A NEW INVARIANT REPRESENTATION 407
3. STRUCTURED NEURAL NETWORKS FOR IMAGE CLASSIFICATION 408
4. SIMULATION RESULTS 408
5. REFERENCES 409
Chapter 86. Model based approach for generating and structuring a learning database for real-scale 3D identification tasks 410
Abstract 410
1. The real scale problem 410
2. Model-based approach to generate a learning database 410
3. Formatting the database 411
4. Experimental results 412
5. Conclusion 413
6. References 413
Part VIII: Software 414
Chapter 87. Neural Network Programming Environments 416
Abstract 416
1. Introduction 416
2. Application-oriented 418
3. Algorithm-oriented 419
4. General Purpose Programming Systems 420
5. Summary 423
6. References 423
Chapter 88. A Structured Design, Development and Integration Methodology 424
Abstract 424
1 INTRODUCTION 424
2 ASSESSMENT PHASE 425
3 SPECIFICATION PHASE: THE APPLICATION DOMAIN 426
4 DESIGN PHASE: ENGINEERING THE SYSTEM 427
5 IMPLEMENTATION PHASE: CONSTRUCTING THE SYSTEM 428
6 EVALUATION PHASE: TESTING THE PERFORMANCE OF THE SYSTEM 429
7 DELIVERY PHASE: INSTALLING THE SYSTEM 429
8 CONCLUSION 430
9 REFERENCES 430
Chapter 89. A new artificial neural network classifier 432
Abstract 432
1 INTRODUCTION 432
2 THE MLNSS NETWORK 433
3 THE MLNSup NETWORK 434
4 RESULTS 435
5 CONCLUSIONS 435
6 ACKNOWLEDGEMENTS 435
7 REFERENCES 435
Chapter 90. Software Package for Multilayer Perceptron 436
1 Abstract 436
2 Theoretical Background 436
3 Computer simulation results 438
4 Conclusions 438
References 439
Chapter 91. Efficient Simulation of Massive Neural Networks on Machines with Limited Memory 440
Abstract 440
1. INTRODUCTION 440
2. ACCESS INTO MULTIDIMENSIONAL LAYERS 441
3. A LOOKUP ALGORITHM FOR REGENERATING CONNECTIONS 441
4. PERFORMANCE 442
5. CONCLUSION 443
ACKNOWLEDGEMENTS 443
References 443
Chapter 92. SESAME — A software environment for combining multiple neural 444
Abstract 444
Introduction 444
The Concept 444
Class hierarchy 445
SESAME's Network Description Language and Experiment Control 445
Examples: Backpropagation, Feature Maps, and others 445
Conclusion 446
References 446
Chapter 93. 448
Abstract 448
1. Introduction 448
2. Design System Overview 448
3. An Example 449
4. Applications of the Design System 451
References 451
Chapter 94. Using a Library of Efficient Data Structures and Algorithms 452
Abstract 452
1 Introduction 452
2 Basic Features of LEDA 453
3 LEDA and Neural Networks 454
4 Realizing the Growing Cell Structures with LEDA 454
Chapter 95. Transforming neural network specifications to 456
Abstract 456
1. INTRODUCTION 456
2. TRANSFORMATIONS TO OBTAIN A SEQUENTIAL PROGRAM 457
3. TRANSFORMATIONS TO OBTAIN PARALLEL PROGRAMS 459
4. DISCUSSION AND FUTURE WORK 459
5. REFERENCES 460
6. EMPLOYED BASIC LAWS 460
7. ACKNOWLEDGEMENTS 460
Chapter 96. 462
Abstract 462
Parallel Computer Systems 462
N-Tuple Network Implementation 462
Analogue Neural Networks and Genetic Algorithms 464
Parallel Heterogeneous Network Implementation 465
Conclusions 465
Acknowledgements 465
References 465
Chapter 97. Implementations of Very Large Recurrent ANNs 466
Abstract 466
Introduction 466
Remarks on Scaling and Computational Complexity 467
Implementations 468
Discussion 469
References 469
Chapter 98. 470
Abstract 470
1. The Recognition of Acoustical Signals 470
2. The Simulator 471
3. Achieved Results and Conclusions 473
4. References 473
Part IX: Cognitive Systems 474
Chapter 99. 476
1 Introduction 476
2 A Connectionist Model for Grid Tangrams 477
References 479
Chapter 100. 480
Abstract 480
1. PURE ALEXIA 480
2. THE MODEL 481
3. THE SIMULATIONS 482
4. DISCUSSION 483
5. REFERENCES 483
Chapter 101. 484
Abstract 484
1. Introduction 484
2. Data representation 484
3. Cognition 485
4. Neural Network Model 485
5. Conclusion 488
References 488
Chapter 102. 490
1 Introduction 490
2 Mixed architectures with delayed links 491
3 Counting to 16 items 492
References 492
Chapter 103. 494
Abstract 494
1. INTRODUCTION 494
2. THE DOMAINS ACCOUNT - UNSUPERVISED LEARNING 495
3. EXTENSION TO SUPERVISED LEARNING 495
4. COMBINING SUPERVISED AND UNSUPERVISED LEARNING 496
5. CONCLUSION 497
6. REFERENCES 498
Chapter 104. 500
Abstract 500
1. INTRODUCTION 500
2. THE 501
3. SIMULATION RESULTS 502
4. DISCUSSION 503
5. REFERENCES 503
Chapter 105. 504
Abstract 504
1. Introduction 504
2. Disordered CA architecture 504
3. Basins of Attraction 505
4. Computing Pre-images 506
5. Brain-Like Computation 508
6. A Mind Model 508
7. Learning 508
References 509
Chapter 106. 510
Abstract 510
1 INTRODUCTION 510
2 SEQUENTIAL LEARNING 510
3 MEAN FIELD AUTOASSOCIATORS 511
4 CONCLUSIONS 511
5 REFERENCES 512
CHAPTER 107. 514
Abstract 514
1. INTRODUCTION 514
2. A MODEL OF DELETION AND COMPENSATION 514
3. SOLUTION OF THE MODEL 515
4. DISCUSSION 516
REFERENCES 517
Chapter 108. 518
Abstract 518
1. INTRODUCTION 518
2. CROSS-TARGETS 519
3. FROM A SIMPLE 520
4. TO MORE COMPLEX SUBJECTS 520
5. FURTHER DEVELOPMENTS 521
6. REFERENCES 521
Chapter 109. 522
Abstract 522
1. INTRODUCTION 522
2. THE GENERAL FRAMEWORK 523
3. THE LEARNING PARADIGMS 523
4. CONCLUSION 525
5. REFERENCES 525
Chapter 110. 526
Abstract 526
1. Background 526
2. Introduction 527
3. The Model 528
4. Results 528
5. Discussion and Conclusions 528
6. References 529
Chapter 111. 530
Abstract 530
1. INTRODUCTION 530
2. A MODEL OF LANGUAGE ACQUISITION 530
3. SEMANTIC SUBSYSTEM 531
4. CONCLUSIONS 533
5. FURTHER WORK 534
6. ACKNOWLEDGEMENTS 534
7. REFERENCES 534
Chapter 112. 536
Abstract 536
1. INTRODUCTION 536
2. THE ARCHITECTURE 536
3. OPERATION OF THE MODEL 537
4. DISCUSSION AND CONCLUSIONS 538
5. APPENDIX A - Representing continuous values 539
6. APPENDIX B - Experimental Parameters 539
7. REFERENCES 539
Chapter 113. A Connectionist Approach to Anaphora Resolution 540
Abstract 540
1. RESEARCH AIM 540
2. OUTLINING THE SOLUTION 542
References 543
Chapter 114. 544
Abstract 544
1. INTRODUCTION 544
2. NETWORK SIMULATIONS 546
3. BENCHMARKING NETWORK PERFORMANCE 547
4. CONCLUSION 547
5. REFERENCES 547
Chapter 115. 548
Abstract 548
Introduction 548
Architecture 548
Simulations 549
Results 549
Analysis and discussion 550
Acknowledgements 551
References 551
Chapter 116. 552
Abstract 552
1 Introduction 552
2 Adaptive Junction 552
3 Simulation 553
4 Discussion 553
5 Conclusion 555
References 555
Chapter 117. 556
Abstract 556
1. INTRODUCTION 556
2. DESCRIPTION AND REQUIREMENTS OF AN APPROPRIATE FRAME 556
3. COMPUTATIONAL MODEL 557
4. RESULTS AND DISCUSSION 558
Acknowledgements 558
REFERENCES 559
Chapter 118. A Neural Network Architecture which uses Cellular Automata to get Context-Specific Associative 560
Abstract 560
1. Introduction 560
2. The used neural network model with Cellular Automata 561
3. Building up a context-specific associative memory 562
4. Simulation results 563
References 563
Chapter 119. 564
Abstract 564
1 INTRODUCTION 564
2 MODEL DESCRIPTION 565
3 Network Equations 567
4 Simulation Results 568
Acknowledgements 568
References 568
Chapter 120. 570
Abstract 570
1. INTRODUCTION 570
2. TWO THEORETICAL FRAMEWORKS 570
3. DISTRIBUTED REPRESENTATION 571
4. MINIMAL NETWORKS 572
5. PRACTICAL IMPLICATIONS 573
6. NOTES 573
7. REFERENCES 573
Part XI: Hardware 574
Chapter 121. To simulate or not to 576
Abstract 576
1. INTRODUCTION 576
3. Implications for hardware systems 579
4. Comparison of performance of simulations 579
5. Character recognition system 580
6. Conclusions 581
7. Acknowledgements 582
8. References 582
Chapter 122. 584
Abstract 584
1. THE PROBLEM 584
2. THE NETWORK 585
3. THE NEURON 586
4. REFERENCES 587
Chapter 123. 588
Abstract 588
1. INTRODUCTION 588
2. EXPLOITING THE CONNECTION PARALLELISM 589
3. THE ON-TREE SOLUTION: EXPERIMENTAL FINDINGS 590
4. CONCLUDING REMARKS 592
REFERENCES 592
Chapter 124. 594
Abstract 594
1. INTRODUCTION 594
2. HOST TIMING CONSTRAINTS 595
3. A SIMPLIFIED INSTRUCTION SET 596
4. REFERENCES 597
Chapter 125. 598
Abstract 598
1 Introduction 598
2 The Architecture of NAND-Nets 598
3 NAND-Nets and Learning 599
4 Discussion of NAND-Nets 600
5 Conclusions 601
References 601
Chapter 126. Adaptable VLSI Neural Network of Tens of 602
ABSTRACT 602
1. INTRODUCTION 602
2. ANALOGUE-DIGITAL HYBRID VLSI NEURAL 603
3. 605
4. REFERENCES 605
Chapter 127. 606
Abstract 606
1. PCA Neural Networks 606
2. The Weighted Subspace Network 606
3. Implementation with a Digital Signal Processor 607
4. Special-Purpose Bit-Serial Hardware with Parallel Extension Capability 607
5. References 609
CHAPTER 128. VLSI ARCHITECTURE OF THE SELF-ORGANIZING NEURAL NETWORK USING SYNCHRONOUS 610
Abstract 610
1. INTRODUCTION 610
2. ARCHITECTURE 610
3. CONCLUSIONS 613
REFERENCES 613
CHAPTER 129. A PROCESSOR RING FOR THE IMPLEMENTATION OF NEURAL 614
Abstract 614
1. Introduction 614
2. Description of the Processor Ring 615
3. Performance and Discussion 616
4. References 617
Chapter 130. 618
1. INTRODUCTION 618
2. THE DYNAMIC RING 618
3. NEURAL NETWORK EMULATION 619
4. THE UTAK1 PROCESSOR 621
5. REFERENCES 621
CHAPTER 131. PARALLEL HARDWARE IMPLEMENTATION OF THE 622
Abstract 622
1. BACKGROUND AND MOTIVATION 622
2. PARALLEL IMPLEMENTATION 622
3. DISCUSSION 624
4. REFERENCES 625
Chapter 132. 626
Abstract 626
1. INTRODUCTION 626
2. THEORETICAL ASPECTS 626
3. OPTIMAL IMPLEMENTATION 627
4. APPROXIMATE MULTIPLICATION 628
5. CONCLUSIONS 629
6. REFERENCES 629
Chapter 133. 630
Abstract 630
1 Introduction 630
2 The implementation concepts of the neural coprocessor 631
3 The memory arithmetic board (MAB) 632
4 System integration of the neural coprocessor 632
5 Results 632
6 Summary and Conclusions 633
References 633
Chapter 134. RENNS - a REconfigurable Neural Network 634
Abstract 634
Introduction and motivation 634
Description of RENNS 634
Levels of reconfiguration 635
Possible configurations 636
Neural network application development on RENNS 636
Status and future work 638
References 638
Chapter 135. 640
Abstract 640
1. BUILDING BLOCK 640
2. NAC APPLICATIONS 642
3. CONCLUSION 643
4. REFERENCES 643
Chapter 136. 644
Abstract 644
1 Introduction 644
2 Neural Autoassociation for Image Compression 644
3 The Associative String Processor 645
4 Parallel Implementation 646
5 Conclusions 647
References 647
Chapter 137. 648
Abstract 648
1. ALGORITHM 648
2. FUNCTIONAL CELLS 648
3. RANDOM GENERATOR 650
4. EXPERIMENTAL RESULTS 650
5. CONCLUSION 650
6. REFERENCES 651
Part XII: Commercial / Industrial Hardware Systems 652
Chapter 138. 654
Abstract 654
1. INTRODUCTION 654
2. THE TEMPORALITY OF THE pRAM TNLI MODEL 654
3. DIGITAL HARDWARE IMPLEMENTATION 655
4. ASPECTS OF TRAINING AND LEARNING 657
6. CONCLUSION 657
7. REFERENCES 657
Chapter 139. Optical high order feedback neural network 658
Abstract 658
1. INTRODUCTION 658
2. THE HIGH ORDER FEEDBACK NEURAL NET (HOFNET) 659
3. OPTICAL HOFNET SYSTEM 659
4. SIMULATIONS AND CONCLUSIONS 661
5. REFERENCES 661
Chapter 140. 662
Abstract 662
1. INTRODUCTION 662
2. THE FRACTAL ARCHITECTURE 663
3. EFFICIENCY OF FRACTAL ARCHITECTURES 663
4. IMPLEMENTATION PRINCIPLES 664
5. CONCLUDING REMARKS 665
6. REFERENCES 665
Chapter 141. 666
Abstract 666
1 Introduction 666
2 Background 666
3 PDC within a state machine 667
4 Generalizing PDC 668
5 Conclusions 669
6 References 669
Chapter 142. 670
ABSTRACT 670
1. INTRODUCTION 670
2. THE ELEMENTARY NEURAL CELL 671
3. ARRAY ARCHITECTURE 671
References 672
Chapter 143. 674
Abstract 674
1. INTRODUCTION 674
2. RADIAL BASE NETWORK 674
3. PARAMETER ESTIMATION 675
4. DYNAMIC FAULT DETECTION 676
5. SIMULATION EXAMPLE 676
6. CONCLUSIONS 677
7. REFERENCES 677
Chapter 144. 678
Abstract 678
1. INTRODUCTION 678
2. THE CONSTRAINT SATISFACTION NETWORK 678
3. SATELLITE COVERAGE PLANS OPTIMIZATION 679
4. CONCLUSIONS 680
5. REFERENCES 680
Chapter 145. 682
Abstract 682
1. INTRODUCTION 682
2. CONNECTIONIST MODELS FOR VISION 682
3. AUTOMATIC DEFECT CLASSIFICATION FOR QUALITY CONTROL 683
4. DESCRIPTION OF THE MODEL 684
5. RESULTS 685
6. DISCUSSION 686
8. REFERENCES 686
CHAPTER 146. 688
Abstract 688
1. Texture Segmentation with Neural Networks 688
2. Experimental Setup and Results 689
Conclusion 691
References 691
Chapter 147. A Comparison Between Chemotaxis and Back-Propagation 692
Abstract 692
1. INTRODUCTION 692
2. BACK-PROPAGATION NETWORKS AND RECIPE PREDICTION 693
3. CHEMOTAXIS: A SIMPLER LEARNING ALGORITHM 694
4. CONCLUSION 695
5. REFERENCES 695
Chapter 148. 696
Abstract 696
1. INTRODUCTION 696
2. THE SEWAGE PUMP STATION PLANT SYSTEM 696
3. PUMP OPERATION BY MEANS OF SFK 697
4. WATER LEVEL PATTERNS GENERATED BY MEANS OF DFK 698
5. CONCLUSION 699
REFERENCES 699
CHAPTER 149. 700
Abstract 700
1. Introduction 700
2. Large scale application 700
3. Hybridization of statistical and ANN approaches 702
References 703
CHAPTER 
704 
Abstract 704
1. Introduction 704
2. An ordering theorem for synaptic changes of finite size. 706
3. Discussion 707
References 708
Chapter 
710 
Abstract 710
1. INTRODUCTION 710
2. EXPERIMENTS 711
3. CONCLUSIONS 712
References 712
Chapter 
714 
Abstract 714
1. INTRODUCTION AND EVALUATION OF THE EXISTING SAW CONTROLLER 714
2. DESIGN OF THE HARDWARE PREPROCESSOR 715
3. EXPERIMENTS WITH THE PROPOSED ARCHITECTURE 716
4. DISCUSSION 716
5. CONCLUSIONS. 717
6. REFERENCES 717
Chapter 
718 
Abstract 718
1. INTRODUCTION 718
2. DATA COLLECTION METHOD 718
3. NEURAL NETWORK ARCHITECTURE 719
4. TRAINING AND OPTIMISATION 721
5. RESULTS 721
6. CONCLUSIONS 721
7. REFERENCES 722
Part XIII: Algorithms and Applications 1 724
Chapter 154. 
726 
Abstract 726
1 Introduction 726
2 Provably Reliable and Efficient Neural Classifiers 728
3 Neural Improvement of Approximations of Real-valued 731
4 Dynamical Selection of Topologies for Reliable Networks 734
5 Conclusion 736
6 Acknowledgements 736
References 736
CHAPTER 155. AUTOMATED RADAR BEHAVIOUR ANALYSIS USING HIERARCHICAL 738
Abstract 738
1. INTRODUCTION 738
2 PROJECT BACKGROUND 738
3. BACKGROUND: DATA FUSION 739
4. ASSESSMENT: ANN FOR DATA FUSION 739
5 SPECIFICATION: TRACKER'S FUNCTIONALITY 740
6. DESIGN AND IMPLEMENTATION: THE TRACKER DEMONSTRATOR 740
7 EVALUATION 742
8 CONCLUSION 742
REFERENCES 743
Chapter 156. 
744 
Abstract 744
1. Introduction 744
2. Neural Network Approach 744
3. Simulation Results and Conclusion 746
4. References 747
Chapter 
748 
Abstract 748
1. INTRODUCTION 748
2. THE BPCLS ALGORITHM 749
3. THE EFFICIENCY OF BPCLS 749
4. BACK-PROPAGATION MODEL IN BPCLS 750
5. CONCLUSION 751
6. REFERENCES 751
Chapter 
752 
Abstract 752
1. One-dimensional Case 752
References 756
Chapter 
758 
Abstract 758
1. INTRODUCTION 758
2. RECORDING AND LOGARITHMIC TRANSFORMATION 759
3. CODING AND LINKING PROCESS 759
4. CLASSIFICATION 759
5. RESULTS 760
6. CONCLUSION 761
7. REFERENCES 761
Chapter 
762 
Abstract 762
KOHONEN MAPS AND DATA ANALYSIS 762
A SELECTION 763
CONCLUSION 765
REFERENCES 765
Chapter 
766 
1. Introduction 766
2. Subclass Selection. 767
3. Results. 769
4. Conclusions. 769
5. References 769
Chapter 162. Optical Character Recognition and Cooperating Neural Networks 770
Abstract 770
1. Introduction 770
2. Description of the OCR process 770
3. Design of the data base 771
4. Description of the networks and experiments 771
5. Results 772
6. Conclusion 773
7. Bibliography 773
Chapter 
774 
Abstract 774
1. Introduction 774
2. Image feature extraction using MLP 774
3. Feature classification 775
4. Experimental results 776
5. Conclusion 777
Bibliography 777
Chapter 
778 
Abstract 778
1. Introduction 778
2. Multiresolution Analysis 778
3. An identity recognition network for segmentation 779
4. A two class network 779
5. Conclusion 781
Bibliography 781
Chapter 
782 
Abstract 782
1. INTRODUCTION 782
2. Physical methods 783
3. The use of neural networks 783
4. Results 784
Acknowledgements 785
References 785
Chapter 166. 
786 
Abstract 786
1. THE NETWORKS 786
2. EXPERIMENTAL RESULTS 788
3. DISCUSSION 788
ACKNOWLEDGEMENTS 789
REFERENCES 789
Chapter 167. Integration of a connectionist model in information retrieval 790
Abstract 790
1. INTRODUCTION 790
2. AN ASSOCIATIVE MODEL FOR KNOWLEDGE REPRESENTATION 790
3. ASSOCIATIVE INFORMATION RESEARCH 792
4. THE DIFFERENT LEARNING METHODS 792
5. CONCLUSION 793
6. REFERENCES 793
Chapter 
794 
Abstract 794
1. INTRODUCTION 794
2. DATA PRE-PROCESSING 795
3. TRAINING 796
4. RESULTS 796
5. CONCLUSION 797
6. REFERENCES 797
Part XIV: Algorithms and Applications 2 798
Chapter 
800 
Abstract 800
1 Introduction 800
2 Self-Organizing Feature Map 801
3 Applications 802
References 806
Chapter 170. Prediction of monthly transition of the composition stock price index using recurrent 808
Abstract 808
1. INTRODUCTION 808
2. FORMATION OF THE PROBLEM 808
3. THE NETWORKS 809
4. EXPERIMENTS 810
5. CONCLUSION 810
6. REFERENCES 811
Chapter 
812 
Abstract 812
1. THE PROBLEM - INTERMEDIATE PROTOTYPES 812
2. THE SOLUTION - EUCLIDNET 813
3. REFERENCES 815
CHAPTER 
816 
Abstract 816
1. INTRODUCTION 816
2. METHODS AND RESULTS 816
3. DISCUSSION 819
4. REFERENCES 819
CHAPTER 
820 
Abstract 820
1. INTRODUCTION 820
2. PATTERN RECOGNITION APPROACH 821
3. NEURAL NETWORK APPROACH 821
4. NEURAL NETWORK PATTERN CLASSIFICATION 821
5. CONCLUSIONS 822
6. REFERENCES 823
Chapter 
824 
Abstract 824
1 INTRODUCTION 824
2 REPRESENTATION OF THE RULES 824
3 GENERALIZING THE RULES 825
4 CHOOSING WHAT RULE TO USE 825
5 ACKNOWLEDGEMENTS 825
6 REFERENCES 825
Chapter 
826 
Abstract 826
1 Introduction 826
2 Simulations 826
3 Related Research 828
4 Connectionist versus Symbolic Pattern Matching 828
5 Conclusion 829
6 References 829
Chapter 176. Temporal Sparse Distributed Memory: Identifying Temporal Patterns via 830
Abstract 830
Introduction 830
Background and Notation 830
Notation 831
Storage and Pattern Identification (continuous addresses) 831
Discrete Addressing 832
Bibliography 833
Chapter 
834 
Abstract 834
Introduction 834
Method 835
Example Graphs 835
Results 836
Conclusions 837
References 837
Chapter 
838 
Abstract 838
1. INTRODUCTION 838
2. NETWORK ARCHITECTURE 839
3. IMPLEMENTATION AND PERFORMANCE 841
4. REFERENCES 841
Chapter 
842 
Abstract 842
1. INTRODUCTION 842
2. A NEW SEQUENTIAL NETWORK MODEL 842
3. IMPLEMENTATION AND PERFORMANCE 845
REFERENCES 845
Chapter 180. Quadratic load flow calculation in electric power systems 846
Abstract 846
1. INTRODUCTION 846
2. QUADRATIC LOAD FLOW EQUATION 846
2. QP-BASED LOAD FLOW CALCULATION 847
3. APPLICATION OF A HOPFIELD MODEL TO LOAD FLOW CALCULATION 848
4. CONCLUSION 849
5. REFERENCES 849
Chapter 
850 
Abstract 850
1. BASIC SYSTEM STRUCTURE 850
2. TOWARDS A HYBRID SYSTEM 850
3. VEHICLE-SANN 851
4. TRAINING FILES 852
5. EXPERIMENTS AND RESULTS 852
6. SUMMARY 853
7. References 853
Chapter 
854 
Abstract 854
1. INTRODUCTION 854
2. A NEW INTERPRETATION OF MFA 855
3. SCALE SPACE AND ANNEALING 858
4. RELATIONSHIP TO NEURAL NETWORKS 859
5. CONCLUSION: HOW TO DO IMAGE SMOOTHING 859
6. REFERENCES 861
Chapter 183. NES: a neural shell for diagnostic expert 864
Abstract 864
1 Motivations 864
2 NES 865
3 Limitations and extensions 869
4 An application 869
References 871
AUTHOR INDEX 872

Published (per publisher): 28.6.2014
Language: English
ISBN-10 1-4832-9806-X / 148329806X
ISBN-13 978-1-4832-9806-1 / 9781483298061