Neuromorphic Computing and Beyond (eBook)

Parallel, Approximation, Near Memory, and Quantum
eBook Download: PDF
2020 | 1st ed. 2020
XIV, 233 pages
Springer International Publishing (publisher)
978-3-030-37224-8 (ISBN)

Neuromorphic Computing and Beyond - Khaled Salah Mohamed
69.54 incl. VAT
  • Download available immediately

This book discusses and compares several new trends that can be used to overcome the limitations of Moore's law, including Neuromorphic, Approximate, Parallel, In-Memory, and Quantum Computing. The author shows how these paradigms enhance computing capability as developers face the practical and physical limits of scaling while the demand for computing power keeps increasing. The discussion includes a state-of-the-art overview and the essential details of each paradigm.

Khaled Salah Mohamed studied at the Department of Electronics and Communications, Faculty of Engineering, Ain Shams University from 1998 to 2003, where he received his B.Sc. degree in Electronics and Communications Engineering with distinction and honors. He received his Master's degree in Electronics from Cairo University, Egypt, in 2008, and his Ph.D. degree in 2012. Dr. Khaled Salah is currently a Technical Lead in the Emulation Division at Mentor Graphics, Egypt. He has published a large number of papers in top refereed journals and conferences. His research interests are in 3D integration, IP modeling, and SoC design.

Preface 6
Contents 8
Chapter 1: An Introduction: New Trends in Computing 14
1.1 Introduction 14
1.1.1 Power Wall 15
1.1.2 Frequency Wall 16
1.1.3 Memory Wall 16
1.2 Classical Computing 16
1.2.1 Classical Computing Generations 17
1.2.2 Types of Computers 18
1.3 Computers Architectures 19
1.3.1 Instruction Set Architecture (ISA) 19
1.3.2 Different Computer Architecture 21
1.3.2.1 Von-Neumann Architecture: General-Purpose Processors 21
1.3.2.2 Harvard Architecture 23
1.3.2.3 Modified Harvard Architecture 23
1.3.2.4 Superscalar Architecture: Parallel Architecture 23
1.3.2.5 VLIW Architecture: Parallel Architecture 24
1.4 New Trends in Computing 25
1.5 Conclusions 26
References 26
Chapter 2: Numerical Computing 27
2.1 Introduction 27
2.2 Numerical Analysis for Electronics 28
2.2.1 Why EDA 28
2.2.2 Applications of Numerical Analysis 30
2.2.3 Approximation Theory 31
2.3 Different Methods for Solving PDEs and ODEs 32
2.3.1 Iterative Methods for Solving PDEs and ODEs 34
2.3.1.1 Finite Difference Method (Discretization) 34
2.3.1.2 Finite Element Method (Discretization) 34
2.3.1.3 Legendre Polynomials 35
2.3.2 Hybrid Methods for Solving PDEs and ODEs 36
2.3.3 ML-Based Methods for Solving ODEs and PDEs 36
2.3.4 How to Choose a Method for Solving PDEs and ODEs 37
2.4 Different Methods for Solving SNLEs 38
2.4.1 Iterative Methods for Solving SNLEs 39
2.4.1.1 Newton Method and Newton–Raphson Method 39
2.4.1.2 Quasi-Newton Method aka Broyden’s Method 42
2.4.1.3 The Secant Method 45
2.4.1.4 The Muller Method 46
2.4.2 Hybrid Methods for Solving SNLEs 47
2.4.3 ML-Based Methods for Solving SNLEs 47
2.4.4 How to Choose a Method for Solving Nonlinear Equations 47
2.5 Different Methods for Solving SLEs 48
2.5.1 Direct Methods for Solving SLEs 49
2.5.1.1 Cramer’s Rule Method 49
2.5.1.2 Gaussian Elimination Method 50
2.5.1.3 Gauss–Jordan (GJ) Elimination Method 53
2.5.1.4 LU Decomposition Method 54
2.5.1.5 Cholesky Decomposition Method 55
2.5.2 Iterative Methods for Solving SLEs 55
2.5.2.1 Jacobi Method 56
2.5.2.2 Gauss–Seidel Method 57
2.5.2.3 Successive Over-Relaxation (SOR) Method 57
2.5.2.4 Conjugate Gradient Method 58
2.5.2.5 Bi-conjugate Gradient Method 59
2.5.2.6 Generalized Minimal Residual Method 60
2.5.3 Hybrid Methods for Solving SLEs 60
2.5.4 ML-Based Methods for Solving SLEs 61
2.5.5 How to Choose a Method for Solving Linear Equations 61
2.6 Common Hardware Architecture for Different Numerical Solver Methods 62
2.7 Software Implementation for Different Numerical Solver Methods 65
2.7.1 Cramer’s Rule: Python-Implementation 65
2.7.2 Newton–Raphson: C-Implementation 66
2.7.3 Gauss Elimination: Python-Implementation 67
2.7.4 Conjugate Gradient: MATLAB-Implementation 68
2.7.5 GMRES: MATLAB-Implementation 69
2.7.6 Cholesky: MATLAB-Implementation 70
2.8 Conclusions 71
References 71
Chapter 3: Parallel Computing: OpenMP, MPI, and CUDA 74
3.1 Introduction 74
3.1.1 Concepts 75
3.1.2 Category of Processors: Flynn’s Taxonomy/Classification (1966) 76
3.1.2.1 Von-Neumann Architecture (SISD) 76
3.1.2.2 SIMD 77
3.1.2.3 MISD 78
3.1.2.4 MIMD 79
3.1.3 Category of Processors: Soft/Hard/Firm 80
3.1.4 Memory: Shared-Memory vs. Distributed Memory 80
3.1.5 Interconnects: Between Processors and Memory 83
3.1.6 Parallel Computing: Pros and Cons 83
3.2 Parallel Computing: Programming 84
3.2.1 Typical Steps for Constructing a Parallel Algorithm 84
3.2.2 Levels of Parallelism 85
3.2.2.1 Processor: Architecture Point of View 85
3.2.2.2 Programmer Point of View 85
3.3 Open Specifications for Multiprocessing (OpenMP) for Shared Memory 86
3.4 Message-Passing Interface (MPI) for Distributed Memory 88
3.5 GPU 89
3.5.1 GPU Introduction 89
3.5.2 GPGPU 90
3.5.3 GPU Programming 91
3.5.3.1 CUDA 91
3.5.4 GPU Hardware 94
3.5.4.1 The Parallella Board 94
3.6 Parallel Computing: Overheads 94
3.7 Parallel Computing: Performance 95
3.8 New Trends in Parallel Computing 101
3.8.1 3D Processors 101
3.8.2 Network on Chip 102
3.8.3 FCUDA 103
3.9 Conclusions 103
References 103
Chapter 4: Deep Learning and Cognitive Computing: Pillars and Ladders 105
4.1 Introduction 105
4.1.1 Artificial Intelligence 105
4.1.2 Machine Learning 107
4.1.2.1 Supervised Machine Learning 109
4.1.2.2 Unsupervised Machine Learning 110
4.1.2.3 Reinforcement Machine Learning 111
4.1.3 Neural Network and Deep Learning 112
4.2 Deep Learning: Basics 114
4.2.1 DL: What? Deep vs. Shallow 114
4.2.2 DL: Why? Applications 116
4.2.3 DL: How? 116
4.2.4 DL: Frameworks and Tools 121
4.2.4.1 TensorFlow 122
4.2.4.2 Keras 123
4.2.4.3 PyTorch 123
4.2.4.4 OpenCV 124
4.2.4.5 Others 125
4.2.5 DL: Hardware 125
4.3 Deep Learning: Different Models 125
4.3.1 Feedforward Neural Network 125
4.3.1.1 Single-Layer Perceptron (SLP) 127
4.3.1.2 Multilayer Perceptron (MLP) 128
4.3.1.3 Radial Basis Function Neural Network 129
4.3.2 Recurrent Neural Network (RNNs) 129
4.3.2.1 LSTMs 130
4.3.2.2 GRUs 131
4.3.3 Convolutional Neural Network (CNNs): Feedforward 131
4.3.4 Generative Adversarial Network (GAN) 135
4.3.5 Auto Encoders Neural Network 136
4.3.6 Spiking Neural Network 138
4.3.7 Other Types of Neural Network 139
4.3.7.1 Hopfield Networks 139
4.3.7.2 Boltzmann Machine 141
4.3.7.3 Restricted Boltzmann Machine 141
4.3.7.4 Deep Belief Network 141
4.3.7.5 Associative NN 141
4.4 Challenges for Deep Learning 141
4.4.1 Overfitting 141
4.4.2 Underfitting 142
4.5 Advances in Neuromorphic Computing 142
4.5.1 Transfer Learning 142
4.5.2 Quantum Machine Learning 144
4.6 Applications of Deep Learning 144
4.6.1 Object Detection 144
4.6.2 Visual Tracking 148
4.6.3 Natural Language Processing 148
4.6.4 Digits Recognition 149
4.6.5 Emotions Recognition 149
4.6.6 Gesture Recognition 149
4.6.7 Machine Learning for Communications 150
4.7 Cognitive Computing: An Introduction 150
4.8 Conclusions 152
References 152
Chapter 5: Approximate Computing: Towards Ultra-Low-Power Systems Design 156
5.1 Introduction 156
5.2 Hardware-Level Approximation Techniques 158
5.2.1 Transistor-Level Approximations 158
5.2.2 Circuit-Level Approximations 159
5.2.3 Gate-Level Approximations 160
5.2.3.1 Approximate Multiplier Using Approximate Computing 160
5.2.3.2 Approximate Multiplier Using Stochastic/Probabilistic Computing 160
5.2.4 RTL-Level Approximations 161
5.2.4.1 Iterative Algorithms 161
5.2.5 Algorithm-Level Approximations 162
5.2.5.1 Iterative Algorithms 162
5.2.5.2 High-Level Synthesis (HLS) Approximations 162
5.2.6 Device-Level Approximations: Memristor-Based Approximate Matrix Multiplier 164
5.3 Software-Level Approximation Techniques 164
5.3.1 Loop Perforation 164
5.3.2 Precision Scaling 165
5.3.3 Synchronization Elision 165
5.4 Data-Level Approximation Techniques 165
5.4.1 STT-MRAM 165
5.4.2 Processing in Memory (PIM) 165
5.4.3 Lossy Compression 166
5.5 Evaluation: Case Studies 166
5.5.1 Image Processing as a Case Study 166
5.5.2 CORDIC Algorithm as a Case Study 166
5.5.3 HEVC Algorithm as a Case Study 169
5.5.4 Software-Based Fault Tolerance Approximation 171
5.6 Conclusions 171
References 172
Chapter 6: Near-Memory/In-Memory Computing: Pillars and Ladders 175
6.1 Introduction 175
6.2 Classical Computing: Processor-Centric Approach 176
6.3 Near-Memory Computing: Data-Centric Approach 178
6.3.1 HMC 179
6.3.2 WideIO 181
6.3.3 HBM 181
6.4 In-Memory Computing: Data-Centric Approach 181
6.4.1 Memristor-Based PIM 181
6.4.2 PCM-Based PIM 182
6.4.3 ReRAM-Based PIM 184
6.4.4 STT-RAM-Based PIM 185
6.4.5 FeRAM-Based PIM 185
6.4.6 NRAM-Based PIM 186
6.4.7 Comparison Between Different New Memories 187
6.5 Techniques to Enhance DRAM Memory Controllers 187
6.5.1 Techniques to Overcome the DRAM-Wall 189
6.5.1.1 Low-Power Techniques in DRAM Interfaces 189
6.5.1.2 High-Bandwidth and Low Latency Techniques in DRAM Interfaces 191
6.5.1.3 High-Capacity and Small Footprint Techniques in DRAM Interfaces 192
6.6 Conclusions 192
References 193
Chapter 7: Quantum Computing and DNA Computing: Beyond Conventional Approaches 195
7.1 Introduction: Beyond CMOS 195
7.2 Quantum Computing 195
7.2.1 Quantum Computing: History 197
7.2.2 Quantum Computing: What? 198
7.2.3 Quantum Computing: Why? 198
7.2.4 Quantum Computing: How? 198
7.3 Quantum Principles 200
7.3.1 Bits Versus Qbits 200
7.3.2 Quantum Uncertainty 201
7.3.3 Quantum Superposition 201
7.3.4 Quantum Entanglement (Nonlocality) 202
7.4 Quantum Challenges 202
7.5 DNA Computing: From Bits to Cells 202
7.5.1 What Is DNA? 202
7.5.2 Why DNA Computing? 202
7.5.3 How DNA Works? 203
7.5.4 Disadvantages of DNA Computing 204
7.5.5 Traveling Salesman Problem Using DNA-Computing 204
7.6 Conclusions 205
References 206
Chapter 8: Cloud, Fog, and Edge Computing 207
8.1 Cloud Computing 207
8.2 Fog/Edge Computing 210
8.3 Conclusions 213
References 216
Chapter 9: Reconfigurable and Heterogeneous Computing 217
9.1 Embedded Computing 217
9.1.1 Categories of Embedded Systems Are [2–5] 217
9.1.2 Embedded System Classifications 218
9.1.3 Components of Embedded Systems 218
9.1.4 Microprocessor vs. Microcontroller 219
9.1.5 Embedded Systems Programming 220
9.1.6 DSP 220
9.2 Real-Time Computing 222
9.3 Reconfigurable Computing 223
9.3.1 FPGA 224
9.3.2 High-Level Synthesis (C/C++ to RTL) 225
9.3.3 High-Level Synthesis (Python to HDL) 226
9.3.4 MATLAB to HDL 227
9.3.5 Java to VHDL 227
9.4 Heterogeneous Computing 228
9.4.1 Heterogeneity vs. Homogeneity 229
9.4.2 Pollack’s Rule 229
9.4.3 Static vs. Dynamic Partitioning 230
9.4.4 Heterogeneous Computing Programming 230
9.4.4.1 Heterogeneous Computing Programming: OpenCL 231
9.5 Conclusions 231
References 231
Chapter 10: Conclusions 233
Index 235

Publication date (per publisher) 25 January 2020
Additional info XIV, 233 p. 179 illus., 148 illus. in color.
Language English
Subject areas Mathematics / Computer Science
Engineering / Electrical Engineering / Power Engineering
Keywords Approximate computing • cognitive computing • In-memory computing • More than Moore • Quantum and DNA computing
ISBN-10 3-030-37224-3 / 3030372243
ISBN-13 978-3-030-37224-8 / 9783030372248
Format: PDF (watermarked)
Size: 11.1 MB

DRM: digital watermark
This eBook contains a digital watermark and is therefore personalized for you. If the eBook is improperly passed on to third parties, it can be traced back to its source.

File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly well suited to technical books with columns, tables, and figures. A PDF can be displayed on almost all devices but is only suitable to a limited extent for small displays (smartphone, eReader).

System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You will need a PDF viewer, e.g. Adobe Reader or Adobe Digital Editions.
eReader: This eBook can be read on (almost) all eBook readers. It is, however, not compatible with the Amazon Kindle.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You will need a PDF viewer, e.g. the free Adobe Digital Editions app.

Additional feature: online reading
In addition to downloading this eBook, you can also read it online in your web browser.

Buying eBooks from abroad
For tax law reasons we can only sell eBooks within Germany and Switzerland. Regrettably, we cannot fulfill eBook orders from other countries.
