
Space-Time Computing with Temporal Neural Networks

James E. Smith (Author)

Book | Softcover
215 pages
2017
Morgan & Claypool Publishers
978-1-62705-948-0 (ISBN)
108.45 incl. VAT
Understanding and implementing the brain's computational paradigm is the one true grand challenge facing computer researchers. Not only are the brain's computational capabilities far beyond those of conventional computers; its energy efficiency is also truly remarkable. This book, written from the perspective of a computer designer and targeted at computer researchers, is intended both to give background and to lay out a course of action for studying the brain's computational paradigm. It contains a mix of concepts and ideas drawn from computational neuroscience, combined with those of the author.

As background, relevant biological features are described in terms of their computational and communication properties. The brain's neocortex is constructed of massively interconnected neurons that compute and communicate via voltage spikes, and a strong argument can be made that precise spike timing is an essential element of the paradigm. Drawing on these biological features, a mathematics-based computational paradigm is constructed. Its key feature is spiking neurons that perform communication and processing in space-time, with emphasis on time: time is treated as a freely available resource for both communication and computation.
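To make "time as an information carrier" concrete, here is a minimal sketch, not taken from the book, of latency (time-to-first-spike) encoding, one common temporal code in which stronger inputs spike earlier, so a value is represented entirely by when its spike occurs. The function name and the t_max parameter are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration: latency ("time-to-first-spike") encoding.
# Larger input values produce earlier spikes, so information is carried
# by *when* a spike occurs rather than by a numeric activation level.

def latency_encode(values, t_max=8.0):
    """Map values in [0, 1] to spike times in [0, t_max].

    A value of 1.0 fires immediately (t = 0); a value of 0.0 fires
    last (t = t_max). Time itself carries the information.
    """
    values = np.clip(np.asarray(values, dtype=float), 0.0, 1.0)
    return t_max * (1.0 - values)

pixels = [0.9, 0.2, 0.55]        # e.g., normalized pixel intensities
print(latency_encode(pixels))    # -> [0.8 6.4 3.6]  (earlier = stronger)
```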

Neuron models are first discussed in general, and one is chosen for detailed development. Using that model, single-neuron computation is explored: neuron inputs are encoded as spike patterns, and the neuron is trained to identify input pattern similarities. Individual neurons are the building blocks for larger ensembles, referred to as "columns". These columns are trained in an unsupervised manner and operate collectively to perform the basic cognitive function of pattern clustering: similar input patterns are mapped to a much smaller set of similar output patterns, thereby dividing the input patterns into identifiable clusters. Larger cognitive systems are formed by combining columns into a hierarchical architecture. These higher-level architectures are the subject of ongoing study, and progress to date is described in detail in later chapters. Simulation plays a major role in model development, and the simulation infrastructure developed by the author is described.
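The following sketch illustrates, under simplifying assumptions that are not the book's exact model, how a column of spiking neurons can act as a winner-take-all clusterer: each neuron accumulates a synaptic weight step as each of its input spikes arrives, fires when a threshold is crossed, and the earliest-firing neuron's index serves as the cluster label. All names here (first_spike_time, column_wta, theta) are hypothetical.

```python
import numpy as np

# Sketch of a 1-winner-take-all (WTA) "column" of threshold spiking neurons.
# Each input spike at time t_i adds a step of height w_i to a neuron's body
# potential; the neuron fires at the first moment the running sum crosses
# a threshold. The column's output is the index of the earliest-firing
# neuron, so similar input spike patterns map to the same winner.

def first_spike_time(spike_times, weights, theta):
    """Return the time this neuron fires, or None if it never reaches theta."""
    order = np.argsort(spike_times)      # process inputs in time order
    potential = 0.0
    for i in order:
        potential += weights[i]          # step response at each input spike
        if potential >= theta:
            return spike_times[i]        # fires at the crossing time
    return None

def column_wta(spike_times, weight_matrix, theta):
    """1-WTA column: index of the neuron that spikes first (cluster label)."""
    times = [first_spike_time(spike_times, w, theta) for w in weight_matrix]
    valid = [(t, j) for j, t in enumerate(times) if t is not None]
    return min(valid)[1] if valid else None

rng = np.random.default_rng(0)
W = rng.uniform(0.0, 1.0, size=(4, 8))   # 4 neurons, 8 synapses each
x = rng.uniform(0.0, 8.0, size=8)        # input spike times
print(column_wta(x, W, theta=2.5))       # winning neuron = cluster label
```

In the book's setting the weights would be learned in an unsupervised manner (e.g., by spike-timing-dependent rules) rather than drawn at random as in this toy example.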

James E. Smith is Professor Emeritus in the Department of Electrical and Computer Engineering at the University of Wisconsin-Madison. He received his Ph.D. from the University of Illinois in 1976. He then joined the faculty of the University of Wisconsin-Madison, teaching and conducting research, first in fault-tolerant computing and then in computer architecture. In 1979, he took a leave of absence to work for the Control Data Corporation in Arden Hills, MN, participating in the design of the CYBER 180/990. While at Control Data and after returning to the University of Wisconsin in 1981, he studied several aspects of high-performance pipelined processors. This work included the development of dynamic history-based branch predictors, instruction issuing methods, and techniques for providing precise interrupts that are widely used today. From 1984 to 1989, he was principal architect and a logic designer for the ACA ZS-1, a scientific computer employing a dynamically scheduled, superscalar processor architecture. In 1989, Dr. Smith joined Cray Research and headed a small research team that participated in the development and analysis of future supercomputer architectures. This work focused on advanced vector processor implementations, high-bandwidth memory systems, and interconnection networks. In 1994, he rejoined the Department of ECE at the University of Wisconsin. His research interests were directed at new paradigms for exploiting instruction-level parallelism; the virtual machine abstraction was used as a technique for providing high performance through co-design and tight coupling of hardware and software. In 2007, he retired from Wisconsin and then conducted research in industry for four years, first at Google, then at Intel. He received the 1999 ACM/IEEE Eckert-Mauchly Award for contributions to computer architecture. Currently, he is studying new neuron-based computing paradigms at home along the Clark Fork near Missoula, Montana.

Preface
Acknowledgments
Introduction
Space-Time Computing
Biological Overview
Connecting TNNs with Biology
Neuron Modeling
Computing with Excitatory Neurons
System Architecture
Simulator Implementation
Clustering the MNIST Dataset
Summary and Conclusions
References
Author Biography

Publication date
Series: Synthesis Lectures on Computer Architecture
Series editor: Margaret Martonosi
Place of publication: San Rafael, CA
Language: English
Dimensions: 191 x 235 mm
Weight: 460 g
Subject areas: Computer Science > Theory / Studies > Artificial Intelligence / Robotics; Engineering > Electrical Engineering / Energy Technology
ISBN-10: 1-62705-948-2 / 1627059482
ISBN-13: 978-1-62705-948-0 / 9781627059480
Condition: New