Deep Learning - Meenu Ajith, Aswathy Rajendra Kurup, Manel Martinez-Ramon

Deep Learning (eBook)

A Practical Introduction
eBook Download: EPUB
2024 | 1st edition
416 pages
Wiley (publisher)
978-1-119-86188-1 (ISBN)
€76.99 incl. VAT

An engaging and accessible introduction to deep learning perfect for students and professionals

In Deep Learning: A Practical Introduction, a team of distinguished researchers delivers a book complete with coverage of the theoretical and practical elements of deep learning. The book includes extensive examples, end-of-chapter exercises, homework, exam material, and a GitHub repository containing code and data for all provided examples.

Combining contemporary deep learning theory with state-of-the-art tools, the chapters are structured to maximize accessibility for both beginning and intermediate students. The authors have included coverage of TensorFlow, Keras, and PyTorch. Readers will also find:

  • Thorough introductions to deep learning and deep learning tools
  • Comprehensive explorations of convolutional neural networks, including discussions of their elements, operation, training, and architectures
  • Practical discussions of recurrent neural networks and unsupervised approaches to deep learning
  • In-depth treatments of generative adversarial networks as well as deep Bayesian neural networks

Perfect for undergraduate and graduate students studying computer vision, computer science, artificial intelligence, and neural networks, Deep Learning: A Practical Introduction will also benefit practitioners and researchers in the fields of deep learning and machine learning in general.

Manel Martínez-Ramón, PhD, is King Felipe VI Endowed Chair and Professor in the Department of Electrical and Computer Engineering at the University of New Mexico in the United States. He earned his doctorate in Telecommunication Technologies at the Universidad Carlos III de Madrid in 1999.

Meenu Ajith, PhD, is a Postdoctoral Research Associate at the Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS) at Georgia State University, Georgia Institute of Technology, and Emory University. She earned her doctorate degree in Electrical Engineering from the University of New Mexico in 2022. Her research interests include machine learning, computer vision, medical imaging, and image processing.

Aswathy Rajendra Kurup, PhD, is a Data Scientist at Intel Corporation. She earned her doctorate degree in Electrical Engineering from the University of New Mexico in 2022. Her research interests include image processing, signal processing, deep learning, computer vision, data analysis, and data processing.



1
The Multilayer Perceptron


1.1 Introduction


The concept of artificial intelligence (AI) is relatively simple to explain: it can be stated as a possible answer to the question of how to make a machine that is able to perform a given task without being explicitly programmed for it, instead extracting the necessary information from a set of data. Let us say, for example, that a machine is needed to classify green and red apples. The machine is provided with a camera and all the mechanisms necessary to place one apple at a time in front of it and then throw it into one of two buckets. A machine hard-wired to do this will rely on binary operators such as “IF” and “THEN”: if the color is red, throw the apple into bucket A; otherwise, into bucket B.
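
For instance, a minimal sketch of such a hard-wired machine (the color measurement and the threshold are hypothetical, chosen only for illustration):

```python
def classify_fruit(mean_hue):
    """Hard-wired rule: route the fruit by color alone."""
    # mean_hue: hypothetical camera measurement (0.0 = red, 1.0 = green)
    if mean_hue < 0.5:
        return "bucket A"   # red apples
    return "bucket B"       # green apples

# A greenish pear slips through and is routed as if it were a green apple:
print(classify_fruit(0.8))  # -> bucket B
```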

The limitations of this method are obvious. If a pear is mistakenly introduced into the process, it will be classified as a green apple. Also, how can we use the same or a similar structure for a different or more complex task? As in the previous machine, an AI approach uses features found in the data to make the decision, but the algorithm is not explicitly programmed. Instead, the machine has a specific parametric structure capable of learning from data. The learning process involves the optimization of a certain measurable criterion with respect to the parameters. The deep learning (DL) structures for artificial intelligence are able to learn complex tasks from the available data, but they also have further capabilities, such as learning how to extract the features that are useful for the task at hand, providing probabilistic outputs (e.g. “the probability of apple is 97%”), and many others. The basic element of such a structure in DL is the so‐called artificial neuron, a simple concept that gives these structures their power and nonlinear properties.
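
As a contrast to the hard-wired rule, here is a minimal sketch of the learning approach just described: a single artificial neuron whose parameters are adjusted by optimizing a measurable criterion (gradient descent on the cross-entropy loss). The toy features, labels, and learning rate are made up for illustration and are not taken from the book.

```python
import numpy as np

# Toy data: two hypothetical camera features per fruit (e.g. hue and saturation)
X = np.array([[0.9, 0.2], [0.8, 0.3], [0.1, 0.7], [0.2, 0.9]])
y = np.array([1.0, 1.0, 0.0, 0.0])   # 1 = red apple, 0 = green apple

w = np.zeros(2)   # learnable weights
b = 0.0           # learnable bias
lr = 0.5          # learning rate (arbitrary choice)

for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # probabilistic output of one neuron
    grad_w = X.T @ (p - y) / len(y)          # gradient of the cross-entropy criterion
    grad_b = np.mean(p - y)
    w -= lr * grad_w                         # optimize the criterion w.r.t. the parameters
    b -= lr * grad_b

print(w, b)   # the decision rule is now learned from data, not hand-coded
```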

This chapter is intended to be a first contact with DL, where we introduce the most basic type of feedforward neural network (FFNN), called the multilayer perceptron (MLP). Here, we first introduce the low‐level basic elements of most neural networks (NNs), and then the structure and learning criteria.

The elements introduced in this chapter will be used throughout the book. We start from the single perceptron and construct a basic MLP, where the different activations are developed; the notation based on tensors is also justified as a generalized tool to be used throughout the book. After this, we present the maximum likelihood (ML) criterion as a general criterion, which is then particularized to the classic cases corresponding to the different output activations. Finally, backpropagation (BP) is detailed and then summarized so that it can be translated into a computer program.

In this chapter, examples and exercises are presented in a way that assumes that the student does not necessarily know how to program in Python. The examples focus on the behavior of the MLP rather than on the programming, and the exercises are intended to modify data, parameters, and structures at a high level in order to answer questions about different practical cases. Chapter 3 explains, in particular, how the different examples have been coded, so they will be reviewed in that chapter from the point of view of practical programming.

1.2 The Concept of Neuron


The idea of the artificial neural network (ANN) is obviously inspired by the structure of the nervous system. The first attempt to understand how neural tissue works from a logical perspective was published in 1943 by Warren S. McCulloch and Walter Pitts (1943) (Fig. 1.1). In their paper, they proposed the first mathematical model of a biological neuron. In this model, the neuron has two possible states, defined as 0 or 1 depending on whether the neuron is resting or has been activated (fired). This represents the axon of the neuron. The input of this neuron model consists of a number of dendrites whose excitation is also binary. This elemental structure is completed with an inhibitory input: if this input is activated, the neuron cannot fire; if it is deactivated, the neuron is activated whenever the combination of inputs is larger than a given threshold. This model is fully binary and, since it includes functions that cannot be differentiated, it is not easy to treat mathematically. Certain modifications, described further on, give rise to what is known as the artificial neuron in use today.
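
A minimal sketch of a McCulloch–Pitts unit under these assumptions (the threshold value and the input encoding are illustrative choices):

```python
def mcculloch_pitts_neuron(excitatory_inputs, inhibitory_input, threshold=2):
    """Binary neuron: fires (returns 1) only if it is not inhibited and the
    number of active excitatory dendrites reaches the threshold."""
    if inhibitory_input == 1:
        return 0   # absolute inhibition: the neuron cannot fire
    return 1 if sum(excitatory_inputs) >= threshold else 0

# Two of three dendrites active and no inhibition -> the neuron fires
print(mcculloch_pitts_neuron([1, 1, 0], inhibitory_input=0))  # 1
# The same inputs with the inhibitory input active -> no firing
print(mcculloch_pitts_neuron([1, 1, 0], inhibitory_input=1))  # 0
```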

Figure 1.1 Warren S. McCulloch (left) and Walter Pitts in 1949.

Source: R. Moreno‐Díaz and A. Moreno‐Díaz (2007)/with permission from Elsevier.

Section 1.2.1 contains an introduction to the concept of the artificial perceptron from an algebraic point of view. A possible way to train a single perceptron is introduced in Sections 1.2.2 and 1.2.3, together with the limitations of this structure as a linear classifier.

The concept of the artificial NN was introduced by the psychologist Frank Rosenblatt (Fig. 1.2) in 1958 (Rosenblatt 1957, 1958). In these works, he proposed the perceptron, a structure inspired by the visual cortex (Fig. 1.3). The structure presented in Rosenblatt (1958) contained the fundamental idea that is used in any artificial learning structure. In the first stage (Retina), the device collects the available observation or input pattern, which is to be processed in order to extract knowledge from it. The second stage (Projection area) is in charge of processing this observation to extract the information needed for the task at hand. This information is commonly called the set of features of the input pattern. The third stage (Association area) is intended to process these features and map them into a given response. For example, the response may be to recognize some given object classes present in the scene. Rosenblatt is the father of the artificial perceptron. He proved that, by modifying the McCulloch–Pitts neuron model, the neuron could actually learn tasks from the data. In particular, his model had weights multiplying each of the inputs to the neuron, as well as an input bias or threshold, which could be adjusted for the neuron to perform a given task. He developed the Mark 1 perceptron machine, which was the first implementation of his perceptron algorithm. This device was not a computer but an electromechanical learning machine. The machine consisted of a camera constructed with an array of 400 photocells, the output of each one connected randomly to the dendrites of a set of neurons. The weights, or attenuations applied to these inputs, were controlled with potentiometers whose axes were connected to electric motors. During the learning procedure, the motors adjusted the input weights. This machine was able to distinguish linearly separable patterns, that is, patterns lying on one side or the other of a hyperplane in the 400-dimensional space spanned by the camera inputs, depending on their binary class. The invention was thus limited in its capabilities until it was proven that a perceptron constructed with more than one layer of neurons, the MLP, had nonlinear capabilities, that is, the ability to separate patterns that cannot be separated by a hyperplane. Nevertheless, the MLP could not be trained using the techniques introduced by Rosenblatt for his perceptron. It was Paul Werbos who, in his 1974 PhD thesis (P. J. Werbos 1974), introduced the BP algorithm, which made it possible to adjust the weights of a multilayer perceptron.
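
As a rough illustration of the adjustable weights and bias just described, the following sketch applies the classic perceptron update rule to a tiny made-up dataset (the data, learning rate, and number of passes are arbitrary choices; this is not the Mark 1 implementation or the book's code):

```python
import numpy as np

# Toy linearly separable data: four points in two dimensions, labels +1 / -1
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -0.5], [-2.0, -1.5]])
y = np.array([1, 1, -1, -1])

w = np.zeros(2)   # adjustable weights (one per input)
b = 0.0           # adjustable bias / threshold
eta = 0.1         # learning rate (arbitrary)

for _ in range(10):                     # a few passes over the data
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:      # misclassified pattern
            w += eta * yi * xi          # move the hyperplane toward the pattern
            b += eta * yi

print(w, b)   # a separating hyperplane w·x + b = 0 for this toy set
```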

Figure 1.2 Frank Rosenblatt.

Source: https://news.cornell.edu/stories/2019/09/professors-perceptron-paved-way-ai-60-years-too-soon/ last accessed November 30, 2023.

Figure 1.3 The perceptron as described in Rosenblatt (1958)/American Psychological Association.

1.2.1 The Perceptron


From a conceptual point of view, a perceptron is a function made to perform a binary classification. In order to describe this function, let us first introduce the necessary notation and concepts associated with it. Assume a given observation that consists of a collection of magnitudes observed from a physical phenomenon. These magnitudes are stored in a column vector, which will be called x, which lies in a space of D dimensions. For illustrative purposes, let us construct a set of artificial data in a space of D = 2 dimensions as in Fig. 1.4.

The figure shows a set of points with coordinates x = [x_1, x_2]^T, where the operator (·)^T denotes the transpose operation, meaning that the vector is a column vector even if it is written as a row. In this toy example, the data belongs to one of two classes (black or white) that we will label arbitrarily with the labels +1 and −1, though in some cases, labels 0 and 1 are more convenient. It can be seen that the data is linearly separable, that is, both classes can be separated by placing a line between the black and white clusters of data. That is, roughly speaking, the idea of the perceptron: it must be trained to place a separating hyperplane between both classes. We define the hyperplane (particularized to a line in the two‐dimensional example) as

Figure 1.4 A set of observations in a space of two dimensions.

...
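
The excerpt breaks off at the hyperplane equation. As a rough illustration of the setting just described, the following sketch builds a made-up linearly separable two-dimensional set and tests on which side of a hand-picked line w^T x + b = 0 each point lies (the cluster positions, w, and b are illustrative choices, not those of the book's figure):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two made-up clusters of 2-D points: "black" around (2, 2), "white" around (-2, -2)
black = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(20, 2))
white = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(20, 2))

# A candidate separating line w^T x + b = 0, chosen by hand
w = np.array([1.0, 1.0])
b = 0.0

# Each point is labeled +1 or -1 according to the side of the line it lies on
print(np.sign(black @ w + b))   # all +1: the black cluster lies on one side
print(np.sign(white @ w + b))   # all -1: the white cluster lies on the other side
```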

Publication date (per publisher): July 8, 2024
Language: English
Subject area: Mathematics / Computer Science – Computer Science – Networks
Engineering – Electrical Engineering / Energy Technology
ISBN-10: 1-119-86188-8 / 1119861888
ISBN-13: 978-1-119-86188-1 / 9781119861881