
Mathematical Aspects of Deep Learning

Philipp Grohs, Gitta Kutyniok (Editors)

Book | Hardcover
492 pages
2022
Cambridge University Press (Publisher)
978-1-316-51678-2 (ISBN)
87.25 incl. VAT
The development of a theoretical foundation for deep learning methods constitutes one of the most active and exciting research topics in applied mathematics. Written by leading experts in the field, this book serves as a mathematical introduction to deep learning for researchers and graduate students seeking to enter the field.
In recent years, the development of new classification and regression algorithms based on deep learning has led to a revolution in the fields of artificial intelligence, machine learning, and data analysis. The development of a theoretical foundation to guarantee the success of these algorithms constitutes one of the most active and exciting research topics in applied mathematics. This book presents the current mathematical understanding of deep learning methods from the point of view of leading experts in the field. It serves both as a starting point for researchers and graduate students in computer science, mathematics, and statistics seeking to enter the field and as an invaluable reference for future research.

Philipp Grohs is Professor of Applied Mathematics at the University of Vienna and Group Leader of Mathematical Data Science at the Austrian Academy of Sciences. Gitta Kutyniok is Bavarian AI Chair for Mathematical Foundations of Artificial Intelligence at Ludwig-Maximilians-Universität München and Adjunct Professor for Machine Learning at the University of Tromsø.

1. The modern mathematics of deep learning (Julius Berner, Philipp Grohs, Gitta Kutyniok and Philipp Petersen)
2. Generalization in deep learning (Kenji Kawaguchi, Leslie Pack Kaelbling and Yoshua Bengio)
3. Expressivity of deep neural networks (Ingo Gühring, Mones Raslan and Gitta Kutyniok)
4. Optimization landscape of neural networks (René Vidal, Zhihui Zhu and Benjamin D. Haeffele)
5. Explaining the decisions of convolutional and recurrent neural networks (Wojciech Samek, Leila Arras, Ahmed Osman, Grégoire Montavon and Klaus-Robert Müller)
6. Stochastic feedforward neural networks: universal approximation (Thomas Merkh and Guido Montúfar)
7. Deep learning as sparsity enforcing algorithms (A. Aberdam and J. Sulam)
8. The scattering transform (Joan Bruna)
9. Deep generative models and inverse problems (Alexandros G. Dimakis)
10. A dynamical systems and optimal control approach to deep learning (Weinan E, Jiequn Han and Qianxiao Li)
11. Bridging many-body quantum physics and deep learning via tensor networks (Yoav Levine, Or Sharir, Nadav Cohen and Amnon Shashua)

Publication date
Additional info Worked examples or Exercises
Place of publication Cambridge
Language English
Dimensions 174 x 251 mm
Weight 1070 g
Subject areas Computer Science > Theory / Studies > Artificial Intelligence / Robotics
Mathematics / Computer Science > Mathematics > Analysis
ISBN-10 1-316-51678-4 / 1316516784
ISBN-13 978-1-316-51678-2 / 9781316516782
Condition New