Machine Learning Safety - Xiaowei Huang, Gaojie Jin, Wenjie Ruan


Book | Hardcover
321 pages
2023
Springer Verlag, Singapore
978-981-19-6813-6 (ISBN)
€74.89 incl. VAT
Machine learning algorithms allow computers to learn without being explicitly programmed. Their application is now spreading to highly sophisticated tasks across multiple domains, such as medical diagnostics or fully autonomous vehicles. While this development holds great potential, it also raises new safety concerns, as machine learning has many specificities that make its behaviour prediction and assessment very different from those for explicitly programmed software systems.

This book addresses the main safety concerns with regard to machine learning, including its susceptibility to environmental noise and adversarial attacks. Such vulnerabilities have become a major roadblock to the deployment of machine learning in safety-critical applications.

The book presents up-to-date techniques for adversarial attacks, which are used to assess the vulnerabilities of machine learning models; formal verification, which is used to determine whether a trained machine learning model is free of vulnerabilities; and adversarial training, which is used to enhance the training process and reduce vulnerabilities. The book aims to improve readers' awareness of the potential safety issues regarding machine learning models. In addition, it includes up-to-date techniques for dealing with these issues, equipping readers with not only technical knowledge but also hands-on practical skills.
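To make the adversarial-attack idea mentioned in the blurb concrete, below is a minimal sketch of the fast gradient sign method (FGSM), a standard attack of the kind surveyed in this literature, applied to a toy logistic-regression classifier. The model weights, input point, and perturbation budget `eps` are illustrative assumptions, not examples taken from the book:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """Fast Gradient Sign Method on a logistic-regression model.

    Perturbs the input by eps in the direction that increases the
    cross-entropy loss: x_adv = x + eps * sign(dL/dx), where for
    logistic regression dL/dx = (p - y) * w.
    """
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign((p - y) * wi) for wi, xi in zip(w, x)]

# Toy model and a point it classifies correctly (illustrative values).
w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1.0   # logit = 1.5, p ≈ 0.82 -> predicted class 1

x_adv = fgsm_attack(x, y, w, b, eps=1.0)
p_adv = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
# The perturbation drives the logit to -1.5, so p ≈ 0.18 -> class 0:
# a small, targeted change to the input flips the model's prediction.
```

Even this two-dimensional example shows why such vulnerabilities matter in safety-critical settings: the perturbed input is close to the original, yet the classifier's decision is reversed.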

Xiaowei Huang is currently a Reader of Computer Science and Director of the Autonomous Cyber-Physical Systems Lab at the University of Liverpool (UoL). His research is concerned with the development of automated verification techniques that ensure the correctness and reliability of intelligent systems. He has published more than 80 papers, primarily in leading conference proceedings and journals in the fields of Artificial Intelligence (e.g. Artificial Intelligence Journal, ACM Transactions on Computational Logic, NeurIPS, AAAI, IJCAI, ECCV), Formal Verification (e.g. CAV, TACAS, and Theoretical Computer Science) and Software Engineering (e.g. IEEE Transactions on Reliability, ICSE and ASE). He has been invited to give talks at several leading conferences on the safety and security of applying machine learning algorithms to critical applications. He has co-chaired the AAAI and IJCAI workshop series on Artificial Intelligence Safety and has been the PI or co-PI of several Dstl (Ministry of Defence, UK), EPSRC and EU H2020 projects.

Wenjie Ruan is a Senior Lecturer of Data Science at the University of Exeter, UK. His research interests lie in the adversarial robustness of deep neural networks, and in machine learning and its applications in safety-critical systems, including health data analytics and human-centered computing. His series of research works on device-free human localization and activity recognition for supporting the independent living of the elderly garnered him a Doctoral Thesis Excellence Award from the University of Adelaide, the Best Research Poster Award at the 9th ACM International Workshop on IoT and Cloud Computing, and the Best Student Paper Award at the 14th International Conference on Advanced Data Mining and Applications. He was also the recipient of a prestigious DECRA fellowship from the Australian Research Council. Dr. Ruan has published more than 40 papers in international conference proceedings such as AAAI, IJCAI, SIGIR, WWW, ICDM, UbiComp, CIKM, and ASE, and has served as a senior PC member, PC member or invited reviewer for more than 10 international conferences, including IJCAI, AAAI, ICML, NeurIPS, CVPR, ICCV, AAMAS, and ECML-PKDD. He is the Director of the Exeter Trustworthy AI Lab at the University of Exeter.

1. Introduction
2. Safety of Simple Machine Learning Models
3. Safety of Deep Learning
4. Robustness Verification of Deep Learning
5. Enhancement to Robustness and Generalization
6. Probabilistic Graph Model
A. Mathematical Foundations
B. Competitions

Publication date
Series: Artificial Intelligence: Foundations, Theory, and Algorithms
Additional info: 1 illustration, black and white; XVII, 321 p. 1 illus.
Place of publication: Singapore
Language: English
Dimensions: 155 x 235 mm
Subject areas: Computer Science › Networks › Security / Firewall; Computer Science › Theory / Studies › Artificial Intelligence / Robotics
Keywords: Deep learning • machine learning • Reliability • Robustness • Safety
ISBN-10: 981-19-6813-6 / 9811968136
ISBN-13: 978-981-19-6813-6 / 9789811968136
Condition: New