Interpretable AI
Pages
2022
Manning Publications (Publisher)
978-1-61729-764-9 (ISBN)
AI models can become so complex that even experts have difficulty understanding them—and forget about explaining the nuances of a cluster of novel algorithms to a business stakeholder! Interpretable AI is filled with cutting-edge techniques that will improve your understanding of how your AI models function.
Interpretable AI is a hands-on guide to interpretability techniques that open up the black box of AI. This practical guide distills cutting-edge research on transparent and explainable AI into practical methods you can easily implement with Python and open-source libraries. With examples from all major machine learning approaches, this book demonstrates why some approaches to AI are so opaque, teaches you to identify the patterns your model has learned, and presents best practices for building fair and unbiased models.
How deep learning models produce their results is often a complete mystery, even to their creators. These AI "black boxes" can hide unknown issues—including data leakage, the replication of human bias, and difficulties complying with legal requirements such as the EU's "right to explanation." State-of-the-art interpretability techniques have been developed to understand even the most complex deep learning models, allowing humans to follow an AI's methods and to better detect when it has made a mistake.
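As a taste of the kind of model-agnostic technique the book covers (this sketch is illustrative, not taken from the book), permutation feature importance probes a trained black-box model by shuffling one feature at a time and measuring how much the model's score degrades—a large drop means the model relies heavily on that feature:

```python
# Minimal sketch: permutation feature importance with scikit-learn.
# The dataset and model are arbitrary choices for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque model: a random forest with hundreds of decision paths.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature column in turn (10 repeats) and record the
# average drop in held-out accuracy caused by destroying that feature.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Report the five features the model depends on most.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {idx}: importance {result.importances_mean[idx]:.4f}")
```

Because the technique only needs predictions and a score, it works unchanged for any fitted estimator—gradient-boosted trees, neural networks, or pipelines.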
Ajay Thampi is a machine learning engineer at a large tech company, where he focuses primarily on responsible AI and fairness. He holds a PhD; his research focused on signal processing and machine learning. He has published papers at leading conferences and in journals on reinforcement learning, convex optimization, and classical machine learning techniques applied to 5G cellular networks.
Publication date | 07.07.2022 |
---|---|
Place of publication | New York |
Language | English |
Dimensions | 189 x 235 mm |
Weight | 609 g |
Subject areas | Computer science ► Networks ► Security / Firewall |
Mathematics / Computer science ► Computer science ► Software development | |
Computer science ► Theory / Studies ► Artificial intelligence / Robotics | |
Law / Taxes ► Private law / Civil law ► IT law | |
ISBN-10 | 1-61729-764-X / 161729764X |
ISBN-13 | 978-1-61729-764-9 / 9781617297649 |
Condition | New |