Reliable Reasoning
Induction and Statistical Learning Theory
Pages
2007
Bradford Books (Publisher)
978-0-262-08360-7 (ISBN)
The implications for philosophy and cognitive science of developments in statistical learning theory.
In Reliable Reasoning, Gilbert Harman and Sanjeev Kulkarni -- a philosopher and an engineer -- argue that philosophy and cognitive science can benefit from statistical learning theory (SLT), the theory that lies behind recent advances in machine learning. The philosophical problem of induction, for example, is in part about the reliability of inductive reasoning, where the reliability of a method is measured by its statistically expected percentage of errors -- a central topic in SLT.
After discussing philosophical attempts to evade the problem of induction, Harman and Kulkarni provide an admirably clear account of the basic framework of SLT and its implications for inductive reasoning. They explain the Vapnik-Chervonenkis (VC) dimension of a set of hypotheses and distinguish two kinds of inductive reasoning. The authors discuss various topics in machine learning, including nearest-neighbor methods, neural networks, and support vector machines. Finally, they describe transductive reasoning and consider new models of human reasoning suggested by developments in SLT.
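As a rough illustration of what "reliability measured by the statistically expected percentage of errors" means in practice, here is a minimal sketch, not taken from the book: the synthetic data, the 1-nearest-neighbor rule, and all names in it are illustrative assumptions. It estimates a classifier's expected error by its error rate on held-out data.

```python
# Minimal sketch (illustrative, not from the book): estimate the reliability of a
# 1-nearest-neighbor classifier as its error rate on held-out data, an empirical
# stand-in for the expected percentage of errors studied in SLT.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: class 0 centered at (0, 0), class 1 centered at (2, 2).
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)), rng.normal(2.0, 1.0, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

# Shuffle, then split into a training set and a held-out test set.
idx = rng.permutation(len(y))
X, y = X[idx], y[idx]
X_train, y_train = X[:300], y[:300]
X_test, y_test = X[300:], y[300:]

def nearest_neighbor_predict(x):
    """Label a point with the label of its nearest training point."""
    distances = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(distances)]

predictions = np.array([nearest_neighbor_predict(x) for x in X_test])
error_rate = np.mean(predictions != y_test)
print(f"Estimated error rate on held-out data: {error_rate:.1%}")
```

In SLT terms, the held-out error rate is an empirical estimate of the method's expected error; the VC dimension the authors explain governs when such empirical estimates can be trusted uniformly across a set of hypotheses.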
Gilbert Harman is Stuart Professor of Philosophy at Princeton University and the author of Explaining Value and Other Essays in Moral Philosophy and Reasoning, Meaning, and Mind. Sanjeev Kulkarni is Professor of Electrical Engineering and an associated faculty member of the Department of Philosophy at Princeton University with many publications in statistical learning theory.
Series | Jean Nicod Lectures |
---|---|
Place of publication | Massachusetts |
Language | English |
Subject area | Humanities ► Philosophy |
 | Humanities ► Psychology ► General Psychology |
 | Humanities ► Psychology ► Educational Psychology |
ISBN-10 | 0-262-08360-4 / 0262083604 |
ISBN-13 | 978-0-262-08360-7 / 9780262083607 |
Condition | New |