Learning in Graphical Models

M. Jordan (Editor)

Book | Hardcover
630 pages
1998
Springer (Publisher)
978-0-7923-5017-0 (ISBN)
320.99 incl. VAT
In the past decade, a number of different research communities within the computational sciences have studied learning in networks, each starting from its own point of view. There has been substantial progress within these communities, and a surprising convergence has developed between their formalisms. The awareness of this convergence and the growing interest of researchers in understanding the essential unity of the subject underlie the current volume.
Two research communities which have used graphical or network formalisms to particular advantage are the belief network community and the neural network community. Belief networks arose within computer science and statistics and were developed with an emphasis on prior knowledge and exact probabilistic calculations. Neural networks arose within electrical engineering, physics and neuroscience and have emphasised pattern recognition and systems modelling problems. This volume draws together researchers from these two communities and presents both kinds of networks as instances of a general unified graphical formalism. The book focuses on probabilistic methods for learning and inference in graphical models, algorithm analysis and design, theory and applications. Exact methods, sampling methods and variational methods are discussed in detail.
Audience: A wide cross-section of computationally oriented researchers, including computer scientists, statisticians, electrical engineers, physicists and neuroscientists.
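As a rough illustration of what "exact methods" means in this setting, the following minimal sketch computes a posterior in a toy Bayesian network by enumerating the joint distribution and summing out a hidden variable. It is not taken from the book; the Rain/Sprinkler/WetGrass network and all probability values are invented for this example.

    # Toy Bayesian network: Rain and Sprinkler are independent parents of WetGrass.
    # All probabilities below are made up for illustration.
    from itertools import product

    p_rain = {True: 0.2, False: 0.8}            # P(Rain)
    p_sprinkler = {True: 0.1, False: 0.9}       # P(Sprinkler)
    p_wet = {                                   # P(WetGrass=true | Sprinkler, Rain)
        (True, True): 0.99, (True, False): 0.90,
        (False, True): 0.80, (False, False): 0.0,
    }

    def joint(r, s, w):
        """P(Rain=r, Sprinkler=s, WetGrass=w) under the factorised model."""
        pw = p_wet[(s, r)]
        return p_rain[r] * p_sprinkler[s] * (pw if w else 1.0 - pw)

    # P(Rain=true | WetGrass=true): sum out the hidden Sprinkler, then normalise.
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
    print("P(Rain=true | WetGrass=true) =", num / den)   # ~0.69

Organising such sums efficiently for large networks is what the exact-inference chapters (junction trees, bucket elimination) address, while the sampling and variational chapters cover approximations for cases where enumeration is intractable.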

Contents:
Preface
I: Inference
Introduction to Inference for Bayesian Networks
Advanced Inference in Bayesian Networks
Inference in Bayesian Networks using Nested Junction Trees
Bucket Elimination: A Unifying Framework for Probabilistic Inference
An Introduction to Variational Methods for Graphical Models
Improving the Mean Field Approximation via the Use of Mixture Distributions
Introduction to Monte Carlo Methods
Suppressing Random Walks in Markov Chain Monte Carlo using Ordered Overrelaxation
II: Independence
Chain Graphs and Symmetric Associations
The Multiinformation Function as a Tool for Measuring Stochastic Dependence
III: Foundations for Learning
A Tutorial on Learning with Bayesian Networks
A View of the EM Algorithm that Justifies Incremental, Sparse, and Other Variants
IV: Learning from Data
Latent Variable Models
Stochastic Algorithms for Exploratory Data Analysis: Data Clustering and Data Visualization
Learning Bayesian Networks with Local Structure
Asymptotic Model Selection for Directed Networks with Hidden Variables
A Hierarchical Community of Experts
An Information-Theoretic Analysis of Hard and Soft Assignment Methods for Clustering
Learning Hybrid Bayesian Networks from Data
A Mean Field Learning Algorithm for Unsupervised Neural Networks
Edge Exclusion Tests for Graphical Gaussian Models
Hepatitis B: A Case Study in MCMC
Prediction with Gaussian Processes: From Linear Regression to Linear Prediction and Beyond

Publication date (per publisher): 31 March 1998
Series: NATO Science Series D; 89
Additional information: XI, 630 p.
Place of publication: Dordrecht
Language: English
Dimensions: 155 x 235 mm
Subject area: Computer Science » Theory / Studies » Artificial Intelligence / Robotics
ISBN-10 0-7923-5017-0 / 0792350170
ISBN-13 978-0-7923-5017-0 / 9780792350170
Condition: New