The Elements of Statistical Learning

Data Mining, Inference, and Prediction, Second Edition
Book | Hardcover
745 pages
2009 | 2nd ed. 2009
Springer-Verlag New York Inc.
978-0-387-84857-0 (ISBN)


Trevor Hastie, Robert Tibshirani, Jerome Friedman
80,24 incl. VAT
This book describes the important ideas in a variety of fields such as medicine, biology, finance, and marketing in a common conceptual framework. While the approach is statistical, the emphasis is on concepts rather than mathematics. Many examples are given, with a liberal use of colour graphics. It is a valuable resource for statisticians and anyone interested in data mining in science or industry. The book's coverage is broad, from supervised learning (prediction) to unsupervised learning. The many topics include neural networks, support vector machines, classification trees and boosting---the first comprehensive treatment of this topic in any book.

This major new edition features many topics not covered in the original, including graphical models, random forests, ensemble methods, least angle regression & path algorithms for the lasso, non-negative matrix factorisation, and spectral clustering. There is also a chapter on methods for "wide" data (p bigger than n), including multiple testing and false discovery rates.

Trevor Hastie, Robert Tibshirani, and Jerome Friedman are professors of statistics at Stanford University. They are prominent researchers in this area: Hastie and Tibshirani developed generalized additive models and wrote a popular book of that title. Hastie co-developed much of the statistical modeling software and environment in R/S-PLUS and invented principal curves and surfaces. Tibshirani proposed the lasso and is co-author of the very successful An Introduction to the Bootstrap. Friedman is the co-inventor of many data-mining tools including CART, MARS, projection pursuit and gradient boosting.

Overview of Supervised Learning
Linear Methods for Regression
Linear Methods for Classification
Basis Expansions and Regularization
Kernel Smoothing Methods
Model Assessment and Selection
Model Inference and Averaging
Additive Models, Trees, and Related Methods
Boosting and Additive Trees
Neural Networks
Support Vector Machines and Flexible Discriminants
Prototype Methods and Nearest-Neighbors
Unsupervised Learning
Random Forests
Ensemble Learning
Undirected Graphical Models
High-Dimensional Problems: p ≫ N

Publication date (per publisher) 21.4.2017
Series Springer Series in Statistics
Additional information XXII, 745 p.; 658 illustrations, 604 in color, 54 in black and white
Place of publication New York, NY
Language English
Dimensions 155 x 235 mm
Subject areas Computer Science > Databases > Data Warehouse / Data Mining
Computer Science > Theory / Study > Artificial Intelligence / Robotics
Mathematics / Computer Science > Mathematics > Statistics
Mathematics / Computer Science > Mathematics > Probability / Combinatorics
Natural Sciences > Biology
ISBN-10 0-387-84857-6 / 0387848576
ISBN-13 978-0-387-84857-0 / 9780387848570
Condition New