Applied Machine Learning - M. Gopal

Applied Machine Learning

M. Gopal (Author)

Book | Hardcover
656 pages
2019
McGraw-Hill Education (Publisher)
978-1-260-45684-4 (ISBN)
85.95 incl. VAT
Cutting-edge machine learning principles, practices, and applications
This comprehensive textbook explores the theoretical underpinnings of learning and equips readers with the knowledge needed to apply powerful machine learning techniques to solve challenging real-world problems. Applied Machine Learning shows, step by step, how to conceptualize problems, accurately represent data, select and tune algorithms, interpret and analyze results, and make informed strategic decisions. Presented in a non-rigorous mathematical style, the book covers a broad array of machine learning topics with special emphasis on methods that have been profitably employed.
Coverage includes:
• Supervised learning
• Statistical learning
• Learning with support vector machines (SVM)
• Learning with neural networks (NN)
• Fuzzy inference systems
• Data clustering
• Data transformations
• Decision tree learning
• Business intelligence
• Data mining
• And much more

M. Gopal, a former professor at IIT Delhi, is a globally known academician with excellent credentials as an author, teacher, and researcher. He is the author or co-author of five books on control engineering.

Dedication
Contents
Preface
Acknowledgements
1. Introduction
1.1 Towards Intelligent Machines
1.2 Well-Posed Machine Learning Problems
1.3 Examples of Applications in Diverse Fields
1.4 Data Representation
1.4.1 Time Series Forecasting
1.4.2 Datasets for Toy (Unrealistically Simple) and Realistic Problems
1.5 Domain Knowledge for Productive Use of Machine Learning
1.6 Diversity of Data: Structured/Unstructured
1.7 Forms of Learning
1.7.1 Supervised/Directed Learning
1.7.2 Unsupervised/Undirected Learning
1.7.3 Reinforcement Learning
1.7.4 Learning Based on Natural Processes: Evolution, Swarming, and Immune Systems
1.8 Machine Learning and Data Mining
1.9 Basic Linear Algebra in Machine Learning Techniques
1.10 Relevant Resources for Machine Learning
2. Supervised Learning: Rationale and Basics
2.1 Learning from Observations
2.2 Bias and Variance
2.3 Why Learning Works: Computational Learning Theory
2.4 Occam’s Razor Principle and Overfitting Avoidance
2.5 Heuristic Search in Inductive Learning
2.5.1 Search through Hypothesis Space
2.5.2 Ensemble Learning
2.5.3 Evaluation of a Learning System
2.6 Estimating Generalization Errors
2.6.1 Holdout Method and Random Subsampling
2.6.2 Cross-validation
2.6.3 Bootstrapping
2.7 Metrics for Assessing Regression (Numeric Prediction) Accuracy
2.7.1 Mean Square Error
2.7.2 Mean Absolute Error
2.8 Metrics for Assessing Classification (Pattern Recognition) Accuracy
2.8.1 Misclassification Error
2.8.2 Confusion Matrix
2.8.3 Comparing Classifiers Based on ROC Curves
2.9 An Overview of the Design Cycle and Issues in Machine Learning
3. Statistical Learning
3.1 Machine Learning and Inferential Statistical Analysis
3.2 Descriptive Statistics in Learning Techniques
3.2.1 Representing Uncertainties in Data: Probability Distributions
3.2.2 Descriptive Measures of Probability Distributions
3.2.3 Descriptive Measures from Data Sample
3.2.4 Normal Distributions
3.2.5 Data Similarity
3.3 Bayesian Reasoning: A Probabilistic Approach to Inference
3.3.1 Bayes Theorem
3.3.2 Naive Bayes Classifier
3.3.3 Bayesian Belief Networks
3.4 k-Nearest Neighbor (k-NN) Classifier
3.5 Discriminant Functions and Regression Functions
3.5.1 Classification and Discriminant Functions
3.5.2 Numeric Prediction and Regression Functions
3.5.3 Practical Hypothesis Functions
3.6 Linear Regression with Least Square Error Criterion
3.6.1 Minimal Sum-of-Error-Squares and the Pseudoinverse
3.6.2 Gradient Descent Optimization Schemes
3.6.3 Least Mean Square (LMS) Algorithm
3.7 Logistic Regression for Classification Tasks
3.8 Fisher’s Linear Discriminant and Thresholding for Classification
3.8.1 Fisher’s Linear Discriminant
3.8.2 Thresholding
3.9 Minimum Description Length Principle
3.9.1 Bayesian Perspective
3.9.2 Entropy and Information
4. Learning With Support Vector Machines (SVM)
4.1 Introduction
4.2 Linear Discriminant Functions for Binary Classification
4.3 Perceptron Algorithm
4.4 Linear Maximal Margin Classifier for Linearly Separable Data
4.5 Linear Soft Margin Classifier for Overlapping Classes
4.6 Kernel-Induced Feature Spaces
4.7 Nonlinear Classifier
4.8 Regression by Support Vector Machines
4.8.1 Linear Regression
4.8.2 Nonlinear Regression
4.9 Decomposing Multiclass Classification Problem Into Binary Classification Tasks
4.9.1 One-Against-All (OAA)
4.9.2 One-Against-One (OAO)
4.10 Variants of Basic SVM Techniques
5. Learning With Neural Networks (NN)
5.1 Towards Cognitive Machine
5.1.1 From Perceptrons to Deep Networks
5.2 Neuron Models
5.2.1 Biological Neuron
5.2.2 Artificial Neuron
5.2.3 Mathematical Model
5.3 Network Architectures
5.3.1 Feedforward Networks
5.3.2 Recurrent Networks
5.4 Perceptrons
5.4.1 Limitations of Perceptron Algorithm for Linear Classification Tasks
5.4.2 Linear Classification using Regression Techniques
5.4.3 Standard Gradient Descent Optimization Scheme: Steepest Descent
5.5 Linear Neuron and the Widrow-Hoff Learning Rule
5.5.1 Stochastic Gradient Descent
5.6 The Error-Correction Delta Rule
5.6.1 Sigmoid Unit: Soft-Limiting Perceptron
5.7 Multi-Layer Perceptron (MLP) Networks and the Error-Backpropagation Algorithm
5.7.1 The Generalized Delta Rule
5.7.2 Convergence and Local Minima
5.7.3 Adding Momentum to Gradient Descent
5.7.4 Heuristic Aspects of the Error-Backpropagation Algorithm
5.8 Multi-Class Discrimination with MLP Networks
5.9 Radial Basis Functions (RBF) Networks
5.9.1 Training the RBF Network
5.10 Genetic-Neural Systems
6. Fuzzy Inference Systems
6.1 Introduction
6.2 Cognitive Uncertainty and Fuzzy Rule-Base
6.3 Fuzzy Quantification of Knowledge
6.3.1 Fuzzy Logic
6.3.2 Fuzzy Sets
6.3.3 Fuzzy Set Operations
6.3.4 Fuzzy Relations
6.4 Fuzzy Rule-Base and Approximate Reasoning
6.4.1 Quantification of Rules via Fuzzy Relations
6.4.2 Fuzzification of Input
6.4.3 Inference Mechanism
6.4.4 Defuzzification of Inferred Fuzzy Set
6.5 Mamdani Model for Fuzzy Inference Systems
6.5.1 Mobile Robot Navigation Among Moving Obstacles
6.5.2 Mortgage Loan Assessment
6.6 Takagi-Sugeno Fuzzy Model
6.7 Neuro-Fuzzy Inference Systems
6.7.1 ANFIS Architecture
6.7.2 How Does an ANFIS Learn?
6.8 Genetic-Fuzzy Systems
7. Data Clustering and Data Transformations
7.1 Unsupervised Learning
7.1.1 Clustering
7.2 Engineering the Data
7.2.1 Exploratory Data Analysis: Learning about What is in the Data
7.2.2 Cluster Analysis: Finding Similarities in the Data
7.2.3 Data Transformations: Enhancing the Information Content of the Data
7.3 Overview of Basic Clustering Methods
7.3.1 Partitional Clustering
7.3.2 Hierarchical Clustering
7.3.3 Spectral Clustering
7.3.4 Clustering using Self-Organizing Maps
7.4 K-Means Clustering
7.5 Fuzzy K-Means Clustering
7.6 Expectation-Maximization (EM) Algorithm and Gaussian Mixtures Clustering
7.6.1 EM Algorithm
7.6.2 Gaussian Mixture Models
7.7 Some Useful Data Transformations
7.7.1 Data Cleansing
7.7.2 Derived Attributes
7.7.3 Discretizing Numeric Attributes
7.7.4 Attribute Reduction Techniques
7.8 Entropy-Based Method for Attribute Discretization
7.9 Principal Components Analysis (PCA) for Attribute Reduction
7.10 Rough Sets-Based Methods for Attribute Reduction
7.10.1 Rough Set Preliminaries
7.10.2 Analysis of Relevance of Attributes
7.10.3 Reduction of Attributes
8. Decision Tree Learning
8.1 Introduction
8.2 Example of a Classification Decision Tree
8.3 Measures of Impurity for Evaluating Splits in Decision Trees
8.3.1 Information Gain/Entropy Reduction
8.3.2 Gain Ratio
8.3.3 Gini Index
8.4 ID3, C4.5, and CART Decision Trees
8.5 Pruning the Tree
8.6 Strengths and Weaknesses of Decision-Tree Approach
8.7 Fuzzy Decision Trees
9. Business Intelligence and Data Mining: Techniques and Applications
9.1 An Introduction to Analytics
9.1.1 Machine Learning, Data Mining, and Predictive Analytics
9.1.2 Basic Analytics Techniques
9.2 The CRISP-DM (Cross Industry Standard Process for Data Mining) Model
9.3 Data Warehousing and Online Analytical Processing
9.3.1 Basic Concepts
9.3.2 Databases
9.3.3 Data Warehousing: A General Architecture, and OLAP Operations
9.3.4 Data Mining in the Data Warehouse Environment
9.4 Mining Frequent Patterns and Association Rules
9.4.1 Basic Concepts
9.4.2 Measures of Strength of Frequent Patterns and Association Rules
9.4.3 Frequent Item Set Mining Methods
9.4.4 Generating Association Rules from Frequent Itemsets
9.5 Intelligent Information Retrieval Systems
9.5.1 Text Retrieval
9.5.2 Image Retrieval
9.5.3 Audio Retrieval
9.6 Applications and Trends
9.6.1 Data Mining Applications
9.6.2 Data Mining Trends
9.7 Technologies for Big Data
9.7.1 Emerging Analytic Methods
9.7.2 Emerging Technologies for Higher Levels of Scalability
Appendix A Genetic Algorithm (GA) For Search Optimization
A.1 A Simple Overview of Genetics
A.2 Genetics on Computers
A.3 The Basic Genetic Algorithm
A.4 Beyond the Basic Genetic Algorithm
Appendix B Reinforcement Learning (RL)
B.1 Introduction
B.2 Elements of Reinforcement Learning
B.3 Basics of Dynamic Programming
B.3.1 Finding Optimal Policies
B.3.2 Value Iteration
B.3.3 Policy Iteration
B.4 Temporal Difference Learning
B.4.1 Q-learning
B.4.2 Generalization
B.4.3 Sarsa-learning
Datasets from Real-Life Applications for Machine Learning Experiments
Problems
References
Index

Publication date
Additional info: Illustrations, unspecified
Place of publication: OH
Language: English
Dimensions: 203 x 262 mm
Weight: 1492 g
Subject area: Computer Science > Theory / Studies > Artificial Intelligence / Robotics
Engineering > Electrical Engineering / Energy Technology
ISBN-10 1-260-45684-6 / 1260456846
ISBN-13 978-1-260-45684-4 / 9781260456844
Condition: New