Statistical Reinforcement Learning
Chapman & Hall/CRC (Publisher)
978-1-4398-5689-5 (ISBN)
Supplying an up-to-date and accessible introduction to the field, Statistical Reinforcement Learning: Modern Machine Learning Approaches presents fundamental concepts and practical algorithms of statistical reinforcement learning from the modern machine learning viewpoint. It covers various types of RL approaches, including model-based and model-free approaches, policy iteration, and policy search methods.
Covers the range of reinforcement learning algorithms from a modern perspective
Lays out the associated optimization problems for each reinforcement learning scenario covered
Provides thought-provoking statistical treatment of reinforcement learning algorithms
The book covers approaches recently introduced in the data mining and machine learning fields to provide a systematic bridge between RL and data mining/machine learning researchers. It presents state-of-the-art results, including dimensionality reduction in RL and risk-sensitive RL. Numerous illustrative examples are included to help readers understand the intuition and usefulness of reinforcement learning techniques.
This book is an ideal resource for graduate-level students in computer science and applied statistics programs, as well as researchers and engineers in related fields.
Masashi Sugiyama received his bachelor's, master's, and doctoral degrees in engineering, all in computer science, from the Tokyo Institute of Technology, Japan. He was appointed assistant professor at the Tokyo Institute of Technology in 2001 and promoted to associate professor in 2003; he moved to the University of Tokyo as professor in 2014. He received an Alexander von Humboldt Foundation Research Fellowship and conducted research at the Fraunhofer Institute in Berlin, Germany, from 2003 to 2004. In 2006, he received a European Commission Erasmus Mundus Scholarship and conducted research at the University of Edinburgh, Scotland. He received the IBM Faculty Award in 2007 for his contribution to machine learning under non-stationarity, the Nagao Special Researcher Award from the Information Processing Society of Japan in 2011, and the Young Scientists' Prize of the Commendation for Science and Technology by the Minister of Education, Culture, Sports, Science and Technology for his contribution to the density-ratio paradigm of machine learning. His research interests include theories and algorithms of machine learning and data mining, as well as a wide range of applications such as signal processing, image processing, and robot control. He is the author of Density Ratio Estimation in Machine Learning (Cambridge University Press, 2012) and Machine Learning in Non-Stationary Environments: Introduction to Covariate Shift Adaptation (MIT Press, 2012).
Introduction to Reinforcement Learning. Model-Free Policy Iteration. Policy Iteration with Value Function Approximation. Basis Design for Value Function Approximation. Sample Reuse in Policy Iteration. Active Learning in Policy Iteration. Robust Policy Iteration. Model-Free Policy Search. Direct Policy Search by Gradient Ascent. Direct Policy Search by Expectation-Maximization. Policy-Prior Search. Model-Based Reinforcement Learning. Transition Model Estimation. Dimensionality Reduction for Transition Model Estimation.
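The chapter list above opens with policy iteration, one of the book's central topics. As a minimal illustrative sketch (not taken from the book), the following runs tabular policy iteration on a tiny hypothetical MDP; the three states, two actions, transition probabilities, and rewards are all invented for the example.

```python
import numpy as np

# Hypothetical 3-state, 2-action MDP (numbers invented for illustration).
# P[a][s, t] = probability of moving from state s to t under action a;
# R[s, a] = expected immediate reward.
n_states, gamma = 3, 0.9
P = np.array([
    [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],  # action 0
    [[0.0, 0.9, 0.1], [0.0, 0.1, 0.9], [0.1, 0.0, 0.9]],  # action 1
])
R = np.array([[0.0, 0.5], [0.0, 0.5], [1.0, 0.0]])

policy = np.zeros(n_states, dtype=int)  # start from an arbitrary policy
while True:
    # Policy evaluation: solve the linear system (I - gamma * P_pi) V = R_pi.
    P_pi = P[policy, np.arange(n_states)]
    R_pi = R[np.arange(n_states), policy]
    V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
    # Policy improvement: act greedily with respect to Q(s, a).
    Q = R + gamma * np.einsum("ast,t->sa", P, V)
    new_policy = Q.argmax(axis=1)
    if np.array_equal(new_policy, policy):
        break  # policy is stable, hence optimal for this MDP
    policy = new_policy
```

Policy iteration alternates exact evaluation and greedy improvement, and terminates in finitely many sweeps for a finite MDP; the book's model-free variants replace the known `P` and `R` with estimates from sampled trajectories.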
Publication date (per publisher) | 5 June 2015
Series | Chapman & Hall/CRC Machine Learning & Pattern Recognition
Additional information | 3 tables, black and white; 114 illustrations, black and white
Language | English
Dimensions | 156 x 234 mm
Weight | 340 g
Subject areas | Computer Science ► Theory / Studies ► Artificial Intelligence / Robotics; Engineering ► Electrical Engineering / Energy Technology; Economics ► Economic Theory ► Econometrics
ISBN-10 | 1-4398-5689-3 / 1439856893
ISBN-13 | 978-1-4398-5689-5 / 9781439856895
Condition | New