Integral and Inverse Reinforcement Learning for Optimal Control Systems and Games (eBook)
XX, 267 pages
Springer Nature Switzerland (publisher)
978-3-031-45252-9 (ISBN)
Integral and Inverse Reinforcement Learning for Optimal Control Systems and Games develops its learning techniques, motivated by applications to autonomous driving and microgrid systems, with breadth and depth. Integral reinforcement learning (RL) achieves model-free control without system estimation, avoiding the inevitable estimation errors of system-identification methods. Novel inverse RL methods fill a gap in the literature and will appeal to readers seeking data-driven, model-free solutions for inverse optimization and optimal control, imitation learning, and autonomous driving, among other areas.
Graduate students will find that this book offers a thorough introduction to integral and inverse RL for feedback control related to optimal regulation and tracking, disturbance rejection, and multiplayer and multiagent systems. For researchers, it provides a combination of theoretical analysis, rigorous algorithms, and a wide-ranging selection of examples. The book equips practitioners working in domains such as aircraft, robotics, power systems, and communication networks with theoretical insights valuable in tackling the real-world challenges they face.
Bosen Lian obtained his B.S. degree from the North China University of Water Resources and Electric Power, Zhengzhou, China, in 2015, the M.S. degree from Northeastern University, Shenyang, China, in 2018, and the Ph.D. from the University of Texas at Arlington, TX, USA, in 2021. He is currently an Assistant Professor at the Electrical and Computer Engineering Department, Auburn University, Auburn, AL, USA. Prior to that, he was an Adjunct Professor at the Electrical Engineering Department, University of Texas at Arlington and a Postdoctoral Research Associate at the University of Texas at Arlington Research Institute. His research interests focus on reinforcement learning, inverse reinforcement learning, distributed estimation, distributed control, and robotics.
Wenqian Xue received the B.Eng. degree from Qingdao University, Qingdao, China, in 2015 and the M.S. degree from Northeastern University, Shenyang, China, in 2018, where she is currently pursuing the Ph.D. degree. She was a Research Assistant (Visiting Scholar) with the University of Texas at Arlington from 2019 to 2021. Her current research interests include learning-based data-driven control, reinforcement learning and inverse reinforcement learning, game theory, and distributed control of multi-agent systems. She is a reviewer for Automatica, IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Cybernetics, and other journals.
Frank L. Lewis obtained the Bachelor's degree in Physics/EE and the MSEE at Rice University, the M.S. in Aeronautical Engineering from the University of West Florida, and the Ph.D. at Georgia Tech. He is a Fellow of the National Academy of Inventors, IEEE, IFAC, AAAS, the European Union Academy of Sciences, and the U.K. Institute of Measurement & Control, a Professional Engineer in Texas, and a U.K. Chartered Engineer. He is a UTA Charter Distinguished Scholar Professor, UTA Distinguished Teaching Professor, and holder of the Moncrief-O'Donnell Chair at the University of Texas at Arlington Research Institute. Lewis is ranked 19th worldwide among all scientists in Electronics and Electrical Engineering by Research.com, and 5th worldwide in the subfield of Industrial Engineering and Automation according to a 2021 Stanford University research study, with over 80,000 Google Scholar citations and an h-index of 123. He works in feedback control, intelligent systems, reinforcement learning, cooperative control systems, and nonlinear systems. He is the author of 8 U.S. patents, numerous journal special issues, 445 journal papers, and 20 books, including the textbooks Optimal Control, Aircraft Control, Optimal Estimation, and Robot Manipulator Control. He has received the Fulbright Research Award, the NSF Research Initiation Grant, the ASEE Terman Award, the International Neural Network Society Gabor Award, the U.K. Institute of Measurement & Control Honeywell Field Engineering Medal, the IEEE Computational Intelligence Society Neural Networks Pioneer Award, the AIAA Intelligent Systems Award, and the AACC Ragazzini Award. He has received over $12M in 100 research grants from NSF, ARO, ONR, AFOSR, DARPA, and U.S. industry contracts, and helped win the U.S. SBA Tibbetts Award in 1996 as Director of the UTA Research Institute SBIR Program.
Hamidreza Modares received the B.S. degree from the University of Tehran, Tehran, Iran, in 2004, the M.S. degree from the Shahrood University of Technology, Shahrood, Iran, in 2006, and the Ph.D. degree from the University of Texas at Arlington, Arlington, TX, USA, in 2015. He is currently an Assistant Professor in the Department of Mechanical Engineering at Michigan State University. Prior to joining Michigan State University, he was an Assistant Professor in the Department of Electrical Engineering, Missouri University of Science and Technology. His current research interests include control and security of cyber-physical systems, machine learning in control, distributed control of multi-agent systems, and robotics. He is an Associate Editor of IEEE Transactions on Neural Networks and Learning Systems.
Bahare Kiumarsi received the B.S. degree in electrical engineering from the Shahrood University of Technology, Iran, in 2009, the M.S. degree in electrical engineering from the Ferdowsi University of Mashhad, Iran, in 2013, and the Ph.D. degree in electrical engineering from the University of Texas at Arlington, Arlington, TX, USA, in 2017. In 2018, she was a Post-Doctoral Research Associate with the Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL, USA. She is currently an Assistant Professor with the Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI, USA. Her current research interests include machine learning in control, security of cyber-physical systems, game theory, and distributed control.
Publication date (per publisher) | 5.3.2024 |
---|---|
Series | Advances in Industrial Control |
Additional info | XX, 267 p. 43 illus., 41 illus. in color. |
Language | English |
Subject area | Computer Science ► Theory / Study ► Artificial Intelligence / Robotics |
Mathematics / Computer Science ► Mathematics | |
Engineering ► Civil Engineering | |
Engineering ► Electrical Engineering / Power Engineering | |
Engineering ► Automotive / Shipbuilding | |
Keywords | Adaptive Dynamic Programming • Differential Games • H-infinity Control • Integral Reinforcement Learning • Inverse Reinforcement Learning for Optimal Feedback Control • Optimal Regulation • Optimal Tracking • Reinforcement Learning for Optimal Feedback Control |
ISBN-10 | 3-031-45252-6 / 3031452526 |
ISBN-13 | 978-3-031-45252-9 / 9783031452529 |
Size: 9.2 MB
DRM: Digital watermark
This eBook contains a digital watermark and is therefore personalized to you. If the eBook is improperly passed on to third parties, it can be traced back to its source.
File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly suitable for technical books with columns, tables, and figures. A PDF can be displayed on almost all devices, but it is only suitable to a limited extent for small displays (smartphone, eReader).
System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You need a PDF viewer, e.g. Adobe Reader or Adobe Digital Editions.
eReader: This eBook can be read with (almost) all eBook readers. However, it is not compatible with the Amazon Kindle.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You need a PDF viewer, e.g. the free Adobe Digital Editions app.
Buying eBooks from abroad
For tax law reasons, we can sell eBooks only within Germany and Switzerland. Regrettably, we cannot fulfill eBook orders from other countries.