Reinforcement Learning Aided Performance Optimization of Feedback Control Systems
Springer Fachmedien Wiesbaden GmbH (publisher)
978-3-658-33033-0 (ISBN)
Changsheng Hua proposes two approaches for the robustness and performance optimization of feedback control systems: an input/output recovery approach and a performance index-based approach. For their data-driven implementation in deterministic and stochastic systems, the author develops Q-learning and natural actor-critic (NAC) methods, respectively. Their effectiveness is demonstrated in an experimental study on a brushless direct current motor test rig.
The author:
Changsheng Hua received the Ph.D. degree at the Institute of Automatic Control and Complex Systems (AKS), University of Duisburg-Essen, Germany, in 2020. His research interests include model-based and data-driven fault diagnosis and fault-tolerant techniques.
Introduction.- The basics of feedback control systems.- Reinforcement learning and feedback control.- Q-learning aided performance optimization of deterministic systems.- NAC aided performance optimization of stochastic systems.- Conclusion and future work.
Publication date | 26.03.2021 |
---|---|
Additional information | XIX, 127 p., 53 illus. |
Place of publication | Wiesbaden |
Language | English |
Dimensions | 148 x 210 mm |
Weight | 204 g |
Subject area | Computer Science ► Theory / Studies ► Artificial Intelligence / Robotics |
Computer Science ► Further Topics ► Hardware | |
Keywords | Actor-Critic • Fault-tolerant Control • Feedback Control Systems • Machine Learning • Q-Learning • Robustness Optimization |
ISBN-10 | 3-658-33033-3 / 3658330333 |
ISBN-13 | 978-3-658-33033-0 / 9783658330330 |
Condition | New |