Industrial Reinforcement Learning with Stabilizing Gradients
Pages
2020
Shaker (publisher)
978-3-8440-7597-7 (ISBN)
Existing automation solutions are often designed for large-scale production. Due to increasing customer demand for ever more individualized products at competitive prices, new automation solutions are required that no longer follow the paradigm of mass production. These systems must be commissioned in minimal time while retaining the ability to react to changing production conditions, caused for example by new products. Reinforcement learning is one approach to this problem.
Reinforcement learning is the machine learning approach for learning control strategies from interaction with the environment. In industrial automation, reinforcement learning has the potential to increase process efficiency and to adapt processes to changing situations without human intervention. However, in industrial applications, the reinforcement learning algorithm has to deal with uncertain processes, limited training data and high performance requirements. Current algorithms typically handle only a subset of these requirements. Therefore, this thesis proposes a novel approach combining methods from stabilizing gradients and variational inference with guided policy search. The so-called "industrial reinforcement learning with stabilizing gradients" is evaluated within the well-known FetchReach-v1 benchmark scenario and is exemplified on a vacuum bulk conveyor as a real-world case study. In the FetchReach-v1 benchmark scenario, the proposed algorithm achieved a 50 % accuracy improvement in untrained situations. In the real-world case study, the algorithm outperformed prior approaches in terms of robustness to new products and data efficiency. The results show that reinforcement learning is now applicable to industrial automation systems with added value.
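To illustrate the core idea the abstract describes — an agent learning a control strategy purely from interaction with its environment, with a baseline term to stabilize the gradient updates — here is a minimal, hypothetical REINFORCE-style sketch on a two-armed bandit. This is not the thesis algorithm (which combines stabilizing gradients and variational inference with guided policy search); the environment, parameters, and reward values below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy environment: two actions with unknown mean rewards.
TRUE_REWARDS = np.array([0.2, 0.8])  # action 1 is better on average

def pull(arm):
    """Environment step: return a noisy reward for the chosen action."""
    return TRUE_REWARDS[arm] + 0.1 * rng.standard_normal()

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

prefs = np.zeros(2)   # policy parameters (action preferences)
lr = 0.1              # learning rate
baseline = 0.0        # running reward baseline to reduce gradient variance

for step in range(2000):
    probs = softmax(prefs)
    arm = rng.choice(2, p=probs)      # act according to the current policy
    r = pull(arm)                     # interact with the environment
    baseline += 0.01 * (r - baseline) # track the average reward
    # Score-function (REINFORCE) gradient of log softmax, weighted by
    # the advantage (r - baseline): the baseline stabilizes the update.
    grad = -probs
    grad[arm] += 1.0
    prefs += lr * (r - baseline) * grad

# After training, the policy should strongly prefer the better action.
print("preferred action:", int(np.argmax(softmax(prefs))))
```

The baseline subtraction is the simplest instance of the broader theme of stabilizing gradient estimates; without it, the noisy raw rewards make the policy updates much more erratic.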
| Publication date | 28.09.2020 |
|---|---|
| Series | Berichte aus der Steuerungs- und Regelungstechnik |
| Place of publication | Düren |
| Language | English |
| Dimensions | 148 x 210 mm |
| Weight | 288 g |
| Subject area | Technology ► Electrical engineering / Energy technology |
| Keywords | Automation • Deep learning • Artificial intelligence • Machine learning • Reinforcement learning |
| ISBN-10 | 3-8440-7597-6 / 3844075976 |
| ISBN-13 | 978-3-8440-7597-7 / 9783844075977 |
| Condition | New |
More to discover from this area

DIN-Normen und Technische Regeln für die Elektroinstallation
Book | Softcover (2023)
Beuth (publisher)
86.00 €

Kolbenmaschinen - Strömungsmaschinen - Kraftwerke
Book | Hardcover (2023)
Hanser (publisher)
49.99 €