A Neuro-Dynamic Architecture for Autonomous Visual Scene Representation

Stephan Zibner (author)

Book | Softcover
201 pages
2017
Dr. Hut (publisher)
978-3-8439-3009-3 (ISBN)
84.00 incl. VAT
Humans have a unique ability to interact with objects in their vicinity. The foundation of these interactions is the visual perception of scenes, from which internal representations are created. Behaviors such as reaching and grasping, as well as the generation and understanding of utterances, build on these representations. Processing visual scenes is a major challenge for robotics research, especially when scenes are novel or dynamic.

In this thesis, I present a neuro-dynamic scene representation architecture. It creates working memory representations of scenes, updates memory content on change, and can re-instantiate accumulated knowledge about the scene to search efficiently for target objects. At the core of the architecture, three-dimensional dynamic fields associate the spatial positions of objects with visual features such as color or size. The main focus of my work is the organization of the involved behaviors and the resulting autonomy of processes. I evaluate the behaviors and processes generated by this architecture on robotic platforms and compare the results with behavioral signatures of human scene representation.

I extend the principles of scene representation to two applications: object recognition and movement generation. The integration with object recognition allows target objects to be located by means of abstract labels, whose generation is computationally demanding and thus cannot be applied to a visual scene in parallel. Instead, these labels are memorized in a sequential process provided by the scene representation architecture. Movement generation benefits from the continuous link to visual input and the autonomous organization of behaviors: changes in target position are continuously integrated into the current movement. This property of on-line updating is also found in human arm movements. I conclude with a perspective on advanced integrative work in the context of robotic grasping.
Series: Elektrotechnik
Place of publication: München
Language: English
Dimensions: 148 x 210 mm
Weight: 338 g
Binding: Paperback
Subject area: Engineering → Electrical Engineering / Power Engineering
Keywords: Autonomous Visual Scene Representation • Behavior • Neuro-Dynamic Architectures
ISBN-10: 3-8439-3009-0 / 3843930090
ISBN-13: 978-3-8439-3009-3 / 9783843930093
Condition: New