Can our robots rely on an emotionally charged vision-for-action?
An embodied model for neurorobotics
The aim of blended cognition is to contribute to the design of more realistic and efficient robots by examining how humans combine several kinds of affective, cognitive, sensorimotor, and perceptual representations. This chapter is about vision-for-action. In humans and non-human primates (as in most mammals), motor behavior in general, and visuomotor representations for grasping in particular, are influenced by emotions and by the affective perception of salient properties of the environment. This aspect of motor interaction has not been examined in depth in the biologically plausible robot models of grasping that are currently available. The aim of this chapter is to propose a model that can help make neurorobotic solutions more embodied, by integrating empirical evidence from affective neuroscience with neural evidence from vision and motor neuroscience. This integration is an attempt to make a neurorobotic model of vision and grasping more compatible with the embodied view of cognition and perception adopted in neuroscience, which seems to be the only view capable of accounting for the biological complexity of cognitive systems and, accordingly, of duly explaining their high flexibility and adaptability with respect to the environment they inhabit.
Ferretti, G., & Chinellato, E. (2019). Can our robots rely on an emotionally charged vision-for-action? An embodied model for neurorobotics. In J. Vallverdú & V. C. Müller (Eds.), Blended Cognition (pp. 99-126). Dordrecht: Springer.