Contact: Stefan Ehrlich

Neuro-ergonomic human-robot interaction (HRI): passive brain-computer interfaces based on error-related potentials

Abstract: Successful collaboration requires interaction partners to constantly monitor and predict each other’s behavior in order to establish shared representations of goals and intentions (Tomasello & Carpenter, 2007). In human-robot interaction (HRI), this is particularly challenged by ongoing technical limitations in robot perception and reasoning about the human partner’s behavior and underlying intentions (Hayes & Scassellati, 2013). The human partner, on the other hand, is not only capable of judging the robot’s behavior but also of assessing the overall interaction performance. Studies have demonstrated that erroneous or unexpected robot actions engage error- and performance-monitoring processes in the human partner’s brain, which manifest as error-related potentials (ErrPs) that are observable and reliably decodable using non-invasive electroencephalography (EEG) (Ehrlich & Cheng, 2016). ErrPs decoded in real time constitute a valuable source of information about the human partner’s expectations and subjective preferences (Botvinick et al., 2004) and could therefore serve as a useful complement to existing methods for validating and improving HRI, or human-machine interaction in general (Kim et al., 2017). The central aim of this research is to develop methods for harvesting and utilizing these ErrPs for the validation or adaptation of robot behavior in HRI. A particular focus lies on investigating the usability of ErrPs for mediating co-adaptation in socially interactive and collaborative HRI, where robots appear as intentional agents and mutual adaptation between human and robot is required (Ehrlich & Cheng, 2018).


Tomasello, M., & Carpenter, M. (2007). Shared intentionality. Developmental science, 10(1), 121-125.

Hayes, B., & Scassellati, B. (2013). Challenges in shared-environment human-robot collaboration. learning, 8(9).

Botvinick, M. M., Cohen, J. D., & Carter, C. S. (2004). Conflict monitoring and anterior cingulate cortex: an update. Trends in cognitive sciences, 8(12), 539-546.

Kim, S. K., Kirchner, E. A., Stefes, A., & Kirchner, F. (2017). Intrinsic interactive reinforcement learning–Using error-related potentials for real world human-robot interaction. Scientific reports, 7(1), 17562.

Ehrlich, S., & Cheng, G. (2016, November). A neuro-based method for detecting context-dependent erroneous robot action. In 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids) (pp. 477-482). IEEE.

Ehrlich, S. K., & Cheng, G. (2018). Human-agent co-adaptation using error-related potentials. Journal of neural engineering, 15(6), 066014.
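To illustrate how an ErrP decoding pipeline of the kind described above is typically structured — epoching EEG around robot-action onsets, extracting coarse time-domain features, and training a linear classifier — here is a minimal NumPy sketch on synthetic data. The sampling rate, window lengths, feature binning, injected "ErrP-like" deflection, and the plain least-squares classifier are illustrative assumptions only, not the pipeline used in the cited studies.

```python
import numpy as np

rng = np.random.default_rng(0)

def epoch(eeg, onsets, fs, tmin=0.0, tmax=0.8):
    """Cut fixed-length windows from continuous EEG around event onsets.
    eeg: (n_channels, n_samples); onsets: sample indices of robot actions."""
    start, length = int(tmin * fs), int((tmax - tmin) * fs)
    return np.stack([eeg[:, o + start:o + start + length] for o in onsets])

def features(epochs):
    """Coarse ErrP features: mean amplitude in 8 successive time bins per channel."""
    n_ep, n_ch, n_t = epochs.shape
    return epochs.reshape(n_ep, n_ch, 8, n_t // 8).mean(axis=3).reshape(n_ep, -1)

def fit_linear(X, y):
    """Least-squares linear classifier (a simple stand-in for the regularized
    LDA commonly used in ErrP decoding)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, 2 * y - 1, rcond=None)  # labels mapped to -1/+1
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0).astype(int)

# Synthetic demo: 40 robot actions at 1-second intervals, half "erroneous";
# erroneous actions get a crude ErrP-like deflection ~240-440 ms after onset.
fs, n_trials = 250, 40
eeg = 0.5 * rng.standard_normal((2, fs * 60))
onsets = np.arange(n_trials) * fs + fs
labels = np.tile([0, 1], n_trials // 2)
for o, lab in zip(onsets, labels):
    if lab:
        eeg[:, o + 60:o + 110] += 2.0
X = features(epoch(eeg, onsets, fs))
w = fit_linear(X[:30], labels[:30])           # train on first 30 trials
acc = (predict(w, X[30:]) == labels[30:]).mean()  # evaluate on the rest
```

In a real-time setting, the same `epoch`/`features`/`predict` chain would run on each incoming action-locked window, and the binary output would be fed back to the robot as an implicit error signal.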

Brain Sound Computer Interface

Decoding sound sequences from the human brain using machine learning techniques

Contact: Alireza Malekmohammadi

Description: When we hear a melody, a piece of music, or any other sequence of sounds repeatedly, we can memorize it so that hearing its first part allows us to anticipate the rest. This means the music or melody is encoded and stored in the brain in such a way that we can imagine and retrieve these sequences of sounds. Although considerable progress has been made during the last decade in understanding how the brain processes complex sounds, the coding and storage of sequential tones are still poorly understood (Rauschecker, 2005). One of the unsolved challenges in understanding the human brain, particularly in auditory neuroscience, is how the brain encodes and stores sequences of sounds (Rauschecker, 2011), and how these stored sounds could be decoded from brain activity. Sequences of sound may be familiar or unfamiliar to the auditory system. In other words, we plan to address the question of how neural populations interact during both familiar and unfamiliar continuous auditory stimulation.
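One common way to approach such a decoding question is to extract spectral band-power features from EEG recorded during listening and train a simple classifier to separate responses to familiar versus unfamiliar sequences. The sketch below is a toy illustration of that idea on synthetic data; the theta-band feature, the nearest-centroid classifier, and the simulated "familiarity" effect are assumptions for demonstration, not findings or methods of this project.

```python
import numpy as np

rng = np.random.default_rng(1)

def band_power(epochs, fs, lo, hi):
    """Average spectral power in [lo, hi] Hz per channel, via the FFT."""
    spec = np.abs(np.fft.rfft(epochs, axis=-1)) ** 2
    freqs = np.fft.rfftfreq(epochs.shape[-1], 1 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return spec[..., band].mean(axis=-1)

class NearestCentroid:
    """Minimal nearest-centroid classifier: assign each sample to the class
    whose training-feature mean is closest in Euclidean distance."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None], axis=-1)
        return self.classes_[d.argmin(axis=1)]

# Synthetic demo: 60 two-second, 3-channel epochs; "familiar" trials (label 1)
# are simulated with a stronger 6 Hz rhythm than "unfamiliar" trials (label 0).
fs, n = 128, 60
t = np.arange(2 * fs) / fs
epochs = 0.5 * rng.standard_normal((n, 3, 2 * fs))
labels = np.tile([0, 1], n // 2)
epochs[labels == 1] += np.sin(2 * np.pi * 6 * t)
X = band_power(epochs, fs, 4, 7)              # theta-band power features
clf = NearestCentroid().fit(X[:40], labels[:40])
acc = (clf.predict(X[40:]) == labels[40:]).mean()
```

The same scaffolding generalizes from a binary familiar/unfamiliar contrast to decoding which of several known sequences a listener is hearing or imagining.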


Contact: Zied Tayeb

A neuroprosthesis is a device that interfaces directly with the nervous system and supplements or substitutes functionality in the patient’s body. With the growing number of amputees who could benefit from such devices, neuroprosthetic research has gained momentum over the last decades. However, current neuroprostheses still exhibit various drawbacks, such as low controllability, high power consumption, and lack of sensory feedback. This research project investigates the design of a closed-loop control system for upper-limb hand prostheses using hybrid brain-computer interfaces. First, motor imagery movements are decoded in real time from electroencephalography (EEG) signals, and different hand poses from electromyography (EMG) signals. Second, a prosthetic hand is controlled to perform complex reach-to-grasp movements using the decoded EEG and EMG information. Thereafter, sensory information measured by the prosthetic hand is encoded and translated into vibro-tactile and/or electro-tactile stimulation to provide a more natural sensation spanning a range of tactile stimuli. Finally, EEG activity in somatosensory regions is used to confirm phantom hand activation during stimulation and to differentiate between the stimulation sites. Ultimately, this research project aims to accelerate the next generation of portable, closed-loop, and intelligent prosthetic hand devices.
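The first two steps — decoding movement intent from EEG and grasp type from EMG, then fusing both into a prosthesis command — can be sketched as a simple decision-level fusion: the EEG gates *whether* to move, the EMG selects *which* grasp. The toy decoders, thresholds, grasp labels, and synthetic signals below are all illustrative assumptions, not the project’s actual decoders.

```python
import numpy as np

# Hypothetical grasp set; the real classes depend on the experimental protocol.
GRASPS = ["rest", "power_grasp", "pinch"]

def decode_emg(emg_window):
    """Toy EMG decoder: pick a grasp from per-channel RMS activity.
    (A real decoder would be a trained classifier on RMS/waveform features.)"""
    rms = np.sqrt((emg_window ** 2).mean(axis=1))
    if rms.max() < 0.2:            # assumed rest threshold
        return "rest"
    return "power_grasp" if rms.argmax() == 0 else "pinch"

def decode_eeg(eeg_window, fs):
    """Toy motor-imagery gate: report movement intent when mu-band (8-12 Hz)
    power over sensorimotor channels drops below an assumed calibration
    baseline (event-related desynchronization)."""
    spec = np.abs(np.fft.rfft(eeg_window, axis=1)) ** 2
    freqs = np.fft.rfftfreq(eeg_window.shape[1], 1 / fs)
    mu = spec[:, (freqs >= 8) & (freqs <= 12)].mean()
    return mu < 50.0               # assumed baseline threshold

def hybrid_command(eeg_window, emg_window, fs):
    """Decision-level fusion: EEG gates whether to move, EMG picks the grasp."""
    if not decode_eeg(eeg_window, fs):
        return "rest"
    return decode_emg(emg_window)

# Synthetic 1-second windows at 250 Hz: at rest the EEG carries a strong
# 10 Hz mu rhythm; during motor imagery the mu rhythm is suppressed.
fs = 250
t = np.arange(fs) / fs
rng = np.random.default_rng(2)
rest_eeg = 2 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal((2, fs))
intent_eeg = 0.1 * rng.standard_normal((2, fs))
strong_emg = np.vstack([np.sin(2 * np.pi * 60 * t),
                        0.05 * rng.standard_normal(fs)])
cmd_rest = hybrid_command(rest_eeg, strong_emg, fs)    # gate closed -> rest
cmd_move = hybrid_command(intent_eeg, strong_emg, fs)  # gate open -> EMG grasp
```

In the closed-loop system, the selected command would drive the prosthetic hand while its force/touch sensors feed the tactile-stimulation path described above.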