brainhackorg / global2022

GitHub repo for the Brainhack 2022 website and event
https://brainhack.org/global2022/

Safer Multimodal Teleoperation of (Simulated) Robots #163


Remi-Gau commented 1 year ago

Added as an issue for bookkeeping

Source: https://github.com/BrainhackWestern/BrainhackWestern.github.io/wiki/Projects

Team Leaders:

Pranshu Malik (@pranshumalik14)

Being confused, freezing, or panicking while trying hard to stop, redirect, or stabilize a drone (or any such robot/toy) in sudden, counter-intuitive poses or environmental conditions is likely a relatable experience for all of us. The idea here is to enhance the expression of our intent while controlling a robot remotely, either in real life or on a computer screen (simulation), not by replacing the primary control instrument (modality) but by also integrating our brain states (thought) into the control loop, as measured, for example, through EEG. Specifically, within the scope of the hackathon, this could mean developing a brain-machine interface that automatically assists the operator in emergency cases with “smart” control command reflexes or “takeovers”. Such an approach can be beneficial in high-risk settings such as remote handling of materials in nuclear facilities, and it can also aid the supervision of autonomous control, say in the context of self-driving cars, to ultimately increase safety.
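
To make the “reflex/takeover” idea concrete, here is a minimal sketch of one control tick of such a shared-control loop. All names here (`Command`, `blend`, `control_step`, `SURPRISE_THRESHOLD`) are hypothetical placeholders invented for illustration, not part of any existing library or of the project itself; the sketch simply assumes that some upstream EEG decoder produces a “surprise” score in [0, 1]:

```python
# A minimal sketch of the shared-control "reflex" idea described above.
# All names and the surprise score in [0, 1] are assumptions for illustration.

from dataclasses import dataclass


@dataclass
class Command:
    """A generic velocity command sent to the (simulated) robot."""
    vx: float = 0.0
    vy: float = 0.0
    vz: float = 0.0


SURPRISE_THRESHOLD = 0.8  # assumed takeover threshold; would be tuned per operator


def blend(user_cmd: Command, reflex_cmd: Command, surprise: float) -> Command:
    """Linearly blend the operator's command with an automatic stabilizing
    reflex; the more 'surprised' the operator appears (per EEG), the more
    authority the reflex is given."""
    w = min(max(surprise, 0.0), 1.0)
    return Command(
        vx=(1 - w) * user_cmd.vx + w * reflex_cmd.vx,
        vy=(1 - w) * user_cmd.vy + w * reflex_cmd.vy,
        vz=(1 - w) * user_cmd.vz + w * reflex_cmd.vz,
    )


def control_step(user_cmd: Command, reflex_cmd: Command, surprise: float) -> Command:
    """One tick of the loop: pass the operator's command through (with graded
    assistance) unless the EEG-derived surprise crosses the takeover threshold."""
    if surprise >= SURPRISE_THRESHOLD:
        return reflex_cmd  # full takeover in an emergency
    return blend(user_cmd, reflex_cmd, surprise)  # graded assistance otherwise
```

A graded blend rather than a hard switch is one plausible design choice here: it keeps the operator in the loop during mild disturbances while still allowing a full takeover in clear emergencies.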

For now, we could pick a particular type of simulated robot (industrial arm, RC car, drone, etc.) and focus on designing and implementing a paradigm for characterizing intended motion and surprise during undesired motion, in both fully autonomous cases (no user control, only the robot's self- and environmental influences) and semi-autonomous cases (including the user's control commands). That is, we can aim to measure intent and surprise given the user's control commands, the brain states, and the robot states during algorithmically curated episodes of robot motion. This will help us detect such situations and also infer the desired reactions, so that we can adjust control commands accordingly during emergencies and, more generally, augment real-time active control to match the desired motion. We can strive to keep the approach general enough to transfer to robots of other types and/or morphologies and to more unusual environments.
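
As a starting point for that paradigm, the data-collection side could look something like the sketch below: log time-aligned (EEG window, user command, robot state) samples, and label each trial by whether the robot's motion was algorithmically perturbed, which gives a supervised target for a “surprise” classifier. The structures and field names (`Sample`, `Trial`, `build_dataset`) are assumptions made up for this sketch, not an agreed design:

```python
# A minimal sketch of the data-collection side of the paradigm.
# Structures and field names are assumptions for illustration only.

from dataclasses import dataclass, field
from typing import List

import numpy as np


@dataclass
class Sample:
    t: float                 # timestamp (s)
    eeg: np.ndarray          # (n_channels, n_samples) EEG window around t
    user_cmd: np.ndarray     # operator command at time t (zeros if autonomous)
    robot_state: np.ndarray  # e.g., pose and velocities


@dataclass
class Trial:
    perturbed: bool          # True if motion was algorithmically disturbed
    samples: List[Sample] = field(default_factory=list)


def build_dataset(trials: List[Trial]):
    """Flatten trials into (X, y) pairs for a surprise classifier:
    X = per-sample features, y = 1 for perturbed (surprising) motion."""
    X, y = [], []
    for trial in trials:
        for s in trial.samples:
            features = np.concatenate([
                s.eeg.ravel(),   # raw EEG window as placeholder features
                s.user_cmd,
                s.robot_state,
            ])
            X.append(features)
            y.append(1 if trial.perturbed else 0)
    return np.stack(X), np.array(y)
```

Raw EEG windows are used here only as placeholder features; in practice one would likely epoch around perturbation onsets and extract band-power or error-related-potential features before classification.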