This issue tracks the development of human(s) support in MORSE.
Overview
Human support is an important feature of MORSE for HRI.
Currently, MORSE only supports a 'first-person' avatar, which is very complex (50 DoF, code with many dependencies on Blender internals, ...).
We would like to eventually have:
- one 'accurate' human model for 'first-person shooter'-style interaction, with simple and clean logic, and support for up to 5 instances in the same simulation
- simpler human models, not meant for direct mouse+keyboard interaction, but suitable for 'small-crowd' simulation (>20 characters)
Autonomous navigation of humans in a scene
- develop a path object that lets other objects (like humans) move along it
- investigate Recast/Detour, plus a new interface to define paths to navigate
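Such a path object could be sketched as follows. This is a minimal illustration in plain Python; the class and method names are hypothetical, not part of the MORSE API:

```python
import math

class Path:
    """A polyline of waypoints that other objects (e.g. humans) can move along."""

    def __init__(self, waypoints):
        self.waypoints = list(waypoints)  # [(x, y), ...]

class PathFollower:
    """Tracks one object's progress along a Path."""

    def __init__(self, path, reached_dist=0.2):
        self.path = path
        self.reached_dist = reached_dist
        self._next = 0  # index of the next waypoint to reach

    def target(self, position):
        """Return the waypoint to head for from `position`, skipping
        waypoints already closer than `reached_dist`; None when done."""
        while self._next < len(self.path.waypoints):
            wp = self.path.waypoints[self._next]
            if math.dist(position, wp) > self.reached_dist:
                return wp
            self._next += 1
        return None

follower = PathFollower(Path([(0, 0), (1, 0), (1, 1)]))
print(follower.target((0, 0)))  # (0, 0) is already reached -> (1, 0)
```

Keeping the progress index in a separate follower object lets several humans share one path while each tracks its own position along it.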
Other comments:
- a library of animations/behaviours (sitting, reaching, picking, waving hands, ...)
- track multiple humans with Kinect input; replay gestures from recorded inputs (motion capture, for example)
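The gesture-replay idea above amounts to interpolating recorded joint angles over time. A minimal sketch, with a hypothetical frame format (not a MORSE or Kinect API):

```python
from bisect import bisect_right

def sample_gesture(frames, t):
    """Linearly interpolate a recorded gesture at time t.

    `frames` is a time-sorted list of (timestamp, {joint: angle}) pairs,
    e.g. as captured from a Kinect or motion-capture stream.
    """
    times = [ts for ts, _ in frames]
    i = bisect_right(times, t)
    if i == 0:                      # before the first frame
        return dict(frames[0][1])
    if i == len(frames):            # after the last frame
        return dict(frames[-1][1])
    (t0, p0), (t1, p1) = frames[i - 1], frames[i]
    a = (t - t0) / (t1 - t0)        # interpolation factor in [0, 1]
    return {j: p0[j] + a * (p1[j] - p0[j]) for j in p0}

frames = [(0.0, {'elbow': 0.0}), (1.0, {'elbow': 1.2})]
print(sample_gesture(frames, 0.5))  # -> {'elbow': 0.6}
```

Replaying a recording then reduces to calling this at the simulation rate and feeding the result to the avatar's armature.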
Following the MORSE for HRI workshop (March 2014), this road-map has been updated as follows:
- instead of supporting 2 models for humans, we propose to have only one, with different levels of (mesh) accuracy. These could be generated with MakeHuman, as suggested in #360. Issue #503 tracks this "multi-level" human model.
- we propose to completely remove the control logic of the so-called "FPS" interactive human avatar from MORSE, and move this controller outside MORSE into a dedicated (pymorse) script. The human model in MORSE would then only expose regular armatures as sensors and actuators.
- this would remove many Blender dependencies from the MORSE code, and the external pymorse controller would provide a complete example of external control of the human avatar, which could be used as a base for alternative control modalities (Kinect, motion capture, Oculus Rift, ...)
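The core of such an external controller is a mapping from user input to motion commands published to the avatar each tick. A minimal sketch of that mapping (the key bindings and function names are assumptions for illustration, not pymorse's actual API):

```python
# FPS-style key bindings: key -> (linear, angular) contribution.
# These bindings are hypothetical, chosen only for this sketch.
KEY_BINDINGS = {
    'w': (1.0, 0.0),   # forward
    's': (-1.0, 0.0),  # backward
    'a': (0.0, 1.0),   # turn left
    'd': (0.0, -1.0),  # turn right
}

def velocity_command(pressed_keys, speed=1.0, turn_rate=0.8):
    """Combine the currently pressed keys into one (v, w) command,
    to be published on the human avatar's motion actuator."""
    v = w = 0.0
    for key in pressed_keys:
        dv, dw = KEY_BINDINGS.get(key, (0.0, 0.0))
        v += dv * speed
        w += dw * turn_rate
    return v, w

print(velocity_command({'w', 'a'}))  # forward + left turn -> (1.0, 0.8)
```

In the proposed design, a pymorse script would run this mapping in a loop and publish the resulting (v, w) pair to the avatar's motion actuator; a Kinect or motion-capture frontend would only need to replace the key-to-command mapping.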
Dependencies on other issues
And also: #116 (doc), #192 (unit-tests)