This meta-issue tracks progress towards a more up-to-the-task human avatar
The 'high-level' aims are:
a clean, lightweight, good-looking model, with good-looking animations
no more human-specific sensors/actuators (at least for the main features) -> use of regular armatures, motion actuators, etc. (see the Builder sketch below)
cleaner code, less dependent on Blender internals, to implement the human's special features like the first-person ('shooter'-style) perspective.
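To make the 'regular actuators' aim concrete, here is a minimal MORSE Builder sketch of a human driven by the stock `Keyboard` actuator instead of a human-specific one. `Human`, `Keyboard` and `Environment` are standard Builder API names; the starting position and environment are placeholders, and the whole snippet only illustrates the intent, not a final design:

```python
# Hypothetical builder script: drive the human avatar with a *generic*
# keyboard motion actuator rather than a human-specific one.
from morse.builder import *

human = Human()                  # the standard MORSE human avatar
human.translate(x=2.0, y=0.0)    # arbitrary starting position

keyboard = Keyboard()            # regular keyboard actuator, nothing human-specific
human.append(keyboard)

env = Environment('indoors-1/indoor-1')   # any indoor scene would do
```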
Tasks
[x] 3D model
[x] clean meshes built upon MakeHuman models (~3700 faces - still quite large)
[x] skinning in the BGE
[x] nice textures (thanks, MakeHuman) -- Caucasian male model for now
[x] walk:
[x] basic walk animation implemented; the corresponding Blender action is 'walk'
[x] add motion actuators to the human (at first, a keyboard actuator, then a waypoint actuator, then...) to move it, and to start/stop the walk cycle (see the Waypoint/pymorse sketch after this task list)
[ ] gestures:
[x] IK for arms, legs, head works reasonably well
[ ] make it easy for the human to point at something
[ ] re-enable and test grasping of objects (see also #569)
[ ] create new animations
[ ] sitting
[ ] standing
[ ] first person perspective (if needed!):
[ ] add a 'head' camera to see what the human sees (a possible Builder sketch follows this task list)
[ ] port the previous user interface (using keys + mouse) to create a real 'first person' user experience where we fully control the human, including pick and place tasks.
[x] interface to external skeleton capture devices like the Kinect -> non-issue, as the model relies on a standard armature. Issue #653 tracks the development of external scripts to control the avatar via dedicated peripherals like the Kinect.
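Once the human accepts regular motion actuators, moving it from an external script becomes the same as moving any other MORSE robot. A minimal sketch, assuming the stock `Waypoint` actuator and the `pymorse` bindings, and assuming the builder script names the components `human` and `motion` (illustrative only, not a committed interface):

```python
# Builder side (hypothetical): attach a regular Waypoint actuator to the human.
from morse.builder import *

human = Human()

motion = Waypoint()
motion.add_service('socket')     # expose the goto/stop services over a socket
human.append(motion)

env = Environment('indoors-1/indoor-1')
```

```python
# Client side (hypothetical): send the avatar to a target point with pymorse.
import pymorse

with pymorse.Morse() as simu:
    # goto(x, y, z, tolerance, speed) is the Waypoint actuator's service;
    # 'human' and 'motion' simply match the names used in the builder script.
    simu.human.motion.goto(3.0, 1.0, 0.0, 0.5, 1.0).result()
```

Starting/stopping the walk cycle when the actuator makes the avatar move (or stop) is exactly what the task above still has to wire up.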
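For the first-person perspective, a regular camera sensor placed near the head may be all that is needed. A rough Builder sketch, with a placeholder eye-level offset and assuming the stock `VideoCamera` sensor (again an illustration, not a committed design):

```python
# Hypothetical: a regular VideoCamera mounted roughly at eye height,
# streaming what the human 'sees' over a socket.
from morse.builder import *

human = Human()

head_cam = VideoCamera()
head_cam.translate(x=0.10, z=1.65)   # placeholder eye-level offset
human.append(head_cam)
head_cam.add_stream('socket')

env = Environment('indoors-1/indoor-1')
```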
Related issues
User reports of various bugs with the human: #589
generic integration with MakeHuman: #503 (this work has stalled and needs to be picked up again!)