hansonrobotics / robo_blender

ROS node for blender face-head-neck control

How to add a tracked object? #27

Open linas opened 9 years ago

linas commented 9 years ago

I thought I'd see how the Beorn rig worked with tracking objects ... and noticed that it can't. There seem to be several reasons for this:

1) There is no 'target' object in the Blender scene. So my question: how do I add such an object? Or rather: is it easy enough that I should just try it myself, or is it complicated?

2) The eye and neck motions get sent to Blender using the ShapeKeys, in a way I don't quite understand. Does the Beorn rig understand these? That is, if I just add the target to Blender, will the Beorn head be able to track it?

Gaboose commented 9 years ago

Are you talking about standard-fs-face.anim_test.blend?

Adding a tracking object is not completely trivial. There are right ways and wrong ways to do it. But we do know how. It's all blender and no python, if you're curious :)

The only complication is that Beorn uses tracking "boneshapes" rather than "objects" in standard-fs-face.anim_test.blend. The last time I spoke with him (4 months ago), he said he was going to change the boneshapes to objects. Did he ever do that, and we just never put it into robo_blender, @vytasrgl?
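As noted, this is all doable in the Blender UI without any Python, but for the record, the equivalent steps via bpy look roughly like this (the object and bone names here are placeholders; the real rig's names may differ):

import bpy

scene = bpy.context.scene

# Add an empty to act as the tracking target.
target = bpy.data.objects.new("headtarget", None)
scene.objects.link(target)  # Blender 2.7x API; 2.8+ links via collections instead

# Point the head pose bone at the empty with a Damped Track constraint.
armature = bpy.data.objects["Armature"]
head_bone = armature.pose.bones["head"]
con = head_bone.constraints.new(type='DAMPED_TRACK')
con.target = target
con.track_axis = 'TRACK_Y'  # which local bone axis should aim at the target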

beornleonard commented 9 years ago

I don't recall that conversation, but it's a 2-minute job. Was there a reason to have it as an object rather than just manipulating the existing bone? One of the advantages of using a bone is that we have control over the origin point. With an object, its location in space is always defined relative to the world's origin, which is currently at the base of Dmitry's neck. We can have it either way, but I made it a bone on purpose.

Would you like me to add a cube and parent the head bone to it?

Gaboose commented 9 years ago

Maybe not, then. The reason was that we already work with objects in world space, but converting to world coordinates when needed is easy enough. Best not to clutter up the scene, so we'd just make the code compatible, probably by expanding controllers/primary.py to return wrapper objects that pick whichever head bone/object is available and either override location or implement world_location getters.
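Roughly this kind of thing, purely as a sketch (the class and method names below are illustrative, not existing robo_blender code):

class HeadTarget:
    """Illustrative wrapper: exposes a world-space location whether the rig
    provides a 'headtarget' object or only a 'head' pose bone."""

    def __init__(self, armature, target_obj=None, bone_name="head"):
        self.armature = armature
        self.target_obj = target_obj
        self.bone = None if target_obj else armature.pose.bones[bone_name]

    def world_location(self):
        if self.target_obj is not None:
            # An object's matrix_world already yields world coordinates.
            return self.target_obj.matrix_world.to_translation()
        # A pose bone's matrix is in armature space; compose with the armature's
        # world matrix to get world coordinates (Blender 2.7x-style '*' math).
        return self.armature.matrix_world * self.bone.matrix.to_translation()

The rest of the controller could then call world_location() everywhere and stop caring which flavour the artist chose.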

Besides, this kind of control over the origin point is something I want for the NRegionsOfInterest cubes. It would get rid of that nasty offset: [0.008,-1,0.458737] line in inputs.yaml.

beornleonard commented 9 years ago

This is also how the Eva rig works, with a head bone.

If you want me to add other controllers, let me know. It's a fairly simple task.

Gaboose commented 9 years ago

No idea what's better in the long run. But I figure the less we intrude into Blender artists' workflows, the more of their work we'll be able to support in the future.

linas commented 9 years ago

I don't understand the comments about "bone-relative".

The perception synthesizer will contain a map of the entire perceived universe in some fixed world coordinate space. The position of the head will be somewhere in that coordinate space, not necessarily at the origin. The vector subtraction is straightforward: if the neck is in a fixed location, it can be subtracted out to give neck-relative coords. The distance between an object and the eyes is harder if the head can lean forwards, backwards or sideways, or if it turns. Something, somewhere, would need to track the location of the eyeballs (bones ???) in that fixed world space.
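To make the arithmetic concrete, a toy sketch (every number and the rotation matrix below are made up, not values from the actual perception synthesizer):

import numpy as np

# Made-up positions in the fixed world frame, just to show the arithmetic.
obj_world  = np.array([0.5, 1.2, 0.3])    # a perceived object
neck_world = np.array([0.0, 0.0, 0.0])    # fixed neck/base position

# Neck-relative coordinates: plain vector subtraction.
obj_rel_neck = obj_world - neck_world

# Eye-relative coordinates need the eye's current pose in world space,
# which changes whenever the head leans or turns.
eye_world = np.array([0.03, 0.08, 0.15])  # tracked eyeball position
R_head = np.eye(3)                        # head/eye orientation as a rotation matrix

obj_rel_eye = R_head.T.dot(obj_world - eye_world)
eye_to_object_distance = np.linalg.norm(obj_rel_eye)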

Gaboose commented 9 years ago

That's right. We already place ROIs from the eye camera relative to the location of the right eyeball. That's specified in inputs.yaml:

- name: eye_pivision
  ...
  offset: RightEye
  direction: eyefocus
  distance: 1
  scale: 1

But we didn't talk about that. Which "bone-relative" comments do you mean? To reiterate: the "head" bone (which is equivalent in effect to the "headtarget" object) has a location that is relative to its own origin point, since it is parented to the armature, while the headtarget's location is global.
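To make that distinction concrete, a short bpy sketch (again with placeholder names):

import bpy

armature = bpy.data.objects["Armature"]   # placeholder armature name
head_bone = armature.pose.bones["head"]

# A pose bone's .location is expressed in its own local frame, relative to its
# rest pose inside the armature, so it is not directly a world-space point.
print("head bone, bone-local:", head_bone.location)

# Its world-space position comes from composing with the armature's world matrix.
print("head bone, world space:",
      armature.matrix_world * head_bone.matrix.to_translation())

# A top-level 'headtarget' object, by contrast, reports world coordinates directly.
target = bpy.data.objects.get("headtarget")
if target is not None:
    print("headtarget, world space:", target.matrix_world.to_translation())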