Closed: DrCyanide closed this issue 1 year ago
I'd recommend using movie clips rather than static images, as clips deliver better results. I can't recommend using images as videos, but I did it anyway just as a quick test. The model poses as expected; I cannot reproduce what you've been describing.
The base config only supports generated meta-rigs. Usage to reproduce:
[screenshot P1]
Edit for visibility setting:
[screenshot P2]
[screenshot P3]
There are many tools out there which use Rigify, but the rigs they use are NOT based on the metarig... which leads to confusion.
Most likely a new transfer setup can be generated for them. I described the process here. I'm also planning to make a video soon (a simple workflow example on how to transfer to other rigs).
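Purely as an illustration of what such a transfer setup boils down to (this is not BlendArMocap's actual API, and all bone and driver names below are invented): a remapping table from the driver names the addon generates to the target rig's bone names.

```python
# Hypothetical sketch of a transfer remap for a non-metarig rig:
# map constraint targets from metarig-style bone names to a custom
# rig's bone names. Every name here is made up for illustration.
DEFAULT_TO_CUSTOM = {
    "upper_arm_fk.L": "UpArm_L",
    "forearm_fk.L": "LoArm_L",
    "hand_fk.L": "Hand_L",
}

def remap_targets(constraints, table):
    """Return constraint targets rewritten for the custom rig.

    Unknown bones are left untouched so a partial table still works.
    """
    return {driver: table.get(bone, bone) for driver, bone in constraints.items()}

constraints = {"left_elbow_driver": "forearm_fk.L", "left_wrist_driver": "hand_fk.L"}
print(remap_targets(constraints, DEFAULT_TO_CUSTOM))
# {'left_elbow_driver': 'LoArm_L', 'left_wrist_driver': 'Hand_L'}
```

The point of the lookup-with-fallback is that a partially filled table degrades gracefully instead of breaking the whole transfer.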
Can you give the steps above a try and let me know?
I set Display As: Solid in the Object Properties, just like in your screenshot, but it still shows as just the wireframe. I tried changing the Viewport Display settings under Object Data Properties, but none of them made it show either. There's probably some other setting that I'm missing.
This crumpled mess is using the generated, non-upgraded rig.
I tested it again on another computer I have, which is running Blender 3.3.1 and BlendArMocap 1.6.0, and got the same results (both the Display As: Solid issue and the crumpled mess).
Edit: I missed a step for showing the armature (selecting that circle icon on the Object Data Properties tab). Here's what it looks like with the armature visible.
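For anyone else hitting the same visibility wall, the viewport settings discussed here can also be set from Blender's Python console. A minimal sketch, assuming the armature object is named "rig" (adjust to your scene):

```python
import bpy

# "rig" is an assumed object name -- use your armature's actual name.
obj = bpy.data.objects["rig"]
obj.display_type = 'SOLID'            # Object Properties > Viewport Display > Display As
obj.show_in_front = True              # draw the armature through the mesh
obj.data.display_type = 'OCTAHEDRAL'  # Object Data Properties > Viewport Display
```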
Can you provide the file that generates the issue? Currently I cannot reproduce it. What's confusing is that the rig doesn't seem to be aligned to the points...
Deformed Sitting.zip: here are both the blend file and the saved config for that pose.
I'm guessing there's something different in the version of the plugin I'm using. Here are screenshots of both my Rigify and BlendArMocap settings.
It seems you tried to run the detection on a jpg file - I think that's the root of the issue. Mediapipe's detection sometimes has to pick up on some motion to produce accurate results; showing a single image for a long time is not the same as motion. That said, it may still produce better results than just one frame.
Please try this .mp4 for reference. Note: compare the first frame with something like frame 10; it's often the case that the first few frames deliver garbage.
https://github.com/cgtinker/BlendArMocap/assets/55130054/f0d7e1a4-b7af-4ea6-84b6-7d0d2dc4e563
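Since the first few frames often deliver garbage, one option after transfer is simply to drop the leading keyframes. Purely as an illustration of the idea (plain Python, not the Blender API; `trim_leading` is a made-up helper):

```python
# Sketch: drop keyframes before a cutoff frame so the garbage lead-in
# from detection warm-up never reaches the final animation.
def trim_leading(keyframes, first_good_frame=10):
    """keyframes: list of (frame, value) pairs, in any order."""
    return sorted((f, v) for f, v in keyframes if f >= first_good_frame)

recorded = [(1, 0.9), (5, -2.3), (12, 0.11), (30, 0.14)]
print(trim_leading(recorded))  # [(12, 0.11), (30, 0.14)]
```

In Blender itself the same effect is achieved by deleting the early keyframes in the Dope Sheet.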
When using videos, there are some things to look out for:
You'll probably want to check out Pexels; there you'll find some nice videos for testing.
OK, that's working now! After a few frames, the character is in the sitting position. I must have had really bad luck between my webcam tests and sample videos. I was trying to do holistic detection on the webcam, but could only ever get my upper body in the shot.
Sorry if testing with a still image ended up being a bit of a detour. I'm not looking to make an animation, but to pose several characters quickly for a comic. I thought having a handful of pictures to choose from would be faster and less intensive than trying out a bunch of poses in a video, but I guess not.
I have the same issue with videos, but for the sake of simple reproducibility, here's a photo I'm using as the reference pose. (OpenCV supports image sequences, and this is just an image sequence with a length of one!)
The pose detection seems good.
And the points representing the pose seem to be lined up in Blender. At this point, it feels like it should be smooth sailing.
However, when I click Transfer Animation, the results are pretty bad.
It's more obvious in video tests, but the wrists and ankles feel "pinned" in place - they'll drift a bit with the overall model but never really move up, down, in, or out from the body. Even when I lift my hands above my head in webcam mode, the transferred wrists stay at about the mid-chest level shown in this screenshot.
The model I'm using here to demonstrate should be using the Rigify rig, although I admit I don't know which version of the rig. I've also tried it with the un-modeled Human Meta-Rig, generating the rig from that, and get similar results. With the un-modeled rig it's harder to tell what's going on (I don't know what settings you clicked in your video to show the rig as anything other than outlines). I'm fine re-testing with the empty rig if someone can tell me how to make it show up.
Using BlendArMocap 1.6.0 and Blender 3.4.1.