carlosedubarreto / b3d_mocap_import

Add-on for Blender to import mocap data from tools like EasyMocap, FrankMocap and VIBE

How to bring OpenPose output into Blender #10

Aurosutru opened this issue 3 years ago

Aurosutru commented 3 years ago

Multiple cameras are problematic for me. I am wondering whether EasyMocap can produce BVH output from OpenPose .json output. Do you know? Or is the only workflow to output a model that can be converted to FBX, with the animation then transferred to another character rig in Blender?

OpenPose is mentioned in the EasyMocap GitHub project, but it's not clear how to proceed. I haven't been able to find much written instruction on EasyMocap, but I will now watch your videos on it and on your plug-in, which I understand can now import finger mocap.

Would text READMEs for these projects be helpful? I submitted a workflow pull request for Forth MocapNet. BTW, that project will output a body-animation .bvh for Blender using OpenPose input. The finger mocap is working too, but has not yet been made public - something to do with a restriction related to the author's PhD work.

carlosedubarreto commented 3 years ago

EasyMocap, from what I saw, uses the 2D data from OpenPose and triangulates it across two or more cameras. I tried a single-camera setup in EasyMocap without success.

The plugin I made is meant to make it easy to bring the generated animations into Blender; once inside Blender you can export to BVH, FBX and other formats.

You could try VIBE, but it doesn't have finger mocap. To be honest, my experience so far is that finger mocap is not very useful at the moment, so if you can make a good body animation, the fingers may be better done by hand (or by setting a default arc as the hand pose). So VIBE could be a good alternative for you, and it is easier to set up than FrankMocap.

Aurosutru commented 3 years ago

VIBE was fairly easy to install but is not as accurate as OpenPose, even after some tweaking. I haven't yet understood exactly how to install STAF, so VIBE might do better with that tweak.

OpenPose has the best performance of any monocular AI mocap and can also use multiple cameras. The only problem is that it has no .bvh output unless something like Forth MocapNet is used for the conversion, and that does not yet work for fingers. As you said, fingers may need to be done manually at this time, though finger mocap would be ideal for my requirements, which include lots of important finger motions.

For someone who is familiar with Python programming in Blender, it might be easy to take the 2D output from OpenPose and create joint movements, maybe in pose mode, with the same camera lens, distance and orientation relative to the subject as in the original capture video. An OpenPose-compatible rig is available for Blender from MakeHuman. This is an interesting approach to converting a video into a Blender armature action, and it would make a great, popular plugin.
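
For concreteness, OpenPose writes one JSON file per frame, with each detected person's 2D keypoints stored in `pose_keypoints_2d` as flat x, y, confidence triples. A minimal sketch of reading that output in Python (taking only the first detected person; the directory layout follows OpenPose's default `*_keypoints.json` naming):

```python
import json
from pathlib import Path

def load_openpose_frames(json_dir):
    """Yield (frame_index, [(x, y, confidence), ...]) per frame,
    for the first detected person only."""
    for i, path in enumerate(sorted(Path(json_dir).glob("*_keypoints.json"))):
        data = json.loads(path.read_text())
        if not data["people"]:
            continue  # no person detected in this frame
        flat = data["people"][0]["pose_keypoints_2d"]  # x0, y0, c0, x1, ...
        yield i, [tuple(flat[j:j + 3]) for j in range(0, len(flat), 3)]
```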

carlosedubarreto commented 3 years ago

In the past I tried to import OpenPose data into Blender, but it lacks depth. I would only have X and Y coordinates, so it did not work for me in 3D space.

EasyMocap is the way I found to take OpenPose data and put it into 3D space. I don't have that code anymore, but you could adapt it from the existing code that I do have.

Here is the code I used as the basis for the MediaPipe import. With some tweaks you could import the JSON data from OpenPose to create point data inside Blender.

This code also creates the armature bones and some constraints that avoid odd movements.

The constraint part is the biggest one, and it is not necessary for the code to work.

ESQUELETOOK_mediapipe_blender_v1.zip
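
The attached file isn't reproduced here, but the "point data inside Blender" step described above might look roughly like this sketch: one empty per keypoint, keyframed per frame. The scale factor and axis mapping are guesses that depend on the footage, and depth is left at zero since 2D data has none:

```python
import bpy

def keypoints_to_empties(frames, scale=0.01):
    """Create one empty per keypoint and keyframe its location per frame.
    `frames` yields (frame_index, [(x, y, confidence), ...])."""
    empties = {}
    for frame, keypoints in frames:
        for idx, (x, y, c) in enumerate(keypoints):
            if c == 0.0:
                continue  # keypoint not detected this frame
            if idx not in empties:
                empty = bpy.data.objects.new(f"kp_{idx:02d}", None)
                bpy.context.collection.objects.link(empty)
                empties[idx] = empty
            emp = empties[idx]
            # Image Y grows downward, Blender Z grows upward, hence the flip.
            emp.location = (x * scale, 0.0, -y * scale)
            emp.keyframe_insert("location", frame=frame)
```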

Aurosutru commented 3 years ago

Mediapipe is interesting and appears to be quite fast. Have not seen data on how it compares with OpenPose for accuracy.

Using Blender's existing pose capability would seem an easy way to transfer 2D OpenPose .json output into a Blender action. Instead of using a mouse in pose mode to update joint vertex positions, the OpenPose data could move each joint to its new 2D position for each frame. That would inherently convert the 2D data into 3D, starting from the root bone and working out to the finger tips.

It just needs a little python plugin programming. What to do?

The MakeHuman rig to accept the OpenPose data is at: http://www.makehumancommunity.org/content/cmu_plus_face.html

carlosedubarreto commented 3 years ago

Answering your question: "It just needs a little python plugin programming. What to do?"

It needs someone with the will to learn and to try to apply what you are describing. If you have any programming questions, I can try to assist you.

Aurosutru commented 3 years ago

Thank you for your kind offer to help.

EasyMocap requires two cameras. MediaPipe is fast and accurate and has the advantage of being monocular. It appears that Esqueletook can calculate X, Y and Z bone angles. If so, could a combination of Esqueletook and BlendyPose (https://github.com/zonkosoft/BlendyPose) import MediaPipe mocap into Blender?

A continuation of this topic is at https://blenderartists.org/t/mediapipe-for-blender-python-programming/1312620

BTW, the ability to import Audio2Face data into Blender also looks promising. In case you haven't seen it, in future versions of A2F the handling of eyebrows and eyeballs will improve as mentioned at 5:40 in https://www.nvidia.com/en-us/on-demand/session/omniverse2020-om1280/?playlistId=playList-89448db3-94d2-4411-85a1-deb8f4c4dd10

carlosedubarreto commented 3 years ago

The current version of Mocap_import already includes the MediaPipe import inside Blender; on version 0.722 it's here: [screenshot]

Thanks for pointing out the video. I didn't know about that.

Aurosutru commented 3 years ago

Great to see that MediaPipe is usable with the Blender plugin. Some problems, though:

Tried generating mocap in Blender 2.93. The installation of MediaPipe was already recognized from BlendyPose installing it. Unlike BlendyPose, files with .mkv extensions do not show up in the SK Generate Mocap (MediaPipe) video input dialog box. Files with .mp4 worked. The resulting armature and animations were without the finger and face tracking found in BlendyPose.
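
A side note on the .mkv issue: Blender import operators usually restrict the file browser with a `filter_glob` property, so the fix may be as simple as widening that glob. A sketch with a hypothetical operator class, since the plugin's actual code isn't shown here:

```python
import bpy
from bpy_extras.io_utils import ImportHelper

class SK_OT_generate_mocap(bpy.types.Operator, ImportHelper):
    """Hypothetical stand-in for the plugin's video-input operator."""
    bl_idname = "sk.generate_mocap_example"
    bl_label = "SK Generate Mocap (MediaPipe)"

    # Only files matching this glob appear in the file browser; if the
    # default were "*.mp4" alone, .mkv files would be hidden.
    filter_glob: bpy.props.StringProperty(
        default="*.mp4;*.mkv;*.avi;*.mov",
        options={'HIDDEN'},
    )

    def execute(self, context):
        print("selected video:", self.filepath)  # filepath set by ImportHelper
        return {'FINISHED'}

# bpy.utils.register_class(SK_OT_generate_mocap)  # register before invoking
```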

The quality of the apparent tracking and animation was not at all accurate, with jerky and random movements unrelated to the video. Do you get good results using this approach? MediaPipe tracking of the same videos in BlendyPose is quite good, but it has no connection to an armature, only an onscreen display with small boxes indicating the vertices.

carlosedubarreto commented 3 years ago

So you found the biggest problem of MediaPipe, LOL. It's hard to create an armature for it, because the joints get very different sizes.

So, if you run only the part that brings the data into Blender (the body only), it will run OK. When you try to create an armature for it, it gets messy.

I have an idea for making MediaPipe work better. I'm starting to test it, but I have no idea whether it will work, or when it would be released.

So right now MediaPipe is there and can be used, but the results are far from good with the current implementation.

Aurosutru commented 3 years ago

Joints getting different sizes means the bones that connect them have varying lengths? It seems likely that even if Blender had some sort of stretchy bones, variable-length bones would be a visual problem.

So the idea above, of using the OpenPose X, Y pose information from its .json output file and letting Blender's IK solver do the complex calculations of placing each bone in position, could be simple and accurate, though slower than the essentially 2D real-time solution offered by MediaPipe. Instead of placing bone ends manually, as is presently done in Blender with a mouse, a Python program would sequentially move each bone end into its position for each frame.

The outline for such a plugin is at https://github.com/Aurosutru/Blender-plugin-for-OpenPose-import/tree/main
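
As a rough illustration of that outline, each mapped pose bone could get an IK constraint targeting one of the keyframed keypoint empties, so Blender's solver fills in the rotations. A minimal sketch (the bone names, chain lengths and `bone_to_empty` mapping are rig-specific assumptions):

```python
import bpy

def add_ik_targets(armature_obj, bone_to_empty):
    """Point each mapped pose bone at its keypoint empty via an IK
    constraint, so the solver tracks the keyframed 2D positions."""
    for bone_name, empty in bone_to_empty.items():
        pbone = armature_obj.pose.bones[bone_name]
        con = pbone.constraints.new('IK')
        con.target = empty
        con.chain_count = 2  # solve over two parent bones; tune per limb

# Example mapping for a hypothetical rig; names depend on the armature used.
# add_ik_targets(bpy.data.objects["Armature"],
#                {"hand.L": bpy.data.objects["kp_07"]})
```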

carlosedubarreto commented 3 years ago

I'm doing some testing with MediaPipe using the 3 lines of code from BlendyPose that you suggested.

And I must say I'm very happy with the results. If you'd like to test it, I shared the blend file with the code at

https://www.patreon.com/posts/dev-code-preview-53279931
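
(Which three lines were borrowed isn't shown here, but the core MediaPipe loop that BlendyPose-style scripts run looks roughly like this; the file name and confidence threshold are illustrative:)

```python
import cv2
import mediapipe as mp

# Run MediaPipe Pose over each video frame and read back the landmarks.
pose = mp.solutions.pose.Pose(min_detection_confidence=0.5)
cap = cv2.VideoCapture("input.mp4")
frame = 0  # frame index, for keyframing later
while cap.isOpened():
    ok, image = cap.read()
    if not ok:
        break
    results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        for i, lm in enumerate(results.pose_landmarks.landmark):
            # lm.x, lm.y are normalized image coords; lm.z is relative depth
            pass  # keyframe an empty here, as in the earlier sketch
    frame += 1
cap.release()
```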

And to be honest, about OpenPose: I probably won't try that path unless I can't find another way. 😅

I'll spend some more time tweaking MediaPipe, because if it works the way I'm thinking... it will be the best.

I say the best because MediaPipe allows commercial use (unlike OpenPose, which we cannot use commercially), and MediaPipe is very fast.

Aurosutru commented 3 years ago

These are good arguments in favor of MediaPipe. The only reason I have been favoring OpenPose is that one possible output is a .json file that can seemingly be used in Blender, rather than the onscreen output usually seen from MediaPipe.

Glad that BlendyPose was useful. Will check out your blend file and am interested to see where MediaPipe leads.

carlosedubarreto commented 3 years ago

MediaPipe has other advantages: it works on the CPU, and there is facial pose estimation and hand tracking too.

I'll see how the body goes, and then I will probably do the hand and face mocap using MediaPipe.

And I must add, if you hadn't mentioned BlendyPose, I wouldn't have tried MediaPipe again anytime soon.

So I'm working with MediaPipe again because of you, and thanks a lot for that. If all goes the way I expect, we can have great things using it.

Animation might become easy for everyone, even people who can't animate.

Aurosutru commented 3 years ago

Tried the blend file with a test video and it works well. There is some jerking, which might be solved by the smoothing incorporated into BlendyPose. There are 4-5 instances of "smooth" in blendy_pose.py.
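
For anyone curious, one simple way to reduce that kind of jitter is a centered moving average over the keypoints before keyframing; BlendyPose's own smoothing may well differ, so this is only a sketch:

```python
def smooth_keypoints(frames, window=5):
    """Centered moving average over per-frame keypoints to reduce jitter.
    `frames` is a list of [(x, y), ...] lists, one per frame, all with
    the same number of joints."""
    half = window // 2
    out = []
    for i in range(len(frames)):
        span = frames[max(0, i - half):i + half + 1]
        # Average each joint's coordinates across the neighbouring frames.
        out.append([
            (sum(p[0] for p in joint) / len(joint),
             sum(p[1] for p in joint) / len(joint))
            for joint in zip(*span)
        ])
    return out
```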

Looking forward to finger mocap. The previous version of BlendyPose, which only worked through Blender 2.92, had a provision for choosing body only, or body, fingers and face. I have a copy of that version if you are interested. The developer removed some of that functionality in the current version in an attempt to make the plugin work with 2.93.

Audio2Face will probably get the job of doing my lip-sync requirements, which it does very well, even with non-English speech.

I'll be a good test case representing people who cannot animate. The next challenge will be, can I act?

carlosedubarreto commented 3 years ago

> Tried the blend file with a test video and it works well. There is some jerking, which might be solved by the smoothing incorporated into BlendyPose. There are 4-5 instances of "smooth" in blendy_pose.py.

That's great news!!!! But I must say, that skeleton probably won't be useful, because of the way the data is imported into Blender. I've spent a few days trying to solve the awkward rotation that happens and, at the same time, transferring the movement to a Rigify rig. I think using Rigify as a reference for the motion might be more useful to more people.

> Looking forward to finger mocap. The previous version of BlendyPose, which only worked through Blender 2.92, had a provision for choosing body only, or body, fingers and face. I have a copy of that version if you are interested. The developer removed some of that functionality in the current version in an attempt to make the plugin work with 2.93.

Thanks, at the moment it's not needed. And if he posted the original code to GitHub, it won't be difficult to download. But thanks anyway.

> Audio2Face will probably get the job of doing my lip-sync requirements, which it does very well, even with non-English speech.

Yeah, I tested it with Portuguese and the result impressed me. I just didn't try to move the face animation onto a full body (never got to that part 😅).

> I'll be a good test case representing people who cannot animate. The next challenge will be, can I act?

LOL, the same thought I have.

And that's great. I started Mocap_import to help people who just can't animate. There's a long way to go to finish the product and make it easier to integrate all the tools... Hold on, more interesting things are "baking in the oven" 😁

Just need more free time...