zju3dv / EasyMocap

Make human motion capture easier.

convert2bvh.py reshape error #380

Open damon-93 opened 6 months ago

damon-93 commented 6 months ago

There is a reshape argument error in the rodrigues2bshapes function in scripts\postprocess\convert2bvh.py.

Command:

blender.exe -b -t 12 -P scripts/postprocess/convert2bvh.py -- ./output/sv1p/smpl --o ./output/bvh/

Error:

File "E:\code\EasyMocap\scripts/postprocess/convert2bvh.py", line 116, in apply_trans_pose_shape
    mrots, bsh = rodrigues2bshapes(pose)
File "E:\code\EasyMocap\scripts/postprocess/convert2bvh.py", line 107, in rodrigues2bshapes
    rod_rots = np.asarray(pose).reshape(24, 3)
ValueError: total size of new array must be unchanged

While debugging I found that pose has a total size of 23*3, so I changed rod_rots = np.asarray(pose).reshape(24, 3) to rod_rots = np.asarray(pose).reshape(23, 3). With that change the BVH exports successfully, but the resulting motion is very strange.

Is there a step where I went wrong?
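My current guess: SMPL expects 24 axis-angle rotations per frame (the global/root orientation plus 23 body joints), so a 69-value pose is probably just missing the root rotation, and reshaping it to (23, 3) puts every rotation on the wrong joint, which could explain the strange motion. Below is a minimal sketch of a possible workaround, assuming the output files keep the root rotation in a separate Rh entry next to poses (the key names may be different in your output, so treat this as an illustration only):

import numpy as np

def to_full_pose(param):
    """Assemble a (24, 3) axis-angle pose for rodrigues2bshapes.

    Assumes `param` is one person's parameter dict from the EasyMocap output,
    with 23 body-joint rotations in `poses` (69 values) and the root rotation
    in a separate `Rh` entry (3 values). Adjust the key names if your files
    look different.
    """
    poses = np.asarray(param['poses'], dtype=np.float64).reshape(-1)
    if poses.size == 72:        # root rotation already included
        return poses.reshape(24, 3)
    if poses.size == 69:        # body joints only: prepend the root rotation
        rh = np.asarray(param.get('Rh', [0.0, 0.0, 0.0]), dtype=np.float64).reshape(-1)
        return np.concatenate([rh, poses]).reshape(24, 3)
    raise ValueError(f'unexpected pose size: {poses.size}')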

eren-ture commented 6 months ago

I am having the same problem right now, with monocular data. From what I can see it is a problem with reshaping the PARE output.

I am new to working with a project this big, with this many moving parts, but I tried the PARE model on its own before I found out about EasyMoCap. The output .pkl file coming from PARE gives a (24, 3) shape. I think there might be a problem here, in the forward function of the PARE model.
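If it helps to pin down where the shape goes wrong, one quick check is to print the shapes of everything in the intermediate outputs and compare them against the (24, 3) that convert2bvh.py expects. This is only a rough sketch (the file name is a placeholder, and I am not certain of the exact key layout of the PARE/EasyMocap pickles, so it just walks whatever dict it finds):

import pickle
import numpy as np

# Placeholder path: point this at the .pkl that PARE (or an EasyMocap stage) wrote.
with open('pare_output.pkl', 'rb') as f:
    data = pickle.load(f)

def show(name, value):
    # Print the shape of anything array-like so the pose dimensions stand out.
    arr = np.asarray(value)
    if arr.dtype != object:
        print(f'{name}: {arr.shape}')

for key, value in data.items():
    if isinstance(value, dict):          # some demos nest results per person/track id
        for sub_key, sub_value in value.items():
            show(f'{key}/{sub_key}', sub_value)
    else:
        show(key, value)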

This is the command that I am running to get 1v1p output: python apps\mocap\run.py --data config/datasets/svimage.yml --exp config/1v1p/hrnet_pare_finetune.yml --root data/examples/test --subs 23EfsN7vEOA+003170+003670

Also, looking at this closed issue: @carlosedubarreto created a MoCap importer that can bring the keypoints3d into Blender.

carlosedubarreto commented 6 months ago

Hello @eren-ture. Oh my, these kinds of problems give me chills. Bad memories, LOL.

I wish I could help you both, but I don't know what I can do.

I just remember that I was able to make the process work in Blender using the SMPL and SMPL-X models (SMPL-X was used for the finger tracking).

Maybe taking a look at the code from the add-on could help. I also shared some code from other projects, like 4D Humans. Maybe the information in this issue from 4D Humans can help you all:

https://github.com/shubham-goel/4D-Humans/issues/32

Have a great new year

eren-ture commented 5 months ago

I found a way to do it!

Running:

python .\scripts\preprocess\extract_video.py .\0_input\test_02 --mode openpose --openpose ..\openpose\ --handface
python .\apps\demo\mocap.py 0_input/test_02 --fps 30 --mono --mode smpl
blender-2.79a-windows64\blender -b -t 12 -P scripts\postprocess\convert2bvh.py -- 0_input\test_02\output-output-smpl-3d\smplfull --o .\1_output

You get an smplfull output for the monocular data.

The only extra step is that you need to install OpenPose. (It gives way better results anyway :D)
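One extra sanity check that might save others some time: before running convert2bvh.py, confirm that each frame's poses actually has 72 values (24 joints x 3), since that is what the reshape(24, 3) in rodrigues2bshapes needs. A rough sketch, assuming the smplfull folder holds per-frame JSON files that are lists of per-person parameter dicts with a poses key (open one file first to check the actual layout):

import glob
import json
import numpy as np

# Placeholder path: the smplfull folder written by mocap.py in the commands above.
files = sorted(glob.glob('0_input/test_02/output-output-smpl-3d/smplfull/*.json'))

for fn in files[:5]:                 # a handful of frames is enough for a sanity check
    with open(fn) as f:
        frame = json.load(f)
    # Assumed layout: a list of per-person parameter dicts.
    for person in frame:
        poses = np.asarray(person['poses']).reshape(-1)
        print(fn, 'poses size =', poses.size)    # expect 72 for reshape(24, 3) to work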