Open wyiguanw opened 6 months ago
I have found a script in neuralbody, `easymocap_to_neuralbody.py`; however, it requires a `cfg_model.yml` file that is not in the output folder. What should I do?
You can get the camera parameters through `run.py` by inserting a few lines before `model.at_final()` is called in the `process` function:
```python
import numpy as np

with open('test_output.npy', 'wb') as f:
    np.save(f, ret_all)
```
It stores the frames as an array of dictionaries, so you can call `.get('cameras')` per frame:
```python
'cameras': {'K': array([[1296,    0,  540],
                        [   0, 1296,  960],
                        [   0,    0,    1]], dtype=float32),
            'R': array([[1, 0, 0],
                        [0, 1, 0],
                        [0, 0, 1]], dtype=float32),
            'T': array([[0],
                        [0],
                        [0]], dtype=float32),
            'dist': array([[0, 0, 0, 0, 0]], dtype=float32)}
```
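A minimal, self-contained sketch of reading that file back. The values below are copied from the dump above into a synthetic stand-in for `ret_all`, since the real array comes from `run.py`; the key point is that loading needs `allow_pickle=True` because each frame is a Python dict:

```python
import numpy as np

# Synthetic stand-in for the ret_all produced by run.py: one dict per
# frame, each carrying a 'cameras' entry with the layout shown above.
ret_all = [
    {'cameras': {
        'K': np.array([[1296, 0, 540], [0, 1296, 960], [0, 0, 1]], dtype=np.float32),
        'R': np.eye(3, dtype=np.float32),
        'T': np.zeros((3, 1), dtype=np.float32),
        'dist': np.zeros((1, 5), dtype=np.float32),
    }}
]

with open('test_output.npy', 'wb') as f:
    np.save(f, np.array(ret_all, dtype=object))

# Read it back; allow_pickle is required because the entries are dicts.
frames = np.load('test_output.npy', allow_pickle=True)
cameras = [frame.get('cameras') for frame in frames]
K = cameras[0]['K']
print(K[0, 0], K[0, 2])  # focal length fx and principal point cx
```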
The problem is that I don't know what these represent. I only know they are used in the rendering stage by the `myeasymocap.io.vis3d.Render` module.
I am trying to reproduce the exact render in Blender, but I don't know how to export these numbers into Blender. If you have a better understanding of this, please help :D
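If these parameters follow the usual OpenCV convention (`K` is the intrinsic matrix; `R`, `T` are world-to-camera extrinsics, i.e. `x_cam = R @ x_world + T`; `dist` holds distortion coefficients), then the camera-to-world pose and field of view that Blender expects can be derived as below. This is only a sketch under that assumption, not a verified exporter; the 1080x1920 resolution is inferred from the principal point in the dump, and the axis flip accounts for Blender cameras looking down -Z with +Y up:

```python
import numpy as np

def opencv_to_blender(K, R, T, width):
    """Convert OpenCV-style camera parameters to a 4x4 camera-to-world
    matrix (usable as a Blender camera's matrix_world) and a horizontal
    field of view in radians."""
    # Invert the world-to-camera extrinsics.
    R_c2w = R.T
    t_c2w = -R.T @ T  # camera center in world coordinates

    # OpenCV cameras look down +Z with +Y down; Blender cameras look
    # down -Z with +Y up, so flip the camera's Y and Z axes.
    flip = np.diag([1.0, -1.0, -1.0])
    M = np.eye(4)
    M[:3, :3] = R_c2w @ flip
    M[:3, 3] = t_c2w.ravel()

    # Horizontal FOV from the focal length fx = K[0, 0].
    fov_x = 2.0 * np.arctan(width / (2.0 * K[0, 0]))
    return M, fov_x

K = np.array([[1296, 0, 540], [0, 1296, 960], [0, 0, 1]], dtype=np.float32)
R = np.eye(3, dtype=np.float32)
T = np.zeros((3, 1), dtype=np.float32)
M, fov_x = opencv_to_blender(K, R, T, width=1080)
```

In Blender you would then assign `M` to the camera object's `matrix_world` and set the camera data's `angle_x` to `fov_x`.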
Does anybody know how to get the vertices for a mono video? I tried to run your work on my mono video using: `emc --data config/datasets/svimage.yml --exp config/1v1p/hrnet_pare_finetune.yml --root data/youtube0/clip0 --ranges 0 500 1 --subs youtube0`
Hi, thanks for your great work!

I tried to run your work on my mono video using:

```
emc --data config/datasets/svimage.yml --exp config/1v1p/hrnet_pare_finetune.yml --root data/youtube0/clip0 --ranges 0 500 1 --subs youtube0
```

but I noticed that there are no SMPL vertices or camera parameters in the output folder. How can I get them?