chungyiweng / humannerf

HumanNeRF turns a monocular video of a moving person into a 360° free-viewpoint video.
MIT License

"metadata.json" and "--type movement" #45

Open cocoshe opened 1 year ago

cocoshe commented 1 year ago

Thanks for your marvellous research! I still have some questions about what these two things are used for. First, metadata.json in

monocular
    ├── images
    │   └── ${item_id}.png
    ├── masks
    │   └── ${item_id}.png
    └── metadata.json
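For context on what metadata.json holds, here is a minimal sketch of a plausible per-frame layout. This is an assumption based on typical SMPL-driven monocular pipelines (per-frame SMPL pose/shape plus camera matrices), not the repo's authoritative schema; the exact field names are defined by humannerf's dataset-preparation docs, and the values below are dummies.

```python
import json

# Hypothetical camera intrinsics for illustration only.
fx, fy, cx, cy = 1000.0, 1000.0, 512.0, 512.0

# One entry per frame, keyed by ${item_id} (matching images/ and masks/).
metadata = {
    "000000": {
        "poses": [0.0] * 72,   # assumed: SMPL pose, axis-angle, 24 joints * 3
        "betas": [0.0] * 10,   # assumed: SMPL shape coefficients
        "cam_intrinsics": [[fx, 0.0, cx],
                           [0.0, fy, cy],
                           [0.0, 0.0, 1.0]],       # assumed: 3x3 intrinsics
        "cam_extrinsics": [[1.0, 0.0, 0.0, 0.0],
                           [0.0, 1.0, 0.0, 0.0],
                           [0.0, 0.0, 1.0, 0.0],
                           [0.0, 0.0, 0.0, 1.0]],  # assumed: 4x4 world-to-camera
    }
}

with open("metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)
```

Since the fitting is per-frame and per-subject, a layout like this only describes a single person; that is consistent with the question below about multi-person videos.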

and second, "--type movement" in

python run.py \
    --type movement \
    --cfg configs/human_nerf/zju_mocap/387/adventure.yaml 

Q1: Is "metadata" used to fit the 2D person? And what happens if there is more than one person in the video?

Q2: What does the "movement" type render? I didn't find rendering code for it in core/data/human_nerf, where there are only "tpose" and "free view".

haloann666 commented 7 months ago

Hello, how is metadata.json generated? I am using RVM and VIBE, but metadata.json is not produced by either of them.