zju3dv / ENeRF

SIGGRAPH Asia 2022: Code for "Efficient Neural Radiance Fields for Interactive Free-viewpoint Video"
https://zju3dv.github.io/enerf

resolution ratio of input image #23

chky1997 opened this issue 1 year ago

chky1997 commented 1 year ago

Hi, the results look a little blurry when visualized with your gui_human.py. Is it the resolution ratio (input_ratio in the yaml) that causes the problem? Will the results look much clearer if the parameter is set to 1.0 for training and inference? Thank you!

haotongl commented 1 year ago

Although I haven't conducted that particular experiment yet, my experience with other datasets suggests that training a model with full views (21 views for ZJU-MoCap) and an input ratio of 1.0 can lead to optimal rendering results.
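For context, input_ratio is simply a uniform scale applied to the source images before they enter the network, so a value below 1.0 caps the sharpness of the rendered views. A minimal standalone sketch of the effect (not the repo's actual data loader; the image path is hypothetical):

```python
import cv2

# input_ratio rescales every source image before training/inference,
# so a value below 1.0 bounds the achievable rendering sharpness.
input_ratio = 1.0  # set to 1.0 in the yaml for full-resolution training/inference

img = cv2.imread("path/to/some_view.jpg")  # hypothetical image path
h, w = img.shape[:2]
img_scaled = cv2.resize(
    img,
    (int(w * input_ratio), int(h * input_ratio)),
    interpolation=cv2.INTER_AREA,
)
print(f"{(w, h)} -> {(img_scaled.shape[1], img_scaled.shape[0])}")
```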

chky1997 commented 1 year ago

About the outdoor dataset, what resolution do your cameras record the videos at? Do you resize the images to 1024*1024 right after recording, before estimating the SMPL keypoints? On the project page, the video of the outdoor dataset also looks clearer than the ZJU-MoCap one. Is there any difference between the two datasets at the recording stage?

haotongl commented 1 year ago

The ZJU-MoCap dataset is captured with 21 industrial cameras (2048x2048). We resize the images to 1024x1024. I think estimating the SMPL keypoints at a different resolution will not affect the rendering results much, since the keypoints are only used to define a bbox that bounds the foreground region.
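Two details worth keeping in mind if you work at a different resolution: the camera intrinsics must be scaled by the same factor as the images, and the SMPL keypoints only need to produce a padded 3D bbox. A rough sketch with placeholder values (K and smpl_joints are illustrative, not from the dataset):

```python
import numpy as np

# Scale intrinsics to match the resized images (2048x2048 -> 1024x1024).
ratio = 1024 / 2048
K = np.array([[2000.0, 0.0, 1024.0],
              [0.0, 2000.0, 1024.0],
              [0.0, 0.0, 1.0]])          # placeholder intrinsics at 2048x2048
K_scaled = K.copy()
K_scaled[:2] *= ratio                     # fx, fy, cx, cy all scale with the image

# The SMPL keypoints only bound the foreground: a padded axis-aligned
# 3D bbox is enough, so small keypoint errors barely matter.
smpl_joints = np.random.rand(24, 3)       # placeholder (N, 3) joints in world space
padding = 0.1                             # margin around the person, in meters
bbox_min = smpl_joints.min(axis=0) - padding
bbox_max = smpl_joints.max(axis=0) + padding
print(K_scaled, bbox_min, bbox_max)
```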

The outdoor dataset is captured with 18 GoPro cameras (1920x1080). We keep the original resolution.

chky1997 commented 1 year ago

About the outdoor dataset, I found that the vhull dir contains the 3D bbox information, but I wonder how background.ply is obtained. Is it generated from the 18 background images? Also, I noticed that the outdoor dataset no longer needs the SMPL points; it just needs the human images, the human 3D masks (generated from the 2D masks and lifted to 3D using the camera intri and extri), and the background information. Is that right? By the way, could you tell me the average distance between the GoPro cameras? Thank you!

haotongl commented 1 year ago
  1. background.ply is the SfM sparse point cloud, which is computed during calibration.
  2. The outdoor dataset does not need human mask information. To obtain the 3D bbox, you can follow this suggestion: https://github.com/zju3dv/ENeRF/issues/27#issuecomment-1450173304
  3. About 0.1-0.3m. The exact value can be obtained by computing the distances between camera positions from extri.yml; the units in extri.yml have been normalized to meters (see the sketch below).
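For point 3, a rough sketch of how those distances could be computed, assuming an EasyMocap-style extri.yml stored as an OpenCV FileStorage file with names / Rot_xxx / T_xxx entries (check the actual key names in your copy of the data):

```python
import cv2
import numpy as np

# Read an OpenCV-FileStorage extri.yml; the key names below follow the
# EasyMocap convention and may differ in your file.
fs = cv2.FileStorage("extri.yml", cv2.FILE_STORAGE_READ)
names_node = fs.getNode("names")
cam_names = [names_node.at(i).string() for i in range(names_node.size())]

centers = []
for name in cam_names:
    R = fs.getNode(f"Rot_{name}").mat()   # 3x3 world-to-camera rotation
    T = fs.getNode(f"T_{name}").mat()     # 3x1 translation
    centers.append((-R.T @ T).ravel())    # camera center in world coordinates (meters)
centers = np.stack(centers)

# Distances between neighbouring cameras (assumes names are listed in rig order).
dists = np.linalg.norm(np.diff(centers, axis=0), axis=1)
print(dists, dists.mean())
```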