Open chky1997 opened 1 year ago
Although I haven't conducted that particular experiment yet, my experience with other datasets suggests that training a model with full views (21 views for ZJU-MoCap) and an input ratio of 1.0 can lead to optimal rendering results.
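For reference, here is a minimal sketch of what an `input_ratio` setting typically does during data loading. The field name `input_ratio` comes from the yaml mentioned in this thread; the loader function itself is a hypothetical illustration, not the repo's actual code:

```python
import cv2


def load_view(image_path: str, input_ratio: float = 0.5):
    """Load one camera view and downscale it by input_ratio.

    input_ratio = 1.0 keeps the native resolution (e.g. 1024x1024 for
    ZJU-MoCap after the initial resize); smaller values trade sharpness
    for speed and memory, which is a likely source of the blur.
    """
    img = cv2.imread(image_path, cv2.IMREAD_COLOR)
    if input_ratio != 1.0:
        h, w = img.shape[:2]
        new_size = (int(w * input_ratio), int(h * input_ratio))
        img = cv2.resize(img, new_size, interpolation=cv2.INTER_AREA)
    return img
```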
About the outdoor dataset, what resolution do your cameras record the videos at? Do you resize the images to 1024x1024 right after recording, before estimating the SMPL keypoints? On the project page, the video of the outdoor dataset also looks clearer than the ZJU-MoCap one. Is there any difference between the two datasets at the recording stage?
The ZJU-MoCap dataset is captured with 21 industrial cameras (2048x2048). We resize the images to 1024x1024. I don't think estimating the SMPL keypoints at a different resolution will noticeably affect the rendering results, since the keypoints are only used to define a bounding box around the foreground region.
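To make that concrete, here is a small sketch of how projected SMPL keypoints can be used only to bound the foreground. The function name and the padding value are illustrative assumptions, not the repository's actual implementation:

```python
import numpy as np


def keypoints_to_bbox(kpts_3d: np.ndarray, K: np.ndarray, R: np.ndarray,
                      T: np.ndarray, pad: int = 50):
    """Project 3D SMPL keypoints into one view and return a padded 2D bbox.

    kpts_3d: (N, 3) keypoints in world coordinates.
    K, R, T: camera intrinsics (3x3), rotation (3x3), translation (3,).
    """
    cam = kpts_3d @ R.T + T.reshape(1, 3)   # world -> camera coordinates
    uv = cam @ K.T                          # camera -> homogeneous image coords
    uv = uv[:, :2] / uv[:, 2:3]             # perspective divide
    x_min, y_min = uv.min(axis=0) - pad
    x_max, y_max = uv.max(axis=0) + pad
    return int(x_min), int(y_min), int(x_max), int(y_max)
```

Because the box only needs to loosely contain the person, small keypoint errors from a lower estimation resolution are absorbed by the padding.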
The outdoor dataset is captured with 18 GoPro cameras (1920x1080). We keep the original resolution.
About the outdoor dataset, I found that the vhull dir contains the 3D bbox information, but I wonder how to get background.ply. Is it generated from the 18 background images? Also, I noticed the outdoor dataset no longer needs the SMPL points; it just needs the human images, the human 3D mask (generated from the 2D masks and lifted to 3D using the camera intrinsics and extrinsics), and the background information. Is that right? By the way, could you tell me the average distance between the GoPro cameras? Thank you!
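For the "2D mask to 3D" step, a rough visual-hull carving sketch is below, assuming per-view foreground masks and a 3D bounding box like the one stored in the vhull dir. All variable names and the grid resolution are illustrative assumptions, not the repo's exact pipeline:

```python
import numpy as np


def carve_visual_hull(masks, Ks, Rs, Ts, bbox_min, bbox_max, res=128):
    """Keep grid points whose projection falls inside every 2D mask.

    masks: list of (H, W) boolean foreground masks, one per camera.
    Ks, Rs, Ts: per-camera intrinsics (3x3), rotations (3x3), translations (3,).
    bbox_min, bbox_max: corners of the 3D bounding box (e.g. from the vhull dir).
    """
    # Build a regular grid of candidate points inside the 3D bbox.
    axes = [np.linspace(bbox_min[i], bbox_max[i], res) for i in range(3)]
    pts = np.stack(np.meshgrid(*axes, indexing='ij'), -1).reshape(-1, 3)
    keep = np.ones(len(pts), dtype=bool)

    for mask, K, R, T in zip(masks, Ks, Rs, Ts):
        cam = pts @ R.T + T.reshape(1, 3)                   # world -> camera
        uv = cam @ K.T
        uv = uv[:, :2] / np.clip(uv[:, 2:3], 1e-6, None)    # image coordinates
        u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
        H, W = mask.shape
        inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
        in_mask = np.zeros(len(pts), dtype=bool)
        in_mask[inside] = mask[v[inside], u[inside]]
        keep &= in_mask      # carve away points that fall outside any view's mask

    return pts[keep]
```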
Hi, the results look a little blurry when I visualize them with your gui_human.py. Is the resolution ratio (input_ratio in the yaml) causing the problem? Would the result look much clearer if that parameter were set to 1.0 for training and inference? Thank you!