YuliangXiu / ECON

[CVPR'23, Highlight] ECON: Explicit Clothed humans Optimized via Normal integration
https://xiuyuliang.cn/econ

What parameters are used when rendering the CAPE dataset? #120

Open MIkeR794 opened 8 months ago

MIkeR794 commented 8 months ago

Hello, thank you very much for your work. I have a question about the rendering process of the CAPE dataset.

I noticed that ICON includes a section for processing THuman2 data (in render_batch.py). The code defines these camera parameters:

# Camera Center
self.center = np.array([0, 0, 1.6])
self.direction = np.array([0, 0, -1])
self.right = np.array([1, 0, 0])
self.up = np.array([0, 1, 0])

At the same time, there is code to compute the scale factor:
scan_scale = 1.8 / (vertices.max(0)[up_axis] - vertices.min(0)[up_axis])
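To make sure I understand the pipeline correctly, here is a minimal sketch of how I am combining the scale factor with the camera parameters above. This assumes a y-up axis and an orthographic camera looking down -z from [0, 0, 1.6] with a [-1, 1] frustum; the function name, image size, and normalization details are my own illustration, not the repo's exact code:

```python
import numpy as np

def project_vertices(vertices, img_size=512):
    """Sketch of a THuman2-style orthographic projection.

    Assumptions (not taken from the repo): y is the up axis,
    the scan is normalized to a height of 1.8 units around its
    bounding-box center, and the ortho frustum spans [-1, 1].
    """
    up_axis = 1  # y-up, as in the snippet above

    # Normalize the scan so its height spans 1.8 units.
    scan_scale = 1.8 / (vertices.max(0)[up_axis] - vertices.min(0)[up_axis])
    center = 0.5 * (vertices.max(0) + vertices.min(0))
    verts = (vertices - center) * scan_scale

    # Camera at [0, 0, 1.6], direction [0, 0, -1], right [1, 0, 0],
    # up [0, 1, 0]: an orthographic projection keeps (x, y) and
    # flips y into image (row-down) coordinates.
    u = (verts[:, 0] / 2.0 + 0.5) * img_size
    v = (0.5 - verts[:, 1] / 2.0) * img_size
    return np.stack([u, v], axis=1)

# Toy usage: a 1.8-unit-tall "person" reduced to feet and head points.
pts = np.array([[0.0, -0.9, 0.0], [0.0, 0.9, 0.0]])
uv = project_vertices(pts)
```

With these parameters the THuman2 projections line up with the released RGB renders, which is why I suspect the CAPE renders use a different camera or scale convention.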

I have successfully projected the THuman2 mesh (or pseudo point cloud) onto a 2D image using these parameters, and the results align with your RGB images. However, when I project the CAPE mesh (or pseudo point cloud) using the same parameters, the results do not match the RGB images downloaded directly from cape_3views.

So I would like to ask: what parameters did you use when rendering the CAPE dataset? Looking forward to your reply. Thank you!