Hello, thank you very much for your work. I have a question about the rendering process for the CAPE dataset.
I noticed that ICON includes code for processing Thuman2 data (in render_batch.py), which defines the camera parameters:
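For context, my projection follows this general pattern (a minimal sketch of a weak orthographic projection; the function name and values here are my own placeholders, not the actual code from render_batch.py):

```python
import numpy as np

def ortho_project(verts, scale, center, image_size=512):
    """Project 3D vertices to 2D pixel coordinates with a weak
    orthographic camera: center, scale, then drop the depth axis."""
    # normalize the mesh: center it and scale into the camera's unit box
    v = (verts - center) * scale
    # orthographic projection: keep x/y, flip y for image coordinates
    x = (v[:, 0] + 1.0) * 0.5 * image_size
    y = (1.0 - (v[:, 1] + 1.0) * 0.5) * image_size
    return np.stack([x, y], axis=1)

# toy usage with a dummy two-vertex "mesh"
verts = np.array([[0.0, 0.0, 0.0], [0.1, 0.2, -0.1]])
center = verts.mean(axis=0)
pix = ortho_project(verts, scale=1.0, center=center)
print(pix.shape)  # (2, 2)
```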
There is also code that computes the scale factor:
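The scale factor I compute has this shape (again a sketch; the target height of 180 and the y-up axis are my assumptions about a typical normalization, not necessarily the exact values in render_batch.py):

```python
import numpy as np

def compute_scale(verts, target_height=180.0, up_axis=1):
    """Scale so the mesh's extent along the up axis maps to
    `target_height` units (placeholder value, assumed here)."""
    vmin = verts.min(axis=0)
    vmax = verts.max(axis=0)
    return target_height / (vmax[up_axis] - vmin[up_axis])

# toy mesh spanning 1.8 units along y
verts = np.array([[0.0, -0.9, 0.0], [0.0, 0.9, 0.0]])
print(compute_scale(verts))  # 100.0
```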
I have successfully projected the Thuman2 mesh (or pseudo point cloud) onto a 2D image with these parameters, and the result aligns with your RGB images. However, when I project the CAPE mesh (or pseudo point cloud) with the same parameters, the result does not match the RGB images downloaded directly from cape_3views.

![image](https://github.com/YuliangXiu/ECON/assets/68797536/79022791-4b41-4370-b479-b3e55e544076)
So I would like to ask: what parameters did you use when rendering the CAPE dataset? Looking forward to your reply. Thank you!