Closed shinxg closed 5 months ago

Thanks for providing such a great dataset for the inverse rendering task. I recently experimented with the dataset but found that the ground-truth envmaps are not aligned after converting the camera-space envmap to the world-space envmap following this function: https://github.com/StanfordORB/Stanford-ORB/blob/962ea6d2cced6c9ea076fea4dc33464589036552/orb/utils/env_map.py#L12. Could you elaborate on how you convert the camera-space envmap to world space? Thanks in advance! The misaligned world-space envmaps are attached. envmap_cactus_scene001_test.zip

Thanks for the question. This misalignment is present in the dataset and is mainly due to the baseline between the object and the camera, so make sure to use the particular envmap provided for each test image.

OK, thank you for the clarification. Closing the issue now.
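For anyone retracing this step: below is a minimal sketch of one common way to rotate a latitude-longitude envmap from camera space into world space, given the camera-to-world rotation. This is *not* the Stanford-ORB implementation linked above; the function names, the lat-long parameterization, and the axis conventions here are assumptions for illustration only.

```python
import numpy as np

def latlong_to_dirs(h, w):
    # Unit direction for each pixel center of an (h, w) lat-long envmap.
    theta = (np.arange(h) + 0.5) / h * np.pi        # polar angle in [0, pi]
    phi = (np.arange(w) + 0.5) / w * 2 * np.pi      # azimuth in [0, 2*pi)
    theta, phi = np.meshgrid(theta, phi, indexing="ij")
    return np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=-1)       # shape (h, w, 3)

def rotate_envmap(env_cam, c2w_rot):
    # Resample a camera-space envmap into world space: for each
    # world-space direction, rotate it back into camera space and
    # look up the source pixel (nearest neighbor).
    h, w, _ = env_cam.shape
    dirs_world = latlong_to_dirs(h, w).reshape(-1, 3)
    # Rows are world directions; right-multiplying by R applies R^T,
    # i.e. maps world directions into camera space.
    dirs_cam = dirs_world @ c2w_rot
    theta = np.arccos(np.clip(dirs_cam[:, 2], -1.0, 1.0))
    phi = np.arctan2(dirs_cam[:, 1], dirs_cam[:, 0]) % (2 * np.pi)
    rows = np.clip((theta / np.pi * h).astype(int), 0, h - 1)
    cols = np.clip((phi / (2 * np.pi) * w).astype(int), 0, w - 1)
    return env_cam[rows, cols].reshape(h, w, -1)
```

With the identity rotation the map comes back unchanged; in practice `c2w_rot` would come from each test image's camera extrinsics, which is why a per-image envmap matters when the object-to-camera baseline differs.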