scene-verse / SceneVerse

Official implementation of ECCV24 paper "SceneVerse: Scaling 3D Vision-Language Learning for Grounded Scene Understanding"
https://scene-verse.github.io
MIT License

How to get multi-view images #9

Open ZJHTerry18 opened 5 months ago

ZJHTerry18 commented 5 months ago

Hi! Thanks for your great work! I am curious about how you obtain the multi-view object images in the "Object Caption" step of your annotation pipeline. It seems that only a 3D point cloud and an object bounding box are needed? But how do you decide the camera pose for each view, and how do you render the images so they look realistic?

I am also wondering whether the code for the entire SceneVerse annotation pipeline will be released :)

Buzz-Beater commented 4 months ago

Thanks for the interest! Most of the datasets we consider (e.g., ScanNet) contain RGBD videos from the original capture. You can use those to project the 3D bounding boxes into the 2D frames, and then feed the resulting crops to VLMs to generate captions.
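For anyone landing here later, the projection step described above can be sketched roughly as follows. This is a minimal illustration, not SceneVerse's actual code: the intrinsics, the identity extrinsics, and the box values are made-up placeholders, and it assumes an OpenCV-style pinhole camera with world-to-camera poses as used by ScanNet.

```python
# Hedged sketch: project a 3D axis-aligned bounding box into a 2D image
# with a pinhole camera, to get a 2D crop for VLM captioning.
# All numeric values below are placeholders, not SceneVerse's parameters.
import numpy as np

def box_corners(center, size):
    """Return the 8 corners of an axis-aligned 3D box (Nx3)."""
    offsets = np.array([[sx, sy, sz]
                        for sx in (-0.5, 0.5)
                        for sy in (-0.5, 0.5)
                        for sz in (-0.5, 0.5)])
    return center + offsets * size

def project_to_image(points_world, world_to_cam, K):
    """Project Nx3 world points to Nx2 pixel coords; drop points behind the camera."""
    pts_h = np.hstack([points_world, np.ones((len(points_world), 1))])
    pts_cam = (world_to_cam @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0          # keep points with positive depth
    uv = (K @ pts_cam[in_front].T).T
    return uv[:, :2] / uv[:, 2:3]         # perspective divide

# Placeholder intrinsics (fx, fy, cx, cy) and an identity camera pose
K = np.array([[577.0,   0.0, 319.5],
              [  0.0, 577.0, 239.5],
              [  0.0,   0.0,   1.0]])
world_to_cam = np.eye(4)  # camera at origin looking down +z

corners = box_corners(np.array([0.0, 0.0, 2.0]), np.array([1.0, 1.0, 1.0]))
uv = project_to_image(corners, world_to_cam, K)
bbox2d = (uv.min(axis=0), uv.max(axis=0))  # 2D rect enclosing the projected box
print(bbox2d)
```

In practice you would take `world_to_cam` per frame from the dataset's camera trajectory, clip the 2D rect to the image bounds, and pick frames where the object is well visible before cropping.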

As for the second question, we are wrapping up the code release, stay tuned.

Hoyyyaard commented 3 months ago

Could you please provide a link to the RGBD videos for HM3D?

Buzz-Beater commented 2 months ago

Hi, since HM3D originally did not provide multi-view images, we did not generate RGBD videos for it. Instead, we synthesized viewpoints for objects for captioning.
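One common way to synthesize such viewpoints, sketched below, is to place virtual cameras on a ring around the object's bounding box and build look-at extrinsics for each. This is an assumption about the general approach, not SceneVerse's exact recipe: the radius multiplier, elevation angle, and view count are arbitrary choices for illustration.

```python
# Hedged sketch: generate candidate camera poses around an object when no
# real RGBD capture exists. Cameras sit on a circle around the object,
# slightly elevated, all looking at its center (OpenCV convention:
# x right, y down, z forward; world-to-camera matrices).
import numpy as np

def look_at(eye, target, up=np.array([0.0, 0.0, 1.0])):
    """Build a 4x4 world-to-camera matrix for a camera at `eye` looking at `target`."""
    forward = target - eye
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    down = np.cross(forward, right)
    R = np.stack([right, down, forward])  # rows: camera axes in world frame
    w2c = np.eye(4)
    w2c[:3, :3] = R
    w2c[:3, 3] = -R @ eye
    return w2c

def ring_viewpoints(center, size, n_views=8, elev_deg=30.0):
    """Evenly spaced azimuths around the object; radius scales with object size."""
    radius = 1.5 * np.linalg.norm(size)   # back off proportionally to the object
    elev = np.deg2rad(elev_deg)
    poses = []
    for az in np.linspace(0.0, 2 * np.pi, n_views, endpoint=False):
        eye = center + radius * np.array([np.cos(az) * np.cos(elev),
                                          np.sin(az) * np.cos(elev),
                                          np.sin(elev)])
        poses.append(look_at(eye, center))
    return poses

poses = ring_viewpoints(np.array([1.0, 2.0, 0.5]), np.array([0.8, 0.6, 1.2]))
print(len(poses))  # 8 candidate camera poses to render the point cloud from
```

Each pose can then be used with a point-cloud or mesh renderer to produce the object image, after which occluded or empty views are typically filtered out before captioning.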

Hoyyyaard commented 1 month ago

Could you please share how you synthesize these viewpoints?