ZJHTerry18 opened 5 months ago
Thanks for the interest. Most of the datasets considered (e.g., ScanNet) contain RGBD videos from the original capture; you can use those to project 3D bounding boxes into 2D for the VLMs to generate captions.
As for the second question, we are wrapping up the code release; stay tuned.
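For readers looking for a starting point, the projection step mentioned above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes you already have the camera intrinsics `K` and a world-to-camera extrinsic matrix per frame (ScanNet ships both with each capture), and the function name and shapes are my own choices.

```python
import numpy as np

def project_bbox_corners(corners_world, K, T_world_to_cam):
    """Project 3D box corners into pixel coordinates.

    corners_world: (N, 3) box corners in the world frame.
    K: (3, 3) camera intrinsics.
    T_world_to_cam: (4, 4) extrinsics (world -> camera).
    Returns (N, 2) pixel coordinates and a per-corner mask
    marking corners in front of the camera (z > 0).
    """
    n = corners_world.shape[0]
    homo = np.hstack([corners_world, np.ones((n, 1))])   # homogeneous coords, (N, 4)
    cam = (T_world_to_cam @ homo.T).T[:, :3]             # points in camera frame, (N, 3)
    in_front = cam[:, 2] > 0                             # corners behind the camera are not visible
    pix = (K @ cam.T).T                                  # apply intrinsics, (N, 3)
    pix = pix[:, :2] / pix[:, 2:3]                       # perspective divide
    return pix, in_front
```

The visible 2D box is then the axis-aligned hull of the projected corners that fall inside the image, after discarding frames where too few corners are in front of the camera.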
Can you please provide a link to the RGBD videos of HM3D?
Hi, since HM3D does not originally provide multi-view images, we did not generate RGBD videos. Instead, we synthesized viewpoints around each object for captioning.
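Until the official code is out, one common way to synthesize such viewpoints is to place cameras on a ring around the object's bounding-box center and point each one at the object with a look-at pose. The sketch below assumes an OpenCV-style camera convention (x right, y down, z forward) and invented parameter choices (8 azimuths, 30° elevation); it is not the SceneVerse implementation.

```python
import numpy as np

def look_at_pose(cam_pos, target, up=np.array([0.0, 0.0, 1.0])):
    """Build a 4x4 camera-to-world pose looking from cam_pos at target."""
    forward = target - cam_pos
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)
    pose = np.eye(4)
    # OpenCV convention: x right, y down, z forward
    pose[:3, 0] = right
    pose[:3, 1] = -true_up
    pose[:3, 2] = forward
    pose[:3, 3] = cam_pos
    return pose

def sample_viewpoints(center, radius, n_azimuth=8, elevation_deg=30.0):
    """Sample camera poses on a ring around an object center."""
    elev = np.deg2rad(elevation_deg)
    poses = []
    for az in np.linspace(0.0, 2.0 * np.pi, n_azimuth, endpoint=False):
        offset = radius * np.array([
            np.cos(elev) * np.cos(az),   # x on the ring
            np.cos(elev) * np.sin(az),   # y on the ring
            np.sin(elev),                # constant height above the object
        ])
        poses.append(look_at_pose(center + offset, center))
    return poses
```

Each returned pose can be fed to a point-cloud or mesh renderer (e.g. Open3D or PyTorch3D) to produce the per-object views; poses whose frustum is mostly occluded or empty are typically filtered out afterwards.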
Can you please share how these viewpoints are synthesized?
Hi! Thanks for your great work! I am curious about how the multi-view object images are obtained in the "Object Caption" step of your annotation pipeline. It seems that only a 3D point cloud and an object bounding box are needed? But how do you decide the camera pose for each view, and how do you render the images so that they look realistic?
I am also wondering whether the code for the entire SceneVerse annotation pipeline will be released :)