noahstier / vortx

Source code for the paper "Volumetric 3D Reconstruction with Transformers for Voxel-wise View Selection and Fusion"
MIT License

How to implement VoRTX on ShapeNet dataset #8

Open Yxs-160 opened 1 year ago

Yxs-160 commented 1 year ago

Hi! Thank you for your work and code! I would now like to run VoRTX on the ShapeNet dataset. How should I go about implementing that?

noahstier commented 1 year ago

I would suggest you render RGB and depth images and store the camera parameters. Then use TSDF fusion to generate the ground-truth TSDF.
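For example, a minimal sketch of that fusion step using Open3D might look like the following; the file layout, depth encoding, and camera-parameter format are assumptions, and the repo's own fusion code may differ.

```python
# Hypothetical sketch: fuse rendered RGB-D frames of a ShapeNet model into a TSDF
# using Open3D. Paths, image naming, and the camera JSON format are assumptions.
import json
import numpy as np
import open3d as o3d

volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=0.02,   # ~2 cm voxels; tune to the object scale
    sdf_trunc=0.08,      # truncation distance of a few voxels
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8,
)

with open("cameras.json") as f:   # assumed per-frame intrinsics/extrinsics
    cams = json.load(f)

for i, cam in enumerate(cams):
    color = o3d.io.read_image(f"rgb/{i:04d}.png")
    depth = o3d.io.read_image(f"depth/{i:04d}.png")   # e.g. 16-bit depth in mm
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth, depth_scale=1000.0, depth_trunc=3.0,
        convert_rgb_to_intensity=False,
    )
    intr = o3d.camera.PinholeCameraIntrinsic(
        cam["width"], cam["height"], cam["fx"], cam["fy"], cam["cx"], cam["cy"]
    )
    extr = np.array(cam["world_to_camera"])   # 4x4 world-to-camera matrix
    volume.integrate(rgbd, intr, extr)

# Sanity-check the fused geometry; the dense TSDF grid for training would be
# extracted in whatever format the training code expects.
mesh = volume.extract_triangle_mesh()
o3d.io.write_triangle_mesh("fused_mesh.ply", mesh)
```

The voxel size and truncation distance would need to match whatever resolution the training code expects.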

If you want to improve on that strategy, you could try using something like https://github.com/hjwdzh/Manifold to create watertight meshes, which would allow you to compute a more accurate TSDF ground truth.
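For instance, once you have a watertight mesh you could compute the TSDF directly from the geometry instead of fusing depth maps. A rough sketch using trimesh (an assumption, not something this repo ships) might look like this:

```python
# Hypothetical sketch: compute a truncated SDF on a regular grid directly from a
# watertight mesh (e.g. one produced by hjwdzh/Manifold). File names are assumptions.
import numpy as np
import trimesh

mesh = trimesh.load("model_watertight.obj")

# Build a regular voxel grid over the mesh bounds.
voxel_size = 0.02
mins, maxs = mesh.bounds
xs = np.arange(mins[0], maxs[0], voxel_size)
ys = np.arange(mins[1], maxs[1], voxel_size)
zs = np.arange(mins[2], maxs[2], voxel_size)
grid = np.stack(np.meshgrid(xs, ys, zs, indexing="ij"), axis=-1).reshape(-1, 3)

# trimesh's convention is positive distance inside the mesh, so negate
# if the training code expects positive-outside.
sdf = -trimesh.proximity.signed_distance(mesh, grid)

trunc = 3 * voxel_size
tsdf = np.clip(sdf / trunc, -1.0, 1.0).reshape(len(xs), len(ys), len(zs))
np.save("tsdf_gt.npy", tsdf)
```

Note that the exact signed-distance query can be slow on large grids, so you may want to restrict it to a band near the surface.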

Yxs-160 commented 1 year ago

Thank you for your reply! I have used the ShapeNet dataset for single-view and multi-view 3D object reconstruction before, but I only recently started learning about MVS, so I don't know how to render RGB and depth images or how to save the camera parameters. Do you have a suggested approach? @noahstier

noahstier commented 1 year ago

You can use 3D graphics software such as Blender. For example: https://github.com/panmari/stanford-shapenet-renderer
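As a rough illustration, a bpy script along these lines could render RGB-D views and dump camera parameters; the paths, view sampling, and resolution are assumptions, and the linked renderer does the same thing more robustly.

```python
# Hypothetical sketch of a Blender (bpy) script that renders RGB + depth for one
# ShapeNet model and saves camera parameters. Run headless with:
#   blender --background --python render_shapenet.py
import json
import math
import bpy
import mathutils

scene = bpy.context.scene
scene.render.resolution_x = 640
scene.render.resolution_y = 480
scene.render.image_settings.file_format = "PNG"

# Remove the default cube so only the imported model is rendered.
if "Cube" in bpy.data.objects:
    bpy.data.objects.remove(bpy.data.objects["Cube"], do_unlink=True)

# Import the model (Blender < 4.0 OBJ importer; newer versions use bpy.ops.wm.obj_import).
bpy.ops.import_scene.obj(filepath="model_normalized.obj")

# Enable the depth (Z) pass and route it to EXR files via the compositor.
bpy.context.view_layer.use_pass_z = True
scene.use_nodes = True
tree = scene.node_tree
rl = tree.nodes.new("CompositorNodeRLayers")
depth_out = tree.nodes.new("CompositorNodeOutputFile")
depth_out.format.file_format = "OPEN_EXR"
depth_out.base_path = "depth"
tree.links.new(rl.outputs["Depth"], depth_out.inputs[0])

cam = scene.camera   # default scene camera
cams = []
n_views = 24
for i in range(n_views):
    # Place the camera on a circle around the object, looking at the origin.
    angle = 2 * math.pi * i / n_views
    cam.location = (1.5 * math.cos(angle), 1.5 * math.sin(angle), 0.8)
    direction = -mathutils.Vector(cam.location)
    cam.rotation_euler = direction.to_track_quat("-Z", "Y").to_euler()

    scene.render.filepath = f"rgb/{i:04d}.png"
    depth_out.file_slots[0].path = f"{i:04d}_"
    bpy.ops.render.render(write_still=True)

    # Intrinsics from focal length / sensor width (assuming square pixels);
    # extrinsics from the camera pose.
    fx = cam.data.lens / cam.data.sensor_width * scene.render.resolution_x
    world_to_cam = cam.matrix_world.inverted()
    # NOTE: Blender's camera looks down -Z with +Y up; convert to your MVS
    # convention (e.g. OpenCV's +Z forward, -Y up) before TSDF fusion.
    cams.append({
        "width": scene.render.resolution_x, "height": scene.render.resolution_y,
        "fx": fx, "fy": fx,
        "cx": scene.render.resolution_x / 2, "cy": scene.render.resolution_y / 2,
        "world_to_camera": [list(row) for row in world_to_cam],
    })

with open("cameras.json", "w") as f:
    json.dump(cams, f, indent=2)
```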