autonomousvision / differentiable_volumetric_rendering

This repository contains the code for the CVPR 2020 paper "Differentiable Volumetric Rendering: Learning Implicit 3D Representations without 3D Supervision"
http://www.cvlibs.net/publications/Niemeyer2020CVPR.pdf
MIT License

Are all the scenes of the DTU dataset "object-centric" in world space? #50

Closed TruongKhang closed 3 years ago

TruongKhang commented 3 years ago

Hello,

I have a question about your experiment with the DTU dataset. Are the provided DTU scenes in an "object-centric" space, or did you have to pre-process them to obtain this object-centric space? I checked the DTU dataset, and it does not seem to be in an object-centric space, right? Thank you, and I look forward to hearing your response.

Best, Khang.

TruongKhang commented 3 years ago

I've seen your answer here: https://github.com/autonomousvision/differentiable_volumetric_rendering/blob/master/FAQ.md. But how do you compute this matrix? Could you suggest any ideas?

m-niemeyer commented 3 years ago

Hi @TruongKhang , thanks a lot for your interest in the project!

Yes, you are right, in the DTU dataset, the objects are not in an object-centric coordinate system, and we had to calculate this transformation ourselves ("scale_mat").

I see two ways you can do this:

1. In the DTU dataset, ground-truth mesh reconstructions are provided with their vertices in world space. You can use all vertices to calculate the object's location and scale, and define the corresponding matrix.
2. You can use the object masks to extract the visual hull as a mesh; then you can proceed as in 1.
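A minimal sketch of option 1, assuming you already have the ground-truth mesh vertices as an `(N, 3)` NumPy array in world space (e.g. loaded with `trimesh`). The function name `compute_scale_mat` and the `padding` factor are illustrative, not part of the repository's code; the matrix below maps object-centric coordinates (object centered at the origin, roughly unit scale) into world space, so its inverse normalizes world points:

```python
import numpy as np

def compute_scale_mat(vertices, padding=1.1):
    """Build a 4x4 "scale_mat" from world-space mesh vertices.

    The matrix maps object-centric coordinates to world space:
    world = scale_mat @ [x_obj, y_obj, z_obj, 1]. Applying its
    inverse to world points centers the object at the origin and
    scales it to roughly unit size. `padding` adds a small margin
    so the object fits strictly inside the unit cube.
    """
    vertices = np.asarray(vertices, dtype=np.float64)
    vmin = vertices.min(axis=0)
    vmax = vertices.max(axis=0)
    # Center of the axis-aligned bounding box in world space.
    center = 0.5 * (vmin + vmax)
    # Half the largest bounding-box extent, with a safety margin.
    scale = 0.5 * (vmax - vmin).max() * padding
    scale_mat = np.eye(4)
    scale_mat[:3, :3] *= scale
    scale_mat[:3, 3] = center
    return scale_mat
```

For option 2, you would extract the visual hull mesh from the object masks and camera poses first, then feed its vertices to the same function.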

Good luck with your research!