This repository contains the code for the CVPR 2020 paper "Differentiable Volumetric Rendering: Learning Implicit 3D Representations without 3D Supervision"
Hello, congrats on a great research project, and thanks for releasing the source code!
I'm trying to measure the Chamfer distance using DTU scan65, but I'm unsure what scale/coordinate frame the point cloud at https://s3.eu-central-1.amazonaws.com/avg-projects/differentiable_volumetric_rendering/data/DTU.zip (DTU\scan65\scan65\pcl.npz) is stored in. I tried computing the AABB around that point cloud.
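For concreteness, this is how I compute the AABB (the key name inside the npz archive is my guess; I check data.files to find the actual one):

```python
import numpy as np

def aabb(points):
    """Axis-aligned bounding box (min corner, max corner) of an (N, 3) point array."""
    pts = np.asarray(points, dtype=float)
    return pts.min(axis=0), pts.max(axis=0)

# Loading the reference cloud ('points' is an assumed key; inspect data.files):
# data = np.load(r"DTU\scan65\scan65\pcl.npz")
# lo, hi = aabb(data["points"])

# Demo on synthetic points:
lo, hi = aabb(np.array([[0.0, -1.0, 2.0], [3.0, 4.0, -5.0]]))
```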
Now, when running
python generate.py configs/multi_view_reconstruction/skull/ours_rgb_pretrained.yaml
I get two meshes out: scan65.ply with AABB within [-0.5, 0.5], and scan65_world_scale.ply with a quite different AABB. I assume it's the latter version that should be used in the Chamfer computation, but there is a large difference between the AABBs. Am I missing some transform, or are there other reference point clouds I should use?
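As a sanity check on my end, if the two meshes really are the same geometry in different frames, the per-axis scale and translation between them should be recoverable from the AABBs alone (for a similarity transform all three scale components should come out nearly equal). A minimal sketch, with made-up AABB values for illustration:

```python
import numpy as np

def aabb_to_aabb_transform(src_lo, src_hi, dst_lo, dst_hi):
    """Per-axis scale and shift mapping the source AABB onto the destination AABB."""
    src_lo, src_hi = np.asarray(src_lo, float), np.asarray(src_hi, float)
    dst_lo, dst_hi = np.asarray(dst_lo, float), np.asarray(dst_hi, float)
    scale = (dst_hi - dst_lo) / (src_hi - src_lo)
    shift = dst_lo - scale * src_lo
    return scale, shift

# Illustrative values only: unit-cube mesh vs. a hypothetical world-frame AABB.
scale, shift = aabb_to_aabb_transform([-0.5] * 3, [0.5] * 3,
                                      [10.0, 20.0, 30.0], [14.0, 24.0, 34.0])
# Points would then map as: points_world = scale * points_unit + shift
```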
Also, what normalization factors (maximal edge lengths) should I use for the three DTU examples, following the paper's convention "Like Fan et al. [17] we use 1/10 times the maximal edge length of the current object's bounding box as unit 1."?
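For reference, this is my current reading of that convention (a brute-force sketch, not the repo's actual evaluation code; in particular, whether the ground-truth or predicted bounding box defines the unit is exactly what I'm asking about):

```python
import numpy as np

def chamfer_l2(a, b):
    """Symmetric Chamfer distance between (N, 3) and (M, 3) point sets.
    Brute-force pairwise distances; fine for small clouds."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.sqrt(d2.min(axis=1)).mean() + np.sqrt(d2.min(axis=0)).mean()

def normalized_chamfer(pred, gt):
    """Chamfer distance with 1/10 of the (assumed ground-truth) bounding box's
    maximal edge length taken as unit 1, per my reading of Fan et al."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    unit = 0.1 * (gt.max(axis=0) - gt.min(axis=0)).max()
    return chamfer_l2(pred / unit, gt / unit)
```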