pwais closed this issue 2 years ago
Hi, the PyTorch version should be mostly in sync with the CUDA version. However, it is extremely slow (it is only intended for gradcheck on small batches).
It should not be hard to extract voxel densities from the PyTorch side: the voxel grid's links tensor holds indices into the density_data array. Note that we are using density as in NeRF (any nonnegative number), not opacity.
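To illustrate that density-vs-opacity distinction, here is a minimal sketch of the standard NeRF-style conversion, alpha = 1 - exp(-sigma * delta). The constant step length delta below is a hypothetical stand-in; the actual renderer uses per-sample step lengths along each ray.

```python
import torch

# Densities sigma as stored in the grid: any nonnegative number, NOT opacity.
sigma = torch.tensor([0.0, 0.5, 2.0, 10.0])

# Hypothetical constant ray step length through a voxel (assumption for
# illustration; the real renderer computes this per sample).
delta = 0.5

# NeRF-style per-sample opacity (alpha) used in volume rendering.
alpha = 1.0 - torch.exp(-sigma * delta)
```

Zero density maps to zero opacity, and opacity approaches (but never reaches) 1 as density grows, which is why the stored values need no upper bound.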
@sxyu Thanks for the comments! I would love to use the CUDA version... am I citing the right density code path above for "inference"?
Hi Paul,
Sorry for being a bit unclear. Since the CUDA code is part of the PyTorch extension, you do not need to go through CUDA to access the voxel data: the tensors grid.links and grid.density_data contain this information. If you want to process these in CUDA, you would have to write some additional kernels.
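A minimal sketch of reading densities out through those two tensors, using toy stand-ins for grid.links and grid.density_data (the shapes and the use of a negative link value for empty/pruned voxels follow the description in this thread; check the svox2 source for the exact layout):

```python
import torch

# Toy stand-in for grid.links: an integer tensor over the voxel grid,
# holding an index into density_data, or a negative value for empty voxels.
links = torch.tensor([[[0, -1], [1, 2]],
                      [[-1, 3], [4, -1]]], dtype=torch.int32)  # (2, 2, 2)

# Toy stand-in for grid.density_data: one density per active voxel.
density_data = torch.tensor([[0.5], [1.0], [2.0], [0.0], [3.0]])  # (5, 1)

# Scatter the sparse densities into a dense volume; empty voxels get 0.
dense = torch.zeros(links.shape)
mask = links >= 0
dense[mask] = density_data[links[mask].long(), 0]
```

No CUDA is involved: this is ordinary tensor indexing, so it works the same whether the tensors live on CPU or GPU.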
The part you linked is indeed the rendering code, but it is not relevant for accessing the data directly, if I understand correctly.
Thank you for adding the depth map tracing! https://github.com/sxyu/svox2/commit/59984d6c4fd3d713353bafdcb011646e64647cc7
I'd love to see if you folks study this issue more and report on it in the paper, but am closing for now.
The paper focuses on visual reconstruction metrics... has there been any study of depth / voxel opacity yet (e.g. on Tanks and Temples)? I could be wrong, but it looks like there is not yet any affordance for returning / extracting / debugging opacity from the CUDA impl:
https://github.com/sxyu/svox2/blob/ad1b4a816f7c2a6875880200e708f58f67707e5f/svox2/csrc/render_lerp_kernel_cuvol.cu#L74
So perhaps there has been no such study yet.
Dumb question: do the PyTorch checkpoints still work if one "trains" using the CUDA kernels but "tests" using the PyTorch _volume_render_gradcheck* code paths? (And how synchronized are the PyTorch code paths with the CUDA kernels?)

Thanks for the amazing paper and for releasing the results so quickly!