Hi,
Thanks for this amazing work. I especially enjoyed going through the experiment analysis as it was so thorough!
I have one question regarding the CONet baseline. If I understand correctly, the M-baseline reported in Table 5 is at a lower resolution of 10x128x128. However, since the intermediate output is a feature tensor, it should be possible to interpolate in that space -- indeed, that is one of the advantages of implicit representations. I am wondering if you tried a baseline where you simply upsample the feature tensor by interpolation and use it to generate high-resolution occupancy output. Or do you have any thoughts on this matter?
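To make the suggestion concrete, here is a minimal sketch of what I have in mind. The 8-channel feature grid and the sigmoid-of-mean decoder are placeholders for illustration only, not CONet's actual feature dimensionality or trained decoder head:

```python
import numpy as np
from scipy.ndimage import zoom

# Hypothetical feature grid: 8 channels over the 10x128x128 voxel grid
# (the M-baseline resolution mentioned above).
feat = np.random.rand(8, 10, 128, 128).astype(np.float32)

# Trilinearly upsample the spatial dimensions by 2x, leaving channels untouched.
feat_hi = zoom(feat, (1, 2, 2, 2), order=1)  # shape (8, 20, 256, 256)

def decode_occupancy(f):
    """Stand-in occupancy decoder; the real one would be the trained MLP head."""
    return 1.0 / (1.0 + np.exp(-f.mean(axis=0)))  # sigmoid of channel mean

# Query occupancy on the upsampled grid, i.e. at higher output resolution.
occ = decode_occupancy(feat_hi)
print(occ.shape)  # (20, 256, 256)
```

The point is that the interpolation happens in feature space before decoding, so the decoder is queried densely at the target resolution without retraining anything.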
Best, Akshay