I have some detailed questions about the ShapeNet dataset used for the single-view reconstruction task.
Do you also use the rendered images from 3D-R2N2 as input, just like DISN?
Can I directly use the generated SDF tar.gz provided by DISN?
I notice that you use the voxelized models provided by 3D-R2N2 for evaluation. The voxelized models in 3D-R2N2 are 32³, while the output of your network is 64³. How do you compute the IoU metric? Do you downsample your output?
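For reference, one common way to compare grids at mismatched resolutions is to max-pool the higher-resolution prediction down to the ground-truth size before computing IoU. The sketch below is only an illustration of that idea, not the repo's actual evaluation code; the function names are my own, and the project may instead upsample the ground truth.

```python
import numpy as np

def downsample_max(vox, factor=2):
    """Max-pool a cubic occupancy grid: a coarse voxel is occupied if any
    of its children are. factor=2 maps a 64^3 grid to 32^3."""
    n = vox.shape[0] // factor
    return vox.reshape(n, factor, n, factor, n, factor).max(axis=(1, 3, 5))

def voxel_iou(a, b):
    """Intersection-over-union of two boolean occupancy grids."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 1.0

# Toy example: a 64^3 prediction vs. a 32^3 ground truth,
# both occupying the same half-space.
pred_64 = np.zeros((64, 64, 64), dtype=bool)
pred_64[:32] = True
gt_32 = np.zeros((32, 32, 32), dtype=bool)
gt_32[:16] = True
print(voxel_iou(downsample_max(pred_64), gt_32))  # -> 1.0
```

Max-pooling (rather than averaging) is a conservative choice here: it preserves thin structures that would otherwise vanish when halving the resolution.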
I'm not sure about the resolution of the grids provided in SDF_v1.tar.gz. If it is 64³, then yes. If not, you can use their preprocessing script, change `num_sample` to 64³, and preprocess ShapeNet yourself.