yccyenchicheng / AutoSDF


Some questions about the dataset used for single-view reconstruction. #11

Closed Silverster98 closed 2 years ago

Silverster98 commented 2 years ago

I have some detailed questions about the ShapeNet dataset used for the single-view reconstruction task.

  1. Do you also use the rendered images from 3D-R2N2 as input, just like DISN?
  2. Can I directly use the generated SDF tar.gz provided by DISN?
  3. I notice that you use voxelized models provided by 3D-R2N2 for evaluation. The voxelized models in 3D-R2N2 have size 32×32×32, while the output of your network is 64×64×64. How do you compute the IoU metric? Do you reduce the size of your output?
yccyenchicheng commented 2 years ago

Hi @Silverster98,

  1. Yes we use images from 3D-R2N2.
  2. I'm not sure about the resolution they provide in SDF_v1.tar.gz. If it is 64**3 then yes. If not, you can use their preprocessing script, change `num_sample=64**3`, and preprocess ShapeNet yourself.
  3. Yes, I downsample the output to 32**3 using this function: https://github.com/xingyuansun/pix3d/blob/master/eval/eval.py#L209-L229.
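
For reference, the downsample-then-IoU step can be sketched as below. This is a minimal illustration, not the authors' exact code: it assumes the 64**3 prediction is max-pooled over 2×2×2 blocks to 32**3 (in the spirit of the pix3d function linked above) before comparing against the 3D-R2N2 ground truth.

```python
import numpy as np

def downsample(voxel, factor=2):
    # Max-pool a cubic occupancy grid by `factor` along each axis
    # (a simplified stand-in for the pix3d downsample function).
    n = voxel.shape[0]
    assert n % factor == 0
    v = voxel.reshape(n // factor, factor,
                      n // factor, factor,
                      n // factor, factor)
    return v.max(axis=(1, 3, 5))

def voxel_iou(a, b):
    # Intersection-over-union of two binary occupancy grids.
    a, b = a.astype(bool), b.astype(bool)
    return (a & b).sum() / float((a | b).sum())

# Toy example: a 64**3 prediction with the first half occupied,
# and a matching 32**3 ground-truth grid.
pred64 = np.zeros((64, 64, 64), dtype=np.uint8)
pred64[:32] = 1
gt32 = np.zeros((32, 32, 32), dtype=np.uint8)
gt32[:16] = 1
print(voxel_iou(downsample(pred64), gt32))  # -> 1.0
```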

Thank you!