czq142857 / IM-NET-pytorch

PyTorch 1.2 implementation of IM-NET.
196 stars · 27 forks

Question on data preprocessing step #10

Closed supriya-gdptl closed 3 years ago

supriya-gdptl commented 3 years ago

Hello @czq142857,

Sorry to bother you again, but I have a few questions about the pre-processing step. I want to generate the data required to run IM-NET (voxels and point-value pairs) from the object meshes in ShapeNetCore.v1. I did the following steps:

Output: (image)

I also visualized the voxels and point-value pairs from the ready-to-use data that you provided here. I used the above code to visualize the sampled points for the same 3 examples, and the visualizations are different. (image)

The main difference is that in the HSP-sampled data, even when points are sampled at 16/32/64 resolutions, the actual point coordinates still live on the 256^3 grid, whereas that is not the case with the earlier sampling that uses the code given here.

Could you please tell me why there is a difference between the pre-processed datasets, even though I am using the same ShapeNetCore.v1 data? Which one is correct? And if I want to obtain the correct data required to run IM-NET from the mesh files, how can I do that?

Thank you, Supriya

czq142857 commented 3 years ago

Hi Supriya,

You were using an old version of the point sampling code. Please use the correct version provided in this repo. Here's the link: https://github.com/czq142857/IM-NET/tree/master/point_sampling
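Conceptually, that script pairs each cell of a coarse (16/32/64) grid with a coordinate expressed on the 256^3 grid plus an occupancy value. A simplified sketch of the idea (a hypothetical helper, not the repo's actual script, which additionally balances samples near the surface at higher resolutions):

```python
import numpy as np

def sample_point_values(vox256, res):
    """Sketch: for each cell of a res^3 grid, record the cell-center
    coordinate on the 256^3 grid plus the cell's occupancy."""
    step = 256 // res
    # occupancy of each coarse cell = max over the covered fine voxels
    coarse = vox256.reshape(res, step, res, step, res, step).max(axis=(1, 3, 5))
    idx = np.stack(np.meshgrid(*[np.arange(res)] * 3, indexing='ij'), -1).reshape(-1, 3)
    points = (idx * step + step // 2).astype(np.uint8)  # coords on the 256^3 grid
    values = coarse.reshape(-1, 1).astype(np.float32)
    return points, values
```

This is why the point coordinates range over [0, 256) even at sampling resolution 16: only the grid over which values are aggregated is coarse.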

You need to prepare 256^3 voxels to use that code. You also need to rewrite a few lines to read .binvox files instead of .mat files.

The old point-sampling code can be used if you wish; you just need to make sure the points are scaled correctly. Specifically, rewrite this line in modelAE.py according to the sampling resolution (note the number 256):

            self.data_points = (data_dict['points_'+str(self.sample_vox_size)][:].astype(np.float32)+0.5)/256-0.5

Best, Zhiqin

supriya-gdptl commented 3 years ago

Thank you so much for the help @czq142857 !

I am using 2_gather_256vox_16_32_64.py to sample point-value pairs. I noticed that you transform coordinates from ShapeNet.v1 to ShapeNet.v2 on line 104. What is the purpose of this conversion? Is the ready-to-use dataset based on ShapeNetCore.v1 or ShapeNetCore.v2?

Also, I just want to confirm: I don't need to use the flood-filling code to make the mesh watertight, since 2_gather_256vox_16_32_64.py uses voxel carving for that purpose. Is that right?

Thank you for your time.

czq142857 commented 3 years ago

The conversion does not have a lot of meaning. You can use either ShapeNet v1 or v2 coordinates based on your preference.

The ready-to-use dataset is based on ShapeNetCore.v1, but in ShapeNetCore.v2 coordinates.

Right. You do not need to use flood-filling code to make the mesh watertight, as 2_gather_256vox_16_32_64.py uses voxel carving for this purpose.
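The carving idea can be sketched in a few lines of numpy: a voxel is kept as solid if an occupied voxel blocks it along every axis direction, so interior cavities get filled without an explicit flood fill. This is only an illustration of the principle with six axis-aligned directions, not necessarily the script's exact implementation:

```python
import numpy as np

def carve_fill(occ):
    """Fill a voxel shell by axis-aligned carving: a voxel is solid iff an
    occupied voxel blocks it along all six axis directions (illustrative)."""
    solid = np.ones_like(occ)
    for axis in range(3):
        # blocked looking along +axis and -axis
        fwd = np.maximum.accumulate(occ, axis=axis)
        bwd = np.flip(np.maximum.accumulate(np.flip(occ, axis), axis=axis), axis)
        solid &= fwd & bwd
    return solid
```

For example, a hollow 3x3x3 shell comes out as a filled 3x3x3 cube, while everything outside the shell stays empty.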

supriya-gdptl commented 3 years ago

Thank you for the help!