CuiRuikai / Partial2Complete

[ICCV 2023] P2C: Self-Supervised Point Cloud Completion from Single Partial Clouds
MIT License
158 stars 9 forks

Individual dataset #26

Open w0928 opened 2 months ago

w0928 commented 2 months ago

How should I build my own EPN3D-style dataset to train the model? Do I need to de-noise, downsample, etc. before training? Thanks for your contribution.

CuiRuikai commented 2 months ago

If you are using the same 3D-EPN data split (i.e. the EPN3D.json file), you only need to modify the _get_transforms function in datasets/EPNDataset.py and add your custom pre-processing steps in datasets/data_transforms.py.

For example, in my dataset class, I used 3 pre-processing steps as shown below:

    return data_transforms.Compose([{
        'callback': 'RandomSamplePoints',  # randomly sample n_points points (also permutes their order)
        'parameters': {
            'n_points': self.npoints
        },
        'objects': ['partial', 'complete']
    }, {
        'callback': 'RandomMirrorPoints',  # random mirror augmentation
        'objects': ['partial', 'complete']
    }, {
        'callback': 'ToTensor',  # convert numpy arrays to torch tensors
        'objects': ['partial', 'complete']
    }])

They are executed sequentially. You can define your own pre-processing callbacks in datasets/data_transforms.py and apply them by adding corresponding entries to the list above.
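As a concrete illustration, a custom callback might look like the sketch below. This is a minimal numpy-only example: the class name `RandomJitterPoints` and its parameters are hypothetical, and the actual callback interface expected by datasets/data_transforms.py may differ, so check that file before copying this in.

```python
import numpy as np

class RandomJitterPoints:
    """Hypothetical pre-processing callback: adds clipped Gaussian noise.

    Sketch only -- the real callback interface is defined in
    datasets/data_transforms.py and may pass data differently.
    """
    def __init__(self, parameters):
        self.sigma = parameters.get('sigma', 0.01)  # noise std-dev
        self.clip = parameters.get('clip', 0.05)    # max per-coordinate offset

    def __call__(self, points):
        # points: (N, 3) float array; jitter each coordinate slightly
        noise = np.clip(self.sigma * np.random.randn(*points.shape),
                        -self.clip, self.clip)
        return (points + noise).astype(points.dtype)

# Toy usage on a dummy cloud
pts = np.zeros((1024, 3), dtype=np.float32)
jitter = RandomJitterPoints({'sigma': 0.01, 'clip': 0.05})
out = jitter(pts)
print(out.shape)                   # (1024, 3)
print(np.abs(out).max() <= 0.05)   # True: noise is clipped
```

Once registered in data_transforms.py, such a callback would be enabled by adding a `{'callback': 'RandomJitterPoints', 'parameters': {...}, 'objects': [...]}` entry to the Compose list.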

Let me know if you have any further questions

w0928 commented 2 months ago

I understand, thanks for your prompt reply!

w0928 commented 2 months ago


I noticed that each complete point cloud in the EPN3D dataset corresponds to 8 partial point clouds. How are these 8 partial point clouds obtained, and are there any requirements on them? Also, can I generate .pcd files first and then convert them to .npy files for training, and does the resulting pcd need any further processing?

CuiRuikai commented 2 months ago

The partial shapes were generated by 3D-EPN [1] from ShapeNet shapes; pcl2pcl [2] then used this dataset for point cloud completion evaluation, and their papers describe how the data was processed. However, the original version used in pcl2pcl and Cycle4Completion [3] had some misalignment issues. The authors of MFM-Net [4] later fixed this, and the version shared in the dataset link of this repo is that corrected 3D-EPN dataset. I do not have the original pcd files, but .npy and .pcd files both simply store the points, so one can be converted to the other without loss of data.
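For reference, the npy/pcd round trip can be sketched with numpy alone. This is a minimal ASCII writer/reader assuming float32 x/y/z fields only; in practice a library such as Open3D handles the full PCD format, and the function names here are just illustrative.

```python
import numpy as np

def write_pcd_ascii(path, points):
    """Write an (N, 3) array as a minimal ASCII .pcd file (xyz only)."""
    n = points.shape[0]
    header = (
        "VERSION 0.7\nFIELDS x y z\nSIZE 4 4 4\nTYPE F F F\n"
        "COUNT 1 1 1\nWIDTH {n}\nHEIGHT 1\n"
        "VIEWPOINT 0 0 0 1 0 0 0\nPOINTS {n}\nDATA ascii\n"
    ).format(n=n)
    with open(path, "w") as f:
        f.write(header)
        np.savetxt(f, points, fmt="%.6f")

def read_pcd_ascii(path):
    """Read the minimal ASCII .pcd written above back into an (N, 3) array."""
    with open(path) as f:
        lines = f.readlines()
    start = next(i for i, l in enumerate(lines) if l.startswith("DATA")) + 1
    return np.loadtxt(lines[start:], dtype=np.float32).reshape(-1, 3)

pts = np.random.rand(8, 3).astype(np.float32)
write_pcd_ascii("demo.pcd", pts)                 # .pcd for point cloud viewers
np.save("demo.npy", read_pcd_ascii("demo.pcd"))  # .npy for training
print(np.allclose(np.load("demo.npy"), pts, atol=1e-5))  # True
```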

If you want to process your own data like 3D-EPN, there are plenty of methods to do this. Basically, partial shapes are created by virtual scans; you can use BlenSor [5] to simulate this. However, I noticed that 3D-EPN partial shapes tend to be more complete than PCN's [6]. To achieve this, you may have to combine multiple virtual scans into one partial shape.

[1] Dai, Angela, Charles Ruizhongtai Qi, and Matthias Nießner. "Shape completion using 3D-encoder-predictor CNNs and shape synthesis." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017.

[2] Chen, Xuelin, Baoquan Chen, and Niloy J. Mitra. "Unpaired point cloud completion on real scans using adversarial training." arXiv preprint arXiv:1904.00069 (2019).

[3] Wen, Xin, et al. "Cycle4Completion: Unpaired point cloud completion using cycle transformation with missing region coding." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.

[4] Cao, Zhen, et al. "MFM-Net: Unpaired shape completion network with multi-stage feature matching." arXiv preprint arXiv:2111.11976 (2021).

[5] https://www.blensor.org/

[6] Yuan, Wentao, et al. "PCN: Point completion network." 2018 International Conference on 3D Vision (3DV). IEEE, 2018.

w0928 commented 2 months ago

> Basically, partial shapes are created by virtual scans

Thank you for your reply. I would like to capture or reconstruct complete point clouds myself using 3D reconstruction techniques, and then manually crop them into partial point clouds that simulate what I might get in a real situation. Is this advisable? Thanks again for your reply; this helped me a lot.
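For example, the manual cropping I have in mind could be sketched as dropping the points nearest to a random seed direction, leaving a hole on one side of the shape (this is just a hypothetical sketch, not the 3D-EPN procedure; `crop_partial` and its `crop_ratio` parameter are made-up names):

```python
import numpy as np

def crop_partial(points, crop_ratio=0.25):
    """Hypothetical crop: remove the crop_ratio fraction of points
    closest to a random seed point on the unit sphere."""
    seed = np.random.randn(3)
    seed /= np.linalg.norm(seed)                 # random view direction
    dist = np.linalg.norm(points - seed, axis=1)
    keep = np.argsort(dist)[int(len(points) * crop_ratio):]
    return points[keep]                          # cloud with a hole

# Toy usage: crop a unit sphere of 2048 points
u = np.random.randn(2048, 3)
sphere = u / np.linalg.norm(u, axis=1, keepdims=True)
partial = crop_partial(sphere, crop_ratio=0.25)
print(partial.shape)  # (1536, 3)
```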

CuiRuikai commented 2 months ago

If you are using a depth camera, a complete shape is obtained by combining all the depth images; if you only use some of the depth images, the reconstructed shape will be a partial shape. If you only have a complete shape represented as a point cloud, you may need to project the point cloud onto a 2D plane (a virtual depth image) and then back-project it to 3D to obtain a partial shape.
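The project / back-project idea can be sketched as a toy orthographic z-buffer scan. Everything here is a simplified assumption (`virtual_scan` is a hypothetical helper, the camera is orthographic and noiseless); a real pipeline such as BlenSor simulates a perspective sensor with noise.

```python
import numpy as np

def virtual_scan(points, resolution=64):
    """Toy orthographic 'virtual scan': bin points onto an (x, y) grid and
    keep only the nearest point (smallest z) per cell, i.e. the surface
    visible to a camera looking along +z."""
    # normalize x, y into grid cell indices
    xy = points[:, :2]
    lo, hi = xy.min(0), xy.max(0)
    cells = np.floor((xy - lo) / (hi - lo + 1e-9) * resolution).astype(int)
    idx = cells[:, 0] * resolution + cells[:, 1]   # flat cell id per point

    # z-buffer: for each cell, keep the point with minimal z
    order = np.argsort(points[:, 2])               # near-to-far along z
    idx_sorted = idx[order]
    first = np.unique(idx_sorted, return_index=True)[1]
    visible = order[first]                         # nearest hit per cell
    return points[visible]

# A scan of a sphere from one side should cull the back-facing points
u = np.random.randn(20000, 3)
sphere = u / np.linalg.norm(u, axis=1, keepdims=True)
partial = virtual_scan(sphere, resolution=64)
print(partial.shape[0] < sphere.shape[0])  # True: back side is culled
print(partial[:, 2].mean() < 0)            # True: visible side faces the camera
```

Rotating the cloud before scanning and taking the union of several visible sets would approximate the "combine multiple virtual scans" approach mentioned above for more complete 3D-EPN-style partials.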