HoiM opened this issue 2 years ago
The closest data splits I found are these: https://gitlab.com/hzxie/Pix2Vox/-/blob/master/datasets/ShapeNet.json
This file is from the official source code of Pix2Vox++: Multi-scale Context-aware 3D Object Reconstruction from Single and Multiple Images (IJCV 2020). The paper mentions that they used the same data splits as Octree Generating Networks: Efficient Convolutional Architectures for High-Resolution 3D Outputs (ICCV 2017), which says:
To ensure a fair comparison, we trained networks on ShapeNet-all, the exact dataset used by Choy et al. [6]. Following the same dataset splitting strategy, we used 80% of the data for training, and 20% for testing
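In case it is useful, here is a minimal sketch of how a split file like that could be consumed. The schema below is an assumption based on the per-category layout in the Pix2Vox repository's datasets/ShapeNet.json (one entry per category with "taxonomy_id" plus "train"/"val"/"test" ID lists); please verify against the actual file before relying on it.

```python
import json

# Assumed schema (hypothetical example, not real ShapeNet IDs):
# a list of categories, each with "taxonomy_id" and per-split sample-ID lists.
sample_json = """
[
  {
    "taxonomy_id": "02691156",
    "taxonomy_name": "aeroplane",
    "train": ["sample_a", "sample_b"],
    "val": ["sample_c"],
    "test": ["sample_d"]
  }
]
"""

def load_splits(text):
    """Return {split_name: [(taxonomy_id, sample_id), ...]} from the JSON text."""
    splits = {"train": [], "val": [], "test": []}
    for category in json.loads(text):
        for split in splits:
            for sample_id in category.get(split, []):
                splits[split].append((category["taxonomy_id"], sample_id))
    return splits

splits = load_splits(sample_json)
print({name: len(ids) for name, ids in splits.items()})
```

With the real file you would read it from disk (e.g. `open("ShapeNet.json").read()`) instead of the inline string; the flattened (taxonomy_id, sample_id) pairs can then be mapped to rendering/voxel paths however your data loader expects.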
Hope it helps. Cheers!