pip install openxlab
openxlab login # Login, input AK/SK
openxlab dataset info --dataset-repo omniobject3d/OmniObject3D-New # View dataset info
openxlab dataset ls --dataset-repo omniobject3d/OmniObject3D-New # View a list of dataset files
openxlab dataset get --dataset-repo omniobject3d/OmniObject3D-New # Download the whole dataset (the compressed files require approximately 1.2TB of storage)
If you are experiencing a `401: {"msg":"login required"}` error with your own AK/SK, please use the following AK/SK instead:
AK: bmyqk5wpbaxl6x1vkzq9
SK: nl7kq9palyr6j3pwxolden7ezq4dwjmbgdm81yeo
You can check out the full folder structure on the website above and download a specific portion of the data by specifying its path. For example:
openxlab dataset download --dataset-repo omniobject3d/OmniObject3D-New \
--source-path /raw/point_clouds/ply_files \
--target-path <your-target-path>
For more information, please refer to the documentation.
We are also maintaining the dataset on Google Drive.
To batch-untar a specific folder of compressed files based on your requirements, run `bash batch_untar.sh <folder_name>`. Once the untar operation completes successfully, remove all of the compressed files with `rm -rf <folder_name>/*.tar.gz`.
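If you prefer not to use the shell script, the same step is easy to reproduce in Python. The sketch below is a hypothetical stand-in, not the repository's `batch_untar.sh`; it assumes `<folder_name>` directly contains the `*.tar.gz` archives.

```python
# batch_untar.py -- hypothetical Python stand-in for batch_untar.sh.
# Assumes the given folder directly contains the *.tar.gz archives.
import sys
import tarfile
from pathlib import Path

def batch_untar(folder: Path) -> None:
    for archive in sorted(folder.glob("*.tar.gz")):
        print(f"Extracting {archive.name} ...")
        with tarfile.open(archive, "r:gz") as tar:
            tar.extractall(path=folder)  # extract alongside the archive

if __name__ == "__main__":
    batch_untar(Path(sys.argv[1]))  # e.g. python batch_untar.py raw_scans
```

As with the shell version, only remove the archives after confirming that the extraction succeeded.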
OmniObject3D_Data_Root
├── raw_scans
│ ├── <category_name>
│ │ ├── <object_id>
│ │ │ ├── Scan
│ │ │ │ ├── Scan.obj
│ │ │ │ ├── Scan.mtl
│ │ │ │ ├── Scan.jpg
├── blender_renders
│ ├── <category_name>
│ │ ├── <object_id>
│ │ │ ├── render
│ │ │ │ ├── images
│ │ │ │ ├── depths
│ │ │ │ ├── normals
│ │ │ │ ├── transforms.json
├── videos_processed
│ ├── <category_name>
│ │ ├── <object_id>
│ │ │ ├── standard
│ │ │ │ ├── images
│ │ │ │ ├── matting
│ │ │ │ ├── poses_bounds.npy # raw results from COLMAP
│ │ │ │ ├── poses_bounds_rescaled.npy # rescaled to world-scale
│ │ │ │ ├── sparse
├── point_clouds
│ ├── hdf5_files
│ │ ├── 1024
│ │ ├── 4096
│ │ ├── 16384
│ ├── ply_files
│ │ ├── 1024
│ │ ├── 4096
│ │ ├── 16384
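The camera annotations above follow widely used conventions: `transforms.json` is the NeRF-Blender format, and `poses_bounds.npy` is the LLFF format (one flattened 3x5 camera-to-world pose plus near/far bounds per image). Below is a minimal loading sketch under those assumptions; the JSON keys (`camera_angle_x`, `frames`, `transform_matrix`), the HDF5 layout, and the placeholder paths are assumptions to verify against your download, not guaranteed by this README.

```python
# Minimal loading sketch -- assumes the NeRF-Blender convention for
# transforms.json and the LLFF convention for poses_bounds.npy.
# Substitute real names for the <category_name>/<object_id> placeholders.
import json
import numpy as np
import h5py

root = "OmniObject3D_Data_Root"
render = f"{root}/blender_renders/<category_name>/<object_id>/render"
video = f"{root}/videos_processed/<category_name>/<object_id>/standard"

# transforms.json (assumed keys: camera_angle_x, frames[].transform_matrix)
with open(f"{render}/transforms.json") as f:
    meta = json.load(f)
fov_x = meta["camera_angle_x"]                     # horizontal FoV, radians
c2w = [np.array(fr["transform_matrix"]) for fr in meta["frames"]]  # 4x4 poses

# poses_bounds.npy: one row per image = flattened 3x5 [R|t|hwf] + [near, far]
pb = np.load(f"{video}/poses_bounds.npy")
poses = pb[:, :15].reshape(-1, 3, 5)
bounds = pb[:, 15:]

# HDF5 point clouds: inspect the dataset keys before reading
with h5py.File(f"{root}/point_clouds/hdf5_files/1024/<file>.hdf5", "r") as h:
    print(list(h.keys()))
```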
Please find example usage of the data for the benchmarks here.
The OmniObject3D dataset is released under the CC BY 4.0 license.
If you find our dataset useful in your research, please use the following citation:
@inproceedings{wu2023omniobject3d,
  author = {Tong Wu and Jiarui Zhang and Xiao Fu and Yuxin Wang and Jiawei Ren and
            Liang Pan and Wayne Wu and Lei Yang and Jiaqi Wang and Chen Qian and Dahua Lin and Ziwei Liu},
  title = {OmniObject3D: Large-Vocabulary 3D Object Dataset for Realistic Perception, Reconstruction and Generation},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2023}
}