Please fill in the application form to access the raw data of the LASA dataset. (The link and data have been updated since July 24th.)
The dataset is organized as follows:
```
sceneid/
├── sceneid_faro_aligned_clean_0.04.ply   # Cleaned and aligned laser scan of the scene
├── sceneid_arkit_mesh.ply                # TSDF-based mesh reconstruction of the scene
├── sceneid_arkit_neus.ply (coming)       # NeuS-based mesh reconstruction of the scene
├── sceneid_arkit_gs.ply (coming)         # Gaussian Splatting reconstruction of the scene
├── sceneid_bbox.npy                      # Bounding-box information for the scene
├── sceneid_layout.json (coming)          # Layout annotation of the scene
└── instances/
    └── cadid/
        ├── cadid_rgbd_mesh.ply           # TSDF-based mesh reconstruction of the instance
        ├── cadid_watertight.obj          # Watertight mesh of the instance, aligned with the laser scan
        ├── cadid_gt_mesh_2.obj           # Artist-made ground-truth mesh of the instance, aligned with the laser scan
        ├── cadid_laser_pcd.ply           # Laser point cloud of the instance
        └── alignment.txt                 # Alignment matrix that maps the annotation to the RGB-D mesh
```
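For reference, below is a minimal sketch of how the per-instance files could be loaded, assuming numpy and trimesh are installed. The folder paths are placeholders, and the assumption that alignment.txt stores a plain-text 4x4 homogeneous transform is ours, not documented above.

```python
# Minimal loading sketch (not part of the official toolkit).
# Assumes numpy and trimesh; paths are hypothetical placeholders.
import numpy as np
import trimesh

scene_dir = "path/to/sceneid"              # placeholder scene folder
cad_dir = f"{scene_dir}/instances/cadid"   # placeholder instance folder

# Per-scene bounding boxes (exact array layout depends on the release).
bboxes = np.load(f"{scene_dir}/sceneid_bbox.npy", allow_pickle=True)

# Instance geometry: RGB-D reconstruction, GT annotation, and laser points.
rgbd_mesh = trimesh.load(f"{cad_dir}/cadid_rgbd_mesh.ply")
gt_mesh = trimesh.load(f"{cad_dir}/cadid_gt_mesh_2.obj")
laser_pcd = trimesh.load(f"{cad_dir}/cadid_laser_pcd.ply")

# Map the annotation into the RGB-D mesh frame.
# Assumption: alignment.txt holds a plain-text 4x4 homogeneous matrix.
T = np.loadtxt(f"{cad_dir}/alignment.txt").reshape(4, 4)
gt_mesh.apply_transform(T)
```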
Instructions for data preprocessing and preparation can be found in DATA.md. We also provide preprocessed data for download.
The training and evaluation code lives in the DisCo submodule; please refer to DisCo for details. Clone the repository together with its submodules:

```bash
git clone --recurse-submodules https://github.com/GAP-LAB-CUHK-SZ/DisCo.git
```
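If you already cloned the repository without that flag, the submodules can be fetched afterwards with a standard git command:

```bash
git submodule update --init --recursive
```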
We provide an example RGB-D scan captured with iPhone ARKit, which also outputs object detection results. First, download example_1.zip from BaiduYun (code: r7vs). Then unzip it and place the example_1 folder at ./example_data/example_1.
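For example, assuming the archive unpacks to a top-level example_1 folder (an assumption about its internal layout), the following commands would put it in place:

```bash
mkdir -p ./example_data
unzip example_1.zip -d ./example_data/
```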
Then run the following commands to launch the demo:

```bash
cd demo
bash run_demo.sh
```
The results will be saved to ../example_output_data/example_1. We plan to develop a more user-friendly demo in the future.
```bibtex
@inproceedings{liu2024lasa,
  title={LASA: Instance Reconstruction from Real Scans using A Large-scale Aligned Shape Annotation Dataset},
  author={Liu, Haolin and Ye, Chongjie and Nie, Yinyu and He, Yingfan and Han, Xiaoguang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={20454--20464},
  year={2024}
}
```