This is the PyTorch implementation of our technical report, which achieves state-of-the-art performance on the 3D instance segmentation task of the ScanNet benchmark.
To install the required packages, please run:
pip install -r requirements.txt
We use Python 3.5.2. As pointed out in Issue #3, please consider using Python 3.6 instead, and refer to SparseConvNet for related issues.
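As a quick sanity check of the environment, the snippet below (an illustrative sketch, not part of the repository) verifies the interpreter version and that PyTorch and SparseConvNet import correctly; the package names are assumptions based on the dependencies mentioned above.

```python
import sys

# Illustrative environment check (not part of the repository).
assert sys.version_info >= (3, 6), "Python 3.6+ is recommended (see Issue #3)"

import torch
import sparseconvnet as scn  # SparseConvNet; install per its own instructions

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```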
To prepare training data from ScanNet mesh models, please run:
python train.py --task=prepare --dataFolder=[SCANNET_PATH] --labelFile=[SCANNET_LABEL_FILE_PATH (i.e., scannetv2-labels.combined.tsv)]
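The label file is a tab-separated table mapping raw ScanNet categories to benchmark labels; a quick way to inspect it (an illustrative sketch, with a placeholder path and no assumptions about column names) is:

```python
import csv

# Inspect the ScanNet label mapping file (a tab-separated table).
# The path below is a placeholder; point it at your scannetv2-labels.combined.tsv.
with open("scannetv2-labels.combined.tsv", newline="") as f:
    reader = csv.DictReader(f, delimiter="\t")
    print("columns:", reader.fieldnames)
    for i, row in enumerate(reader):
        print(row)          # one mapping entry per raw category
        if i >= 2:
            break
```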
To train the main model, which predicts semantics and affinities, please run:
python train.py --restore=0 --dataFolder=[SCANNET_PATH]
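For intuition, the main model jointly predicts a per-point semantic label and per-point affinities indicating whether nearby points belong to the same instance. A minimal, self-contained sketch of such a joint objective (a toy illustration; the shapes, class count, and loss weights are assumptions, not the repository's actual network or loss) is:

```python
import torch
import torch.nn.functional as F

# Toy illustration of a joint semantic + affinity objective.
# Shapes and loss weighting are assumptions, not the repository's actual setup.
num_points, num_classes, num_neighbors = 1024, 20, 6

semantic_logits = torch.randn(num_points, num_classes)      # per-point class scores
affinity_logits = torch.randn(num_points, num_neighbors)    # same-instance scores per neighbor

semantic_gt = torch.randint(0, num_classes, (num_points,))
affinity_gt = torch.randint(0, 2, (num_points, num_neighbors)).float()

loss = F.cross_entropy(semantic_logits, semantic_gt) \
     + F.binary_cross_entropy_with_logits(affinity_logits, affinity_gt)
print(loss.item())
```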
To validate the trained model, please run:
python train.py --restore=1 --dataFolder=[SCANNET_PATH] --task=test
To run the inference using the trained model, please run:
python inference.py --dataFolder=[SCANNET_PATH] --task=predict_cluster --split=val
The "task" option selects which stages to run: "predict" (predict semantics and affinities), "cluster" (group points into instances based on the predicted affinities), and "write" (write the instance segmentation results). It can contain any combination of these three tasks, but an earlier task must be run before a later one, and each task only needs to be run once. The "split" option specifies the data split to run the inference on.
To train the instance confidence model, please first generate instance segmentation results for both the validation and training splits:
python inference.py --dataFolder=[SCANNET_PATH] --task=predict_cluster --split=val
python inference.py --dataFolder=[SCANNET_PATH] --task=predict_cluster --split=train
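Since the confidence model needs results for both splits, the two commands above can also be scripted; a minimal sketch using subprocess (the flags are copied from the commands above, and SCANNET_PATH is a placeholder) is:

```python
import subprocess

# Run the clustering inference for both splits needed by the confidence model.
# Replace SCANNET_PATH with your local ScanNet directory.
SCANNET_PATH = "/path/to/scannet"
for split in ("val", "train"):
    subprocess.run(
        ["python", "inference.py",
         f"--dataFolder={SCANNET_PATH}",
         "--task=predict_cluster",
         f"--split={split}"],
        check=True,
    )
```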
Then train the confidence model:
python train_confidence.py --restore=0 --dataFolder=[SCANNET_PATH]
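Conceptually, the confidence model assigns each predicted instance a score used to rank predictions. A toy illustration of scoring per-instance feature vectors (the feature dimension and architecture are assumptions, not the repository's model) is:

```python
import torch
import torch.nn as nn

# Toy per-instance confidence scorer: maps an instance feature vector to a
# score in [0, 1]. Feature dimension and architecture are assumptions.
feature_dim = 32
scorer = nn.Sequential(nn.Linear(feature_dim, 64), nn.ReLU(), nn.Linear(64, 1))

instance_features = torch.randn(10, feature_dim)    # 10 predicted instances
confidences = torch.sigmoid(scorer(instance_features)).squeeze(1)
print(confidences)
```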
To predict instance confidences, add additional instances for certain semantic labels, and write the final instance segmentation results, please run:
python inference.py --dataFolder=[SCANNET_PATH] --task=predict_cluster_write --split=test
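For reference, the ScanNet instance segmentation benchmark expects, per scan, a summary text file listing a per-vertex binary mask file, a semantic label id, and a confidence for each predicted instance (please consult the benchmark submission instructions for the authoritative format). A minimal writer sketch with dummy values, not the repository's writer, is:

```python
import os
import numpy as np

# Minimal sketch of the ScanNet benchmark instance result layout:
# <scene_id>.txt lists, per predicted instance, a relative mask path,
# a semantic label id, and a confidence; each mask file holds one 0/1
# value per mesh vertex. Scene id, label id, and confidence are dummies.
scene_id, num_vertices = "scene0707_00", 1000
os.makedirs("results/predicted_masks", exist_ok=True)

mask = np.zeros(num_vertices, dtype=np.int32)
mask[:100] = 1                                    # dummy instance covering 100 vertices
mask_path = f"predicted_masks/{scene_id}_000.txt"
np.savetxt(os.path.join("results", mask_path), mask, fmt="%d")

with open(f"results/{scene_id}.txt", "w") as f:
    f.write(f"{mask_path} 39 0.92\n")             # mask path, label id, confidence
```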