This repository provides code, data, and pretrained models for:
[Paper]
The toy dataset can be found at: /InterObject3D/Minkowski/training/mini_dataset/
You can download the datasets used in the paper here: https://drive.google.com/drive/folders/1B1uTY8Y8FeCyEfwlfZfivhcAFOOfUtPC?usp=sharing
You can download the pretrained weights from: https://omnomnom.vision.rwth-aachen.de/data/3d_inter_obj_seg/scannet_official/weights/
Preferably install a Python virtual environment (e.g., with conda) using the requirements file in the repository, or use it as a guideline, since the Minkowski Engine needs to be installed separately. The code is based on the Minkowski Engine, and its documentation page contains useful installation instructions.
Run for a single instance (adjust --pretraining_weights to point to your local copy of the downloaded weights):
python run_inter3d.py --verbal=True --instance_counter_id=1 --number_of_instances=1 --cubeedge=0.05 --pretraining_weights='/media/kontogianni/Samsung_T5/intobjseg/datasets/scannet_official/weights/exp_14_limited_classes/weights_exp14_14.pth' --dataset='scannet' --visual=True --save_results_file=True --results_file_name=results_scannet_mini.txt
Run for all 5 instances in the toy dataset:
python run_inter3d.py --verbal=True --instance_counter_id=0 --number_of_instances=5 --cubeedge=0.05 --pretraining_weights='/media/kontogianni/Samsung_T5/intobjseg/datasets/scannet_official/weights/exp_14_limited_classes/weights_exp14_14.pth' --dataset='scannet' --visual=True --save_results_file=True --results_file_name=results_scannet_mini.txt
Results from our evaluation on ScanNet-val are in the InterObject3D/Minkowski/training/results folder. If you run the evaluation script, please follow a similar format for your results files. Then go to the following directory:
cd InterObject3D/Minkowski/training/
If needed, adjust the results paths and run:
python evaluation/compute_noc.py
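For orientation, NoC@q is the average number of clicks needed to reach IoU q. The sketch below shows how such a metric can be computed from per-click IoU logs; it is a minimal illustration, not the exact logic of evaluation/compute_noc.py, and the results-file format (one whitespace-separated line per click: scene id, object id, click index, IoU) as well as the 20-click cap are assumptions.

# Minimal NoC@q sketch; the results-file format assumed here
# (scene_id object_id click_idx iou per line) may differ from the
# format actually produced by run_inter3d.py.
from collections import defaultdict

def noc_at(results_path, target_iou=0.80, max_clicks=20):
    # Collect the per-click IoU curve of every (scene, object) pair.
    curves = defaultdict(dict)
    with open(results_path) as f:
        for line in f:
            scene_id, object_id, click_idx, iou = line.split()
            curves[(scene_id, object_id)][int(click_idx)] = float(iou)
    # NoC = first click index reaching the target IoU, capped at max_clicks.
    clicks_needed = []
    for curve in curves.values():
        needed = max_clicks
        for click_idx in sorted(curve):
            if curve[click_idx] >= target_iou:
                needed = click_idx
                break
        clicks_needed.append(needed)
    return sum(clicks_needed) / len(clicks_needed)

print(noc_at('results/results_scannet_mini.txt', target_iou=0.80))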
We trained our model on ScanNet-train (excluding some classes for certain experiments). However, our setup requires adapting the data for binary (foreground/background) segmentation, so every 3D scene is split per object: a scene with, for example, 20 objects becomes 20 scenes, each with a single object instance as foreground. A rough sketch of this split follows.
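The sketch below illustrates the per-instance split under assumed shapes (an (N, 3) point array and an (N,) array of per-point instance ids); the actual preprocessing lives in InterObject3D/Minkowski/datasetgen/.

# Sketch of the binary split described above; shapes and helper names
# are illustrative, not the actual datasetgen code.
import numpy as np

def split_into_binary_scenes(points, instance_ids):
    # Yield one (instance id, points, binary mask) triple per object.
    for instance_id in np.unique(instance_ids):
        foreground = instance_ids == instance_id  # True = this object
        yield instance_id, points, foreground.astype(np.uint8)

# Example: a scene with 20 objects becomes 20 binary scenes.
points = np.random.rand(1000, 3)
instance_ids = np.random.randint(0, 20, size=1000)
for obj_id, pts, mask in split_into_binary_scenes(points, instance_ids):
    print(obj_id, int(mask.sum()), 'foreground points')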
Go to the following directory:
cd InterObject3D/Minkowski/datasetgen/
If needed, adjust the results paths and run the following for each scene (a sketch for looping over all scenes is shown after the command):
python main_scannet.py --name=<scene name>
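A small driver can run the preprocessing over every scene; the sketch below assumes a plain-text scene list (one scene name per line) whose filename, scannetv2_train.txt, is a placeholder.

# Sketch of a driver that invokes main_scannet.py once per scene;
# the scene-list filename below is a placeholder, not part of the repo.
import subprocess

with open('scannetv2_train.txt') as f:
    scene_names = [line.strip() for line in f if line.strip()]

for scene_name in scene_names:
    subprocess.run(['python', 'main_scannet.py', f'--name={scene_name}'],
                   check=True)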
This will result in a folder with the adjusted input data (cropped scenes around each object, binary ground truth, and clicks for training):
└── results/
    ├── crops5x5/
    │   ├── scene0000_00/
    │   │   ├── scene0000_00_crop_0
    │   │   ├── scene0000_00_crop_1
    │   │   └── ...
    │   ├── scene0000_01/
    │   └── ...
    └── scans5x5/
        ├── scene0000_00/
        │   ├── scene0000_00_crop_0
        │   ├── scene0000_00_crop_1
        │   └── ...
        ├── scene0000_01/
        └── ...
Similar setups can be used for other training datasets.
Store a list of your training scenes and object ids as a numpy array in the examples folder (a snippet for generating it follows the example below):
'examples/dataset_train.npy'
array([['scene0191_00', '0'],
['scene0191_00', '1'],
['scene0191_00', '2'],
...,
['scene0567_01', '18'],
['scene0567_01', '19'],
['scene0567_01', '20']], dtype='<U32')
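Such an array can be generated and saved with numpy; the scene names and object counts below are illustrative only.

# Sketch: build and save the (scene name, object id) list expected at
# examples/dataset_train.npy; the scenes listed here are placeholders.
import numpy as np

pairs = []
for scene_name, num_objects in [('scene0191_00', 21), ('scene0567_01', 21)]:
    for object_id in range(num_objects):
        pairs.append([scene_name, str(object_id)])

np.save('examples/dataset_train.npy', np.array(pairs, dtype='<U32'))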
Go to the following directory:
cd InterObject3D/Minkowski/training/
If needed, adjust the results paths and run:
python train.py
Copyright (c) 2021 Theodora Kontogianni, ETH Zurich
By using this code you agree to the terms in the LICENSE.
Moreover, you agree to cite the paper "Interactive Object Segmentation in 3D Point Clouds" in any documents or manuscripts that report on research using this software:
@article{kontogianni2022interObj3d,
  author  = {Kontogianni, Theodora and Celikkan, Ekin and Tang, Siyu and Schindler, Konrad},
  title   = {{Interactive Object Segmentation in 3D Point Clouds}},
  journal = {ICRA},
  year    = {2023},
}
We would like to thank the authors of the Minkowski Engine (ME) for providing their codebase.