This repository contains a PyTorch implementation of RandLA-Net.
Clone this repository:
git clone https://github.com/aRI0U/RandLA-Net-pytorch.git
Install all Python dependencies:
cd RandLA-Net-pytorch
pip install -r requirements.txt
Common issue: the setup script of the torch-points-kernels package requires PyTorch to be installed beforehand, so you may need to install PyTorch first and then torch-points-kernels.
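If the installation fails on that package, installing in two steps usually works (exact version pins, if any, are in requirements.txt):
pip install torch
pip install torch-points-kernels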
Download a dataset and prepare it. We conducted experiments with Semantic3D and S3DIS.
To set up Semantic3D:
cd RandLA-Net-pytorch/utils
./download_semantic3d.sh
python3 prepare_semantic3d.py # Very slow operation
To set up S3DIS, register and then download the zip archive containing the files here. We used the archive that contains only the 3D point clouds with ground-truth annotations.
Assuming the archive is located in the folder RandLA-Net-pytorch/datasets, run:
cd RandLA-Net-pytorch/utils
python3 prepare_s3dis.py
Finally, to subsample the point clouds with grid subsampling, run:
cd RandLA-Net-pytorch/utils/cpp_wrappers
./compile_wrappers.sh # you may need to chmod +x this script first
cd ..
python3 subsample_data.py
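For reference, grid subsampling keeps roughly one point per cell of a regular 3D voxel grid. The repository does this with a compiled C++ wrapper for speed; below is a minimal NumPy sketch of the idea only, with names of our choosing (implementations often average the points of each cell, whereas this sketch simply keeps one representative):
import numpy as np

def grid_subsample(points, cell_size):
    # Map each point to the integer index of the grid cell containing it.
    voxel_ids = np.floor(points / cell_size).astype(np.int64)
    # np.unique over rows gives the first occurrence per occupied cell,
    # i.e. one representative point per voxel.
    _, unique_idx = np.unique(voxel_ids, axis=0, return_index=True)
    return points[np.sort(unique_idx)]

cloud = np.random.rand(100000, 3)  # random cloud in a 1 m cube
print(grid_subsample(cloud, cell_size=0.06).shape)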
Train a model
python3 train.py
Many options can be configured through command-line arguments; run python3 train.py --help for details.
Evaluate a model
python3 test.py
You can visualize the evolution of the loss with TensorBoard.
In a separate terminal, launch:
tensorboard --logdir runs
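runs is the default log directory of PyTorch's torch.utils.tensorboard.SummaryWriter, which the training script presumably uses; a minimal sketch of that logging pattern (the tag and loss values here are illustrative, not the repo's exact code):
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter()  # defaults to ./runs/<date>_<hostname>
for step, loss in enumerate([1.0, 0.7, 0.5, 0.4]):  # illustrative losses
    writer.add_scalar("Loss/train", loss, step)
writer.close()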
This repository implements the method presented in RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds.
The original implementation (in TensorFlow 1) can be found here.
To cite the original paper:
@article{RandLA-Net,
arxivId = {1911.11236},
author = {Hu, Qingyong and Yang, Bo and Xie, Linhai and Rosa, Stefano and Guo, Yulan and Wang, Zhihua and Trigoni, Niki and Markham, Andrew},
eprint = {1911.11236},
title = {{RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds}},
url = {http://arxiv.org/abs/1911.11236},
year = {2019}
}
This repository is still a work in progress, and the segmentation results we currently reach with our implementation are not yet as good as those reported in the original paper.