
FCAF3D: Fully Convolutional Anchor-Free 3D Object Detection

This repository contains an implementation of FCAF3D, a 3D object detection method introduced in our paper:

FCAF3D: Fully Convolutional Anchor-Free 3D Object Detection
Danila Rukhovich, Anna Vorontsova, Anton Konushin
Samsung Research
https://arxiv.org/abs/2112.00322

Installation

For convenience, we provide a Dockerfile.
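
A minimal sketch of building and running the image (the image name and the mounted data path are illustrative, not fixed by the repository):

docker build -t fcaf3d .
# mount your datasets so they are visible inside the container
docker run --gpus all -it -v /path/to/data:/mmdetection3d/data fcaf3d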

Alternatively, you can install all required packages manually. This implementation is based on the mmdetection3d framework. Please follow the original installation guide getting_started.md, replacing open-mmlab/mmdetection3d with samsunglabs/fcaf3d. MinkowskiEngine and rotated_iou also need to be installed; the exact, tested commands can be found in the Dockerfile.
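
For reference, a sketch of installing the two extra dependencies (versions and build flags depend on your CUDA and PyTorch setup; treat the repository URL below as an assumption and prefer the Dockerfile's tested variants):

# MinkowskiEngine is on PyPI, but is often built from source for CUDA support
pip install MinkowskiEngine
# rotated_iou is built from the Rotated_IoU sources (URL assumed); follow the Dockerfile's build steps
git clone https://github.com/lilanxiao/Rotated_IoU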

Most of the FCAF3D-related code is located in the following files: detectors/single_stage_sparse.py, dense_heads/fcaf3d_neck_with_head.py, and backbones/me_resnet.py.

Getting Started

Please see getting_started.md for basic usage examples. We follow the mmdetection3d data preparation protocol described in scannet, sunrgbd, and s3dis. The only difference is that we do not sample 50,000 points from each point cloud in SUN RGB-D; all points are used instead.
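
For reference, a sketch of the standard mmdetection3d preprocessing step for ScanNet and SUN RGB-D (paths follow the mmdetection3d defaults; run after downloading the raw data as described in the links above):

python tools/create_data.py scannet --root-path ./data/scannet --out-dir ./data/scannet --extra-tag scannet
python tools/create_data.py sunrgbd --root-path ./data/sunrgbd --out-dir ./data/sunrgbd --extra-tag sunrgbd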

Training

To start training, run dist_train with a FCAF3D config (the trailing argument is the number of GPUs):

bash tools/dist_train.sh configs/fcaf3d/fcaf3d_scannet-3d-18class.py 2
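
With a single GPU, the non-distributed entry point can be used instead (a sketch, assuming the standard mmdetection3d train script):

python tools/train.py configs/fcaf3d/fcaf3d_scannet-3d-18class.py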

Testing

Test a pre-trained model using dist_test with a FCAF3D config:

bash tools/dist_test.sh configs/fcaf3d/fcaf3d_scannet-3d-18class.py \
    work_dirs/fcaf3d_scannet-3d-18class/latest.pth 2 --eval mAP

Visualization

Visualizations can be created with the test script. For better visualizations, you may set score_thr in the config to 0.20:

python tools/test.py configs/fcaf3d/fcaf3d_scannet-3d-18class.py \
    work_dirs/fcaf3d_scannet-3d-18class/latest.pth --eval mAP --show \
    --show-dir work_dirs/fcaf3d_scannet-3d-18class
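
A hedged sketch of such an override as a derived config (the file name is hypothetical, and the exact nesting of test_cfg may differ between mmdetection3d versions; check the FCAF3D config before relying on this):

# fcaf3d_scannet_vis.py -- hypothetical derived config for visualization
_base_ = './fcaf3d_scannet-3d-18class.py'
model = dict(test_cfg=dict(score_thr=0.20))  # keep only confident boxes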

Models

The metrics are obtained from 5 training runs followed by 5 test runs. We report both the best and the average values (the latter are given in parentheses).

For VoteNet and ImVoteNet, we provide configs and checkpoints with our Mobius angle parametrization. For ImVoxelNet, please refer to the imvoxelnet repository, as it is not currently supported in mmdetection3d for indoor datasets. Inference speed (scenes per second) is measured on a single NVIDIA GTX 1080 Ti. Please note that the ScanNet-pretrained S3DIS model was trained in the original open-mmlab/mmdetection3d codebase, so inference for it is only possible in that repository.
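
For reproducing the speed numbers, mmdetection3d ships a benchmark tool; a sketch of a typical invocation (arguments follow the upstream script and may differ by version):

python tools/benchmark.py configs/fcaf3d/fcaf3d_scannet-3d-18class.py \
    work_dirs/fcaf3d_scannet-3d-18class/latest.pth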

FCAF3D

| Dataset | mAP@0.25 | mAP@0.5 | Download |
|---|---|---|---|
| ScanNet | 71.5 (70.7) | 57.3 (56.0) | model \| log \| config |
| SUN RGB-D | 64.2 (63.8) | 48.9 (48.2) | model \| log \| config |
| S3DIS | 66.7 (64.9) | 45.9 (43.8) | model \| log \| config |
| S3DIS (ScanNet-pretrained) | 75.7 (74.1) | 58.2 (56.1) | model \| log \| config |

Faster FCAF3D on ScanNet

| Backbone | Voxel size | mAP@0.25 | mAP@0.5 | Scenes per sec. | Download |
|---|---|---|---|---|---|
| HDResNet34 | 0.01 | 70.7 | 56.0 | 8.0 | see table above |
| HDResNet34:3 | 0.01 | 69.8 | 53.6 | 12.2 | model \| log \| config |
| HDResNet34:2 | 0.02 | 63.1 | 46.8 | 31.5 | model \| log \| config |

VoteNet on SUN RGB-D

| Source | mAP@0.25 | mAP@0.5 | Download |
|---|---|---|---|
| mmdetection3d | 59.1 | 35.8 | instruction |
| ours | 61.1 (60.5) | 40.4 (39.5) | model \| log \| config |

ImVoteNet on SUN RGB-D

| Source | mAP@0.25 | mAP@0.5 | Download |
|---|---|---|---|
| mmdetection3d | 64.0 | 37.8 | instruction |
| ours | 64.6 (64.1) | 40.8 (39.8) | model \| log \| config |

Comparison with state-of-the-art on ScanNet

[figure: comparison of FCAF3D with state-of-the-art methods on ScanNet]

Example Detections

[figure: example detections]

Citation

If you find this work useful for your research, please cite our paper:

@inproceedings{rukhovich2022fcaf3d,
  title={FCAF3D: fully convolutional anchor-free 3D object detection},
  author={Rukhovich, Danila and Vorontsova, Anna and Konushin, Anton},
  booktitle={European Conference on Computer Vision},
  pages={477--493},
  year={2022},
  organization={Springer}
}