octree-nn / ocnn-pytorch

Octree-based 3D Convolutional Neural Networks
MIT License

O-CNN

Documentation


This repository contains a pure PyTorch implementation of O-CNN. The code has been tested with PyTorch>=1.6.0; PyTorch>=1.9.0 is preferred.

O-CNN is an octree-based sparse convolutional neural network framework for 3D deep learning. O-CNN constrains the CNN storage and computation into non-empty sparse voxels for efficiency and uses the octree data structure to organize and index these sparse voxels.
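A common way for an octree to organize sparse voxels is to sort them by an interleaved "shuffled" key (a Morton code), so that the eight children of each octree node become contiguous. The sketch below is purely illustrative and does not reflect the internals of ocnn-pytorch; the function name `morton_key` is a hypothetical helper.

```python
def morton_key(x, y, z, depth):
    """Interleave the bits of (x, y, z) into a single octree key.

    Sorting non-empty voxels by this key groups the eight siblings of
    each octree node together, which lets an octree organize and index
    sparse voxels without a hash table. (Illustrative sketch only, not
    the ocnn-pytorch implementation.)
    """
    key = 0
    for i in range(depth):
        key |= ((x >> i) & 1) << (3 * i + 2)  # x bit i
        key |= ((y >> i) & 1) << (3 * i + 1)  # y bit i
        key |= ((z >> i) & 1) << (3 * i + 0)  # z bit i
    return key

# Only non-empty voxels are stored: here 3 occupied voxels in a 8^3 grid.
voxels = [(0, 0, 0), (7, 7, 7), (3, 1, 4)]
keys = sorted(morton_key(x, y, z, depth=3) for x, y, z in voxels)
```

Convolution then only touches the stored keys, so storage and computation scale with the number of occupied voxels rather than with the full dense grid.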

The concept of sparse convolution in O-CNN is the same as in H-CNN, SparseConvNet, and MinkowskiNet. The key difference is that O-CNN uses an octree to index the sparse voxels, whereas these three works use hash tables.
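The two indexing strategies can be contrasted with a small sketch. This is an illustration of the general idea, not the code of any of the libraries mentioned; for brevity the octree-style variant sorts raw coordinate tuples, whereas a real octree would sort interleaved keys.

```python
import bisect

# Three occupied voxel coordinates.
coords = [(0, 0, 0), (3, 1, 4), (7, 7, 7)]

# Hash-table indexing (SparseConvNet / MinkowskiNet style): map each
# coordinate to its row in the feature tensor via a dictionary.
hash_index = {c: i for i, c in enumerate(coords)}

# Octree-style indexing (O-CNN style): keep the voxels sorted by key and
# locate a neighbor by binary search over the sorted key array.
sorted_keys = sorted(coords)

def octree_lookup(key):
    """Return the row of `key`, or -1 if the voxel is empty."""
    i = bisect.bisect_left(sorted_keys, key)
    if i < len(sorted_keys) and sorted_keys[i] == key:
        return i
    return -1
```

Both schemes answer the same query ("which row holds this voxel, if any?"); they differ only in the data structure used to index the sparse set.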

Our O-CNN was published at SIGGRAPH 2017, H-CNN in TVCG 2018, SparseConvNet at CVPR 2018, and MinkowskiNet at CVPR 2019. O-CNN was submitted to SIGGRAPH at the end of 2016 and officially accepted in March 2017; the camera-ready version was submitted in April 2017. We simply did not post the paper on arXiv during the SIGGRAPH review process. The idea of constraining CNN computation to sparse, non-empty voxels was therefore first proposed by O-CNN. This type of 3D convolution is now known in the research community as sparse convolution.

Key benefits of ocnn-pytorch

Citation

  @article{Wang-2017-ocnn,
    title    = {{O-CNN}: Octree-based Convolutional Neural Networks for {3D} Shape Analysis},
    author   = {Wang, Peng-Shuai and Liu, Yang and Guo, Yu-Xiao and Sun, Chun-Yu and Tong, Xin},
    journal  = {ACM Transactions on Graphics (SIGGRAPH)},
    volume   = {36},
    number   = {4},
    year     = {2017},
  }