Point cloud videos are irregular and unordered along the spatial dimension, with points emerging inconsistently across frames. To capture the dynamics in point cloud videos, point tracking is usually employed. However, as points may flow in and out across frames, computing accurate point trajectories is extremely difficult. Moreover, tracking usually relies on point colors and thus may fail on colorless point clouds. In this paper, to avoid point tracking, we propose a novel Point 4D Transformer (P4Transformer) network to model raw point cloud videos. Specifically, P4Transformer consists of (i) a point 4D convolution to embed the spatio-temporal local structures present in a point cloud video and (ii) a transformer to capture appearance and motion information across the entire video by performing self-attention on the embedded local features. In this fashion, related or similar local areas are merged by attention weights rather than by explicit tracking.
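The snippet below is a minimal PyTorch sketch of this embed-then-attend idea, not the released implementation: the point 4D convolution is stood in for by a shared MLP with max pooling over each anchor's local neighborhood, and all tensor shapes, layer sizes, and the classification head are illustrative assumptions.

```python
# Minimal sketch (NOT the authors' implementation): a shared MLP + max pooling
# approximates the point 4D convolution; shapes and hyperparameters are assumptions.
import torch
import torch.nn as nn

class P4TransformerSketch(nn.Module):
    def __init__(self, dim=128, heads=8, layers=4, num_classes=20):
        super().__init__()
        # Stand-in for the point 4D convolution: embeds each anchor's local
        # spatio-temporal neighborhood into a single feature vector.
        self.local_embed = nn.Sequential(
            nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim)
        )
        self.pos_embed = nn.Linear(4, dim)  # (t, x, y, z) anchor coordinates
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads)
        # Self-attention over all embedded local areas of the whole video, so
        # related regions are associated by attention weights, not by tracking.
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=layers)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, anchors, neighborhoods):
        # anchors:       (B, N, 4)     spatio-temporal anchor coordinates (t, x, y, z)
        # neighborhoods: (B, N, K, 3)  K local point offsets around each anchor
        local = self.local_embed(neighborhoods).max(dim=2).values  # (B, N, dim)
        tokens = local + self.pos_embed(anchors)                   # (B, N, dim)
        tokens = self.transformer(tokens.transpose(0, 1))          # (N, B, dim)
        return self.head(tokens.mean(dim=0))                       # (B, num_classes)

if __name__ == "__main__":
    model = P4TransformerSketch()
    anchors = torch.rand(2, 64, 4)           # 64 anchors per video, batch of 2
    neighborhoods = torch.rand(2, 64, 16, 3)  # 16 neighbors per anchor
    print(model(anchors, neighborhoods).shape)  # torch.Size([2, 20])
```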
The code is tested with Red Hat Enterprise Linux Workstation release 7.7 (Maipo), g++ (GCC) 8.3.1, PyTorch (both v1.4.0 and v1.8.1 are supported), CUDA 10.2 and cuDNN v7.6.
Compile the CUDA layers for PointNet++, which we use for furthest point sampling (FPS) and radius neighborhood search:
mv modules-pytorch-1.4.0 modules   # for PyTorch v1.4.0; use modules-pytorch-1.8.1 for PyTorch v1.8.1
cd modules
python setup.py install
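After installation, a quick sanity check is to import the compiled extension and run FPS on a random point set. The module and function names below (`pointnet2_utils`, `furthest_point_sample`) follow the common PointNet++ PyTorch layout and are assumptions; check the files under `modules/` for the exact names in this repository.

```python
# Hypothetical sanity check; module/function names are assumptions based on the
# usual PointNet++ CUDA extension layout -- verify against modules/ in this repo.
import torch
import pointnet2_utils  # assumed name of the compiled extension

xyz = torch.rand(1, 1024, 3).cuda()                    # one cloud of 1024 points
idx = pointnet2_utils.furthest_point_sample(xyz, 128)  # sample 128 anchor indices
print(idx.shape)  # expected: torch.Size([1, 128])
```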
If you find our work useful in your research, please consider citing:
@inproceedings{fan21p4transformer,
author = {Hehe Fan and
Yi Yang and
Mohan Kankanhalli},
title = {Point 4D Transformer Networks for Spatio-Temporal Modeling in Point Cloud Videos},
booktitle = {{IEEE/CVF} Conference on Computer Vision and Pattern Recognition, {CVPR}},
year = {2021}
}