By Tarasha Khurana*, Peiyun Hu*, David Held, and Deva Ramanan
* equal contribution
If you find our work useful, please consider citing:

```bibtex
@inproceedings{khurana2023point,
  title={Point Cloud Forecasting as a Proxy for 4D Occupancy Forecasting},
  author={Khurana, Tarasha and Hu, Peiyun and Held, David and Ramanan, Deva},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2023},
}
```
Set up the environment from `environment.yml`. Additionally, install the `chamferdist` package given inside `utils/chamferdist` by navigating to that directory and running `pip install .`.
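The setup steps above can be sketched as follows; the environment name is a hypothetical placeholder (check the `name:` field in `environment.yml` for the actual one):

```shell
# Create the conda environment defined in environment.yml
conda env create -f environment.yml
# Activate it (the name "4d-occ" is an assumption; see environment.yml)
conda activate 4d-occ
# Install the bundled chamferdist package from its directory
cd utils/chamferdist
pip install .
cd -
```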
Pretrained models should be placed in the `models/` folder.

The original code supported training with only L1, L2, or AbsRel losses. For increased flexibility in the choice of loss, we have added differentiable voxel rendering as a layer in PyTorch. Note that we do not use it ourselves, because it incurs a huge memory footprint (gradients for the entire voxel grid are retained in memory).
You can import the layer with:

```python
from utils.layers.differentiable_voxel_rendering import DifferentiableVoxelRendering
```

This layer is expected to be used (without any initialization) in `model.py`, in place of `dvr.render` and `dvr.render_forward`, as below:

```python
pred_dist, gt_dist = DifferentiableVoxelRendering(
    sigma,
    output_origin,
    output_points,
    output_tindex
)
```
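To illustrate what such a layer makes differentiable, here is a minimal, self-contained sketch of expected-distance rendering along pre-sampled rays through a density field. This is not the repo's CUDA kernel; the function name, the sampling scheme, and the L1 loss are illustrative assumptions. The point is that gradients flow back into `sigma`, which is exactly what inflates the memory footprint noted above:

```python
import torch

def expected_depth(sigma_along_rays, t_vals, delta=1.0):
    """Differentiable expected termination depth per ray (hypothetical sketch).

    sigma_along_rays: (R, S) non-negative densities sampled along R rays.
    t_vals: (S,) depths of the samples. Returns a (R,) tensor of depths.
    """
    # Per-sample opacity from density and step size.
    alpha = 1.0 - torch.exp(-sigma_along_rays * delta)
    # Transmittance T_i = prod_{j<i} (1 - alpha_j), with T_0 = 1.
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=1),
        dim=1,
    )[:, :-1]
    # Probability that the ray terminates at sample i.
    weights = trans * alpha
    # Expected depth = weighted sum of sample depths.
    return (weights * t_vals).sum(dim=1)

# Toy usage: an arbitrary loss on rendered distances backpropagates into sigma.
sigma = torch.rand(4, 16, requires_grad=True)
t = torch.linspace(0.5, 8.0, 16)
pred = expected_depth(sigma, t)
loss = torch.nn.functional.l1_loss(pred, torch.full((4,), 3.0))
loss.backward()
```

Because `sigma` covers the whole grid in the real layer, retaining its gradient buffer is what dominates memory.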
If participating in the CVPR '23 Argoverse 2.0 4D Occupancy Forecasting challenge, please see the eval-kit.
Refer to `train.sh` for training.
Refer to `test.sh` for running the ray-based evaluation on all points, and `test_fgbg.sh` for evaluating foreground and background points separately (only supported for nuScenes).
The ray tracing baseline is implemented and evaluated by `raytracing_baseline.sh` and `raytracing_baseline_fgbg.sh`.
To test a model trained on dataset X on a dataset other than X, change the `dataset` field in the respective model's config.
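As a convenience, the edit above can be scripted. The helper below is hypothetical (not part of this codebase) and assumes the config is a YAML file with a flat top-level `dataset:` key:

```python
import pathlib
import re

def switch_dataset(cfg_path, new_dataset):
    """Rewrite the top-level `dataset:` field of a YAML-style config in place.

    Hypothetical helper; assumes one flat `dataset: <name>` line exists.
    """
    path = pathlib.Path(cfg_path)
    text = path.read_text()
    # Replace the whole `dataset:` line with the new dataset name.
    path.write_text(re.sub(r"(?m)^dataset:\s*\S+", f"dataset: {new_dataset}", text))
```

For example, `switch_dataset("models/nusc/config.yaml", "kitti")` would point a nuScenes-trained model's config at KITTI (paths and dataset names here are illustrative).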
The `chamferdist` package shipped with this codebase is a modified version of this package. The voxel rendering is an adaptation of the raycasting in our previous work.