synsin0 closed this issue 1 year ago.
**Question (synsin0):** Thanks for your great work. In the code I see only the pillar implementation (spt_backbone.py asserts that the height dimension is 1). I wonder whether you may support a voxel-based implementation. Does GD-MAE support voxel-based embedding? Why or why not?

**Reply:** MAE-style pretraining could also be used for voxel-based embedding (as far as I know, there is not much difference between voxel-based and pillar-based models). Since the baselines are pillar-based models (e.g., SPT), we only ran experiments on them.
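For readers wondering what "not much difference" amounts to in practice, here is a minimal sketch (not from the GD-MAE codebase; all tensor names and shapes are illustrative assumptions): a pillar grid is effectively a voxel grid with a single height bin, which is exactly what the assert in spt_backbone.py guarantees, and multi-bin voxel features can be folded into the channel dimension so the same BEV backbone still applies.

```python
import torch

B, C, D, H, W = 2, 64, 4, 128, 128   # batch, channels, z bins, y, x (assumed shapes)

pillar_feats = torch.randn(B, C, 1, H, W)   # pillar case: height dimension is 1
voxel_feats = torch.randn(B, C, D, H, W)    # voxel case: D > 1 height bins

# Pillar path: squeeze the unit z dimension -- safe precisely because of an
# assert like the one in spt_backbone.py.
bev_from_pillars = pillar_feats.squeeze(2)             # (B, C, H, W)

# Voxel path: fold z into channels so the same 2D (BEV) backbone can consume it.
bev_from_voxels = voxel_feats.reshape(B, C * D, H, W)  # (B, C*D, H, W)

print(bev_from_pillars.shape, bev_from_voxels.shape)
```

Under this view, extending the MAE-style pretraining to a voxel-based embedding would mainly be a matter of relaxing the height-dimension assumption; the masking and reconstruction ideas carry over.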