DanHalp opened this issue 2 years ago
We can also combine it inside voxelnet; see https://github.com/tianweiy/CenterPoint/blob/5b0e574a4478086ee9686702456aaca4f4115caa/det3d/models/readers/dynamic_voxel_encoder.py#L71 for an example.
I never tried it, but that seems possible.
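Roughly, that dynamic encoder scatter-averages point features into their voxels instead of filling a fixed-size voxel buffer. Here is a minimal sketch of the idea (a hypothetical standalone function, not the repo's actual API; it assumes every point already lies inside `pc_range` and skips whatever extra processing the real encoder does):

```python
import torch

def dynamic_voxelize(points, voxel_size, pc_range):
    """Scatter-average point features into voxels (dynamic voxelization sketch).

    points:     (N, C) float tensor, first 3 columns are x, y, z
    voxel_size: (3,) float tensor of voxel edge lengths
    pc_range:   (6,) float tensor [x_min, y_min, z_min, x_max, y_max, z_max]
    Assumes every point already lies inside pc_range.
    """
    # Integer voxel coordinate of every point.
    coords = ((points[:, :3] - pc_range[:3]) / voxel_size).long()

    # Collapse the 3D coordinates into one scalar key and find the occupied voxels.
    grid = ((pc_range[3:] - pc_range[:3]) / voxel_size).long()
    keys = (coords[:, 0] * grid[1] + coords[:, 1]) * grid[2] + coords[:, 2]
    unique_keys, inverse = torch.unique(keys, return_inverse=True)

    # Scatter-mean: sum the features of each voxel's points, divide by the count.
    feats = torch.zeros(len(unique_keys), points.shape[1])
    feats.index_add_(0, inverse, points)
    counts = torch.zeros(len(unique_keys)).index_add_(0, inverse, torch.ones(len(points)))
    return feats / counts.unsqueeze(1), unique_keys
```

The upside of doing it inside the network is that no points are dropped and no fixed per-voxel point budget is needed.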
No, it is only used in the second stage (centerpoint_rcnn.yaml).
Yeah, it is a line of work from CornerNet to CenterNet to ours. You can read the previous papers: https://arxiv.org/pdf/1904.07850.pdf (CenterNet, "Objects as Points") and https://arxiv.org/abs/1808.01244 (CornerNet).
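Since the heatmap question comes up below as well, here is a minimal sketch of the two pieces, with hypothetical function names: the target heatmap gets a Gaussian splat at each ground-truth object center (with a radius derived from the box size, as in CornerNet), and the penalty-reduced focal loss treats only the exact peak pixel as positive while down-weighting the pixels near it. The sketch assumes a single class, integer pixel centers, and centers far enough from the border that no clipping is needed:

```python
import torch

def draw_gaussian(heatmap, center, radius):
    """Splat a 2D Gaussian onto `heatmap` at the integer pixel `center`.

    Simplified: no border clipping, and `radius` is given directly instead of
    being derived from the ground-truth box size as in the real code.
    """
    sigma = (2 * radius + 1) / 6.0
    y, x = torch.meshgrid(
        torch.arange(-radius, radius + 1),
        torch.arange(-radius, radius + 1),
        indexing="ij",
    )
    gaussian = torch.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    cx, cy = center
    patch = heatmap[cy - radius:cy + radius + 1, cx - radius:cx + radius + 1]
    # Keep the elementwise max so overlapping objects don't erase each other's peaks.
    heatmap[cy - radius:cy + radius + 1, cx - radius:cx + radius + 1] = torch.maximum(patch, gaussian)
    return heatmap

def gaussian_focal_loss(pred, target, alpha=2.0, beta=4.0):
    """Penalty-reduced focal loss from CornerNet.

    Only pixels where target == 1 count as positives; negatives near a center
    are down-weighted by (1 - target) ** beta rather than penalized fully.
    """
    eps = 1e-12
    pos_mask = target.eq(1).float()
    pos_loss = -(pred + eps).log() * (1 - pred).pow(alpha) * pos_mask
    neg_loss = -(1 - pred + eps).log() * pred.pow(alpha) * (1 - target).pow(beta) * (1 - pos_mask)
    return (pos_loss.sum() + neg_loss.sum()) / pos_mask.sum().clamp(min=1)
```

In training, `pred` is the sigmoid output of the heatmap head, and one such target map is built per object class.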
Let me know if you have any specific problems.
Hey there!
First, I appreciate your fast responses - they are by no means a given :)
I have a couple of questions about the training process:

1) What's the difference between the arguments --ckpt and --pretrained_model in train.py? Isn't a pre-trained model just a model that has been trained for some epochs, i.e. a checkpoint?
2) I was wondering whether there are blocks of code that are not trained but used as pretrained models. For example, the VoxelNet part - do we actually train the subnetwork that processes the voxels into the overhead-view pseudo-image?
3) Is the VoxelNet backbone responsible for both voxelizing the point cloud and creating the overhead-view pseudo-image?
4) We trained the model on 10% of the training data for 80 epochs with a batch size of 4. To our big surprise, it performed almost as well as the model trained on the full data that you referred to here: https://github.com/tianweiy/CenterPoint-KITTI/issues/9. Does that make any sense?
5) Do we use an RCNN in the first stage with the centerpoint.yaml config?
6) We're struggling to understand the heatmap concept: how it is created, and how the GaussianFocalLoss loss function is applied to it. Do you have any hint where we might find an answer for beginners? Google assumes we were born with that knowledge.
Thanks :)