Na-Z / sess

[CVPR2020 Oral] SESS: Self-Ensembling Semi-Supervised 3D Object Detection

Does the size consistency loss only affect the size residual? #12

Closed: lilanxiao closed this issue 3 years ago

lilanxiao commented 3 years ago

Hi, thank you very much for your nice work.

I have a question about the size consistency loss. The function compute_size_consistency_loss uses the following code to get the size of bounding boxes:

```python
# pick the highest-scoring size class per proposal (non-differentiable)
size_class = torch.argmax(end_points['size_scores'], -1)
...
# look up the mean-size template of the selected class
size_base = torch.index_select(mean_size_arr, 0, size_class.view(-1))
...
# decoded size = class template + predicted residual
size = size_base + size_residual
```

The consistency loss is then computed with MSE. Since torch.argmax() is non-differentiable, this loss appears to affect only the prediction of the size residual and has no direct influence on the prediction of the size class. In my view, the size consistency loss should include an additional KL-divergence term that minimizes the difference between the size scores produced by the teacher and the student (analogous to the class consistency loss). Your code doesn't include such a term, yet it still performs very well.
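A minimal gradient check makes this concrete. The shapes below are hypothetical stand-ins for the VoteNet-style tensors, not the repository's exact code, but the conclusion is the same: the MSE gradient reaches size_residuals and never reaches size_scores.

```python
import torch
import torch.nn.functional as F

B, K, num_size_cluster = 2, 4, 10
size_scores = torch.randn(B, K, num_size_cluster, requires_grad=True)
size_residuals = torch.randn(B, K, num_size_cluster, 3, requires_grad=True)
mean_size_arr = torch.rand(num_size_cluster, 3)  # one template per size class

# decode sizes following the snippet quoted above
size_class = torch.argmax(size_scores, -1)                       # (B, K), long
idx = size_class[..., None, None].expand(-1, -1, 1, 3)           # (B, K, 1, 3)
size_residual = torch.gather(size_residuals, 2, idx).squeeze(2)  # (B, K, 3)
size_base = mean_size_arr[size_class.view(-1)].view(B, K, 3)
size = size_base + size_residual

teacher_size = torch.rand(B, K, 3)  # stand-in for the teacher's decoded sizes
F.mse_loss(size, teacher_size).backward()

print(size_residuals.grad is not None)  # True: the residuals receive gradient
print(size_scores.grad)                 # None: argmax blocks the gradient path
```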

Is this the intended behavior? Is there any intuition behind it?

Na-Z commented 3 years ago

Hi, thanks for your interest in our work.

In our implementation, each class has only one size template. In other words, 'size_class_label' and 'sem_cls_label' (i.e., the ground truths for size class and semantic class) are the same for any given object, so the predictions of size class and semantic class should also be similar. Since the class consistency loss already aligns the semantic-class scores of the two networks via KL divergence, a separate KL term on the size scores would be largely redundant; hence, 'size_residual' carries most of the weight in the size consistency loss computation.

I do think it would be helpful to add a term that minimizes the difference between the size scores of the two networks if each class had multiple size templates (see the sketch below). If you are interested in trying that out, please let me know the results. :)
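For reference, such a term could look something like the following. This is a hypothetical helper, not code from this repository; it assumes the size scores are raw logits of shape (B, num_proposal, num_size_cluster) and that the student proposals have already been aligned to the teacher proposals.

```python
import torch.nn.functional as F

def size_class_consistency_loss(student_size_scores, teacher_size_scores):
    """Hypothetical extra consistency term: KL divergence between the
    size-class distributions of the student and the (detached) teacher,
    mirroring the spirit of the class consistency loss.
    Both inputs: raw logits of shape (B, num_proposal, num_size_cluster)."""
    student_log_prob = F.log_softmax(student_size_scores, dim=-1)
    teacher_prob = F.softmax(teacher_size_scores, dim=-1).detach()
    # KL per proposal, then average over batch and proposals
    kl = F.kl_div(student_log_prob, teacher_prob, reduction='none').sum(-1)
    return kl.mean()
```

This could simply be added to the existing size consistency loss with a small weight.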

lilanxiao commented 3 years ago

Yeah, that makes sense. Thank you for your explanation!

I'm going to close this issue. If I get any interesting results, I'll be glad to share them here.