Open GregorKobsik opened 10 months ago
Hi, Kobsik~
Have you figured this out? It seems that when W_CST
is set to 0, the network doesn't learn a translation vector and uses the centroid of the segmentation assignment instead. Do you have any idea which one is better?
Hi @SilenKZYoung
I'm working with the code and trying to reproduce your results. The pretrained checkpoints work fine and produce the results described in your paper.
But I discovered that the
W_CST
loss weight is set to 0 by default in training. Does this hurt the performance of the model? I could also not find any note of it in the published paper.
Thus the cuboid vector $p_m$ is only trained for rotation $r_m$, scale $s_m$, and existence $\delta_m$, while the translation $t_m$ does not appear in the loss. This causes the positions of the cuboids to be randomly distributed in 3D space, which is not the case with the pretrained checkpoints.
This design decision seems somewhat counterintuitive. Could you provide some rationale for this choice?
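To illustrate the point about $t_m$ receiving no gradient: here is a minimal sketch (all names hypothetical, not from the repo) showing that a loss term multiplied by a weight of 0 contributes zero gradient to the parameters only that term depends on, so those parameters are never updated by the optimizer.

```python
def total_loss(t, r, w_cst=0.0, w_rot=1.0):
    """Toy loss: cst_loss depends on the translation t, rot_loss only on r."""
    cst_loss = (t - 0.5) ** 2   # stand-in for the W_CST-weighted term involving t_m
    rot_loss = (r - 1.0) ** 2   # stand-in for a term involving r_m only
    return w_cst * cst_loss + w_rot * rot_loss

def grad_wrt_t(t, r, w_cst, eps=1e-6):
    """Central finite difference of the total loss w.r.t. the translation."""
    return (total_loss(t + eps, r, w_cst) - total_loss(t - eps, r, w_cst)) / (2 * eps)

print(grad_wrt_t(0.2, 0.3, w_cst=0.0))  # 0.0 — with W_CST = 0, t gets no gradient
print(grad_wrt_t(0.2, 0.3, w_cst=1.0))  # nonzero — with W_CST > 0, t is trained
```

With the weight at 0 the translation is effectively frozen at its initialization, which matches the observation above that cuboid positions end up arbitrary unless something else (e.g. the segmentation centroid) supplies them.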