Harry-Zhi / semantic_nerf

The implementation of "In-Place Scene Labelling and Understanding with Implicit Scene Representation" [ICCV 2021].

Question about label propagation task #30

Closed Un1Lee closed 1 year ago

Un1Lee commented 1 year ago

Thanks for publishing your code! When I run your code on the label propagation task, I find that the semantic loss is nan, because gt_label is almost entirely 0 after a single click. You mentioned in #3 that "we do not apply any loss on the void regions hence the network is able to predict arbitrary classes without penalty (though in fact it tend to predict some reasonable classes based on the similarity in appearance or geometry). And the void region also does not contribute to the evaluation metrics". So it is easy to understand why the semantic loss is nan. However, in this case, how can the network train to the effect that you demonstrate? What should I do?

Un1Lee commented 1 year ago

Hello, the author shared some details with me and I finally solved this problem by setting the 'reduction' parameter: `crossentropy_loss = nn.CrossEntropyLoss(ignore_index=-1, reduction='sum')`. Maybe torch changed this function's behavior between versions; with 'sum' it now returns 0 instead of nan. I think the nan arises because when reduction is 'mean', the summed loss is divided by the total weight of the non-ignored targets, which is 0 when every label is void, giving 0/0 = nan.
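A minimal sketch (not from the repo) reproducing the behavior described above: when every target equals `ignore_index` (e.g. all pixels are void after a single click), `reduction='mean'` divides the summed loss by a total weight of 0 and yields nan, while `reduction='sum'` returns 0, which is safe to backpropagate. The tensor shapes here are illustrative, not the ones used in Semantic-NeRF.

```python
import math
import torch
import torch.nn as nn

logits = torch.randn(8, 5)                            # 8 sampled rays, 5 semantic classes
labels = torch.full((8,), -1, dtype=torch.long)       # every label is void (ignore_index)

mean_loss = nn.CrossEntropyLoss(ignore_index=-1, reduction='mean')(logits, labels)
sum_loss = nn.CrossEntropyLoss(ignore_index=-1, reduction='sum')(logits, labels)

print(math.isnan(mean_loss.item()))   # True: summed loss / total weight = 0 / 0
print(sum_loss.item())                # 0.0: no gradient contribution, no nan
```

With `reduction='sum'` the void-only batches simply contribute zero loss, so training proceeds on the batches that do contain labelled pixels.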