Open AndreMouton opened 4 years ago
I'm having a similar issue to @SaumilShah66: I'm getting infs in adf.Softmax and subsequently NaNs in the heteroscedastic softmax loss function. In the paper you seem to suggest that you do not use a heteroscedastic loss, since it is intended for regression problems. Is there a reason why you use it in the training code for this classification problem?

Hi @AndreMouton,

As I replied to @SaumilShah66, it is a known problem that training with the heteroscedastic loss can be difficult because of numerical instability. As you noticed, we mention in the paper that it was not possible to train the heteroscedastic neural network from Kendall et al. because of numerical instability amplified by the SoftMax layer. To address this when training the ADF network with the heteroscedastic loss (which we needed for the sake of completeness), we initialized the network weights from the best pretrained ResNet-18 checkpoint, trained with and without dropout. You can try it yourself: no modification to the code is needed, you only need to load one of the two available checkpoints before starting to train.
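The suggested workaround (load a pretrained checkpoint, then train with the heteroscedastic loss) can be sketched as below. This is a minimal sketch, not the repository's actual API: the model definitions and checkpoint file name are stand-ins.

```python
import torch
import torch.nn as nn

# Stand-in for the pretrained deterministic network whose checkpoint
# we start from (placeholder architecture, not the repo's ResNet-18).
pretrained = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
torch.save({"state_dict": pretrained.state_dict()}, "pretrained.ckpt")

# Stand-in for the ADF network to be trained with the heteroscedastic
# loss. Its trainable weights match the deterministic network, so the
# checkpoint loads directly; strict=False would tolerate any extra
# ADF-specific buffers a real model might register.
adf_model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
ckpt = torch.load("pretrained.ckpt")
adf_model.load_state_dict(ckpt["state_dict"], strict=False)
# ...then start training with the heteroscedastic loss as usual.
```

Starting from well-calibrated weights keeps the predicted variances small early in training, which is what avoids the overflow in the softmax.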
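As for where the infs come from in the first place: a toy numerical illustration (not the repository's actual adf.Softmax code) of why propagating variances through the exponential overflows. ADF-style moment matching through exp() uses the log-normal moments, and the variance term overflows float32 long before the mean does:

```python
import numpy as np

# For X ~ N(mu, var), the log-normal moments are
#   E[exp(X)]   = exp(mu + var/2)
#   Var[exp(X)] = (exp(var) - 1) * exp(2*mu + var)
# Even moderate input variances push Var[exp(X)] past float32 range
# (max ~3.4e38), producing the infs that then become NaNs in the loss.
mu = np.float32(10.0)
for var in [np.float32(v) for v in (1.0, 50.0, 200.0)]:
    mean_out = np.exp(mu + var / 2, dtype=np.float32)
    var_out = np.expm1(var, dtype=np.float32) * np.exp(2 * mu + var, dtype=np.float32)
    print(f"var={var:6.1f}  mean_out={mean_out:.3e}  var_out={var_out:.3e}")
```

At var=50 the output variance is already inf in float32, and at var=200 the mean overflows too; hence the advice to start from pretrained weights (small, calibrated variances) or to clamp the propagated variance.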