liujiyaoFDU opened 10 months ago
Apologies for the late reply. This is an incomplete experiment that snuck its way into the release version. In the accompanying toy problem, it appears that the correct aleatoric uncertainty is only recovered if we use `evi = 2 * alpha`. The change to the error term (dividing by the aleatoric uncertainty) appears to improve calibration.
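To make the difference concrete, here is a minimal sketch of the two regularizer variants being discussed. Variable names (`y`, `mu`, `upsilon`, `alpha`, `lam`) and the helper functions are illustrative, not the repository's actual identifiers; the assumption is that the paper's Eq. (5) uses total evidence `2*alpha + upsilon` while the released code uses `2*alpha` only.

```python
import numpy as np

def nig_reg_paper(y, mu, upsilon, alpha, lam=0.01):
    """Paper's Eq. (5): lambda * |y - mu| * (2*alpha + upsilon)."""
    evi = 2.0 * alpha + upsilon
    return lam * np.abs(y - mu) * evi

def nig_reg_release(y, mu, alpha, lam=0.01):
    """Released-code variant: evidence term reduced to 2*alpha."""
    evi = 2.0 * alpha
    return lam * np.abs(y - mu) * evi

# Toy comparison: the release variant applies a smaller penalty
# whenever upsilon > 0, since it drops the upsilon contribution.
y, mu, alpha, upsilon = 1.0, 0.6, 1.5, 0.8
print(nig_reg_paper(y, mu, upsilon, alpha))
print(nig_reg_release(y, mu, alpha))
```

Note that both variants scale the penalty by the absolute residual, so they agree exactly when `upsilon = 0`; the gap between them grows with the predicted `upsilon`.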
For further discussion of the regularization term, I recommend the paper *The Unreasonable Effectiveness of Deep Evidential Regression*; take a look at the sections on total evidence.
Hope this helps!
Hi, thanks for your excellent work! I have a question about the NIG reg loss term. You define it in Eq. (5) of your paper as $\lambda\,|y_{kd} - \hat{\mu}_{kd}|\,(2\hat{\alpha}_{kd} + \hat{\upsilon}_{kd})$, but in your code it was implemented as `evi = 2 * alpha`. This conflicts with the formula in the paper. Could you please explain why this was done, or say which of the two implementations is more effective? Looking forward to your reply.