mattpoggi / mono-uncertainty

CVPR 2020 - On the uncertainty of self-supervised monocular depth estimation

Sparsification plot | doubt #22

Closed sbharadwajj closed 2 years ago

sbharadwajj commented 2 years ago

Hi,

The line here is supposed to sort all the uncertainties from high to low, right? But it simply negates them, so how does that sort them in reverse order?

Your paper also states the same: "Given an error metric, we sort all pixels in each depth map in order of descending uncertainty."

Regards, Shrisha

mattpoggi commented 2 years ago

Hi, sorting and sampling are implemented by computing percentiles (line 117), e.g. 2, 4, 6 and so on, and keeping all pixels below a certain percentile (line 118).

To sample in descending order of uncertainty, we can negate the uncertainty and follow the aforementioned implementation. Alternatively, we can keep the uncertainty positive, sample percentiles in reverse order, e.g. 98, 96, 94 and so on, and keep the pixels above each percentile. The two are equivalent.
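
In NumPy terms, the equivalence looks roughly like this (an illustrative sketch with made-up values, not the exact code at lines 117-118):

```python
import numpy as np

uncert = np.random.rand(1000)   # hypothetical per-pixel uncertainties
p = 2                           # percentile: 2, 4, 6, ...

# option A: negate the uncertainty and keep pixels below the p-th percentile
mask_a = -uncert <= np.percentile(-uncert, p)

# option B: keep the uncertainty positive and keep pixels above the (100 - p)-th percentile
mask_b = uncert >= np.percentile(uncert, 100 - p)

# both masks select the p% most uncertain pixels, i.e. they sample
# in descending order of uncertainty in the same way
print(np.mean(mask_a == mask_b))  # ~1.0 (up to interpolation at the threshold)
```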

Hope this helps

sbharadwajj commented 2 years ago

Yes, that makes complete sense. I didn't look closely and thought that lines 117 & 118 were just picking indices of samples that are < percentile.

I do have another doubt though, about the intuition of AUSE. I read that this measure can give a good score even if the predicted uncertainty is low (compared to the error (depth_gt - depth_pred)). But I could not understand how we get a good score in such a scenario. AUSE is the area between the oracle and the curve from pred_uncert, correct? So it should also include the area in the case where the oracle is higher than pred_uncert. Do you have any insight on why this might be the case?

PS: thanks for the code base, it is very easy to adapt it :) Will be citing your repository for sure.

mattpoggi commented 2 years ago

Actually, the comparison is not between uncertainty and the error itself; it is between the curves you plot by sparsifying depth maps (or whatever output your uncertainty refers to) according to either the uncertainty or the error itself.

The sparsification curve is a function of the error. If you remove points in descending order of the error itself, your average error will decrease optimally (since you remove the pixels with the highest error first) and you will get the so-called oracle curve.

If you remove pixels in descending order of uncertainty, you are not guaranteed to remove them optimally: if your uncertainty is not representative of the error itself, you might first remove pixels with low error, and only later remove pixels with higher error. In such a case, your curve will lie above the oracle curve (and probably will not be monotone).

This is what usually happens, since it is unlikely that your estimated uncertainty perfectly reflects the error itself :) so, the lower the difference between your curve and the oracle curve, the closer your uncertainty is to the optimal one.
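
If it helps, here is a rough end-to-end sketch for a single map (conceptual only, with hypothetical names and data, not the repository's exact implementation):

```python
import numpy as np

def sparsification_curve(error, ranking, fractions):
    # for each fraction f, drop the f most-highly-ranked pixels
    # (those we trust least) and average the error over the rest
    order = np.argsort(-ranking)            # descending ranking
    sorted_err = error[order]
    n = len(error)
    return np.array([sorted_err[int(f * n):].mean() for f in fractions])

# hypothetical per-pixel errors and a noisy, imperfect uncertainty
error = np.abs(np.random.randn(10000))
uncert = error + 0.5 * np.random.randn(10000)

fractions = np.arange(0.0, 1.0, 0.02)                       # 2% steps
sparse = sparsification_curve(error, uncert, fractions)     # remove by uncertainty
oracle = sparsification_curve(error, error, fractions)      # remove by true error

# the oracle decreases optimally; the uncertainty-based curve lies above it,
# and the area between the two is the AUSE (lower = uncertainty closer to optimal)
ause = np.trapz(sparse - oracle, fractions)
```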

Hope this helps

sbharadwajj commented 2 years ago

Yes, this helps a lot. Thank you so much for your time. I just have one last doubt regarding the plots. From Section 4.5 I understand that the sparsification curve of Fig. 5 is averaged over the test set, but I don't understand exactly how this was computed. I'm assuming that one of the axes represents the fraction of pixels dropped for each image; does this mean that when the first fraction of pixels is dropped, the error values over all the test images are saved and averaged?

mattpoggi commented 2 years ago

That's correct.
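
In other words, something like this (a sketch with placeholder data):

```python
import numpy as np

# hypothetical: one sparsification curve per test image, all sampled at the
# same fractions of pixels dropped (0, 0.02, 0.04, ...)
num_test_images, num_fractions = 100, 50
per_image_curves = np.random.rand(num_test_images, num_fractions)

# the plotted curve: for each fraction of pixels dropped, average the
# corresponding error value over all test images
mean_curve = per_image_curves.mean(axis=0)
```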

sbharadwajj commented 2 years ago

But if the y-axis is the error metric, shouldn't the curve start from the top and go down? Or are these already the subtracted values, i.e. (line 114) curve = sparse_curve - opt_curve?

mattpoggi commented 2 years ago

Yes, that is already the difference between the two curves

sbharadwajj commented 2 years ago

Thank you so much for your help so far. However, my curves are not as smooth as they are in the paper. Do you increase the number of intervals to get a smoother curve?

mattpoggi commented 2 years ago

As far as I remember, I used 0.02 intervals
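
(In terms of the fraction-of-pixels-dropped axis, that would be something like `np.arange(0.0, 1.0, 0.02)`, i.e. 50 points per curve.)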

sbharadwajj commented 2 years ago

Thanks for all the help, I will close the issue

sbharadwajj commented 2 years ago

Hi @mattpoggi, just a quick doubt for a sanity check.

In your implementation, if the value of uncertainty is high (say 1), then it means that it is highly uncertain, right?

mattpoggi commented 2 years ago

Correct