LogSumExp may not be ideal either, because we may want QrUnc to be greater than or equal to the "true" uncertainty, not strictly lower, and because of the arbitrary choice of alpha.
But I think SoftMinMean is problematic because of its inconsistencies (e.g. SoftMinMean(0, 0) = 0, SoftMinMean(1, 0) = 0.26, yet SoftMinMean(10, 0) ≈ 0).
Is using a regularized min necessary? Should we consider using a simple minimum instead? (That would simplify the formula.) A small numeric check of the inconsistency follows below.
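For what it's worth, the inconsistency is easy to reproduce numerically. Here is a minimal sketch, assuming SoftMinMean is the Boltzmann soft-min (an exp(−αx)-weighted mean of the inputs) with α = 1 — a guess on my part, not the code base's definition, but it reproduces the values quoted above:

```python
import numpy as np

def soft_min_mean(x, y, alpha=1.0):
    """Boltzmann soft-min: mean of x and y weighted by exp(-alpha * value).

    Assumed definition of SoftMinMean (not taken from the code base);
    with alpha=1 it matches the values quoted above.
    """
    weights = np.exp(-alpha * np.array([x, y]))
    return float(np.dot(weights, [x, y]) / weights.sum())

print(soft_min_mean(0, 0))   # 0.0
print(soft_min_mean(1, 0))   # ~0.269
print(soft_min_mean(10, 0))  # ~0.00045: drops back towards 0, so not monotone in x
```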
@lenhoanglnh Can you look into @glerzing's recommendations?
Maybe a session to teach the whole team how we handle uncertainty in our algorithms would be a big help (as we did for EigenTrust, voting rights, video suggestions, ...).
I wonder why, as the uncertainty increases, QrUnc first increases and then decreases (I use weight=1, x=[-10, 1, 10], default_deviation=0). Other tests show that it tends towards 0.
If I understood correctly, ideally we would want the red curve here, which starts at QrDev and ends at k (though I don't know which formula would produce it):
Yes, this looks a lot better. (Sorry for only replying now ^^)
Sorry to change my mind after you applied my proposal, but using a simple "min" instead of "LogSumExp" looks like it easily provides Byzantine resilience and simplifies QrUnc.
And even though the curve will be less smooth, it should be more easily interpretable.
Moreover, whereas SoftMinMean was overestimating uncertainty, LogSumExp tends to underestimate it, which may not be something that we want.
It would still not be perfect though, e.g. because QrUnc → 0 as Δn → +∞.
What do you think? A small sketch of the underestimation follows below.
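To make the underestimation concrete, here is a minimal sketch, assuming the soft-min is the standard LogSumExp one from the Wikipedia page, softmin(x, y) = −(1/α)·log(e^{−αx} + e^{−αy}), with α = 1:

```python
import numpy as np
from scipy.special import logsumexp

def lse_soft_min(x, y, alpha=1.0):
    """LogSumExp soft-min; it always sits at or below min(x, y)."""
    return float(-logsumexp([-alpha * x, -alpha * y]) / alpha)

for x, y in [(0, 0), (1, 0), (10, 0)]:
    print(f"min={min(x, y)}  lse_soft_min={lse_soft_min(x, y):.4f}")
# lse_soft_min(0, 0) = -log(2) ≈ -0.69, strictly below the hard min of 0,
# whereas a plain min would of course return 0 exactly.
```

In particular, near 0 the LogSumExp soft-min can even report a negative uncertainty, which a plain min avoids by construction.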
I think that Min and LogSumExp are both OK, but there's not much theory behind either so far.
I liked the idea that when both sources of uncertainty are equally large, they kind of add up. But it's very heuristic at this point and far from well justified.
In any case, these are just rough estimates of uncertainty, which are currently not critical for the platform, so I don't think this should be over-optimized for now.
If I understand correctly, the latest algorithm specifications use LogSumExp to compute QrUnc.
@glerzing Do you want to have a go at updating the implementation? And maybe adapt/create test cases in test_mehestan_primitives.py to describe the expected behavior? A rough sketch of possible checks follows below.
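In case it helps frame the test cases, here is a rough sketch of property-style checks one might add; the names and signatures are placeholders, not the actual ones from test_mehestan_primitives.py:

```python
import numpy as np
from scipy.special import logsumexp

def lse_soft_min(values, alpha=1.0):
    # Placeholder for the LogSumExp soft-min used in QrUnc (assumed form).
    return float(-logsumexp(-alpha * np.asarray(values)) / alpha)

def test_soft_min_is_a_lower_bound():
    # The LogSumExp soft-min should never exceed the hard min.
    rng = np.random.default_rng(0)
    for _ in range(100):
        values = rng.uniform(0.0, 10.0, size=2)
        assert lse_soft_min(values) <= values.min()

def test_soft_min_is_monotone():
    # Unlike SoftMinMean, increasing one argument should never decrease
    # the result (the inconsistency reported earlier in this thread).
    assert lse_soft_min([1.0, 0.0]) <= lse_soft_min([10.0, 0.0])
```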
Yes, I was preparing a PR.
It may be interesting to use the LogSumExp instead of the SoftMinMean as a smooth minimum in the calculation of QrUnc. One reason is that SoftMinMean isn't really consistent: you can have x1 < x2 and yet SoftMinMean(x1, y) > SoftMinMean(x2, y). Moreover (I didn't understand why, but) the paper says that both elements of SoftMinMean "represent upper bounds on uncertainty", so we would want SoftMinMean(x, y) < x and SoftMinMean(x, y) < y. But as shown in the following figure, it's the contrary.
The magenta curve represents the min, and the red curves represent the SoftMinMean.
The LogSumExp (https://en.wikipedia.org/wiki/Smooth_maximum) looks like a better fit for what we are trying to do:
Here is the code to generate and modify the images, together with the implementation of LogSumExp: https://colab.research.google.com/drive/1QcNQXFxGcu9PnLPIhyGafCH6QyNx936t?usp=sharing
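For readers without access to the notebook, here is a minimal sketch of the comparison in the figures, assuming SoftMinMean is the Boltzmann soft-min and taking the LogSumExp soft-min from the Wikipedia page (both with α = 1; the exact definitions in the notebook may differ):

```python
import numpy as np
from scipy.special import logsumexp

def soft_min_mean(x, y, alpha=1.0):
    # Boltzmann soft-min: exp(-alpha * value)-weighted mean (assumed form).
    weights = np.exp(-alpha * np.array([x, y]))
    return float(np.dot(weights, [x, y]) / weights.sum())

def lse_soft_min(x, y, alpha=1.0):
    # LogSumExp soft-min, as on the Wikipedia page linked above.
    return float(-logsumexp([-alpha * x, -alpha * y]) / alpha)

y = 2.0
for x in [0.0, 1.0, 2.0, 5.0, 10.0]:
    print(f"x={x:5.1f}  min={min(x, y):4.1f}  "
          f"SoftMinMean={soft_min_mean(x, y):6.3f}  "
          f"LogSumExp={lse_soft_min(x, y):6.3f}")
# SoftMinMean can exceed min(x, y) (so it is not an upper bound as claimed),
# while the LogSumExp soft-min always stays at or below the hard min.
```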