Closed warner121 closed 11 years ago
Hi,
I reproduced the error, but I haven't found a fix yet. In this case, both `cdf` and `pdf` in the `v_win` function return 0.0 because the true value is too small: it underflows the precision Python's floats can represent.
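The underflow is easy to reproduce with the standard library alone. This sketch defines its own standard-normal `pdf` and `cdf` (they only mirror what such functions compute, they are not the library's internals):

```python
import math

def pdf(x):
    """Standard normal density N(0, 1)."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def cdf(x):
    """Standard normal cumulative distribution, via erfc."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

# For sufficiently extreme inputs both underflow to exactly 0.0
# in IEEE double precision, so a ratio like pdf(x) / cdf(x) breaks.
print(pdf(-40.0))  # 0.0
print(cdf(-40.0))  # 0.0
```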
However, other implementations produce different results. See below:

N(-273.092, 2.683) and N(-75.830, 2.080)
N(NaN, 2.6826) and N(NaN, 2.0798)
Probably the expected result is Microsoft's. Both results have a similar `sigma`, but the C# implementation couldn't calculate a correct `mu`. I'll try to consult other implementations, but Microsoft hasn't opened their source code. I can patch mine to calculate only the correct `sigma`, not `mu`.
P.S. Your ratings are unexpectedly low. Are they really valid?
@warner121
https://github.com/sublee/trueskill/blob/master/trueskilltests.py#L426
I just committed 1d3208a6db245d6ca9737a2ce1ecf5f0c16d1b7e to fix half of the problem. My implementation now returns N(NaN, 2.683) and N(NaN, 2.080) instead of raising `ZeroDivisionError`, just like the C# implementation. But as we know, that's not the final goal.
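For reference, one way to get a finite value without extra precision is to fall back to the asymptotic expansion of the ratio: for x far below zero, φ(x)/Φ(x) approaches −x. This is only an illustrative sketch under that assumption; `v_win_sketch` is a hypothetical helper, not the library's actual code:

```python
import math

def v_win_sketch(x):
    """phi(x) / Phi(x) with a crude underflow fallback (illustrative only)."""
    phi = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)  # N(0, 1) density
    big_phi = 0.5 * math.erfc(-x / math.sqrt(2.0))           # N(0, 1) CDF
    if big_phi > 0.0:
        return phi / big_phi
    # Phi(x) underflowed to 0.0: use the leading term of the
    # asymptotic series phi(x)/Phi(x) ~ -x, valid for x << 0.
    return -x

print(v_win_sketch(-1.0))   # finite, ~1.525
print(v_win_sketch(-40.0))  # 40.0 via the fallback, no exception
```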
Great - thanks for looking into this issue. I came across the error while evaluating the performance of different parameters in my project, and was aware these values were pretty extreme. I think the fix you have implemented is neater than the zero division. Thanks!
I changed my mind: `ZeroDivisionError` is better than `NaN`. If the `rate()` function silently turns a rating into `NaN`, your program will save it to the database without noticing, and then you can't recover a meaningful rating.
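The danger is that `NaN` propagates silently: it survives arithmetic without raising and even compares unequal to itself, so a corrupted rating passes through most code paths unnoticed. A quick demonstration:

```python
import math

nan = float('nan')

# NaN flows through arithmetic without raising any exception.
print(nan + 1.0)        # nan
# It is not equal to itself, so ordinary equality checks miss it.
print(nan == nan)       # False
# An explicit isnan() check is the only reliable guard.
print(math.isnan(nan))  # True
```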
I reopened this issue. I'm still looking for a solution.
I added a `backend` option to the `TrueSkill` class to choose the `cdf`, `pdf`, and `ppf` implementation. There are 3 backends: `None` (the internal implementation), `'scipy'`, and `'mpmath'`. If you choose the `'mpmath'` backend, this problem disappears. Of course, you have to install mpmath first.
```python
>>> from trueskill import *
>>> rate_1vs1(Rating(mu=-323.263, sigma=2.965), Rating(mu=-48.441, sigma=2.190))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "trueskill/__init__.py", line 622, in rate_1vs1
    return _g().rate_1vs1(rating1, rating2, drawn, min_delta)
  File "trueskill/__init__.py", line 504, in rate_1vs1
    teams = self.rate([(rating1,), (rating2,)], ranks, min_delta=min_delta)
  File "trueskill/__init__.py", line 416, in rate
    self.run_schedule(*args)
  File "trueskill/__init__.py", line 332, in run_schedule
    delta = trunc_layer[0].up()
  File "trueskill/factorgraph.py", line 196, in up
    w = self.w_func(*args)
  File "trueskill/__init__.py", line 164, in w_win
    raise FloatingPointError('Cannot calculate correctly, '
FloatingPointError: Cannot calculate correctly, set backend to 'mpmath'
>>> setup(backend='mpmath')
<TrueSkill mu=25.000 sigma=8.333 beta=4.167 tau=0.083 draw_probability=10.0% backend='mpmath'>
>>> rate_1vs1(Rating(mu=-323.263, sigma=2.965), Rating(mu=-48.441, sigma=2.190))
(Rating(mu=-273.060, sigma=2.683), Rating(mu=-75.848, sigma=2.080))
```
Hi sublee,
I have encountered a problem with the calculation of some ratings (admittedly with some rather unusual parameter settings):