Note that there are two types of tolerances: absolute and relative. Your hard cutoff is setting an absolute tolerance. The default is a relative tolerance, because that's typically a better fit.
Your hard cutoff fix isn't going to work everywhere. It might be ok for formula graders, but for numerical graders, it's a disaster if the answer is 10^-13 or smaller, because you've just mandated an absolute tolerance that is orders of magnitude bigger than the answer.
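To make the failure mode concrete, here's a small plain-Python sketch (not the library's code) comparing the two kinds of check:

```python
# Sketch (plain Python, not the library's actual code): a hard absolute
# cutoff of 1e-12 swamps any answer of order 1e-13 or smaller.
answer = 1e-13
student = 0.0  # completely wrong: 100% off in relative terms

hard_cutoff = 1e-12  # the proposed absolute cutoff
print(abs(student - answer) <= hard_cutoff)  # True -- wrongly accepted

rel_tol = 0.001  # a 0.1% relative tolerance scales with the answer
print(abs(student - answer) <= rel_tol * abs(answer))  # False -- rejected
```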
> but for numerical graders, it's a disaster if the answer is 10^-13 or smaller,
True.
Honestly, I probably need a better understanding of floating point numbers. I thought that since
>>> 1.12345678910111213141516 == 1.12345678910111213141517
True
it would be the case that
>>> 0.0 == 1e-20
True  # wrong expectation: this actually evaluates to False
Question: Do you think this issue needs a better resolution? Or is the resolution just to use absolute tolerances if you want `0` and `np.sin(np.pi)` to be the same?
So, floating point numbers are stored in binary as a mantissa of the form 1.010101110101101... (note that it always starts with a 1), multiplied by 2 raised to some exponent. So 1e-20 is actually stored as (very nearly) 1e-20, not rounded to zero, which is why it doesn't compare equal to 0.0. Basically, you get about 15 significant decimal digits of precision, counting from the first nonzero digit.
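You can see this from Python itself (standard library only; the exact mantissa value below is what a typical IEEE-754 double gives):

```python
import math
import sys

# 1e-20 is stored as a nonzero mantissa times a power of two,
# so it is not equal to 0.0.
print(0.0 == 1e-20)  # False

# frexp exposes the decomposition x = m * 2**e with 0.5 <= m < 1.
m, e = math.frexp(1e-20)
print(m, e)  # approximately 0.7379, -66

# Doubles carry 53 bits of mantissa (52 stored + 1 implicit leading bit),
# which works out to about 15 reliable decimal digits.
print(sys.float_info.mant_dig, sys.float_info.dig)  # 53 15
```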
The resolution to this is simply to use absolute tolerances.
We might like to document this better, though!
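For example (plain NumPy, not the grader's API), an absolute tolerance handles the `np.sin(np.pi)` vs. `0` case that a relative tolerance cannot:

```python
import numpy as np

print(np.sin(np.pi))  # ~1.2246e-16, not exactly zero

expected = 0.0
error = abs(np.sin(np.pi) - expected)

# Relative check: any nonzero error fails when the expected answer is 0.
print(error <= 0.001 * abs(expected))  # False

# Absolute check: passes comfortably.
print(error <= 1e-12)  # True
```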
This PR resolves the issue discussed above by changing `within_tolerance` to accept anything within `hard_tolerance` of the answer (default: `1e-12`). Before this PR, authors could work around this issue by setting a numerical tolerance for the grader. But consider `eigenvector_comparer` (see https://github.com/mitodl/mitx-grading-library/pull/106/files#diff-8d2928c7449112fa868ac7f32db680d3R179).
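For reference, here is a rough sketch of the behavior this PR describes. The signature and the percent-string handling are assumptions for illustration, not the library's actual implementation:

```python
def within_tolerance(expected, student, tolerance='0.1%', hard_tolerance=1e-12):
    """Accept if the error is within `tolerance` (relative when given as a
    percent string, absolute otherwise) OR within `hard_tolerance` absolutely.
    Hypothetical signature, for illustration only."""
    error = abs(student - expected)
    if isinstance(tolerance, str) and tolerance.endswith('%'):
        allowed = float(tolerance[:-1]) / 100 * abs(expected)  # relative
    else:
        allowed = float(tolerance)  # absolute
    return error <= allowed or error <= hard_tolerance

import numpy as np

# A pure relative tolerance can never match sin(pi) against an answer of 0;
# the hard absolute cutoff accepts it.
print(within_tolerance(0.0, np.sin(np.pi)))  # True
```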