cossio closed this 2 years ago
Merging #78 (f9d70df) into master (f46977e) will decrease coverage by 1.04%. The diff coverage is n/a.
```diff
@@            Coverage Diff             @@
##           master      #78      +/-   ##
==========================================
- Coverage   95.74%   94.70%   -1.05%
==========================================
  Files           2        2
  Lines         141      151      +10
==========================================
+ Hits          135      143       +8
- Misses          6        8       +2
```
| Impacted Files | Coverage Δ | |
|---|---|---|
| src/api.jl | `66.66% <0.00%> (+12.12%)` | :arrow_up: |
It's not discontinuous at integers in the tests here, since we use a floating-point number as the second argument. Here is a plot of `mod`:
```julia
using GLMakie
xs = 0:0.001:10
lines(xs, mod.(xs, 2))
```
It is discontinuous in the first argument, with a discontinuity every 2 units (or whatever the second argument is).
It so happens that the random (float) value of the first argument used in the (previously failing) test is very near such a point, and the test fails. I think relying on the accuracy of the finite-difference algorithm near a singularity like this is not the right approach here.
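To illustrate why finite differences misbehave here (a hypothetical sketch, not the actual test in this repo): when the central-difference stencil straddles a jump of `mod`, the estimate is on the order of `jump / (2h)` rather than the true derivative.

```julia
# Central finite difference of x -> mod(x, 2). Away from the jumps the
# true derivative is 1; at x = 2 the function jumps from 2 down to 0.
f(x) = mod(x, 2)
h = 1e-6
fd(x) = (f(x + h) - f(x - h)) / (2h)

fd(1.0)  # ≈ 1: no jump inside (1 - h, 1 + h)
fd(2.0)  # ≈ -1e6: the stencil straddles the jump, so the estimate blows up
```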
If I'm not mistaken, the floating-point value is the *second* argument of `mod`. So the example you plotted does not correspond to the tests.
> I think relying on the ability of the finite difference algorithm near a singularity like this is not the right approach here.
I assume the problem is that the second argument is too close to 0. A better FD algorithm would already help (and be more accurate in general), but shifting the values to `>= 0.5` might also help in this specific test.
Maybe we should just use FiniteDifferences.jl instead.
I have tried with FiniteDifferences.jl and it doesn't help. You still get the blown-up differentials.
For reference, the test that's failing is at `mod(2, 0.40000030646823104)`. This hits a discontinuity:
```julia
using GLMakie
ys = 0.3:0.0001:0.5
lines(ys, mod.(2, ys))
```
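The jump at `y = 2/5` is easy to see numerically (illustrative values, not from the test suite): just below 0.4, five multiples of `y` fit into 2, so `mod` returns the tiny remainder `2 - 5y`; just above, only four fit.

```julia
# mod(2, y) on either side of the discontinuity at y = 2/5 = 0.4:
mod(2, 0.3999999)  # ≈ 5.0e-7, since 2 - 5y is tiny here
mod(2, 0.4000001)  # ≈ 0.3999996, since now 2 - 4y is returned
```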
> but probably also shifting values to >= 0.5 might help in this specific test
Did you try this? This is a plot of `mod(2, y)` as a function of `y`.
Maybe I don't understand, but I don't see why the shift would help here. I mean, the discontinuities get rarer, but there is still a probability of hitting them. And eventually the function is just constant, so the test becomes trivial.
Also, that's just the case of `mod(2, y)`. If you try `mod(8, y)` (note that in the tests here, the first argument can reach up to 10), the plot is this:

So in this case you would need a larger shift.
> I mean, the discontinuities get rarer, but there is still a probability of hitting them.
That's the main point: it becomes less likely, which means that test failures are less likely.
If necessary, we can also restrict the first argument to smaller values. But intuitively I would assume that it becomes sufficiently unlikely by changing the second argument.
`mod(x, y)` has a discontinuity whenever `x / y` is an integer. As @devmotion noted, this is more likely the closer `x, y` are to `(0, 0)`.
Seems much easier to pick values where you are certain there will be no discontinuities nearby, instead of wondering about probabilities and hoping the seed works out. Maybe `x, y = 13+rand(), 5+rand()`?
> Seems much easier to pick values where you are certain there will be no discontinuities nearby, instead of wondering about probabilities and hoping the seed works out. Maybe `x, y = 13+rand(), 5+rand()`?
Yes, I agree. I just pushed this change :smile:. Can you take a look at the current code?
> Maybe `x, y = 13+rand(), 5+rand()`?
Actually I was using different values, which made the test trivial, as @devmotion noticed. With the values suggested by @mcabbott, we guarantee that `x / y` is not an integer (it always lies strictly between `13/6 ≈ 2.17` and `14/5 = 2.8`), while also guaranteeing that the test is not trivial.
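As a quick sanity check (my own sketch, not the PR's test code): for these values `floor(x / y) == 2` always, so every sampled point sits a fixed distance away from any jump of `mod`.

```julia
# Verify that x, y = 13 + rand(), 5 + rand() never lands near a discontinuity:
# x / y is confined to (13/6, 14/5), an interval containing no integer.
for _ in 1:10_000
    x, y = 13 + rand(), 5 + rand()
    @assert 13/6 < x / y < 14/5   # ratio never reaches an integer
    @assert floor(x / y) == 2     # so mod(x, y) is smooth here
    @assert mod(x, y) ≈ x - 2y    # and equals x - 2y on the whole region
end
```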
CI is failing on Julia v1.7. The failure is occurring when testing the rule for `Base.mod`. This function is discontinuous at integer values, so actually I don't understand why tests were passing in previous versions of Julia. In Julia v1.7 the random stream changed, and it could be that the issue was just by chance not erroring before, but now it is. So here I just test `mod` separately, on non-integer arguments.