StefanKarpinski opened this issue 4 years ago
There's a question of what to do about `-0.0` under this definition. The limit theory of negative zero might suggest that `mod(-0.0, 360)` should be `prevfloat(360)`, but that's a bit weird, so maybe not. Treating `-0.0` the same as `0.0` also seems reasonable to me.
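To make the two options concrete, here is a sketch (the helper names `mod_limit` and `mod_zero` are hypothetical, not existing functions):

```julia
# "limit" interpretation: -0.0 stands for an infinitesimal negative value,
# so its residue should land just below the modulus
mod_limit(x::Float64, y::Float64) =
    (x == 0 && signbit(x)) ? prevfloat(y) : mod(x, y)

# "sign-blind" interpretation: -0.0 is treated exactly like 0.0
mod_zero(x::Float64, y::Float64) =
    (x == 0 && signbit(x)) ? zero(y) : mod(x, y)
```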
Now that `rem(x, y, RoundNearest)` works, we should encourage users to use that where possible, since it is exact and has the same property of reducing to an interval of length `y` (in this case `[-y/2, y/2]`).
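For the tiny negative inputs discussed below, this is exact (illustrative):

```julia
julia> rem(-1e-20, 360, RoundNearest)  # |x| < y/2, so x is already the exact remainder
-1.0e-20
```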
Why not just have the answer for `-1e-20` be `0`? In general, it seems like rounding up to the modulus should be allowed, but then the result should be `0`.
That's an option too and would be compatible with rounding to nearest, but it has the strange property that the cut point is at `-eps(y)/2`, which is a really weird place for it: inputs in `(-eps(y)/2, 0)` would round up to `y` and then wrap to `0`, while inputs just below `-eps(y)/2` would map to values near `y`.
I think the third argument of `rem` (i.e. the rounding mode) seems to specify only the mode for the division. (Edit: it is clear from the case of `rem(1e-20, -360, RoundDown)` that the final rounding is independent of the `RoundDown`.) We have no way to reliably control the rounding mode for other operations, e.g. addition, subtraction and multiplication.
So, `mod(-1e-20, 360) == 360` is somewhat reasonable (with the default mode `RoundNearest`), but `mod(-1e-20, 360) == 0` is not reasonable.
> `mod(-1e-20, 360) == 360` is somewhat reasonable

Not if the premise of the function is that the result should be in the interval `[0, 360)`, which, imo, very much is the premise of the function.
> `mod(-1e-20, 360) == 0` is not reasonable

If `360` is a reasonable answer, given that `0 == 360 (mod 360)`, how can `0` be an unreasonable answer? That seems to violate the basic logic of modular arithmetic.
> If `360` is a reasonable answer, given that `0 == 360 (mod 360)`, how can `0` be an unreasonable answer?
In that regard, `0` is a reasonable answer and I do not strongly oppose the wraparound behavior. However, it is not clear when the congruence `0 == 360 (mod 360)` should be applied. I feel slightly uncomfortable with the "extra" operation after the final rounding.
I don't think `[0, y)` is an important property, because floating-point numbers have finite precision in the first place. If the property is important, we only need to explicitly add an `if` statement.

In any case, I am against forcing the rounding mode.
That the result is in `[0, y)` seems to me to be the essential property of a `mod` function.
I meant that the "correct rounding" in floating-point operations might be "more" fundamental than the expected properties of `mod`.
What about adding a new function `mod0`?

```julia
mod0(x::T, y::T) where {T<:Real} = (m = mod(x, y); ifelse(m == y, zero(m), m))
```
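Under the current `mod` behaviour this would give, for example:

```julia
julia> mod0(-1e-20, 360.0)  # mod gives 360.0, which mod0 wraps around to zero
0.0
```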
Is there actually a legitimate use case for `mod` with floating point arguments?

> Is there actually a legitimate use case for `mod` with floating point arguments?

Angles, like longitudes?
Angles would be better handled via `rem(x, 360, RoundNearest)`.
Oh I see, I did not even know about all the rounding options... So, `rem(x, 360.0, RoundNearest)` and `rem(x, 360.0, RoundDown)`! Great, thanks! I'll use that from now on :)

EDIT: Just to say that the current issue also applies to

```julia
julia> rem(-1e-15, 360, RoundDown)
360.0
```
I agree with you on that point, @simonbyrne. However, in some use cases the range of `[0, 360)` is required. For example, ColorTypes.jl defines the hue range as `[0, 360)`. I reported an issue related to this one.
As a matter of fact, it is difficult to guarantee the half-open interval even if the `mod` specification is changed. For example, a type conversion from `Float32` to `Float16` can easily cause rounding errors.
Another use case is interpolation weights. In this case, `[-0.5, 0.5]` and `[0, 1]` generally have different meanings.
Thanks @kimikage, those are useful examples. This prompts the question though of what the correct answers would be in those cases?

For hue, I would say that for `mod(-1e-20, 360.0)`, `0.0` would be more appropriate than `prevfloat(360.0)`, since

```julia
julia> abs(-1e-20 - (prevfloat(360.0) - 360.0)) < abs(-1e-20 - 0.0)
false
```

Not sure about interpolation: how would the `mod` be used in that context?
My opinion is that we should do the "correct rounding" (in terms of IEEE 754) according to the rounding mode. We have no way to properly specify the rounding mode for most operations. Also, changing the rounding mode often results in non-negligible costs.
`mod(-1e-20, 360.0)` should be equal to `359.99999999999999999999` (the rounding error of `-1e-20` is ignored here for simplicity). The result is `360.0` or `prevfloat(360.0)` depending on the rounding mode, and using `RoundNearest` it is `360.0`.
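This can be checked by computing the residue in extended precision and then doing a single final rounding (an illustrative sketch, using `BigFloat` as the "infinitely precise" reference):

```julia
julia> m = mod(big(-1e-20), big(360.0));  # ≈ 359.99999999999999999999

julia> Float64(m)                 # final rounding with the default RoundNearest
360.0

julia> Float64(m, RoundDown)      # final rounding with RoundDown
359.99999999999994
```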
That was the argument for the current behaviour, but I will note that IEEE 754 doesn't define this specific operation (presumably intentionally). The only similar operation defined is `remainder`, which corresponds to `rem(x, y, RoundNearest)`.
I don't think there is any ambiguity in the mathematical (i.e. with infinite precision) definition of `mod`, except for the sign.
I'd like to pick a mathematical meaning for the `mod` function and then make sure it meets that definition. @kimikage, your argument seems to stem from the definition of the function, i.e. the function is defined this way and therefore rounding this way is correct. What mathematical meaning (without reference to a sequence of rounding operations) would lead us to `mod(-10e-20, 360.0)` being `360.0`? We have to continue supporting rounding modes, but it seems like they should affect the total outcome, not necessarily each intermediate step, especially since that's leading to incoherent overall behavior for the function.
I would propose that no matter how we happen to implement it, `mod(x, y)` should produce a result in `[0, y)` for positive `y` and `(y, 0]` for negative `y`. It makes sense for the rounding mode to affect how values are mapped into "buckets" within that range, but not whether or not you can get the exact value `y` as an answer. Perhaps people should prefer to use `rem(x, y, RoundNearest)`, but there are still cases where people want a result in `[0, y)` and this seems like the right function for that.
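For concreteness, the invariants being argued for here can be phrased as a small check (a sketch; `check_mod_invariants` is a hypothetical helper, shown for positive `y` only):

```julia
# sketch of the proposed invariants for mod with y > 0
function check_mod_invariants(x, y)
    m = mod(x, y)
    in_range   = 0 <= m < y         # result must lie in [0, y)
    idempotent = mod(m, y) == m     # applying mod again must not change it
    return (; in_range, idempotent)
end

check_mod_invariants(-1e-20, 360.0)  # currently (in_range = false, idempotent = false)
```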
The other (conflicting) requirement is to satisfy the property that

```julia
x == y * fld(x, y) + mod(x, y)
```
> i.e. the function is defined this way and therefore rounding this way is correct.

The "correct rounding" is an IEEE 754 technical term. I haven't mentioned the correctness of the "way".
> What mathematical meaning (without reference to a sequence of rounding operations) would lead us to `mod(-10e-20, 360.0)` being `360.0`?

I just said:

> `mod(-1e-20, 360.0)` should be equal to `359.99999999999999999999`.
`359.999999999999999999990000000000000000548...` is not (always) `360.0`. The equation above by @simonbyrne (the mathematical definition of `mod` I think of) requires that the result should be around `360` rather than around `0`. So, I said:

> `mod(-1e-20, 360) == 0` is not reasonable.

and proposed the new (i.e. separate) function `mod0`.
To ensure that the result is not equal to `y`, we must include it in the definition of `mod` (as proposed in the OP), but that means that the definition implicitly depends on the implementations of floating point types. (Edit: as a practical matter, I think there is no inexpensive way to satisfy it.)

Of course, the following method can be considered, but this is very ugly:

```julia
new_mod(x::T, y::T) where {T<:AbstractFloat} = (m = mod(x, y); ifelse(m == y, prevfloat(y), m))
```
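Assuming the current `mod` behaviour, this would give, for example:

```julia
julia> new_mod(-1e-20, 360.0)  # mod returns 360.0, which is clamped to prevfloat(360.0)
359.99999999999994
```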
Ok, making the case from the perspective of the `x == y*f + m` equation makes sense to me. However, note that we do not meet this requirement currently:

```julia
julia> x, y = -1e-20, 360.0
(-1.0e-20, 360.0)

julia> f = fld(x, y)
-1.0

julia> m = mod(x, y)
360.0

julia> x, y*f + m
(-1.0e-20, 0.0)
```
A more forgiving way to express this is `x - m == y*f`, which we do currently satisfy:

```julia
julia> x - m, y*f
(-360.0, -360.0)
```
However, note that if we do what I suggested in the title of the issue and round down when computing `mod(x, y)`, then we would get `m = prevfloat(360.0)`, which leads to an apparent violation:

```julia
julia> x - m, y*f
(-359.99999999999994, -360.0)
```
Except that I would argue that the entire relation should be checked in the same rounding mode, which would lead to `x - m` being exactly `-360` as required. So it seems like the core issue is that the `mod` computation should use the downward rounding mode by default, which causes all values to end up in the expected `[0, y)` range.
> So it seems like the core issue is that the `mod` computation should use the downward rounding mode by default

Strictly speaking, it is like `RoundToZero` with special handling for `+0`/`-0`, rather than `RoundDown`. I think the special handling is OK, since the operations for `+0`/`-0` in terms of floating-point numbers are not "mathematically" well-defined.
In any case, I am against forcing the rounding mode. The reasons are:

1. We have no way to properly specify the rounding mode for most operations.
2. Changing the rounding mode often results in non-negligible costs.
3. The result being in `[0, y)` or `(y, 0]` is not an essential property which `mod` "must" have.

In other words, if we can solve the problem of 2., I agree with the proposal of the OP. For example, many AVX-512 instructions support embedded rounding control, but Julia currently has no API to handle it.
> I don't think `[0, y)` is an important property because floating-point numbers have finite precision in the first place.

Most floating-point types can represent `360` exactly, but not `π` and so on. `Float64(π)` is less than `π`, and `Float16(ℯ)` is greater than `ℯ`.
There is a lot of code running with or without the correct handling of the end point `y`, so it is nonsense to add a conditional branch just for backward compatibility. We can handle `y` properly in whatever way we prefer with one conditional branch, e.g. `mod0`.
~~Another option (which I do not prefer) is to newly provide the `fmod` C function.~~ (see below)
The importance or priority of the `[0, y)` range is a matter of values, so I have nothing more to say.
> Another option (which I do not prefer) is to newly provide the `fmod` C function.

We already have that available as `rem` (which is equivalent to `x - y*div(x,y)`).
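For the inputs discussed here, that identity happens to be exact (illustrative):

```julia
julia> rem(-1e-20, 360.0)                    # fmod-style remainder (truncated division)
-1.0e-20

julia> -1e-20 - 360.0 * div(-1e-20, 360.0)   # same value via the identity above
-1.0e-20
```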
Do you mean `remainder`?

No, the Julia `rem` function (see the link).
So, to summarise, there are a couple of conflicting requirements:

1. `mod(x,y) == x - y*fld(x,y)`: since this is impossible in the context of floating point, we would like it to minimize the error:
   a. this implies that `mod` and `fld` should have discontinuities for the same values of `x` and `y`.
2. `mod(x,y) == mod(mod(x,y),y)`
3. `0 <= mod(x,y) < y` if `y > 0`.

For `mod(-1e-20, 360.0)`:

a. `360.0` (current behaviour) violates 2 & 3
b. `0.0` violates 1 (since the discontinuity is in the wrong place)
c. `prevfloat(360.0)` satisfies 1 subject to the constraint in 3
d. `-1e-20` violates 1 & 3.
c. could be arrived at in two ways that I can think of:

(i) compute `m = x - fld(x,y)*y` rounded to nearest; if `m == y` then `m = prevfloat(m)`
(ii) compute `m = x - fld(x,y)*y` rounded to 0.

Note that the only difference would be for values of `x` in the interval `(-y/2, 0)`, as these are the only cases where rounding occurs.

(ii) is conceptually more elegant and consistent (i.e. the "domain of rounding" for each floating point value `m` is `[m, nextfloat(m))`); a reference sketch of (ii) follows below.
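A minimal sketch of option (ii), computing the residue in extended precision and rounding the final result toward zero (`mod_rz` is a hypothetical name, `BigFloat` stands in as the extended-precision intermediate; this is a reference implementation for clarity, not an efficient one):

```julia
# option (ii): compute x - fld(x,y)*y in extended precision, then round toward
# zero, so a true residue just below y can never round up to y itself
function mod_rz(x::Float64, y::Float64)
    m = mod(big(x), big(y))        # residue of the exact Float64 inputs
    return Float64(m, RoundToZero)
end

mod_rz(-1e-20, 360.0)  # 359.99999999999994, i.e. prevfloat(360.0)
```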
Currently we have this unfortunate behavior (pointed out on Slack by @briochemc):
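```julia
julia> mod(-1e-20, 360)
360.0
```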
There's a couple of issues:

1. `mod(x, y)` should be in the interval `[0, y)` but `mod(-1e-20, 360) == 360`, which violates that invariant.
2. `x -> mod(x, y)` should be idempotent, but it isn't here (because of 1).

Which brings me to asking: what should the definition of `mod` be for floats? I would propose that `mod(x, y)` should compute the float in `[0, y)` that is closest to the true value of `mod(x, y)`. That implies that `mod(-1e-20, 360)` should be `prevfloat(360)`, regardless of rounding mode. Moreover, I'd argue that it probably only makes sense to round down, since `mod` and `fld` are linked. If we always round down, since the true value of `mod(x, y)` is in `[0, y)`, the rounded result will always be in `[0, y)` as well, so the function will behave as desired.

cc @simonbyrne, @stevengj