Closed: jariji closed this issue 2 months ago
At least that is the behavior @timholy intended. cf. https://github.com/JuliaMath/FixedPointNumbers.jl/pull/183, https://github.com/JuliaStats/Statistics.jl/issues/165#issue-2229522613
This is consistent with the default behavior of integer types.
julia> typeof(mean([0x80, 0x80]))
Float64
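If a fixed-point result is wanted, the floating-point mean can be converted back explicitly; constructing an N0f8 from a float rounds to the nearest representable value (a multiple of 1/255). A minimal sketch, where `fixedmean` is a made-up helper name, not part of any package:

```julia
using Statistics
using FixedPointNumbers

# Hypothetical helper: take the mean in floating point (the default
# behavior), then convert back to the element type. The conversion
# rounds to the nearest representable fixed-point value.
fixedmean(A::AbstractArray{T}) where {T<:FixedPoint} = T(mean(A))

A = [N0f8(0.2), N0f8(0.4)]
m = fixedmean(A)    # an N0f8, not a Float64
```

This keeps the accurate floating-point accumulation and only rounds once, at the end.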
Also, while N0f8 is used for image processing, Float32 is not always accurate enough. Note that it is not a good idea to use Float32 for N0f8 value accumulation; N24f8 is superior in both speed and accuracy.
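The N24f8 point can be sketched as follows. `acc_n24f8` is a made-up name; the idea is that N24f8 shares N0f8's 8 fractional bits (the same 1/255 step), so each addition is exact as long as the running sum stays below 2^24:

```julia
using FixedPointNumbers

# Hypothetical accumulator: widen to N24f8, which has the same 1/255
# step as N0f8 plus 24 integer bits, so adding N0f8 values introduces
# no rounding error until the sum reaches 2^24.
acc_n24f8(A::AbstractArray{N0f8}) = reduce(+, A; init=zero(N24f8))

A = fill(N0f8(0.2), 1000)   # each element is exactly 51/255
s = acc_n24f8(A)            # exactly 51000/255 == 200
```

Unlike a Float32 accumulator, no precision is lost per addition, and the arithmetic stays in integer registers.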
@timholy Can I ask the rationale for this behavior?
sum(_) and reduce(+, _) were supposed to return the same type.

julia> N0f8(.5) + N0f8(.5) |> typeof
N0f8 (alias for Normed{UInt8, 8})
julia> reduce(+, [N0f8(.5), N0f8(.5)]) |> typeof
N0f8 (alias for Normed{UInt8, 8})
julia> sum([N0f8(.5), N0f8(.5)]) |> typeof
Float64
julia> N0f8(1)/2 |> typeof
Float32
As implied above, UInt8 is a counterexample to your expectation.
julia> 0x80 + 0x80 |> typeof
UInt8
julia> reduce(+, [0x80, 0x80]) |> typeof
UInt8
julia> sum([0x80, 0x80]) |> typeof
UInt64
julia> 0xff/2 |> typeof
Float64
Of course, since fixed-point numbers are different from both integers and floating-point numbers, this package can and should define its own rules, and such a design already exists. In other words, I think it is you who must provide the "objective" rationale for changing it.
I am planning to change the design once Statistics.jl provides a richer public API. However, I think the return types will remain as they are.
Is this the intended behavior? I expected it would round to the nearest fixed-point number.