arthur-bizzi opened this issue 9 months ago
The main reason for the current behaviour is rooted in the historical development process. Initially, only `Float64` was supported and it was hardcoded in many places. Over time these restrictions were lifted in some places, but consistency was lost in the process. Unfortunately, there is no clear mental model; for now you should always expect samples of continuous distributions to be of type `Float64` - when promotion happens, it's an unintended consequence of generalizing types.

There is a strong incentive, though, to fix these issues soonish. Therefore I don't think it's worth documenting the current buggy behaviour.
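A minimal illustration of that rule of thumb (assuming a recent Distributions.jl release; the exact behaviour can vary by version):

```julia
using Distributions

# Even with Float32 parameters, the scalar draw comes back as Float64,
# matching the "expect Float64 for continuous distributions" rule of thumb above.
typeof(rand(Uniform(0f0, 1f0)))  # Float64
```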
I see, thank you very much. Can you think of any relatively quick fixes I could implement on my end until these changes are merged? More specifically: could I get the fix I mentioned above working for vectors? At least for basic univariate distributions.
Just ran into this same issue. For the case of `Uniform`, one issue stems from this line, which should use `rand(rng, T)` instead of `rand(rng)`. Another issue stems from the fact that `eltype` seems to always return `Float64`, since `eltype(::Type{Uniform{T}}) where {T} = T` is not defined (like it is, for example, for `Normal`). This causes this line to preallocate an `Array{Float64}`.

If I should expect `Float64` in general I can work around that for now, but it was unexpected.
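A possible user-side workaround based on the missing `eltype` definition mentioned above (a sketch only, and it is type piracy, not an official fix):

```julia
using Distributions

# Piratical workaround: define eltype for Uniform so that array sampling
# preallocates with the parameter type instead of the Float64 fallback.
Base.eltype(::Type{Uniform{T}}) where {T} = T

rand(Uniform(0f0, 1f0), 3)  # now a Vector{Float32}; each draw is still
                            # generated as a Float64 and converted on assignment
```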
> For the case of `Uniform`, one issue stems from this line which should use `rand(rng, T)` instead of `rand(rng)`.
Probably it should be `rand(rng, float(real(T)))`, to ensure that it is a real floating-point type. (There's also the issue that, for dimensional types, it should strip off the dimensions; see also the discussion in https://github.com/JuliaMath/QuadGK.jl/pull/92.)
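For reference, this is what `float(real(T))` evaluates to for a few representative types (plain Julia, no packages needed):

```julia
# float(real(T)) strips any imaginary part at the type level and then
# promotes to a floating-point type, so integer parameters map to Float64.
float(real(Int))           # Float64
float(real(Float32))       # Float32
float(real(Complex{Int}))  # Float64
```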
This issue recently came up on Discourse.
Hey. Not only do Distributions still behave differently with regard to types when generating one sample versus multiple samples (#1252), but the behavior is also inconsistent between different distributions:
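The code block from the original report isn't preserved in this copy of the thread; a hedged illustration of the kind of inconsistency being described (building on the `eltype` discussion above; exact behaviour varies by version):

```julia
using Distributions

# Normal specialises eltype on its parameter type while Uniform does not,
# so array sampling preallocates different element types for the two.
eltype(rand(Normal(0f0, 1f0), 3))   # Float32
eltype(rand(Uniform(0f0, 1f0), 3))  # Float64
```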
This follows from type promotion rules. An attempted solution would be something along the lines of:
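The original snippet is missing here as well. As a guess at the kind of user-side patch meant, one could convert the scalar draw back to the parameter type with a hypothetical helper like this (not part of Distributions.jl):

```julia
using Distributions

# Hypothetical helper: draw a sample, then convert it back to the
# distribution's parameter type T.
typed_rand(d::Uniform{T}) where {T} = T(rand(d))

typed_rand(Uniform(0f0, 1f0))  # Float32
```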
But the type of single numbers would still differ from that of arrays:
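Again the original snippet isn't preserved; with a scalar-only fix like the hypothetical helper above, the mismatch would look roughly like this:

```julia
using Distributions

typed_rand(d::Uniform{T}) where {T} = T(rand(d))  # hypothetical helper from the sketch above

d = Uniform(0f0, 1f0)
typeof(typed_rand(d))  # Float32: the patched scalar path respects T
eltype(rand(d, 3))     # Float64: the array path still preallocates Float64
```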
On the other hand, `rand!` seems to ignore the type parameter completely (see the sketch below).

Is there a mental model for this behavior? If so, wouldn't it be a good idea to include it in the docs? They currently contain no references whatsoever to type precision.
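An illustrative sketch of the `rand!` behaviour described above: the element type is decided entirely by the buffer you pass in, not by the distribution's parameter type (exact behaviour may vary by version):

```julia
using Distributions, Random

d = Uniform(0f0, 1f0)        # Float32 parameters

rand!(d, zeros(Float64, 3))  # Vector{Float64}
rand!(d, zeros(Float32, 3))  # Vector{Float32}: values converted on assignment
```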