Open JeffreySarnoff opened 3 years ago
Good question. The original PR may be of interest: https://github.com/SciML/Static.jl/pull/6. A comparable `StaticFloat` seems more natural to me, but I rarely use static floats, so I might just be unaware of some hurdle.
Yeah, more thought could certainly be used here.
My motivation for `StaticFloat64` was as a means of calculating blocking parameters at compile time in Octavian.jl. I'm not sure there's actually any other use case. Octavian won't be using them forever, so maybe it was a bad idea to move the code here if there aren't any other use cases.
We could define

```julia
Base.promote_rule(::Type{<:StaticFloat64}, ::Type{Float32}) = Float32
```

so that runtime computations happen using `Float32`.
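To make the mechanism concrete, here is a minimal self-contained sketch using a hypothetical `ToyStatic` type (not Static.jl's actual definition) showing how such a `promote_rule` would behave: once mixed promotions are sent to `Float32`, `promote` materializes the static value as a `Float32`.

```julia
# Hypothetical toy static float type, just to illustrate the promote_rule idea.
struct ToyStatic{N} <: AbstractFloat
    ToyStatic{N}() where {N} = new{N::Float64}()
end

# Send mixed promotions to Float32, and teach Float32 to convert the toy type.
Base.promote_rule(::Type{<:ToyStatic}, ::Type{Float32}) = Float32
Base.Float32(::ToyStatic{N}) where {N} = Float32(N)

promote(ToyStatic{1.5}(), 2.0f0)  # (1.5f0, 2.0f0): both ends up Float32
```

With this in place, arithmetic like `ToyStatic{1.5}() + 2.0f0` would run through the `Float32` code paths at runtime.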
Occasionally it's helpful to have a static float type for operations on `StaticInt`, like division. If there were a great need to adapt to 32-bit machines, we could change the definition to this:

```julia
@static if Int <: Int64
    const Float = Float64
else
    const Float = Float32
end

struct StaticFloat{N} <: AbstractFloat
    StaticFloat{N}() where {N} = new{N::Float}()
end
```

I'm a little nervous about this type of solution running into a bunch of unexpected issues, and I'm not sure what benefit it would have beyond consistency with `StaticInt`. But if there's a strong case for changing this, I'd be happy to review the PR.
> We could define `Base.promote_rule(::Type{<:StaticFloat64}, ::Type{Float32}) = Float32` so that runtime computations happen using `Float32`.
I thought we already promoted to `Float32`. Maybe I am misreading the intent of this package ...
I thought there was some general benefit to using `static(x)` rather than `x` when coding functions that participate in some kinds of SciML support. If that interpretation is correct, would we prefer both `StaticFloat64` and `StaticFloat32` (along with `StaticInt64` and `StaticInt32`)?
> My motivation for `StaticFloat64` was as a means of calculating blocking parameters at compile time in Octavian.jl. Not sure if there's actually any other use case.
If that is true, is this package (at least the float part) all about supporting Octavian.jl, and probably not relevant to other packages that do not have blocking logic?
The benefits of this package aren't SciML- or Octavian-specific. A general rule for when this package is useful: when you would need to use `Val` but also need the compile-time-known value to act like its dynamic counterpart. This is used extensively in ArrayInterface if you want some examples.

The reason we don't have distinct `Int32` and `Int64` types but we do have `Float64` is that base Julia assumes the type of `1` based on the system, but `1.0` is always `Float64`. Byte sizes shouldn't affect performance on different systems, because if the static value is preserved in the computation (e.g., `static(1) * static(2)`) then it is performed at compile time.
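A minimal sketch of why that works, using a hypothetical `SInt` (not Static.jl's `StaticInt`): the value lives in the type parameter, so the result of a product of two static values is determined by the input types alone and can be resolved during compilation, regardless of the value's byte size.

```julia
# Hypothetical static integer: the value is a type parameter, not a field.
struct SInt{N} end
SInt(N::Int) = SInt{N}()

# The product of two static ints is itself static; M * N is computed
# while specializing the method, not from any runtime data.
Base.:*(::SInt{M}, ::SInt{N}) where {M,N} = SInt{M * N}()

SInt(3) * SInt(4)  # SInt{12}()
```

Since `SInt{12}()` is a singleton carrying no runtime data, the "multiplication" leaves nothing to do at runtime.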
Using any of these comparisons (`==`, `!=`, `>=`, `<=`, `>`, `<`) with two `StaticBool` values results in a `StaticBool`. Using them with two `StaticInt` values or with two `StaticFloat64` values results in a `Bool`.

Is this the desired behavior? If it is, why the discrepancy?
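The asymmetry can be reproduced with hypothetical toy types (not Static.jl's actual definitions): `==` on two static Bools stays static, while `==` on two static Ints collapses to a plain `Bool`.

```julia
# Toy static types, just to demonstrate the two return conventions.
struct SBool{B} end
struct SInteger{N} end

# Comparisons on two static Bools stay static ...
Base.:(==)(::SBool{A}, ::SBool{B}) where {A,B} = SBool{A == B}()
# ... while comparisons on two static Ints return a plain Bool.
Base.:(==)(::SInteger{M}, ::SInteger{N}) where {M,N} = M == N

SBool{true}() == SBool{false}()  # SBool{false}(): not usable in `if ... end`
SInteger{1}() == SInteger{1}()   # true::Bool: safe in boolean contexts
```

Feeding the `SBool` result to `if` would raise a `TypeError` for a non-boolean condition, which is exactly the hazard discussed below.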
> If there was a great need to adapt to 32 bit machines
32-bit machines have 32-bit pointers, but should (hopefully) still support `Float64` arithmetic just fine. And of course, you can run 32-bit Julia binaries on your 64-bit machine, which is useful for debugging. The main limitation of 32-bit x86 machines/programs is that they can only address 8 registers per register type, which makes register-hungry code slow. But otherwise, they can make use of your CPU's capabilities, such as AVX(2/512), just fine.
Because static operations are happening at compile time, and they can promote to smaller types, I'd think if anything (especially for floating point) that we'd want to move to larger precisions if possible (and then demote when interacting with dynamic values). Similarly, unless you're counting on overflow/underflow, I don't see why we'd want `Int32`. I don't think it'd be unreasonable to use `Int128` for `StaticInt`.
> I thought we already promoted to `Float32`

You're correct.
I use `StaticBool` and especially `StaticInt` in a lot of my code. The asymmetry in comparison operators does seem unpleasant.
It is deliberate that the comparison operators return `Bool` for `StaticInt` and `StaticFloat64`, to avoid `TypeError: non-boolean (True/False) used in boolean context` errors. That is, we want to be able to pass `StaticInt`s and `StaticFloat64`s as arguments to functions in place of the dynamic variants and have the code still work as intended. Hence why we have separate comparison operators for returning `StaticBool`s when we deliberately want that behavior, e.g. for dispatch. Maybe the operators should also return `Bool`s when used on `StaticBool`s?
The package defines alternate comparison functions (`alt_fn`) for each Base comparison op. Here is a re-expression that restores symmetry.
```julia
#=
alternative comparison operators

| Base op | alt fn |
|---------|--------|
| ==      | eq     |
| !=      | ne     |
| <=      | le     |
| <       | lt     |
| >=      | ge     |
| >       | gt     |

for most signatures alt_fn aliases the Base op:
    alt_fn(x, y) = op(x, y)
iff each of x, y is <: Union{<static_types>}:
    alt_fn(x, y) = StaticBool(op(dynamic(x), dynamic(y)))
=#
for (alt, op) in [(:eq, :(==)), (:ne, :(!=)), (:le, :(<=)),
                  (:lt, :(<)), (:ge, :(>=)), (:gt, :(>))]
    @eval begin
        $alt(x::X, y::Y) where {X,Y} =
            ifelse(is_static(X) & is_static(Y), static, identity)(Base.$op(x, y))
        $alt(x::X) where {X} = Base.Fix2($alt, x)
    end
end
```
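The loop above leans on Static.jl's `is_static`, `static`, and `dynamic`. A self-contained toy of the same dispatch pattern, with hypothetical stub stand-ins for those three functions, shows the intended behavior: the result is static iff both arguments are static.

```julia
# Stub static machinery (hypothetical, not Static.jl's definitions).
struct TBool{B} end
struct TInt{N} end

is_static(::Type{<:TBool}) = true
is_static(::Type{<:TInt})  = true
is_static(::Type)          = false

dynamic(::TBool{B}) where {B} = B
dynamic(::TInt{N})  where {N} = N
dynamic(x) = x

static(b::Bool) = TBool{b}()
static(i::Int)  = TInt{i}()

# `eq` mirrors one generated method: static result iff both arguments are static.
eq(x::X, y::Y) where {X,Y} =
    ifelse(is_static(X) & is_static(Y), static, identity)(dynamic(x) == dynamic(y))

eq(TInt{1}(), TInt{1}())  # TBool{true}(): both static
eq(TInt{1}(), 2)          # false::Bool: mixed static/dynamic
```

Note that `ifelse` here selects between the *functions* `static` and `identity`, which is branch-free and friendly to constant folding when the types are known.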
If the static type structs were defined more similarly, it would be easier to reason about them. Is this sketched approach operationally correct (after any suggestions)?
```julia
const DynamicInt = Int128

"""
    StaticInt(N) -> StaticInt{DynamicInt(N)}()

Construct a *static* `value::Signed` that wraps `DynamicInt(value)`.
Use `StaticInt(N)` instead of `Val(N)` when you want it to behave like a number.
"""
struct StaticInt{N} <: Integer
    StaticInt{N}() where {N} = let DI = DynamicInt(N::Signed)
        new{DI::DynamicInt}()
    end
end

StaticInt(x::StaticInt) = x
StaticInt(x::Signed) = StaticInt{x}()
StaticInt(x::StaticBool{X}) where {X} = StaticInt(dynamic(x))
```
```julia
const DynamicFloat = Float64

"""
    StaticFloat(N) -> StaticFloat{DynamicFloat(N)}()

Construct a *static* `value::AbstractFloat` that wraps `DynamicFloat(value)`.
Use `StaticFloat(N)` instead of `Val(N)` when you want it to behave like a number.
"""
struct StaticFloat{N} <: AbstractFloat
    StaticFloat{N}() where {N} = let DF = DynamicFloat(N::AbstractFloat)
        new{DF::DynamicFloat}()
    end
end

StaticFloat(x::StaticFloat) = x
StaticFloat(x::AbstractFloat) = StaticFloat{x}()
StaticFloat(x::StaticBool{X}) where {X} = StaticFloat(DynamicFloat(dynamic(x)))
```
> The package defines alternate comparison functions (`alt_fn`) for each Base comparison op. Here is a re-expression that restores symmetry.
Makes sense to me to have the alternate comparison operators behave the same across the board and just return `Bool` on the standard ones.
> If the static type structs were defined more similarly, it would be easier to reason about them. Is this sketched approach operationally correct (after any suggestions)?
My only concern with `DynamicInt = Int128` is that we use it for `ArrayInterface.OptionallyStaticUnitRange`. Will we need to start adding checks for overflows?
Indeed .. the logic for `ArrayInterface.canonicalize` with `ArrayInterface.canonical_convert` would be out of sync; it appears some uses of the latter would want `sizeof(dynamic(static(5))) == sizeof(5)`.
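That size mismatch can be seen in plain Julia, without Static.jl: under `DynamicInt = Int128`, a dynamicized static value would be twice the width of the machine `Int` it replaces (on 64-bit systems).

```julia
# Int128 is 16 bytes everywhere, while Int is 8 bytes on 64-bit systems,
# so an Int128-backed dynamic value breaks sizeof parity with plain Int.
sizeof(Int128)                  # 16
sizeof(Int128(5)) == sizeof(5)  # false on 64-bit systems
```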
(a) Why is there no need for `StaticFloat32` or `StaticInt32`?
(b) Why is `StaticInt` idempotency (etc.) defined one way while `StaticFloat64` idempotency (etc.) is defined another?