Open grovesNL opened 1 week ago
I'm not sure this would have much impact. We don't actually store `f64`s anywhere but the single `granularity` value; they only exist otherwise in transient `Bounds` that get converted into `TreeBounds` once.
If you can show a real impact for a plausible use case I'm open to exploring abstracting this out somehow, but I'm wary of complicating the API without much benefit.
For the `u32` use case, I'm interested in preserving integer operations (`i64` would also be ok) through the stack as much as possible to keep calculations exact. It might not be a big deal with a granularity of 1, though.
For example, my original bounds start off as `u32` but I cast them to `f64` to insert into the tree. Depending on what happens to those bounds internally, I'd be slightly worried about small amounts of negative floating point error accumulating and causing maximum values for `Bounds` to shift slightly if they're truncated (`1.999999999f64 as u64` kind of thing). I could pad bounding boxes by a small amount and do integer intersection on the original `u32` bounds afterwards, but the padding would be an extra step vs. doing integer calculations throughout.
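The truncation worry above can be sketched in a few lines (a minimal illustration, not code from the crate): a value that should be an exact integer but carries a tiny negative error gets floored to the next integer down by an `as u64` cast, and a small padding nudge avoids it.

```rust
// Sketch of the truncation concern: `as` casts from float to integer
// truncate toward zero, so a value just below an integer boundary
// lands one integer lower than intended.
fn main() {
    let exact: f64 = 2.0;
    let with_error: f64 = 1.999_999_999_f64; // accumulated negative error

    assert_eq!(exact as u64, 2);
    assert_eq!(with_error as u64, 1); // the maximum bound shifts down by 1

    // The padding workaround mentioned above: nudge the value up before
    // truncating, then intersect on the original u32 bounds afterwards.
    let padded = with_error + 1e-6;
    assert_eq!(padded as u64, 2);
}
```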
I don't have a great case for `f32` - it would just be convenient to add it if we were generic over the number type anyway, because it would allow me to avoid some casts from `f32` to `f64` when creating `Bounds`.
> For the `u32` use case, I'm interested in preserving integer operations (`i64` would also be ok) through the stack as much as possible to keep calculations exact
All `u32`s can be represented in `f64` without loss of precision.
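That claim is easy to verify: `f64` has a 53-bit significand, so every 32-bit integer round-trips exactly. A quick standalone check:

```rust
// Every u32 fits in f64's 53-bit significand, so casting to f64 and
// back is exact for the whole u32 range (spot-checked at the edges).
fn main() {
    for x in [0u32, 1, 7, 1 << 31, u32::MAX - 1, u32::MAX] {
        let roundtrip = (x as f64) as u32;
        assert_eq!(roundtrip, x);
    }
}
```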
> I'd be slightly worried about small amounts of negative floating point error accumulating
The only operations that will ever be performed on `f64` inputs are subtraction of the origin and division by the granularity. At most I might switch to multiplication by the inverse of the granularity. If the granularity is 1 this is obviously lossless; less obviously, but still so, for the default 0.01, I believe.
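The transform described above can be sketched as a single helper (the function name here is illustrative, not from the crate). With a granularity of 1, subtracting two integer-valued `f64`s whose difference still fits in 53 bits is exact, and dividing by 1.0 changes nothing, so `u32` inputs stay lossless end to end.

```rust
// Hypothetical sketch of the only f64 math described above:
// subtract the origin, then divide by the granularity.
fn to_tree_coord(value: f64, origin: f64, granularity: f64) -> f64 {
    (value - origin) / granularity
}

fn main() {
    // Granularity 1: both operations are exact for integer-valued inputs,
    // so a u32 cast to f64 comes through unchanged.
    let x = 4_000_000_000u32 as f64;
    assert_eq!(to_tree_coord(x, 0.0, 1.0), 4_000_000_000.0);
    assert_eq!(to_tree_coord(100.0, 37.0, 1.0), 63.0);
}
```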
Yeah, `u32` with a granularity of 1 I'm not too worried about, because I think it will stay lossless (at least with the operations we currently do on `Bounds`). I'm not too worried about sticking to `f64` for now.
Also I know casts are extremely cheap but it's possible this might be noticeable for my use case (currently bottlenecked by updates and intersection testing on the critical path). I'd have to compare performance profiles to check this though.
Currently `f64` is expected for all bounds, but it would be useful to support other 32/64-bit float types (`f32`) and integer types (`i32`, `i64`, `u32`, `u64`). In my case I'm mostly interested in `f32` and `u32`. I can work around it by representing them with `f64`, but I'd expect some benefits from keeping the original type throughout.

I tested out switching over to `u32` by hand and everything seems to work fine. The only slightly weird thing is calculating new subdivisions, where we might end up with a negative minimum bound, so being able to saturate differently depending on the number type would be useful (e.g., `saturating_sub` for integer types, and let floats saturate to negative infinity as usual).

If there's interest in this, we could probably be generic over the number type (defaulting to `f64`) with a trait for saturating it during subdivisions.
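The proposed trait could look something like this (a minimal sketch; the trait and method names are made up for illustration, not from the crate): integer types clamp at zero via `saturating_sub`, while floats just subtract and are allowed to go negative.

```rust
// Hypothetical trait for number-type-specific saturation during
// subdivision, as proposed above. Names are illustrative only.
trait BoundsNum: Copy {
    fn sat_sub(self, rhs: Self) -> Self;
}

impl BoundsNum for u32 {
    fn sat_sub(self, rhs: Self) -> Self {
        // Clamp at 0 instead of wrapping on underflow.
        self.saturating_sub(rhs)
    }
}

impl BoundsNum for f64 {
    fn sat_sub(self, rhs: Self) -> Self {
        // Floats already "saturate" toward negative infinity on their own.
        self - rhs
    }
}

fn main() {
    assert_eq!(3u32.sat_sub(5), 0); // would underflow as plain u32 math
    assert_eq!(3.0f64.sat_sub(5.0), -2.0); // floats keep the negative value
}
```

A blanket default of `f64` would keep the current API unchanged while letting integer users opt in.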