I believe the main rationale behind it was: everything that touches the GPU tends to be `float32`, because its accuracy is "good enough" whilst needing only half the memory bandwidth. This means moving things from RAM into the GPU becomes way cheaper. When you then want to multiply `dt` with something (a coordinate, for example) within your `Update` function, it's easier if everything is `float32` rather than having to convert from `float64` first.
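To make that concrete, a tiny hypothetical example (the component type and field names are made up for illustration, not engo's actual API):

```go
package main

import "fmt"

// Hypothetical component type, sketched for illustration.
type Position struct{ X, Y float32 }

// With a float32 dt, the math composes without casts:
func updateF32(p *Position, vx, vy, dt float32) {
	p.X += vx * dt
	p.Y += vy * dt
}

// With a float64 dt, every use inside a float32 world
// needs an explicit conversion:
func updateF64(p *Position, vx, vy float32, dt float64) {
	p.X += vx * float32(dt)
	p.Y += vy * float32(dt)
}

func main() {
	p := Position{}
	updateF32(&p, 1, 0, 1.0/144.0)
	updateF64(&p, 1, 0, 1.0/144.0)
	fmt.Println(p) // X is now two frames' worth of movement
}
```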
Whether this should stay or not: I don't know. I'm not familiar enough with game development. But that was the rationale 6-8 years ago.
Guesstimating here: when running at, let's say, 144 FPS, that is about one frame every 0.007 seconds. For `dt` to be able to distinguish between frames at that rate, it needs three digits of "accuracy" after the decimal point. Since `float32` gives roughly seven significant decimal digits, that leaves roughly four digits before the decimal. Meaning a game clock accumulated as `float32` loses frame-level accuracy after about 3 hours of gaming (9999 seconds). One might argue that `float64` makes more sense.
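A quick sketch (not engo code) illustrating that drift when a game clock is accumulated as `float32`:

```go
package main

import "fmt"

// Demonstrates how an accumulated float32 game clock drifts:
// once the total grows large, each added dt gets rounded to
// the clock's current (coarser) precision.
func main() {
	const dt = float32(1.0 / 144.0) // ~0.007s per frame at 144 FPS

	var elapsed32 float32
	var elapsed64 float64
	frames := 144 * 60 * 60 * 3 // three hours of frames

	for i := 0; i < frames; i++ {
		elapsed32 += dt
		elapsed64 += float64(dt)
	}

	fmt.Printf("float32 total: %.3f\n", elapsed32)
	fmt.Printf("float64 total: %.3f\n", elapsed64)
	fmt.Printf("drift: %.3f seconds\n", elapsed64-float64(elapsed32))
}
```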
Do you think this is something that maybe should be part of the Roadmap to v1.1? If nothing else, we could perhaps create benchmarks measuring systems consuming `float32`, and then re-benchmark with `float64`. I once heard, anecdotally, from a C++ developer that a 64-bit processor handles 64-bit primitives more efficiently because it performs a single read or write operation instead of two. That was over ten years ago, though, and I never had the opportunity to test it; it could be false.
I would not mind creating that benchmark. I could probably create a contrived example, but what would you expect a system that needs to be performance-tested to look like?
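Something like the following, perhaps? A minimal sketch using Go's standard `testing` package; the entity types and update functions here are hypothetical stand-ins, not engo code:

```go
package enginebench

import "testing"

// Hypothetical entity data; a real benchmark would exercise
// an actual system from the engine.
type entity32 struct{ x, y, vx, vy float32 }
type entity64 struct{ x, y, vx, vy float64 }

func update32(entities []entity32, dt float32) {
	for i := range entities {
		entities[i].x += entities[i].vx * dt
		entities[i].y += entities[i].vy * dt
	}
}

func update64(entities []entity64, dt float64) {
	for i := range entities {
		entities[i].x += entities[i].vx * dt
		entities[i].y += entities[i].vy * dt
	}
}

func BenchmarkUpdateFloat32(b *testing.B) {
	entities := make([]entity32, 10000)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		update32(entities, 1.0/144.0)
	}
}

func BenchmarkUpdateFloat64(b *testing.B) {
	entities := make([]entity64, 10000)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		update64(entities, 1.0/144.0)
	}
}
```

Run with `go test -bench=.`. The interesting comparison is throughput once the entity slice outgrows the CPU caches, since the original bandwidth argument is about memory traffic rather than ALU speed.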
The rationale is exactly as @EtienneBruines said: since the GPU only uses `float32`, we just kept everything in `float32`. My thought there is: if you can only position things with `float32` accuracy anyway, why bother working with higher-accuracy `float64` only to end up converting it to a `float32` for things like `Position`, `Width`, etc. anyway? If time were a `float64`, you'd end up with systems where you'd have to convert it to `float32` every time. What's the point of having `dt float64` if I have to do `var dx = speed * float32(dt)` every time I use it anyway?
That said, I could see the advantages of keeping everything (`Position`, etc.) as `float64`s and only converting before sending to the GPU. At higher frame rates, the better accuracy could really be worth it, and if it benchmarks better, then that's even better. I'd be for adding it to the roadmap.
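A rough sketch of what that boundary could look like; the `Position` type and `toVertexData` helper are hypothetical, not existing engo code:

```go
package main

import "fmt"

// Hypothetical: simulation state kept at float64 precision.
type Position struct{ X, Y float64 }

// toVertexData converts to float32 only at the GPU boundary,
// e.g. right before filling a vertex buffer.
func toVertexData(positions []Position) []float32 {
	out := make([]float32, 0, len(positions)*2)
	for _, p := range positions {
		out = append(out, float32(p.X), float32(p.Y))
	}
	return out
}

func main() {
	ps := []Position{{X: 1.5, Y: 2.25}}
	fmt.Println(toVertexData(ps)) // [1.5 2.25]
}
```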
Just taking a walk through the code-base to familiarize myself, and I wanted to better understand why the `System` interface's `Update` method takes a `float32` instead of a `time.Duration`.

Off-handedly, I don't suppose that being more idiomatic to Go necessarily buys you anything extra, but you would probably expect game developers coming from more classic C/C++ environments to expect a `double`, which would be equivalent to `float64` (see Game Programming Patterns: Passing Time, which you're probably already familiar with, but this is a reference for others who might stumble upon this question later along their own game-dev education).

If it was a "just cuz" decision, that's totally valid. I'm really just curious.
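For context, the signature under discussion looks roughly like this; the shapes below are a sketch from memory, so check engo's `ecs` package for the authoritative definition:

```go
package sketch

import "time"

// A sketch of the interface under discussion.
type System interface {
	// Update is called once per frame, with dt being the time
	// elapsed since the last frame, in seconds.
	Update(dt float32)
}

// The alternative the question raises would look like this,
// trading convenient arithmetic for a more idiomatic Go type.
type DurationSystem interface {
	Update(dt time.Duration)
}
```

With `time.Duration`, each system would typically call `dt.Seconds()`, which returns a `float64`, before doing any math, which is exactly the conversion friction described in the replies above.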