As a workaround you can use this plugin I made: origin_shifter.zip Add an OriginShifter node to the node you want to keep centered. Set world_node to the root of everything that should be shifted.
Thank you! This is essential for @ivoyager going forward. We do origin shifting, dynamic adjustment of camera `near` and `far`, and other tricks. As described for Kerbal Space Program, however, it becomes a real monster trying to find and patch parts that don't work due to low precision. (But anyone is welcome to come over and ask me how to do it, in any case.)
I don't know for sure if this has something to do with this issue, but in my case I'm working with PS1 game files. PS1 hardware has limited float precision, so levels have to be integer-sized, and a lot of variables too. When importing them into Godot and trying to mimic the original game movements, the kinematic body's move_and_slide function works with much more precise floats. What about a ProjectSettings option where the user can set which precision they want for their game? Like double_precision: (slider from 0...X)
@nonunknown This can't be a slider in project settings due to it being a compile-time setting.
Has there been any progress on this issue?
@MCrafterzz There is a PR waiting to be reviewed for 4.0. You can find it here.
Since Godot 4.0 is going to focus on 64-bit devices first and foremost, why not enable double precision by default and provide a way to build the engine with single precision instead? I'm not sure about the impact on Android devices though. (I presume popular iOS devices will be powerful enough in 2021 for this not to be an issue there.)
If the performance difference is determined to be very small, we could indeed do that, yes.
I'm most concerned about phones, since SIMD on 32-bit ARM chips only works with single-precision, which would mean performance with doubles on 32-bit ARM devices would be very slow. 64-bit ARM can do doubles fine as far as I can tell, but I haven't actually tested/benchmarked it.
On the CPU, for the most part, doubles are roughly as fast as single-precision floats. The legacy x87 FPU on x86 does not have separate circuits for single-precision floats; it elevates all floating-point types to an 80-bit extended-precision format internally and truncates the result.
This is a bit misleading. Compilers on platforms that have at least baseline SIMD support will preferentially use SIMD instructions to replace ye olde style x87 floating-point calculations. Try compiling a simple program with optimization and examine the assembly.
In practice, equally or more relevant will be the extra memory, because cache misses / memory bandwidth have become more and more of a bottleneck in modern CPUs. Sometimes dealing with twice the memory can simply take twice as long.
The speed of the operations / memory access is why even modern GPUs still use 32-bit operations on 64-bit systems. Going further, mobile GPUs will use lower precision than 32 bits for the same reasons, as low as 10 bits.
That said, I'm not against having a float / double compilation switch, especially if it is easy to do, but it might be wise to temper expectations in terms of the relative performance of the two approaches. Of course, the real-world effect depends on how much floating-point calculation is a bottleneck in any particular game. You may find that in practice only, say, 5% of CPU time is spent in floating-point calculations, in which case doubling their cost will only result in a 5% drop in performance.
For many reasons (including this) engines often use alternative approaches such as shifting the world origin: https://docs.unrealengine.com/en-US/Engine/LevelStreaming/WorldBrowser/index.html
This can include approaches such as splitting the world into chunks (often addressed by integers) and using floats to reference the local area within these chunks.
Another approach is the use of fixed point. But these alternatives are considerably more involved than flipping from float to double.
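To make the chunk-plus-local-offset idea above concrete, here is a minimal sketch; the names and the chunk size are illustrative only, not an existing engine API:

```cpp
// Chunked coordinates: an integer chunk index (exact at any distance) plus a
// small float offset inside the chunk (precise because it stays near zero).
#include <cmath>
#include <cstdint>

constexpr double CHUNK_SIZE = 1024.0; // one chunk spans 1024 x 1024 units

struct ChunkedPosition {
    int64_t chunk_x = 0, chunk_y = 0;     // which chunk we are in
    float local_x = 0.0f, local_y = 0.0f; // offset within that chunk

    // Build from an absolute coordinate (computed in double, stored as above).
    static ChunkedPosition from_absolute(double x, double y) {
        ChunkedPosition p;
        p.chunk_x = static_cast<int64_t>(std::floor(x / CHUNK_SIZE));
        p.chunk_y = static_cast<int64_t>(std::floor(y / CHUNK_SIZE));
        p.local_x = static_cast<float>(x - p.chunk_x * CHUNK_SIZE);
        p.local_y = static_cast<float>(y - p.chunk_y * CHUNK_SIZE);
        return p;
    }
};
```

Physics and rendering then only ever see the small local offsets (plus the offsets of neighbouring chunks), so single-precision floats stay accurate no matter how far the chunk is from the world origin.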
Also note that aside from flipping float to double there may be some non-obvious complications, such as the values used for epsilons, alignment, and files.
I did mention SIMD in my post. SIMD instructions for processing a whole Vector3 or Quat at a time need 256-bit instructions, which means Intel CPUs from 2013 or later, and AMD CPUs from 2015 or later. See also #290. Note that I'm not really concerned about cases of performing the same operation on more than one vector at once, since I don't think such cases are common in transform/physics math (though they are extremely common in your test case of arrays). By the way, your example output doesn't match the example code ("timing SIMD" vs "timing ranged").
some non-obvious complications, such as the values used for epsilons, and alignment.
Single-precision epsilons can still work for doubles, only breaking in the extreme cases. Alignment changing (and therefore the entire ABI) is expected.
Hi all, seeing as this thread seems to be not entirely technically focused, I hope this comment isn't out of the scope of discussion for proposals. If it is, please do delete it and I will raise it somewhere more appropriate. As proposals seem to be guided by user interest, I figured it might be worthwhile talking about how, as a hobbyist gamedev, I'd find the inclusion of double-precision floats beneficial, or at the very least appealing.
A lot of discussion on the issue centres around how any projects that would need it would inevitably be big open-world games or simulations, both of which are outwith the scope of indie developers; and that any such projects would be better off using world segmentation and co-ordinate shifting. While I understand both of these arguments, I feel that they somewhat miss the point of why double-precision floats would be beneficial.
Firstly, the assumption that large open worlds cannot be accomplished by indie devs is not true. There are already examples of extremely large environments made by small teams on the market that have been successful - For instance Kenshi, Outward, Astroneer, Space Engineers and the aforementioned Kerbal Space Program. While double-precision floats are not going to resolve the inherent logistical issues in making large open worlds, not having to use custom code or workarounds to achieve them is only going to be of benefit.
Secondly, simply not having to worry about world size, co-ordinate precision, jitter, z-fighting, or other related issues is a benefit that isn't limited to open-world titles. Such issues can happen even in smaller environments, and giving developers the option to eliminate them, at a cost to performance, may be worth the tradeoff.
My personal interest does not so much come from wanting to create huge, sprawling, AAA open world sandboxes, but simply being able to make large environments if I choose to do so without the added complications of having to endure precision falloff, or having to roll my own means to mitigate the issues this introduces. Seeing @reduz 's recent work towards scalable realtime global illumination for big open-world environments suggests there's at least developer intent to make Godot suitable for such games. As Godot focuses on solutions that are easy to manage by the end-user first and performance second, double-precision floats seem like a reasonable fit for this philosophy, provided it doesn't preclude using single-precision if need be.
@Ophiolith We will definitely support double-precision floats in 4.0, but it will most likely be a compile-time option to avoid performance issues on slower hardware (especially mobile/Web platforms).
Hmm, this confuses me, because looking at the PR it doesn't seem like reduz really wants double-precision floats. Also, how would the compile-time option work?
Hmm, this confuses me, because looking at the PR it doesn't seem like reduz really wants double-precision floats.
He told me on IRC he's interested in having Godot support double-precision floats, but not by default for performance reasons.
Also, how would the compile-time option work?
As far as I know, it's more or less a matter of fixing `REAL_T_IS_DOUBLE` so it can be used reliably. This define already exists but it currently doesn't work as intended.
Ok, so there won't be an option in project preferences/settings? You would have to enable it from code?
@MCrafterzz You would have to recompile the editor and project templates for this, as it's not technically possible to swap C++ types without recompiling the engine.
Since it's a relatively advanced use case, I don't think this will be too much of an issue. Still, nothing prevents a third party from distributing pre-built binaries with double precision support.
Ok thanks for answering my questions. As long as it's clearly documented it shouldn't be a problem :D
@Calinou Thank you for the response! I wasn't aware there was already an active push for a compile time option for this as part of 4.0. I'll be sure to test and give feedback when it's up and running :)
Wouldn't it be easier for the user to implement something like, for example, Vector3-double and Vector3-single and the like, and have an option in project settings that defaults to single? I don't know much about the inner workings of the engine, so I'm not sure if it is possible to make something like that and get the benefits of both worlds. I mean, my idea is to have 2 classes with the same name, just from 2 different modules, and the engine would only load one module or the other depending on the setting.
@mrjustaguy As stated above, it's not technically possible to swap C++ types without recompiling the engine.
@Calinou said:
Since it's a relatively advanced use case, I don't think this will be too much of an issue. Still, nothing prevents a third party from distributing pre-built binaries with double precision support.
Or, similar to how you can currently choose between the Mono build of the engine and the leaner GDScript-only one. Though I'd expect this to only happen if the Mono build becomes the official build and the other one is dropped, so that there would be capacity to maintain an extra build. That is just speculation on my part, though.
Though I'd expect this to only happen if the Mono build becomes the official build and the other one is dropped so that there would be capacity to maintain an extra build.
Non-Mono builds are here to stay, they're smaller and don't require any system dependencies to be used (especially on Windows) :slightly_smiling_face:
@Megalomaniak Well, the build matrix isn't as big as you'd think, because there isn't much of a point of making 32-bit builds with doubles. So on Windows there would be 6 builds: 32-bit Single, 32-bit Single Mono, 64-bit Single, 64-bit Single Mono, 64-bit Double, 64-bit Double Mono, and on Mac/Linux there would be 4: Single, Single Mono, Double, Double Mono (since Mac/Linux official builds will be 64-bit-only for 4.0).
EDIT: It has since been pointed out to me that if we want doubles on WebAssembly, we will need to support 32-bit builds with doubles, because there is no 64-bit WebAssembly.
EDIT: Mac builds would have 4 binaries but 8 build configurations if you count both Intel x86 and Apple Silicon ARM (Single x86, Single Mono x86, Double x86, Double Mono x86, Single ARM, Single Mono ARM, Double ARM, Double Mono ARM). If ARM Linux or ARM Windows ever take off then this would increase the combinations there too.
Right, but I meant pre-built double-precision builds, which as far as I can tell from this issue/topic aren't going to be a thing? Just the support to build it yourself with a compile-time flag. I suppose there will always be others building for those who don't want to deal with it themselves, though.
I think I'd honestly be okay with double precision being relegated to a compile-time flag. It seems to be something the vast majority of indie developers would not have much use for, and it may just end up with a lot of users downloading it by mistake and wondering why their memory footprint is insane and why their game can't run on mobile devices.
@Megalomaniak I'm not ruling out official double precision builds, but we'll cross that bridge when we come to it (and if the demand is high enough to make it worth it).
@Ophiolith It wouldn't actually increase the memory footprint very much, because only a small fraction of memory is used for floats. A lot of the memory is used for other things such as textures and other assets. The main concern is with reduced performance on old devices, especially 32-bit devices, but also older 64-bit devices without good vector instruction sets.
I can see use cases for double precision in indie games, but yeah, there aren't many. Mainly for those that use procedural generation to make gigantic maps, and space games could benefit from it too in some cases.
I just googled something and this thread came up. I wanted to add my comment for what it's worth. I currently use DirectX only because no freely available game engine supports double-precision coordinates. I know about all the re-basing tricks. However, for planet-sized stuff that uses chunking, you almost have to support some sort of variable chunk size, because if you don't, as you zoom out away from the planet your chunks become tiny (like a couple of triangles) and you end up needing millions of chunks. This means that a single chunk can and does easily become larger than the range/precision of regular floats. I have such a chunking system for my terrain, but it uses double precision, and I don't really want to go through the pain of trying to implement yet more re-basing hacks to avoid using doubles.
In addition, it's just overall convenient to have a planet-wide coordinate system with the origin in the middle. I still use re-basing, but on a larger scale. For instance, a solar system has its origin at the sun, or in the case of a binary system, somewhere between the suns. That's not such a big deal because there is no contiguous land between planets and you don't really need high precision there. But for a single world, having unified coordinates is super nice.
To be clear I'm only talking about CPU side. You still need to convert to float once stuff gets to the GPU. You just need a good LOD system to avoid Z-fighting.
Is it still the case and will it be implemented as the part of the new Godot 4 physics system? @reduz @pouleyKetchoupp
Is it still the case
Yes.
will it be implemented as the part of the new Godot 4 physics system?
We don't know for certain whether https://github.com/godotengine/godot/pull/21922 will be merged for 4.0.
This is not really related, but I don't know a better place to put this: currently in the 3D view, the editor camera can't zoom out further than 10,000 units (as defined here). This may be limiting in certain scenarios, e.g. I'm making large planets that are thousands of units in radius and the camera limit is bothering me. Is there a way to make this configurable without recompiling the engine? That'd be the easiest workaround for me right now.
Anyway, I'm really looking forward to double-precision support, be it an official build or just a compile-time option; compiling the engine is easy enough. I wonder if it will land in 4.0 or at least 4.1…
The limit is hardcoded right now, so you have to recompile the editor to get rid of it. (You don't need to recompile export templates since this is editor-only.)
Also, you'll have to increase the editor camera's Far property above 10000 to be able to zoom out further due to https://github.com/godotengine/godot/pull/39743 (which is in effect since 3.3). To do so, at the top of the 3D editor viewport, use View > Settings... and adjust View Z-Far.
@Hoimar For some context, the original intent was a) for that change to be 4.0-only, and b) for it to be paired with the dynamic infinite 3D grid, since an infinite grid tempts an infinite zoom which crashes the editor (although a zoom to millions won't). Since the dynamic infinite 3D grid was backported, I suggested that the zoom limits be backported too.
On one hand, zooming to such extreme values is misleading without double support. On the other hand, 3.x will never get double support, so perhaps we should allow it even if it's misleading. On the first hand again, it's not too hard to work around the issue: you can always compile your own version, and depending on your use case you can probably just temporarily scale down your world when using the editor, or permanently scale it down and declare that 1 unit = 1 km (or 1 Mm, etc.). (I usually suggest a 1 unit = 1 meter scale, but this is less feasible when trying to make planets in 3.x.)
So I don't know what to do, if it should be changed, and if so, what the new limit should be. Here is the line that would need to be changed. As far as I'm concerned, pretty much any limit between 10k and a billion or so would make sense. One number worthy of mention would be the point where floats lose integer precision, approximately 10 million (the existing 10k limit was chosen for approximately 0.001 precision, at 10 million the precision is only 1 unit).
I just wanted to add a bit of context on why this would be an incredible feature. I'm another one of those game devs working on a space game who initially set out to use UE4, then Unity, then Godot, but because my simulation is meant to give an accurate depiction of the N-body problem, none were sufficient. In my experimenting I've found that calculating something like the mass of the sun does actually work, because it's just a big number to the left of the decimal, which seems to work fine in all engines. It's when you want to calculate the amount of acceleration two bodies might cause each other based on their Hill sphere that you run into issues. Enter the gravitational constant, thanks Newton: 6.674 x 10^-11.
The formula for universal gravitation is F = G (m1 m2) / r^2, where G is the gravitational constant. 0.00000000006674 is a number that engines using floats like to round to 0, and any arithmetic using such a number usually ends up being 0.
Sure you can fake it with all kinds of tricky math, but why fake gravity when doubles will do just fine?
@Tronological: Wow, thanks for pointing out why my space game loved to spit out nonsensical results for orbit duration parameters... it never occurred to me that it might round the G down to 0...
Two weeks of debugging before I realized what was going on, then a few days of testing in all the engines to notice this and realize that the only way to simulate gravity in any game engine is to fake it unless they use Double-Precision.
Funnily enough, I found the error when trying to build simulations of my orbits using a line to display the path through space, and it never worked. For some reason a similar function will in fact make the object move like an orbit in the game, and I found that's because of how floats are actually handled in the CPU; the problem occurs when you intend to output the number. It was like Schrödinger's cat: as long as you didn't try to observe the float, it would be fine.
Well, fine-ish. Knowing that they weren't being used properly, I also realized that what I was seeing wasn't actually a realistic orbit but a truncated-float version of what an orbit might be. On top of that, I couldn't get my long-term simulation of any particular orbit to work because the math just would not work out.
If double precision (and possibly even more) were implemented, it would make Godot usable for professional simulations, as many of the Vector2 (and likely Vector3) problems would no longer be present, such as https://github.com/godotengine/godot/issues/50251 (which also happens when rotating Vector2s) and transforms not multiplying correctly. It would make Godot suitable for a range of professional applications, and I believe this would be highly beneficial for the project.
My goal is to eventually create a universe simulator so I'm pretty excited to see what comes out of this.
As a somewhat tangential issue: many of the built-in functions in C# (Math.Cos, Math.Sin, etc.) use doubles instead of floats. Passing Godot outputs to these functions is fine for 99.9% of use cases, but it can result in some notable fatal errors because of rounding:
var v1 = new Vector2(0.5421136f, -0.8403052f);
var c = 0.78f;
var v2 = new Vector2(0.9763634f, -0.2161357f);
var ansAsFloat = v1.Rotated(c).Dot(v2);
var ansAsDouble = (double)v1.Rotated(c).Dot(v2);
GD.Print("answer as float: ", ansAsFloat);
GD.Print("answer as double: ", ansAsDouble);
GD.Print("inverse cosin of float answer: ", Math.Acos(ansAsFloat));
GD.Print("inverse cosin of double answer: ", Math.Acos(ansAsDouble));
will output:
answer as float: 1
answer as double: 1.00000011920929
inverse cosin of float answer: NaN
inverse cosin of double answer: NaN
This is partially a user error of passing a float to a function that takes a double and is bound to -1 <= x <= 1, but it is sneaky and frustrating. Having proper double support would make for a better C# story for these cases.
@DavidJVitale I don't think doubles will help with this particular problem. Instead of a value of 1.000001, it would end up being 1.00000000000001 or something, which is still beyond the acceptable -1 <= x <= 1 range. To fix this, you need clamping.
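For reference, a minimal sketch of the clamping workaround (illustrative C++, not Godot API; the same idea applies with Mathf.Clamp in Godot's C# API):

```cpp
// Clamp a dot product of unit vectors into [-1, 1] before acos(), so that
// rounding error can't push the value outside the function's domain.
#include <algorithm>
#include <cmath>
#include <cstdio>

double safe_angle_between(double dot_of_unit_vectors) {
    return std::acos(std::clamp(dot_of_unit_vectors, -1.0, 1.0));
}

int main() {
    double dot = 1.00000011920929; // slightly above 1 due to float rounding
    std::printf("%f\n", safe_angle_between(dot)); // prints 0.000000 instead of NaN
    return 0;
}
```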
I can look into clamping for sure as a workaround. I guess I assumed if a `Vector2` had `double` values, the `Rotated()` and `Dot()` functions would work with greater precision for more accurate answers. It seems the current `Dot()` implementation works flawlessly at never giving me a float that evaluates to > 1 if all my vectors are unit vectors. It's just the conversion to a double that can cause a > 1 situation with the current implementation. I don't think a "double native" `Dot()` implementation would lead to this situation.
It's just the conversion to a double that can cause a >1 situation with the current implementation.
This doesn't happen. The reason your float value appears to just be `1` is that it's not displaying the unreliable digits. It's still `1.00000011920929` before casting, but only the `1.000000` part is displayed when printing.
GD.Print(ansAsFloat.ToString("0.############"));
Thank you @aaronfranke, you are correct. I did some digging: the binary representation of `ansAsFloat` is `00111111100000000000000000000001`, which is > 1 (by one lousy bit ;) )
So this is unrelated to doubles, sorry to bring this thread off-topic.
Is this proposal resolved now that https://github.com/godotengine/godot/pull/21922 is merged?
@Hoimar Not yet, there is still more work to do. One thing that still needs to be done is to add a CI build for doubles, so that we can ensure it keeps compiling. Another is that there are still things that don't work when doubles are enabled; those bugs need to be fixed. There is also still no work on the rendering side; currently values are just cast to float, but ideally we would have some kind of camera-centric rendering.
I have a branch ready to add a CI build but I haven't submitted a PR yet because there is one particularly fragile test case that breaks and causes the CI to fail. I've pinged the author of that test to ask if we can improve it, but I think that it might just need to be temporarily removed.
EDIT: Actually, I looked into this branch again today. It's a PR that would depend on a PR that would depend on a PR... so I'm waiting for other things to be merged in Godot first.
EDIT 2: The CI changes have been merged.
On the issue mentioned by @Hoimar and @Calinou about the editor's maximum 10k view distance: could that be exposed as a variable set from some API method (maybe `SomeSingleton.set_editor_far_limit(...)` or something), so that less experienced users would never see this option in the interface and therefore won't break anything by accident, but those who want to take the risk and face the consequences could extend the range using a plugin?
I just checked the `3.x` source code and it already allows you to set the camera Far distance up to 1,000,000. As for the Camera node, you can enter a larger value manually in the inspector thanks to the `or_greater` property hint.
As a reminder, when you design an 8000×8000 flat scene, the view distance you need to set to avoid clipping when viewing the scene from one of its corners is 8000 × sqrt(2) (~11314). If you're designing an 8000×8000×8000 scene, the view distance you need to set to avoid clipping when viewing the scene from one of its corners is 8000 × sqrt(3) (~13857).
Scenes larger than 8000×8000×8000 units can be viable with single-precision floats, but only if you don't need precision finer than about 0.001 units. This is generally fine for top-down or third-person games, but it may not suffice for first-person games where you need interaction with nearby objects.
@aaronfranke Thoughts on closing this and making a new proposal for the graphics API changes needed to support this?
@fire I think it still makes sense to have this issue open for now, but we could open further proposals too. This proposal can be kept as a place for misc discussion of double-precision floats in Godot. There are still bugs that need fixing, and lots of testing that needs to be done, and probably people will have further questions.
I've read every comment on every thread related to this issue and concluded that I'd better not get involved in any capacity whatsoever.
That being said, is there anything I can get my hands on and break - in the name of progress?
This proposal is a summary and formalization of various past discussions about double support, especially issue #288 which this proposal directly supersedes.
Describe the project you are working on:
This proposal affects any game working with large-scale environments in 3D, meaning any environment larger than a few kilometers. This proposal is especially important for games taking place in the vastness of space. The problem technically also exists in 2D, but it is far less of an issue.
Describe the problem or limitation you are having in your project:
Any 3D game in Godot with large scale environments will begin to experience jitter once the player moves more than a few kilometers away from the world origin. The problem is most noticeable in FPS games, since objects tend to be close to the camera, and jitter is more clearly visible. This is caused by the limitations of single-precision floats. There are some workarounds for some use cases, but the only proper fix is one that is done on the engine level.
Describe the feature / enhancement and how it helps to overcome the problem or limitation:
The core issue is that single-precision floating point numbers have a limited amount of precision, which is unsuitable for games that use large scales. Single-precision floats have 23 significant binary digits (they are 32-bit; 8 of the bits are used for the exponent and 1 bit is used for positive/negative). First-person shooter games depend on the world having better than about half a millimeter of precision. The formula 0.0005 * (2^23) shows us that errors big enough to notice appear approximately a few kilometers away from the world origin.

The solution, simply put, requires us to add more significant digits. Double-precision floats are 64-bit, with 52 of those bits being significant binary digits. This is 29 more significant binary digits than single-precision floats, which increases the maximum usable area by a factor of about half a billion, to about 2 Tm (2 billion km). We go from a fifth the length of Manhattan to an area greater than the orbital radius of Saturn, more than enough for 99.99% of games. (Of course, you don't have to use all that area to see a benefit; any game larger than a few kilometers will benefit from doubles.)
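To make the precision falloff tangible, here is a small standalone sketch (plain C++, not engine code) that prints the spacing between adjacent single-precision floats at increasing distances from the origin, i.e. the smallest representable movement at that coordinate:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Distance from the origin, in world units (e.g. meters).
    const float distances[] = {100.0f, 5000.0f, 100000.0f, 10000000.0f};
    for (float distance : distances) {
        // Gap between this value and the next representable float above it.
        float step = std::nextafterf(distance, INFINITY) - distance;
        std::printf("at %12.0f units, precision is ~%g units\n", distance, step);
    }
    return 0;
}
```

Around 5 km the step is already ~0.0005 units (half a millimeter), matching the 0.0005 * (2^23) estimate above, and at 10 million units it grows to a full unit.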
Describe how your proposal will work, with code, pseudocode, mockups, and/or diagrams:
For now, the plan is for this to be a completely optional feature which is not enabled by default, to maintain high performance on older devices. Anyone who needs double support can compile their own version of the engine from source. The rest of this section describes the details of how this will work.
C++ has a keyword called `typedef` that allows aliasing of types. Godot already uses this for the `real_t` type used for vectors and many parts of the engine. Eventually, users will be able to compile the engine with `real_t` being aliased to `double`, which means that all vector math is done with doubles, including transformations and all physics code. Pull request #21922 is a stepping stone towards double support, fixing many of the issues that currently exist when trying to compile with doubles.

On the CPU, for the most part, doubles are roughly as fast as single-precision floats. The legacy x87 FPU on x86 does not have separate circuits for single-precision floats; it elevates all floating-point types to an 80-bit extended-precision format internally and truncates the result. Doubles take up twice the amount of memory, which can be an issue for architectures not optimized for moving around pieces of 64-bit data (such as 32-bit architectures), but otherwise the total memory usage of the engine does not change very much. There is also the matter of SIMD vector instructions, designed to perform math in parallel. Godot does not currently use these, but if it did, full acceleration would require AVX2 (256-bit, for 4 × 64-bit), which means Intel CPUs from 2013 or later, and AMD CPUs from 2015 or later. A Windows 11 compatible x86 CPU will have AVX2.
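As a rough illustration of the `typedef` mechanism described above (a simplified sketch, not the exact engine source):

```cpp
// If REAL_T_IS_DOUBLE is defined at compile time, every math type built on
// real_t switches to double precision; otherwise it stays single precision.
#ifdef REAL_T_IS_DOUBLE
typedef double real_t;
#else
typedef float real_t;
#endif

// Example of a math type written against real_t rather than float/double.
struct Vector3 {
    real_t x = 0, y = 0, z = 0;

    real_t dot(const Vector3 &other) const {
        return x * other.x + y * other.y + z * other.z;
    }
};
```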
It's important to note that doubles are not practical on the graphics card. Consumer GPUs heavily throttle double-precision throughput (Nvidia in particular limits it on non-Quadro cards), so rendering has to be done with single-precision floats. The approach used by all games that use doubles is to do all of the CPU-side math in doubles, then take all coordinates and convert them to be relative to the camera, then pass this information to the GPU. The exact details of this will be left to @reduz to deal with.
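A minimal sketch of that camera-relative hand-off (illustrative types and names only, not the engine's rendering code):

```cpp
// Positions are kept in doubles on the CPU. Just before uploading to the GPU,
// subtract the camera position while still in double precision, THEN narrow
// to float: the small relative offset fits comfortably in a float even when
// the absolute coordinates are billions of units from the origin.
#include <cstdio>

struct DVec3 { double x, y, z; }; // CPU-side, double precision
struct FVec3 { float x, y, z; };  // GPU-side, single precision

FVec3 to_camera_relative(const DVec3 &world_pos, const DVec3 &camera_pos) {
    return FVec3{
        static_cast<float>(world_pos.x - camera_pos.x),
        static_cast<float>(world_pos.y - camera_pos.y),
        static_cast<float>(world_pos.z - camera_pos.z),
    };
}

int main() {
    DVec3 camera{1.0e9, 0.0, 0.0};        // one billion units from the origin
    DVec3 object{1.0e9 + 1.5, 0.0, 0.25}; // 1.5 units in front of the camera
    FVec3 rel = to_camera_relative(object, camera);
    std::printf("%g %g %g\n", rel.x, rel.y, rel.z); // 1.5 0 0.25
    return 0;
}
```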
If this enhancement will not be used often, can it be worked around with a few lines of script?:
No, it cannot be worked around in a few lines of script. However, let's explore what could be done.
A fair question to ask is how other games handle large scales.
Most games don't. It's true that this feature is only truly needed for a small amount of games, as the majority of games take place on scales smaller than a few kilometers. For games that need somewhat large scales, sometimes maps are designed around this constraint, to be square and approximately 4 kilometers in radius, such as PlanetSide 2's Indar map.
Kerbal Space Program (KSP) is a game created in Unity, which (like most engines) uses single-precision floats. The developers of KSP had to implement their own math types, doing a huge amount of calculations in user code. Even with all their effort, KSP struggled with floating-point issues for many years, and these issues came to be known as The Kraken. The ideal solution is for the engine to have first-class support.
A commonly cited technique is origin shifting. This involves moving the world around the player such that the player is always near the world origin. This technique can work, but it comes with many of its own limitations. For example, it doesn't always work for multiplayer, where the server needs to have precision for all players at once. There are many tricks to make this work better, but this heavily complicates things to the point that it's both easier and more efficient to use doubles.
Some games that use doubles for large scales include Star Citizen, Arma 3, Space Engineers, and Minecraft. Star Citizen uses a custom version of Amazon Lumberyard with double support added. Arma 3 uses its own in-house engine, which they call "Real Virtuality", and which uses doubles. Space Engineers and Minecraft both use in-house technology rather than a third-party engine. Also, Minecraft in its early days (incorrectly) truncated the coordinates, which led to issues such as the jittering seen in Far Lands or Bust (explained here).
Unreal added support for doubles with the release of Unreal 5 to support planetary-scale games. There's also Unigine, which is focused on being an engine for simulations, and Unigine can use doubles.
Is there a reason why this should be core and not an add-on in the asset library?:
This is by nature a core engine feature, and it cannot be an add-on. However, if anyone wishes to take the limited KSP approach, I do have this repo with some math types for C#.