Open KlausC opened 3 years ago
The following Segmentation fault happens for v1.6.1 up to v1.8.0-dev. The error occurs in libgmp; if we cannot fix that, maybe we should gracefully deny execution if the result would become too big. If the expected result exceeds a certain ratio of available memory, the operation should be terminated with an error exception. It is unclear whether that could be handled in the upstream GMP library.
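For illustration only, here is a minimal sketch of the kind of guard being asked for, written as a plain helper function; the name checked_bigint and the fraction keyword are invented for this example and do not exist in Julia or GMP.

# Hypothetical helper, not an existing API: refuse the conversion when the
# integer part of the BigFloat would need more bits than some fraction of
# the machine's physical memory.
function checked_bigint(x::BigFloat; fraction = 0.25)
    isfinite(x) || throw(InexactError(:BigInt, BigInt, x))
    bits_needed = x == 0 ? 0 : max(exponent(x), 0)   # approximate bits in the integer part
    if bits_needed > 8 * fraction * Sys.total_memory()
        throw(OutOfMemoryError())
    end
    return BigInt(x)
end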
I'm not seeing segfaults on macOS (on either v1.6 or a week-old master) — I can actually construct the monstrosity you posted and increasing its value by a few thousand orders of magnitude leads to OOMkills, not segfaults.
Was this fixed by https://github.com/JuliaLang/julia/pull/51243?
> leads to OOMkills
Yes, with v1.10-rc1 I see "only" an OOM kill (of vscode!) after a long time of filling up my memory. What I suggest is to fail more gracefully and faster.
The OOM killer is not part of Julia, it's part of the OS, so Julia does not control it. Most *nix systems do not have a hard memory limit that Julia can test against. The OS will happily let processes allocate more memory than the machine has, then start swapping (probably why it took a long time) until the OOM killer decides to intervene. Perhaps your OS allows you to configure the OOM killer (perhaps to off! :-).
julia> convert(BigInt, big"1.5e-9")
ERROR: InexactError: BigInt(1.5e-09)
Since there's already an error check, I think an about-as-costly check to limit at e.g. 1 GB-sized numbers could be argued for, or even 1 MB-sized (or a cutoff at some exponent). Numbers that large aren't very useful and likely indicate a user error (potentially an end user entering input through a website). The convert is likely rare and not speed-critical.
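To make the cost of such a check concrete: the size of the eventual BigInt can be estimated from the BigFloat's exponent before anything is allocated. The value below is only illustrative.

# The exponent of a finite, nonzero BigFloat bounds the number of bits
# needed by the integer part of the result, so a cutoff can be applied
# before any GMP allocation happens.
x = big"1e1000000"           # illustrative: a BigFloat with about 10^6 decimal digits
exponent(x)                  # about 3.3 million bits for the integer part
exponent(x) / 8 / 2^20       # about 0.4 MiB, cheap to compare against a 1 MB or 1 GB cutoff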
Julia tries to take physical memory into account when allocating, i.e. staying within some limits, but GMP, which it calls, likely doesn't. You don't want arbitrary limits in GMP itself, and maybe not a check on allocations there slowing down every operation.
What about a user-definable global limit, like setprecision(BigFloat, ...)?
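A rough sketch of what a user-definable limit could look like, loosely following the setprecision pattern; the names MAX_BIGINT_BITS and setmaxbigintbits are invented for this example and are not an existing API.

# Invented API, only to illustrate the idea of a global, user-settable limit.
const MAX_BIGINT_BITS = Ref(typemax(Int))            # default: effectively unlimited
setmaxbigintbits(n::Integer) = (MAX_BIGINT_BITS[] = Int(n); nothing)

# A conversion routine could then consult the limit before allocating, e.g.
# exponent(x) > MAX_BIGINT_BITS[] && throw(OverflowError("BigInt result too large"))

setmaxbigintbits(8 * 10^9)   # allow results up to roughly 1 GB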