electron / electron

Build cross-platform desktop apps with JavaScript, HTML, and CSS
https://electronjs.org

[Bug]: memory limitations introduced with Electron 14+ #31330

Closed: marcelblum closed this issue 2 years ago

marcelblum commented 2 years ago

Preflight Checklist

Electron Version

14/15+

What operating system are you using?

Other (specify below)

Operating System Version

observed on latest Windows 10 & macOS 11

What arch are you using?

x64

Last Known Working Electron version

13.5.1

Expected Behavior

Electron processes should not have a hard memory limit of ~8GB, but this limit was introduced with Electron 14.

Actual Behavior

With Electron ≥14, loading operations fail (or process crashes with oom) when attempting to load data into RAM if memory use exceeds ~8GB. This limit did not exist (or was significantly higher) in versions 13 and earlier.

Testcase Gist URL

https://github.com/marcelblum/electron-memory-tests

Additional Information

Simple demo to reproduce:

  1. clone repo at https://github.com/marcelblum/electron-memory-tests
  2. npm install will install both latest electron and electron 13.5.1
  3. npm run start-electron-13
  4. repeatedly click the "fill renderer process RAM with 2GB" and/or "fill main process RAM with 2GB" buttons. For demo purposes this just creates a quick dummy buffer with Buffer.alloc() (a sketch of the idea follows these steps). In my tests in Electron 13 on several Windows & Mac machines with installed RAM ranging from 8-16GB, I can do this with each process growing to at least 16GB before errors start to occur.
  5. npm run start-electron-latest and repeat as above. In Electron 14/15/16/17 there is a clear hard limit at around 8GB per-process when loading starts to fail.
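
For reference, a minimal sketch of what such a "fill RAM" handler might look like (the repo's actual code may differ; this just illustrates holding onto zero-filled Buffers so the allocations stay resident):

```js
// Sketch only: keep references to the Buffers so GC cannot reclaim them.
const retained = [];

function fillRam(gigabytes) {
  for (let i = 0; i < gigabytes; i++) {
    // Buffer.alloc zero-fills, so the pages are actually committed rather than just reserved.
    retained.push(Buffer.alloc(1024 * 1024 * 1024)); // 1 GiB per chunk
  }
  console.log(`Holding roughly ${retained.length} GiB`);
}

// e.g. wired to the demo's "fill ... RAM with 2GB" buttons:
fillRam(2);
```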

I have tried experimenting with max_old_space_size e.g. app.commandLine.appendSwitch('js-flags', '--max_old_space_size=16384') or require('v8').setFlagsFromString('--max-old-space-size=16384') with no effect observed.

It may be an edge case to need this much memory but it's not too difficult to run up against this new limit when loading lots of high res media files. In my own app using the web audio api, I noticed this limit pretty quickly after updating from Electron 13.

Is this an intentional newly introduced limit? Is there any way to increase this limit? I assume this is not a Chromium issue since the limit also applies to the main process.

Update: Looks like Electron 11 and earlier had no memory limits, but starting with Electron 12, a ~16GB limit was imposed, and then with Electron 14 that became a ~8GB limit. I've added Electron 11.5 to my demo for comparison via npm run start-electron-11.

vladimiry commented 2 years ago

I have tried experimenting with max_old_space_size e.g. app.commandLine.appendSwitch('js-flags', '--max_old_space_size=16384') or require('v8').setFlagsFromString('--max-old-space-size=16384') with no effect observed.

Also try adding the --js-flags="--max-old-space-size=16384" command line argument (when you execute the binary).

marcelblum commented 2 years ago

@vladimiry I've tried electron.exe --js-flags="--max-old-space-size=16384" main.js without any resulting change in behavior. Is that what you mean?

vladimiry commented 2 years ago

Is that what you mean?

Yes. The command line argument is the way to apply the arg to the main process too (the app.commandLine.appendSwitch way is only helpful for the renderer process).
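
In other words (a hedged sketch; exact flag behavior varies by Electron version):

```js
// main.js: appendSwitch only influences processes spawned after this point (renderers);
// the main process has already started with its own V8 flags by the time this runs.
const { app } = require('electron');
app.commandLine.appendSwitch('js-flags', '--max-old-space-size=16384');

// To affect the main process itself, the flag has to be on the executable's command line:
//   electron --js-flags="--max-old-space-size=16384" main.js
```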

vladimiry commented 2 years ago

Electron processes should not have a hard memory limit of ~8GB, but this limit was introduced with Electron 14.

I remember the limit being much lower than ~8GB, something like 2GB. My understanding is that the limit came from the Node.js defaults, but the --js-flags="--max-old-space-size=N" CLI arg used to help.

marcelblum commented 2 years ago

Out of curiosity, I tested all the way back to Electron version 1.8.8 (2018) and could not hit this limit in my simple demo; without even using any command line switches, the renderer process memory was able to fill up way past 8GB without problems.

vladimiry commented 2 years ago

I don't know what the exact limit was for my case (not direct buffer allocation, but filling JSON with a huge pack of data, so a different case), but based on my observations it was about 2GB. I guess it might depend on the platform and other specifics. But I remember that increasing max-old-space-size was helpful for resolving the "Illegal Instruction (Core Dump)"-like issues in the ElectronMail project.

It looks like Buffer uses memory allocated outside of v8's heap (called "external memory"). So can you also try the electron.exe --js-flags="--max-heap-size=16384" main.js way of starting the app (using max-heap-size instead of max-old-space-size)?
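
For reference, that off-heap behavior can be observed with Node's standard process.memoryUsage() API (a quick sketch; the exact numbers will vary):

```js
// Allocate a Buffer and compare heap vs. external growth.
const before = process.memoryUsage();
const buf = Buffer.alloc(512 * 1024 * 1024); // 512 MiB, zero-filled

const after = process.memoryUsage();
const mb = (n) => (n / 1048576).toFixed(1);
console.log('heapUsed delta (MB):', mb(after.heapUsed - before.heapUsed));
console.log('external delta (MB):', mb(after.external - before.external));
console.log('buffer length:', buf.length); // keep the Buffer referenced

// Typically the growth shows up in `external` (and `arrayBuffers` on newer Node),
// not in `heapUsed`, consistent with Buffer data living outside the V8 heap.
```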

Out of curiosity, I tested all the way back to Electron version 1.8.8 (2018) and could not hit this limit in my simple demo; without even using any command line switches, the renderer process memory was able to fill up way past 8GB without problems.

I'd also want to know what really changed in v14 in relation to memory limits, so I've subscribed to this issue.

Prinzhorn commented 2 years ago

Possibly related to V8 9.2 "Shared Pointer Compression Cage"?

https://v8.dev/blog/v8-release-92#shared-pointer-compression-cage

It's the only thing I found in the changelog that looks related.

The tradeoff of the change is that the total V8 heap size across all threads in a process is capped at a maximum 4GB. This limitation may be undesirable for server workloads that spawn many threads per process, as doing so will run out of virtual memory faster than before. Embedders may turn off sharing of the pointer compression cage with the GN argument v8_enable_pointer_compression_shared_cage = false.

marcelblum commented 2 years ago

For some variety in the memory tests in case of any quirks with Buffer.alloc(), and for a little more real-worldy-ness, I just updated my demo with an option to load and reload a huge audio file via decodeAudioData() (a heavily compressed 1 hr mp4 that inflates into 2.7 GB in a high res audio context). In my testing, the same 8GB limit is hit in Electron 14+ when attempting to decode it multiple times, and no such limits are present in earlier versions tested down to 2.0.18.
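
For reference, a hedged sketch of that decode-and-hold idea (the file name and sample rate here are illustrative, not the demo's exact code):

```js
// Renderer-side: decode a compressed file into a high-resolution AudioContext and keep it.
const decoded = [];

async function decodeAgain() {
  const ctx = new AudioContext({ sampleRate: 96000 }); // high sample rate inflates decoded size
  const response = await fetch('big-audio.mp4');       // hypothetical ~1 hr compressed file
  const encoded = await response.arrayBuffer();
  const audioBuffer = await ctx.decodeAudioData(encoded);
  decoded.push(audioBuffer); // hold onto it so the memory stays in use
  console.log(`Decoded copies held: ${decoded.length}`);
}
```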

@vladimiry I tried some tests with max-heap-size, but no effect noticed.

marcelblum commented 2 years ago

Testing a bit more on Windows, an interesting finding: Electron 11.5 and earlier permit even higher memory use than Electron 12/13. Up to Electron 11.5 I can load past 20GB per process with no issues. Starting with 12 there seems to be a cap at around 16GB. And then with 14 that was halved again.

vladimiry commented 2 years ago

Also see https://github.com/electron/electron/issues/22705#issuecomment-887744665 CC @nornagon

V8 pointer compression puts a 4GB maximum on the heap size. So on machines with >16GB, there is no way to increase the heap size from the default.

So it looks like, after pointer compression joined the game, the only way to increase the limit is to compile your own @electron bundle from source with the "pointer compression" stuff disabled and then set the max-heap-size / max-old-space-size arg?

heylying commented 2 years ago

So it looks like, after pointer compression joined the game, the only way to increase the limit is to compile your own @electron bundle from source with the "pointer compression" stuff disabled and then set the max-heap-size / max-old-space-size arg?

I wanna know the solution too. @nornagon

marcelblum commented 2 years ago

@vladimiry I can't help but feel that doesn't quite explain the results that I am seeing in my tests: 8gb limit per-process starting with electron@14, 16gb limit per-process from electron@12, and seemingly no limit in electron<=11, all without touching the default max-heap-size / max-old-space-size.

This may betray my limited knowledge of low-level memory management, but wouldn't max-heap-size only cap the maximum contiguous memory block size that could be used per-process, rather than the total addressable memory per-process? In other words we couldn't have a single object in memory that's more than 4gb, but we could still have multiple objects lower than that limit which when combined could exceed 4gb.

vladimiry commented 2 years ago

@marcelblum, I think we need to wait for @nornagon 's input who contributes a lot to @electron.

bruceauyeung commented 2 years ago

@heylying @nornagon
Does app.commandLine.appendSwitch('js-flags', '--max-old-space-size=N') no longer work for the renderer process in Electron v9.x.y and later versions, because of V8 pointer compression?

ckerr commented 2 years ago

I'm not @nornagon but I believe this is due to the V8 9.2 pointer compression changes. I tried out @marcelblum's test repo on Electron 14.0.0-nightly.20210413 (which was Electron's last pre-V8 9.2 release, using 9.1.127-electron.0) and then the next nightly from two weeks later, 14.0.0-nightly.20210426 (V8 9.2.2-electron.0). 20210413 can grow past 8 GB, while 20210426 hits the limits reported here.

Since pointer compression is a compile-time option, I don't believe there's any way to work around this limitation without rebuilding Electron with pointer compression disabled. I can't speak for the entire project, but IMO it's unlikely that alternate builds would be made available for this: doubling the number of binaries that Electron puts out doesn't seem tenable for a use case that few apps are asking for.

Given all that... with enough CPU time, it's not that difficult to roll your own Electron. If crossing the V8 memory ceiling is a hard requirement for your project, my suggestion is to DIY an electron build with v8_enable_pointer_compression disabled in BUILD.gn.

I'm closing this issue because I don't think there's any further action that the project can take on this, but @nornagon (or any other maintainer) feel free to re-open if you feel differently!

marcelblum commented 2 years ago

Ok @ckerr thanks for confirming the cause. This is obviously a disappointing non-resolution though, and something of a shock to those of us who have built up large projects relying on Electron not expecting this undocumented major change. I do wonder how much of a "rare" use case this really is. Something like a Figma, I would imagine, could hit up against these limits in more complex hi res projects.

I will eventually have to look into manually building Electron, but until then it's #Electron134Lyfe.

Nantris commented 2 years ago

This doesn't affect our project, but it seems like this issue should not be closed.

nornagon commented 2 years ago

Unfortunately we don't really have any great options here. Pointer compression is a compile-time option enabled by default in Chromium, with a wide variety of implications, one of which is the 4GB heap limit. You can read more about the technical details here: https://v8.dev/blog/pointer-compression. Further, there are new & upcoming security features in V8 which depend on pointer compression being enabled, such as the V8 sandbox.

Node.js disables pointer compression at build-time by default: https://github.com/nodejs/node/blob/3f403e5e285f350620f7d66329fa890806f7fc7a/configure.py#L445-L449

Options are:

  1. Do nothing. Leave pointer compression enabled, dropping support for >4GB heaps.
  2. Disable pointer compression. This would significantly increase memory usage, reduce performance, and reduce security, as well as diverging us from Chromium.
  3. Ship a secondary set of builds with pointer compression disabled. This would double our already sky-high release build time and asset count.

In my opinion, (2) and (3) are unacceptable tradeoffs for the benefit provided.

Would you be able to tell us a little more about your use cases that require >4GB heap space? There might be ways to rearchitect things to fit in a smaller heap; for instance, ArrayBuffer data is not stored on-heap, so it's possible to hold more than 4GB of data if it's in ArrayBuffers.

marcelblum commented 2 years ago

Thanks for your input @nornagon. I've read that v8.dev blog post several times over the past couple months by now, and while some of the lower level concepts are beyond my knowledge area, my takeaway is that this feature goes a long way towards memory conservation for the typical browser user - i.e. 20 tabs of Facebook & NYT etc. open, each with iframes, ads, videos, bloat - while imposing limits that are mostly irrelevant for that same user. Whereas with Electron, where the dev is here for the desktop app development framework that just happens to be a browser engine with all the unfun limits removed, the changes introduced by pointer compression seem less beneficial, even regressive. That said, I understand the far reaching ramifications of disabling pointer compression in Electron (though it could be argued Electron was doing just fine without these "benefits" through v13!).

As far as use cases go, sure, your typical note taking or chat app shouldn't need that much memory, but anything juggling lots of high res media can reasonably hit the new limit, especially apps using some of Chromium's greatest newer features that enable video & audio editing/generation, complex games, content creation tools, etc.

To give a more specific example, in my case I'm using the Web Audio API, which has grown quite powerful, but doing realtime processing at low latency requires loading entire audio files into RAM. Chromium supports up to 384kHz multichannel 32-bit audio, which takes up gobs of memory. In practice obviously no one would use 384kHz audio on a website, and even most pros don't record beyond 192kHz, but Electron offers the promise of tapping these capabilities. Let's take a more realistic example and say I wanted to create/edit a song mixing 15 stereo tracks @ 32-bit/96kHz, each 7 minutes long. The cool thing is, Electron/Chromium can do this and do it well! But not even including a UI, that's 15 * 7 * 60 * 2 * 96000 * 4 = 4.8GB needed, and if we want to go multichannel, or use a higher sampling rate, or add longer/more tracks, that inflates tremendously.

Note: Part of this stems from the inflexibility of the Web Audio API, as it doesn't offer an efficient way to only partially load sound files (you have to decode the entire file to RAM first) or stream from disk (that can only be done with a MediaElementAudioSourceNode, which brings limitations), but I also think those types of features are understandably out of scope for Web Audio, and shouldn't be as much of a concern for a desktop app anyway. Bottom line: if you want to be able to quickly bounce around a multitrack audio project in realtime while keeping things in sync, those tracks pretty much all need to be fully in memory for things to work smoothly.
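
For reference, that back-of-envelope arithmetic as a small sketch (sample counts only; it ignores any per-object overhead):

```js
// Rough memory footprint of decoded Web Audio data (decoded samples are 32-bit floats).
function decodedAudioBytes({ tracks, minutes, channels, sampleRate }) {
  const bytesPerSample = 4;
  return tracks * minutes * 60 * channels * sampleRate * bytesPerSample;
}

// The example above: 15 stereo tracks, 7 minutes each, at 96 kHz.
const bytes = decodedAudioBytes({ tracks: 15, minutes: 7, channels: 2, sampleRate: 96000 });
console.log((bytes / 1e9).toFixed(1), 'GB'); // ≈ 4.8 GB, before counting the UI
```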

And if you wanted to add visual media into the mix, the memory needs increase even more. On the one hand I get that it's asking quite a bit of Electron to do these things and I'm not trying to create a full DAW or video editor with Electron - but on the other, the Chromium devs have brought these powerful features to their engine, and they are just sitting there begging to be used and pushed to their limits... 😉

If you don't mind my picking your brain a bit further, your comment has raised several questions for me to help better understand the issue and limits:

4GB heap limit

Could you elaborate on what this means in practice and how that jives with my test results above, which show an 8GB limit (not 4GB) per process in v14+?

Node.js disables pointer compression at build-time by default

But are you saying this is not the case in Electron's implementation of Node? My test results were that even Electron's main process has its own 8GB limit in v14+. I understood that the main process is an instance of Node.js; is that incorrect? Or does Electron's bundled version of Node share the same build of V8 as Electron's Chromium, with pointer compression enabled?

ArrayBuffer data is not stored on-heap, so it's possible to hold more than 4GB of data if it's in ArrayBuffers.

But my tests above do just that - use Buffer.alloc() to fill memory with dummy data - and still hit this 8GB limit. In what context would ArrayBuffers not be constrained by these limits?

nornagon commented 2 years ago

But my tests above do just that - use Buffer.alloc() to fill memory with dummy data - and still hit this 8GB limit. In what context would ArrayBuffers not be constrained by these limits?

Ah, sorry, you're correct on this point. My test was flawed because I assumed a 4GB limit and was able to exceed that; given that the limit is 8GB, I do see experimentally that ArrayBuffer data isn't stored off the heap as I had guessed.

wasm64, or a native module which allocates off-heap, might be the only workarounds in that case :/

MarcCelani-at commented 2 years ago

We are running into this limit because VSCode uses electron to run typescript, and we have a large mono-repo. We are working around this with a package called tsserver-bridge, which replaces tsserver.js with a thin wrapper that runs node (which does not have pointer compression on by default). Unfortunately, that package relies on the presence of a node_modules directory, which will block our migration to yarn plug'n'play. Without tsserver-bridge, many of our developers complain of tsserver being extremely unreliable, crashing due to hitting this memory limit.

Related: https://github.com/microsoft/vscode/issues/127105

nornagon commented 2 years ago

I don't suppose it would make sense to work with typescript to have it use less memory? 😅 8GB is quite a lot.

But: that's also a pretty compelling real-world use case. Something that occurred to me would be to see if it would be possible to choose between pointer-compression and non-pointer-compression at isolate construction time, rather than compile time. I'm not sure if that's technically feasible; I'll try to check with the v8 team.

MarcCelani-at commented 2 years ago

Thanks @nornagon . Are we at a point that we re-open this issue?

nornagon commented 2 years ago

Unfortunately it looks like making pointer compression a runtime-toggleable flag is technically infeasible; a v8 dev told me it would mean gating every memory access behind a load and branch. So it looks like we are back to square one: either we have it enabled or we don't.

One possible option for tools that really need a jumbo-sized V8 heap would be to bundle Node with the app. I assume that's more or less what @MarcCelani-at is talking about with tsserver-bridge. It's unclear to me what the node_modules directory has to do with that, but it seems like a solvable problem. Node's v8 is compiled with pointer compression disabled, so it would be fine to expand the heap size there.
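
A hypothetical sketch of that bundled-Node approach (the binary path and worker script names are illustrative, not from any real project):

```js
// Main process: offload heap-hungry work to a separately bundled Node binary,
// whose V8 is built without pointer compression and so accepts a large old space.
const { spawn } = require('child_process');
const path = require('path');

const nodeBinary = path.join(process.resourcesPath, 'node'); // hypothetical bundled binary
const worker = spawn(nodeBinary, [
  '--max-old-space-size=16384',
  path.join(__dirname, 'heavy-worker.js'), // hypothetical script doing the memory-heavy work
]);

worker.stdout.on('data', (chunk) => console.log(`[worker] ${chunk}`));
worker.on('exit', (code) => console.log(`worker exited with code ${code}`));
```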

I'm quite reluctant to disable pointer compression project-wide. As such, I'm not sure I want to reopen this issue just yet. It seems like for many use cases there exist feasible workarounds, such as running a separate Node process, or building a custom version of Electron (which I know VS Code does), so I'm not yet convinced that the right option is to disable pointer compression by default.

Nantris commented 2 years ago

Couldn't Electron be compiled with pointer compression disabled by developers with use cases that require it? These are definitely edge cases and I don't think they warrant disabling pointer compression for Electron when nearly every other use case benefits from pointer compression.

nornagon commented 2 years ago

Yep, it sure can! It should work just fine to set the v8_enable_pointer_compression = false variable when compiling.

marcelblum commented 2 years ago

FWIW I've seen no significant quantifiable difference in total memory footprint by my own Electron project in versions pre-pointer compression vs post. Testing on Windows 10 my app uses about 200MB RAM on startup with a single renderer window open. It's actually a few MB above that when built with Electron 15 vs a few MB below in Electron 13. If I spawn 2 more BrowserWindows the Electron 15 version uses around 40MB less than the Electron 13 version (345MB in Electron 15 with 3 windows open vs 385MB in Electron 13). Overall, nothing earth shattering and a mixed bag at best. I do wonder what kind of real world use cases pointer compression actually benefits, when we're not talking about, as I mentioned above, an actual browser with 20 tabs open.

Nantris commented 2 years ago

@marcelblum you make a good point about the difference in usage between a browser and how most Electron apps are used.

That said, our app has three windows open simultaneously, and in your test your two additional browser windows are presumably bare-bones, right? In our case, the windows aren't bare-bones, and even a 40MB reduction is nothing to scoff at.

I'm not a maintainer or anything, but my vote is definitely that the standard, pre-compiled version of Electron should have pointer compression enabled. It benefits most use cases - but I feel your pain having run into this unexpected limitation.

marcelblum commented 2 years ago

@Slapbox I wouldn't call it bare bones; the numbers I noted were what I saw after all assets of my app were loaded, and that includes several js libraries & native node modules, and a UI with dozens of SVGs and several canvas contexts. The additional BrowserWindows were essentially duplicates of the first, with the same full UI & new instances of all the same libraries & modules. Obviously every app is different, but I wonder how many Electron apps even use more than 1 BrowserWindow. But granted, there is more of an argument for a benefit in the multiple window use case, and I know @nornagon mentioned V8 security features that depend on pointer compression, and I don't want to discount that.

jpambrun commented 2 years ago

I am a bit confused by the explanations contained in this thread. I can allocate 15GB worth of 1GB typed arrays no problem in Chrome on Mac, ~8GB on Windows, and I have an issue open on Chromium to increase this ceiling on Windows [1].

The odd part is that the ceiling seems to depend on the size of the buffers allocated. (look at https://github.com/electron/electron/issues/33962 and results here https://github.com/masa-nuc/arraybuffer-test/blob/master/results/README.md).

with 1MB chunks => exception at 3.99GB across ~4000 buffers
with 8MB chunks => exception at 6.39GB across ~798 buffers
with 128MB chunks => exception at 7.75GB across ~60 buffers
with 1000MB chunks => exception at 7.00GB across 7 buffers

And I can reproduce the same behavior in Chrome by modifying the example in [1]. I imagine the hard limit at 8GB is the same as described in this issue, but the limit with smaller buffers (1/8MB) is completely unexpected.

[1] https://bugs.chromium.org/p/chromium/issues/detail?id=1243314
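
For reference, a minimal loop in the spirit of those chunk-size tests (a hypothetical sketch; the linked arraybuffer-test harness differs):

```js
// Allocate fixed-size ArrayBuffers until allocation throws, then report the total held.
function allocateUntilFailure(chunkBytes) {
  const held = [];
  try {
    for (;;) {
      const ab = new ArrayBuffer(chunkBytes);
      new Uint8Array(ab).fill(1); // touch the pages so they are actually committed
      held.push(ab);
    }
  } catch (err) {
    const totalGB = (held.length * chunkBytes) / 1024 ** 3;
    console.log(`Failed after ${held.length} buffers, ~${totalGB.toFixed(2)} GB: ${err.message}`);
  }
}

allocateUntilFailure(8 * 1024 * 1024); // e.g. 8MB chunks
```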

MarcCelani-at commented 2 years ago

This is becoming a major issue.

The latest versions of the TypeScript editor services do not work with the tsserver-bridge workaround. Our mono repo will not be able to upgrade to the latest version of TypeScript. There is unfortunately going to be a lot of finger pointing between Electron, VSCode, and TypeScript as to who should solve this problem.

I think supporting a higher heap size is a reasonable request. Please, please, please consider re-opening this issue and fixing the problem.

Nantris commented 2 years ago

@MarcCelani-at you can compile Electron from source. Not great, but it's an option.

I think it would be ideal to offer a build with pointer compression disabled as this seems to be an issue that slowly affects more and more people - but I'm not sure of the feasibility of that, especially if it would mean offering an essentially duplicate build for every version of every architecture - but having users build from source isn't terribly reasonable either if this breaks Typescript projects.

But there is some mention of security implications of disabling pointer compression so....

In any event, I don't think this should be closed @ckerr.

deepak1556 commented 2 years ago

VSCode has workarounds for this issue (https://github.com/microsoft/vscode/issues/127105#issuecomment-1124504462) using vanilla Node environments, and we won't be flipping the pointer compression default for the desktop application, to stay aligned with Chromium and Electron.

MarshallOfSound commented 2 years ago

In agreement with the maintainers above, flipping the default has costs that do not outweigh the benefits. For now, workarounds similar to VSCode's are your path forward if these memory limitations actually impact you.

nornagon commented 2 years ago

For reference, here's a thread from the Node TSC in which they ended up not really making a decision about enabling pointer compression: https://github.com/nodejs/TSC/issues/790

jssdgit commented 2 years ago

Can someone provide a pointer (pun intended) to instructions on HOW to build your own custom electron with v8_enable_pointer_compression = false set? I have been struggling with this for days, especially given that it takes 8 hours to compile and link electron on my machine. Here's what I did, and it didn't work:

  1. gclient sync -f
  2. gn gen out/Release --args="import(\"//electron/build/args/release.gn\")"
  3. Manually edit //v8/BUILD.gn to set v8_enable_pointer_compression = false and v8_enable_pointer_compression_shared_cage = false
  4. ninja -C out/Release electron

I know it didn't work because I then ran a canned script to allocate fixed length arrays. That canned script can get to 16 GB on older versions of electron (e.g. 12.2.3) but only gets to 8.3 GB with the custom build (off of nightly) that I do myself.

Thanks!

marcelblum commented 2 years ago

An interesting recent finding (at least interesting to me 😄): On macOS (both arm & intel), since Electron 17 through to nightly, the limit has increased to 16GB. This is much less of a hindrance than 8GB, needless to say, as it's a lot harder to hit 16GB even with hi res media use. I wonder what led to that change, which presumably has nothing to do with pointer compression. Meanwhile on Windows Electron remains very hard limited at 8GB on all versions since 14. I would go as far as to say that this is now primarily a Windows-only problem from where I sit.

Revisiting this after many months and it's dismaying to see that many fundamental questions on this front remain unanswered/undocumented. Chief among them: What are the exact limits and in what usage contexts and what platforms? There is a lot of FUD surrounding this subject to be pieced together from various threads and incomplete info sources (4GB? 8GB? 16GB?). Why do different platforms seem to have different limits? Has anyone been able to successfully disable pointer compression in practice and get an observable change in memory handling? I've tried a bit and I haven't, posting WIP results at #35032. It's a difficult problem to iterate over given the 8+ hour compile time.

marcelblum commented 2 years ago

Seems like the macOS limit increase to 16GB is due to upstream Chromium changes made per https://bugs.chromium.org/p/chromium/issues/detail?id=1232567

Still haven't grokked why the same can't be done for Windows as this reads like "a Googler was getting crashes running Figma in Chrome on Mac so we increased the limit for them" 🤦‍♂️

MarshallOfSound commented 2 years ago

Still haven't grokked why the same can't be done for Windows as this reads like "a Googler was getting crashes running Figma in Chrome on Mac so we increased the limit for them" 🤦‍♂️

Welcome to crbug 😅 I'm reading that as the 8GB limit on Windows being enforced by the sandbox, not by the heap limitations in V8. E.g. try allocating the large buffers in the main process; if you can, then the V8 flags aren't being applied anymore. The limit from the sandbox is a separate problem that you could probably validate by running with --no-sandbox.

jssdgit commented 2 years ago

FWIW I built electron on Windows 10 with v8_enable_pointer_compression=false and then added the --no-sandbox argument via app.commandLine.appendSwitch('js-flags', '--max-old-space-size=16384 --no-sandbox'); in the javascript file run by electron. Still hitting the limit at 8 GB. So no help, unless I misunderstood where you intended the --no-sandbox argument to go.

vladimiry commented 2 years ago

added the --no-sandbox argument via app.commandLine.appendSwitch('js-flags', '--max-old-space-size=16384 --no-sandbox');

Try passing args to the app executable like --js-flags="--max-old-space-size=16384" --no-sandbox. Doing it via app.commandLine.appendSwitch is likely too late to be applied to the main process.

marcelblum commented 2 years ago

Well just running latest Chromium with --no-sandbox does not remove the limitation on Windows either. I believe it's because the limit is hardcoded here.

jssdgit commented 2 years ago

added the --no-sandbox argument via app.commandLine.appendSwitch('js-flags', '--max-old-space-size=16384 --no-sandbox');

Try passing args to the app executable like --js-flags="--max-old-space-size=16384" --no-sandbox. Doing it via app.commandLine.appendSwitch is likely too late to be applied to the main process.

I noticed that I did not use appendSwitch correctly. I tried having two separate calls: app.commandLine.appendSwitch('js-flags', '--max-old-space-size=16384'); app.commandLine.appendSwitch('--no-sandbox'); This didn't work either. Per @marcelblum above, I wouldn't have expected it to work anyway.

In old versions of Electron it seemed like app.commandLine.appendSwitch('js-flags', '--max-old-space-size=16384'); in the main javascript file worked for me. I'm trying to adjust the browser application within Electron, not the main application itself.

marcelblum commented 2 years ago

As I understand the latest docs, sandboxing in Electron is off by default. Certainly if you are instantiating a BrowserWindow with webPreferences: {nodeIntegration: true, contextIsolation: false} sandboxing is off.

edit: actually it looks like this was just changed with the new release of Electron today; sandboxing is now on by default in Electron 20 unless nodeIntegration is on. But note that Chromium sandboxing and V8 sandboxing are 2 totally different things.
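
For context, a minimal sketch of explicitly opting a window out of the Chromium sandbox (sandbox is a documented BrowserWindow webPreference; whether you want this depends on your security posture):

```js
// Main process: create a renderer with the Chromium sandbox disabled.
const { BrowserWindow } = require('electron');

const win = new BrowserWindow({
  webPreferences: {
    sandbox: false,          // Chromium sandbox off for this renderer
    nodeIntegration: true,   // as in the comment above
    contextIsolation: false,
  },
});
```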

marcelblum commented 2 years ago

I was able to bump the Windows memory limit in a custom build; more details here. TLDR: The culprit wasn't pointer compression, but rather a limit in Chromium which is only meant for Windows sandboxing yet was (seemingly clumsily) hardcoded to be in effect regardless of sandbox status. And I guess the Chromium devs were also trying to protect js devs from themselves and put in a hard limit to contain memory leaks from bad js code.

Some thoughts:

Pointer compression is not the limiting factor in most high memory use cases, because most activities that use a lot of memory - media loading, buffers for binary data, etc. - are off heap. I guess if you wanted to have a >4GB string variable, or a giant json object with millions of entries, that's where the pointer compression limits would come in (?). But I think for most of the use cases of the concerned parties on this thread, the problem can be traced to those hardcoded Chromium limits introduced around Chromium 93, which made it into Electron 14, and then were modified for macOS around Chromium 98.0.4758.141 / Electron 17.3.

Proposed solution: Either request the change upstream so that the limit in Chromium is only enforced if sandbox is on, which would trickle down nicely into Electron, or Electron should patch its version of Chromium with this change (and I'd vote for removing the limit entirely rather than 16gb if there are no bad side effects).

Note with sandbox on in Windows it's impossible to exceed 8gb due to platform limits. But with these changes Electron devs would be able to sidestep these limits by using nodeIntegration on / sandbox off.

jssdgit commented 2 years ago

@marcelblum May I award you the Nobel Prize for Chromium memory problem debugging? I've got a custom build running right now with the changes you made. I'll be able to reproduce your results in about 5 more hours :-) Hoping the powers that be in the Electron project make the changes you suggest, but in the meantime, I am in much better shape just being able to make a custom build that skips the memory problem. Thanks!

marcelblum commented 2 years ago

@jssdgit or anyone else interested, I've made this experimental build available here https://github.com/marcelblum/electron-windows-memory-unlimit

nornagon commented 2 years ago

@marcelblum nice debugging. Perhaps consider submitting a PR?

marcelblum commented 2 years ago

@nornagon I got the impression @MarshallOfSound is strongly against this since it would require patching Chromium, but I'd be happy to if there is agreement on that being the best way forward. But in the mean time it's looking like there's a decent chance the Chromium team will be bumping the limit to 16 on their end. If that were the case the only question that remains is if Electron would patch it to increase that further.

marcelblum commented 2 years ago

If anyone's curious, this upstream change (Chromium increase in hardcoded memory limit on Windows from 8GB to 16GB) is now live in Chrome beta channel as of circa v106. You can test for yourself running latest Chrome Canary with --no-sandbox. I see that latest Electron@nightly v21 is using Chromium v105.0.5187.0 so it does not include this change. Hopefully by the time Electron 21 graduates to release status it can upgrade to Chromium >106.