azriel91 opened this issue 1 year ago (Open)
Are there no other servers running at the same time? The current implementation uses a single file-watcher instance for however many servers are started. Can you reproduce this issue with just this single project open in ST?

On Mac, the LSP-file-watcher-chokidar process seems to use around 60 MB on that project, but since file-watching implementations can vary greatly between operating systems, that might not mean much.
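If you want to compare numbers on Linux, you can check the resident set size of the watcher's Node process with something like this (it assumes the watcher runs under a process named `node`; adjust the filter to your setup):

```shell
# Print PID and resident set size (RSS, in KiB) of every process named
# "node" -- the chokidar watcher is one of these. Prints nothing if no
# node process is running.
ps -eo pid=,rss=,comm= | awk '$3 == "node" { print $1, $2 " KiB" }'
```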
Heya, I haven't gathered solid evidence, but I think there's only one instance -- I only use LSP-rust-analyzer with Sublime Text, and I think it keeps at most one instance around.
More importantly, I think the issue is aggravated by what I was doing, which was a combination of the following:

- using LSP-rust-analyzer together with LSP-file-watcher-chokidar
- overriding the `rust-analyzer` binary with a symlink to `~/.cargo/bin/rust-analyzer`, so the fault could lie with LSP-rust-analyzer or this plugin (more likely the former)

I switched back to the vendored RA, and the out-of-memory from the chokidar plugin still happened, with a slightly shorter stack trace:
(I removed all the `LSP-file-watcher-chokidar: ERROR: ` prefixes.)

```
<--- Last few GCs --->

[6820:0x6804010]   653289 ms: Mark-Compact 7993.0 (8234.1) -> 7981.7 (8238.6) MB, 3425.54 / 0.00 ms (average mu = 0.546, current mu = 0.012) allocation failure; scavenge might not succeed
[6820:0x6804010]   658644 ms: Mark-Compact 7997.6 (8238.6) -> 7986.6 (8243.9) MB, 5327.27 / 0.00 ms (average mu = 0.306, current mu = 0.005) allocation failure; scavenge might not succeed

<--- JS stacktrace --->

FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
 1: 0xc8d700 node::Abort() [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 2: 0xb6b8f3  [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 3: 0xeac370 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 4: 0xeac657 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 5: 0x10bdcc5  [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 6: 0x10d5b48 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 7: 0x10abc61 v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 8: 0x10acdf5 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 9: 0x108a366 v8::internal::Factory::NewFillerObject(int, v8::internal::AllocationAlignment, v8::internal::AllocationType, v8::internal::AllocationOrigin) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
10: 0x14e5196 v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
11: 0x7f031bed9ef6
```

LSP-file-watcher-chokidar: Watcher process ended. Exception: None
It's much stabler now when using the vendored RA, so I guess that means:

1. LSP-file-watcher-chokidar doesn't crash very much with the vendored RA.
2. With a nightly RA via LSP-rust-analyzer, the plugin needs to handle the change that is causing instability.¹

¹ Sorry, I don't have logs from the nightly-RA + LSP-ra interaction -- nothing besides the above stack traces appeared in the Sublime Text console, so I couldn't work out what the issue was.
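For reference, the binary override mentioned above was set up as a plain symlink, roughly like this (`TARGET_DIR` is a stand-in for wherever the plugin looks for the server binary; the real path varies by install):

```shell
# Recreate the override: make the plugin-visible rust-analyzer a symlink
# to the cargo-installed binary. TARGET_DIR is illustrative, not the
# actual plugin path.
TARGET_DIR=$(mktemp -d)
ln -sf "$HOME/.cargo/bin/rust-analyzer" "$TARGET_DIR/rust-analyzer"
ls -l "$TARGET_DIR/rust-analyzer"
```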
Heya, I'm using this alongside LSP-rust-analyzer, and am getting the following crash (stack trace):
```
<--- Last few GCs --->

[15621:0x7092010]   231914 ms: Mark-Compact 8049.4 (8233.2) -> 8038.4 (8238.4) MB, 2773.92 / 0.00 ms (average mu = 0.754, current mu = 0.008) allocation failure; scavenge might not succeed
[15621:0x7092010]   236639 ms: Mark-Compact 8054.4 (8238.4) -> 8043.2 (8243.2) MB, 4691.80 / 0.00 ms (average mu = 0.500, current mu = 0.007) allocation failure; scavenge might not succeed

<--- JS stacktrace --->

FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
 1: 0xc8d700 node::Abort() [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 2: 0xb6b8f3  [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 3: 0xeac370 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 4: 0xeac657 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 5: 0x10bdcc5  [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 6: 0x10d5b48 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 7: 0x10abc61 v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 8: 0x10acdf5 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 9: 0x1089436 v8::internal::Factory::AllocateRaw(int, v8::internal::AllocationType, v8::internal::AllocationAlignment) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
10: 0x107af34 v8::internal::FactoryBase
```

The 8 gigs is what I added to my environment using:
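The exact snippet got lost above; raising Node's old-space heap limit to 8 GiB is conventionally done via `NODE_OPTIONS` (the variable and flag are standard Node/V8, but whether this plugin's watcher process inherits the environment is an assumption):

```shell
# Raise the V8 old-space heap limit to 8192 MiB (8 GiB) for any node
# process launched from this environment, including the chokidar watcher.
export NODE_OPTIONS="--max-old-space-size=8192"
```

The 8049.4/8233.2 MB figures in the GC log above line up with this limit: the heap grew to the configured 8 GiB ceiling and then aborted.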
I couldn't figure out why so much memory is being used, but the codebase I work with is relatively large (repo, 55k LOC for the project itself, plus 429 dependencies).
Can you see something in that stack trace that I can't?
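In case it helps narrow this down, here's a rough count of how many files the watcher would have to track; a cargo workspace's `target/` directory alone can hold hundreds of thousands of entries, so a watcher that fails to ignore it balloons quickly (directory names below are standard cargo layout, not taken from the actual repo):

```shell
# Count every file in the project, then just the build artifacts under
# target/ -- if the second number dominates, the watcher's ignore
# patterns are the first thing to check.
find . -type f | wc -l
find ./target -type f 2>/dev/null | wc -l
```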