ehames opened this issue 6 years ago
I'm experiencing the same issue, along with very high CPU usage. It is much more noticeable with diagnostics enabled.
aah, yes, this is very likely related to having diagnostics enabled. Enabling diagnostics starts the typechecker, which can be a huge memory hog. Unfortunately there is no quick fix here other than disabling diagnostics.
Note: as @keegancsmith stated, this happens whenever the typechecker gets started, which happens in quite a few code paths: diagnostics, references, implementation... see https://sourcegraph.com/github.com/sourcegraph/go-langserver@7df19dc017efdd578d75c81016e0b512f3914cc1/-/blob/langserver/loader.go#L27:23&tab=references
yeah, this is a pretty bad problem for us, since the typechecking is so useful :) I think the future is bright (once we have time to implement it): Go's on-disk caching now stores a lot more information that is useful to us, which means we can probably rely on it.
This seems to be closely related to #209. Both issues are due to the typechecker.
I have to periodically kill the language server and live with missing features in VS Code. Is there anything I can help with? What logs/traces can I extract the next time this happens?
Having the same issue here. It sometimes takes up all available memory (e.g. 30GB), causing the OS to freeze. Just a guess, but this feels more like a bug than mere inefficiency. Any details I can provide?
You can set "go.languageServerFlags": ["-pprof", ":6060"]
in your VS Code settings and then follow the steps in the README to capture a heap profile and upload the SVG. That would tell us where the memory is being allocated. I agree this looks like it could be a regression, but we can't know without more details.
If the memory usage is coming from typechecking and not from a regression such as leaked memory, then there is likely nothing we can do yet. The long-term fix will come from the official Go language server, which the Go developers are actively working on (it is a difficult problem to solve).
I've been using the language server since yesterday and it was relatively well behaved, using only a few hundred MB. This morning, I started making some edits and the language server started consuming 80-100% CPU, with memory spiking to 5GB. I managed to capture a heap snapshot: heap.zip
Also managed to catch the tail end of the CPU spike: cpu.zip. It looks like it might just be the garbage collector, though. If it happens again I'll try to collect a CPU profile first.
I should also note that this only lasted a minute or two before the CPU and memory usage dropped back down.
@doxxx
Both traces show the memory was allocated in the golang.org/x/tools/go/loader package, which is the entrypoint for type checking. This is unfortunate and a known issue, but currently expected. It will improve when the official Go language server is released.
If you notice that the memory usage does not drop after a minute or two, however, that would indicate a leak, and that is a bug we could fix.
I just had another occurrence where the go-langserver process is at ~20GB, with ~100-300MB/s of disk I/O and ~100% CPU, for about 10 minutes so far.
Here's the heap and CPU graphs: heap_cpu_2.zip
It still seems to be the loader package, although the second dump appears to involve the build package as well.
Is there nothing that can be done about this in the interim?
I'm using Atom too, with its go-langserver integration. Memory usage is close to 18GB as I type this. I cannot capture the memory/CPU profiles because the pprof port is not open.
BTW, this was triggered when I renamed a function, which produced many compilation errors. After 4 or 5 minutes, memory went back down to 1GB, but my computer was quite slow in the meantime.