kylechadha opened this issue 1 year ago
Hi @kylechadha, thanks for reporting. I've been using Metals on macOS and I've never encountered such an issue. Does it always happen when you follow the reproduction steps? I can think of one thing currently: maybe your Mac is killing some background processes before going to sleep?
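One way to check that guess on macOS is to look at the power-management log around the time the disconnect happened. A rough sketch (`pmset` ships with macOS; the grep patterns are approximate and may vary by OS version):

```shell
# macOS only: list recent sleep/wake events to correlate with the time
# Metals lost its build-server connection. Falls back to a notice on
# other platforms.
if command -v pmset >/dev/null 2>&1; then
  pmset -g log | grep -E 'Entering Sleep|Wake from' | tail -n 10
else
  echo "pmset not available (macOS only)"
fi
```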
Could you:

1. Run the `Doctor` command and paste that piece of information here?
2. Run `killAll java`, thanks to that all unwanted processes will be killed
3. Run `jps` to get all java processes
4. Run `jps` again to check and compare now vs in the past

Hi @kpodsiad, thanks for taking a look! Glad to hear it's not expected behavior.
Here are the results of Metals Doctor:
Metals Doctor
Build server currently being used is Bloop v1.5.4.
Metals Java: 1.8.0_265 from Azul Systems, Inc. located at /Library/Java/JavaVirtualMachines/zulu-8.jdk/Contents/Home/jre
Metals Server version: 0.11.9
Performed all the steps in step 2 and see the same results from `jps`:
➜ home jps
12346 Server
13084 Jps
12221 Main
➜ home jps
13136 Jps
12346 Server
12221 Main
However I noticed in the output tab that the workspace was re-compiled when I came back from sleep. There was a short period of time where gotodef wasn't working, but it's back now, so I guess it doesn't always happen deterministically after sleeping 🤔 .
So perhaps it's some combination of sleep + time passing that results in the process getting killed?
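To make the before/after `jps` comparison above less manual, here is a sketch that snapshots the JVM process names around a sleep cycle. It assumes `jps` (shipped with the JDK) is on `PATH`, with a rough `ps`-based fallback; the `/tmp` paths are arbitrary:

```shell
# Snapshot the JVM process names reported by jps before and after a sleep
# cycle, then list the ones that disappeared.
snapshot_jvms() {
  if command -v jps >/dev/null 2>&1; then
    jps | awk '{print $2}' | sort -u
  else
    # Rough fallback when jps is not installed: match java-ish processes.
    ps -eo comm= | grep -i java | sort -u
  fi
}

snapshot_jvms > /tmp/jvms-before.txt
# ...put the machine to sleep, wake it, then take the second snapshot:
snapshot_jvms > /tmp/jvms-after.txt

# Names present before but missing after are JVMs that died while asleep.
comm -23 /tmp/jvms-before.txt /tmp/jvms-after.txt
```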
The thing about the process being killed was just a guess, because I can't see any other reason for disconnecting. @tgodzik correct me if I'm wrong, but Metals doesn't disconnect from the build server without a reason (like an explicit command from the user)
It doesn't disconnect and should try to connect automatically if there is an exception. If you try manually running clean compile, does anything happen in the logs? I remember there was an issue with some bad state that Metals had, which was preventing things from getting compiled :thinking:
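For checking the logs mentioned here: the Metals server log lives under the workspace root at `.metals/metals.log`. A sketch for inspecting it after triggering a clean compile (the grep patterns are just examples):

```shell
# Run from the workspace root after reproducing the issue in VS Code
# (e.g. via "Metals: Run clean compile"), then look for disconnect or
# OOM traces in the server log.
LOG=.metals/metals.log
if [ -f "$LOG" ]; then
  grep -nE 'OutOfMemoryError|Connection is disposed|Exception' "$LOG" | tail -n 20
else
  echo "no $LOG here; open the workspace in VS Code first"
fi
```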
I have the same issue with macOS 14 and 15 with M2 Pro and M3 Max.
I've seen the same in the last weeks on linux/x64. I'll try to check whether I can find anything in the logs next time.
This is the output:
Error: Connection is disposed.
    at throwIfClosedOrDisposed (/Users/username/.vscode/extensions/scalameta.metals-1.39.0/node_modules/vscode-jsonrpc/lib/common/connection.js:832:19)
    at Object.sendRequest (/Users/username/.vscode/extensions/scalameta.metals-1.39.0/node_modules/vscode-jsonrpc/lib/common/connection.js:1000:13)
    at LanguageClient.sendRequest (/Users/username/.vscode/extensions/scalameta.metals-1.39.0/node_modules/vscode-languageclient/lib/common/client.js:331:27)
    at async _provideDocumentSymbols (/Users/username/.vscode/extensions/scalameta.metals-1.39.0/node_modules/vscode-languageclient/lib/common/documentSymbol.js:73:38)
    at async h.provideDocumentSymbols (/Applications/Visual Studio Code.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:161:103267)
I've also started to see Metals disconnecting. I think I found yet another issue, this one was with viewing bulk edit changes, but as you can see there is also OOM error.
org.eclipse.lsp4j.jsonrpc.RemoteEndpoint handleNotification
WARNING: Notification threw an exception: {
"jsonrpc": "2.0",
"method": "metals/didFocusTextDocument",
"params": [
"vscode-bulkeditpreview-editor://ddabbe04-0dd0-472e-a541-01120363d78a/path/to/my/scala/file.scala?file%3A%2F%2F%2Fpath%2Fto%2Fmy%2Fscala%2Ffile.scala"
]
}
java.nio.file.FileSystemNotFoundException: Provider "vscode-bulkeditpreview-editor" not installed
at java.base/java.nio.file.Path.of(Path.java:213)
at java.base/java.nio.file.Paths.get(Paths.java:98)
at scala.meta.internal.mtags.MtagsEnrichments$XtensionURIMtags.toAbsolutePath(MtagsEnrichments.scala:130)
at scala.meta.internal.mtags.MtagsEnrichments$XtensionStringMtags.toAbsolutePath(MtagsEnrichments.scala:187)
at scala.meta.internal.metals.MetalsEnrichments$XtensionString.toAbsolutePath(MetalsEnrichments.scala:773)
at scala.meta.internal.metals.MetalsEnrichments$XtensionString.toAbsolutePath(MetalsEnrichments.scala:770)
at scala.meta.internal.metals.WorkspaceLspService.didFocus(WorkspaceLspService.scala:698)
at scala.meta.metals.lsp.DelegatingScalaService.didFocus(DelegatingScalaService.scala:43)
at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
at java.base/java.lang.reflect.Method.invoke(Method.java:580)
at org.eclipse.lsp4j.jsonrpc.services.GenericEndpoint.lambda$recursiveFindRpcMethods$0(GenericEndpoint.java:65)
at org.eclipse.lsp4j.jsonrpc.services.GenericEndpoint.notify(GenericEndpoint.java:160)
at org.eclipse.lsp4j.jsonrpc.RemoteEndpoint.handleNotification(RemoteEndpoint.java:231)
at org.eclipse.lsp4j.jsonrpc.RemoteEndpoint.consume(RemoteEndpoint.java:198)
at org.eclipse.lsp4j.jsonrpc.json.StreamMessageProducer.handleMessage(StreamMessageProducer.java:185)
at org.eclipse.lsp4j.jsonrpc.json.StreamMessageProducer.listen(StreamMessageProducer.java:97)
at org.eclipse.lsp4j.jsonrpc.json.ConcurrentMessageProcessor.run(ConcurrentMessageProcessor.java:114)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
at java.base/java.lang.Thread.run(Thread.java:1583)
Exception in thread "Thread-49" java.lang.OutOfMemoryError: Java heap space
at java.base/jdk.internal.misc.Unsafe.allocateInstance(Native Method)
at java.base/java.lang.invoke.DirectMethodHandle.allocateInstance(DirectMethodHandle.java:501)
at java.base/java.lang.invoke.DirectMethodHandle$Holder.newInvokeSpecial(DirectMethodHandle$Holder)
at java.base/java.lang.invoke.Invokers$Holder.linkToTargetMethod(Invokers$Holder)
at snailgun.protocol.Protocol.$anonfun$createHeartbeatAndShutdownThread$1(Protocol.scala:219)
at snailgun.protocol.Protocol$$Lambda/0x0000008001642960.apply$mcV$sp(Unknown Source)
at snailgun.protocol.Protocol$$anon$1.run(Protocol.scala:302)
Exception in thread "pool-1-thread-207" java.lang.OutOfMemoryError: Java heap space
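The first trace above (before the OOM) is reproducible in isolation: `java.nio.file.Path.of(URI)` only resolves URIs whose scheme has an installed `FileSystemProvider`, so VS Code's custom `vscode-bulkeditpreview-editor` scheme fails before Metals can handle the notification. A minimal sketch, with a made-up URI of the same shape as the one in the log:

```java
import java.net.URI;
import java.nio.file.FileSystemNotFoundException;
import java.nio.file.Path;

public class SchemeDemo {
    public static void main(String[] args) {
        // The bulk-edit preview URI uses a scheme with no registered
        // java.nio.file.spi.FileSystemProvider, so Path.of throws the
        // same FileSystemNotFoundException seen in the Metals trace.
        URI preview = URI.create("vscode-bulkeditpreview-editor://some-id/file.scala");
        try {
            Path p = Path.of(preview);
            System.out.println("resolved: " + p);
        } catch (FileSystemNotFoundException e) {
            System.out.println("not installed: " + preview.getScheme());
        }
    }
}
```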
I'm surprised the developers have not paid more attention to this issue. It's been open since 2022 and renders Metals basically unusable.
Getting an OOM exception might come from a lot of different places, so this is really a catch-all issue. I will take a look at the possible problems here, but you can also try to increase Xmx in `metals.serverProperties` or even use a different GC that can return memory to the system.
If possible, I think you can also set a parameter to dump the heap on OOM, which might be useful to identify your issues in particular
@tgodzik I will increase xmx and see if this will improve things. Where can I find this file? But to be honest, the reference development environment for Scala should work out of the box.
As it's an open-source project, we always accept contributions that can help us achieve that ideal goal.
@tgodzik Where can I find this setting to increase the memory?
That's under `metals.serverProperties`
Where can I find this property? Is there a config file?
It's in the usual VS Code settings, where most other extensions and VS Code itself also have their settings.
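For anyone else looking: the property goes into VS Code's `settings.json` (User or Workspace). The values below are only illustrative, not recommendations; `-XX:+HeapDumpOnOutOfMemoryError` is the standard HotSpot flag for the heap-dump-on-OOM idea mentioned above, and the dump path is an arbitrary example:

```json
{
  "metals.serverProperties": [
    "-Xmx4G",
    "-XX:+HeapDumpOnOutOfMemoryError",
    "-XX:HeapDumpPath=/tmp"
  ]
}
```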
Setting Xmx to 8 GiB did not really improve the situation.
So there is no info about OOM errors? It might be a different cause then
Describe the bug
Each time I come back to my machine after it has gone to sleep, I have to reconnect to the Metals build server with `Metals: Connect to build server`.

To Reproduce
Steps to reproduce the behavior: run `Metals: Connect to build server`, after a minute things start working again

Expected behavior
Metals would remain connected or auto-reconnect to build server

Installation: