scalameta / metals

Scala language server with rich IDE features 🚀
https://scalameta.org/metals/
Apache License 2.0

Scala Metals for Emacs: Memory consumption #909

Closed nloyola closed 4 years ago

nloyola commented 5 years ago

Describe the bug: metals-emacs restarts multiple times without closing the previous session. All the memory on my computer gets used up.

I've installed Bloop and I am running a bloop server. metals-emacs is connecting to my bloop session.

Here is what my *lsp-log* shows:

running installed 'bloop bsp --protocol local --socket /tmp/bsp4720396830048745692/-wjtf7ve9kpce.socket'
Command "metals-emacs" is present on the path.
Found the following clients for /home/nelson/src/cbsr/scala/bbweb/test/org/biobank/matchers/EntityMatchers.scala: (server-id metals, priority -1)
The following clients were selected based on priority: (server-id metals, priority -1)
Command "metals-emacs" is present on the path.
Found the following clients for /home/nelson/src/cbsr/scala/bbweb/test/org/biobank/domain/participants/SpecimenSpec.scala: (server-id metals, priority -1)
The following clients were selected based on priority: (server-id metals, priority -1)
Command "metals-emacs" is present on the path.
Found the following clients for /home/nelson/src/cbsr/scala/bbweb/test/org/biobank/domain/containers/ContainerSchemaSpec.scala: (server-id metals, priority -1)
The following clients were selected based on priority: (server-id metals, priority -1)
Command "metals-emacs" is present on the path.
Found the following clients for /home/nelson/src/cbsr/scala/bbweb/test/org/biobank/domain/containers/ContainerSpec.scala: (server-id metals, priority -1)
The following clients were selected based on priority: (server-id metals, priority -1)
Command "metals-emacs" is present on the path.
Found the following clients for /home/nelson/src/cbsr/scala/bbweb/test/org/biobank/domain/containers/ContainerSchemaPositionSpec.scala: (server-id metals, priority -1)
The following clients were selected based on priority: (server-id metals, priority -1)
Command "metals-emacs" is present on the path.

Also, there are over 20 processes running metals-emacs on my computer.

To Reproduce: Use Emacs normally to develop my Scala application.

Expected behavior: The previous session should be terminated before a new session is opened.

Installation:

Search terms: scala metals emacs memory consumption

tgodzik commented 5 years ago

Thanks for reporting! Any chance you can check which processes are taking up the memory? For example with ps aux | grep metals?

nloyola commented 5 years ago

Here is the info on the processes:

> ps aux | grep metals
nelson    6305  0.0  0.0  11848   920 pts/2    S+   15:38   0:00 grep --color=auto --exclude-dir=.bzr --exclude-dir=CVS --exclude-dir=.git --exclude-dir=.hg --exclude-dir=.svn metals
nelson   29157  3.6 10.3 10046576 1696296 ?    Ssl  14:12   3:08 java -Xss4m -Xms100m -Dmetals.client=emacs -jar /usr/local/bin/metals-emacs

When I said I saw over 20 processes running metals-emacs, that was what I saw in htop.

avdv commented 5 years ago

> When I said I saw over 20 processes running metals-emacs, that was what I saw in htop.

Those were probably just threads. htop > Setup > Display options > Hide userland process threads.
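
For reference, one way to tell real processes apart from their threads on the command line (a rough sketch using standard procps tools; adjust the pattern if your launcher has a different name):

# count actual metals-emacs processes: normally there should be just one per workspace
pgrep -fc metals-emacs

# count threads: -L prints one line per thread (LWP) of every matching process
ps -eLf | grep [m]etals-emacs | wc -l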

Maybe you also want to upgrade to 0.7.5?

nloyola commented 5 years ago

Thanks for the suggestions @avdv.

I just upgraded to 0.7.5+2-f4421785-SNAPSHOT and I'm seeing the same.

When I quit Emacs, all the memory is released.

tgodzik commented 5 years ago

> Thanks for the suggestions @avdv.
>
> I just upgraded to 0.7.5+2-f4421785-SNAPSHOT and I'm seeing the same.
>
> When I quit Emacs, all the memory is released.

Metals itself seems to be using around 10% of memory, which is a fair amount, but that can be normal for larger projects. It doesn't seem to be using all of the memory, though. Maybe you can check what is using the most memory with:

ps aux --sort=-pmem
nloyola commented 5 years ago

With version 0.7.5+2-f4421785-SNAPSHOT the memory consumption does not increase as fast, but it still gets very high. Previously it would get very high in about 30 minutes; now it takes over 3 hours.

Here is the output of ps aux --sort=-pmem:

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
nelson    9237  0.5 21.3 10601456 3487380 pts/1 Sl+ Sep12   9:14 java -jar /home/nelson/.bloop/blp-server
nelson    8551  0.6 14.7 10179368 2418356 ?    Ssl  Sep12  10:31 java -Xss4m -Xms100m -Dmetals.client=emacs -jar /usr/local/bin/metals-emacs
tgodzik commented 4 years ago

> With version 0.7.5+2-f4421785-SNAPSHOT the memory consumption does not increase as fast, but it still gets very high. Previously it would get very high in about 30 minutes; now it takes over 3 hours.
>
> Here is the output of ps aux --sort=-pmem:
>
> USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
> nelson    9237  0.5 21.3 10601456 3487380 pts/1 Sl+ Sep12   9:14 java -jar /home/nelson/.bloop/blp-server
> nelson    8551  0.6 14.7 10179368 2418356 ?    Ssl  Sep12  10:31 java -Xss4m -Xms100m -Dmetals.client=emacs -jar /usr/local/bin/metals-emacs

How high can the usage get? Is 21% and 14% the highest it gets, or does it get worse than that? Do you have a project where we can reproduce the issue? Running jmap -heap PID on both of the highest processes could also be helpful here.
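
For example, using the PIDs from your earlier ps output (a sketch; jmap -heap is available with a JDK 8 installation):

# heap configuration and current usage of the Bloop server and the Metals server
jmap -heap 9237
jmap -heap 8551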

It's similar on my machine:

tgodzik   4180  4.6 13.0 12283480 2093040 ?    Sl   14:48   7:05 java -jar /home/tgodzik/.bloop/blp-server
tgodzik   7385  3.3 11.6 11997252 1874344 tty2 Sl+  14:53   4:49 /usr/lib/jvm/java-8-oracle/bin/java -Dmetals.input-box=on -Dm

and I am afraid memory usage might just currently be bad for some projects.

PS: Sorry for not replying last week, I was a bit busy with some new features.

nloyola commented 4 years ago

It took a while for the memory usage to go up now that I'm using version 0.7.5+21-bfe3ff5b-SNAPSHOT.

This is the output of ps aux --sort=-pmem again:

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
nelson   24530  0.6 21.7 10684428 3561772 pts/3 Sl+ Sep23   8:01 java -jar /home/nelson/.bloop/blp-server
nelson   22696  0.9 17.0 10318376 2785544 ?    Ssl  Sep23  11:10 java -Xss4m -Xms100m -Dmetals.client=emacs -jar /usr/local/bin/metals-emacs

Here is the output of jmap for the bloop server:

Attaching to process ID 24530, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 25.221-b11

using thread-local object allocation.
Parallel GC with 8 thread(s)

Heap Configuration:
   MinHeapFreeRatio         = 0
   MaxHeapFreeRatio         = 100
   MaxHeapSize              = 4185915392 (3992.0MB)
   NewSize                  = 87031808 (83.0MB)
   MaxNewSize               = 1395130368 (1330.5MB)
   OldSize                  = 175112192 (167.0MB)
   NewRatio                 = 2
   SurvivorRatio            = 8
   MetaspaceSize            = 21807104 (20.796875MB)
   CompressedClassSpaceSize = 1073741824 (1024.0MB)
   MaxMetaspaceSize         = 17592186044415 MB
   G1HeapRegionSize         = 0 (0.0MB)

Heap Usage:
PS Young Generation
Eden Space:
   capacity = 714604544 (681.5MB)
   used     = 356811760 (340.28221130371094MB)
   free     = 357792784 (341.21778869628906MB)
   49.931358958725006% used
From Space:
   capacity = 236453888 (225.5MB)
   used     = 203806832 (194.36534118652344MB)
   free     = 32647056 (31.134658813476562MB)
   86.19305595854698% used
To Space:
   capacity = 251133952 (239.5MB)
   used     = 0 (0.0MB)
   free     = 251133952 (239.5MB)
   0.0% used
PS Old Generation
   capacity = 1053818880 (1005.0MB)
   used     = 404093008 (385.3731231689453MB)
   free     = 649725872 (619.6268768310547MB)
   38.34558439491993% used

17877 interned Strings occupying 2174496 bytes.

Here is the output of jmap for the metals-emacs:

Attaching to process ID 22696, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 25.221-b11

using thread-local object allocation.
Parallel GC with 8 thread(s)

Heap Configuration:
   MinHeapFreeRatio         = 0
   MaxHeapFreeRatio         = 100
   MaxHeapSize              = 4185915392 (3992.0MB)
   NewSize                  = 34603008 (33.0MB)
   MaxNewSize               = 1395130368 (1330.5MB)
   OldSize                  = 70254592 (67.0MB)
   NewRatio                 = 2
   SurvivorRatio            = 8
   MetaspaceSize            = 21807104 (20.796875MB)
   CompressedClassSpaceSize = 1073741824 (1024.0MB)
   MaxMetaspaceSize         = 17592186044415 MB
   G1HeapRegionSize         = 0 (0.0MB)

Heap Usage:
PS Young Generation
Eden Space:
   capacity = 442499072 (422.0MB)
   used     = 14033424 (13.383316040039062MB)
   free     = 428465648 (408.61668395996094MB)
   3.1714019052225266% used
From Space:
   capacity = 276824064 (264.0MB)
   used     = 17866672 (17.038986206054688MB)
   free     = 258957392 (246.9610137939453MB)
   6.454161441687382% used
To Space:
   capacity = 299368448 (285.5MB)
   used     = 0 (0.0MB)
   free     = 299368448 (285.5MB)
   0.0% used
PS Old Generation
   capacity = 1115684864 (1064.0MB)
   used     = 757000520 (721.9319534301758MB)
   free     = 358684344 (342.0680465698242MB)
   67.85074750283607% used

19249 interned Strings occupying 2201504 bytes.
olafurpg commented 4 years ago

@nloyola thank you for the information. There should only be one instance of Metals running per-workspace. Judging by the logs in your original description, it seems like lsp-mode is starting a new Metals server for each Scala source file. Could that be true? cc/ @JesusMtnez

To get a better understanding of the memory usage, could you maybe share some screenshots from jvisualvm? For example, one like this:

Screenshot 2019-09-24 at 10 17 42

And another one like this:

Screenshot 2019-09-24 at 10 19 20

Judging by the second screenshot, the presentation compiler seems to be holding onto a large number of symbols that we might be able to recover somehow 🤔
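
If a live jvisualvm session is awkward to capture, a heap dump can also be taken and opened in jvisualvm afterwards. A sketch, using the metals-emacs PID from the earlier ps output:

# dump only live (reachable) objects to a file for offline analysis
jmap -dump:live,format=b,file=metals-heap.hprof 22696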

tgodzik commented 4 years ago

> @nloyola thank you for the information. There should only be one instance of Metals running per-workspace. Judging by the logs in your original description, it seems like lsp-mode is starting a new Metals server for each Scala source file. Could that be true?

It does seem that only one is running though, so I don't think it's the issue.

nloyola commented 4 years ago

Here are the screenshots from jvisualvm:

Screenshot from 2019-09-24 13-51-47

Screenshot from 2019-09-24 13-51-53

In this particular case my memory usage was at 4 GB before I started Emacs. After about 2 hours of working on my project in Emacs the usage was at 9.6 GB.

avdv commented 4 years ago

@nloyola I am also an Emacs user, and really don't see anything unusual here.

By default the JVM uses at most 1/4 of RAM for the heap. Your heap is 1.5GB in size, of which around 600MB are in use on average. I think that's pretty decent. (It would be interesting to know how much RSS ps shows for this process.)
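
One way to see the maximum heap the JVM picks by default on a given machine (a sketch; the flag dump is HotSpot-specific and prints the value in bytes):

# print the ergonomically chosen MaxHeapSize without starting an application
java -XX:+PrintFlagsFinal -version | grep -i maxheapsize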

GC is happening only every 10 minutes or so, because the JVM has plenty of free heap available and there's no (urgent) need to clean up.

Also, the RSS of your other process was roughly 2.7GB. There must be other processes on your machine taking up your memory, if you say that 9.6 GB are used.

If you want to keep the JVM from using up to 4GB of heap, add an -Xmx????m option to the coursier command when you build metals-emacs.
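
For example, something along these lines, based on the standard Metals Emacs install command from the docs (the 2g value and the exact version are placeholders to adjust):

coursier bootstrap \
  --java-opt -Xss4m \
  --java-opt -Xms100m \
  --java-opt -Xmx2g \
  --java-opt -Dmetals.client=emacs \
  org.scalameta:metals_2.12:0.7.6 \
  -r bintray:scalacenter/releases \
  -r sonatype:snapshots \
  -o /usr/local/bin/metals-emacs -f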

nloyola commented 4 years ago

This is the project I'm working on: https://github.com/nloyola/bbweb.

I'm now using version 0.7.6+4-82acc8bf-SNAPSHOT and unfortunately still see memory usage going up.

olafurpg commented 4 years ago

I'm unable to reproduce the memory usage problems. The screenshot shows a memory usage of ~500MB for the Metals process, which is high but not abnormal. It's normal that the blue region slowly grows while the garbage collector isn't running.

You can configure -Xmx to reduce the maximum heap size (the yellow region). Additionally, you can try the JVM options -XX:+UseG1GC -XX:+UseStringDeduplication, which instruct the JVM to deduplicate identical strings and may help reduce the memory usage.
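
Those options can be added to the same coursier bootstrap command used to build the launcher (again just a sketch; string deduplication requires G1 and a Java 8u20+ JVM):

coursier bootstrap \
  --java-opt -XX:+UseG1GC \
  --java-opt -XX:+UseStringDeduplication \
  --java-opt -Xss4m \
  --java-opt -Xms100m \
  --java-opt -Dmetals.client=emacs \
  org.scalameta:metals_2.12:0.7.6 \
  -r bintray:scalacenter/releases \
  -r sonatype:snapshots \
  -o /usr/local/bin/metals-emacs -f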