microsoft / azure-devops-intellij

IntelliJ IDEA plug-in for Visual Studio Team Services and Team Foundation Server (TFS)
MIT License
152 stars 94 forks

Very high memory consumption by Reactive client #355

Closed streamstr closed 4 years ago

streamstr commented 4 years ago

Rider version: 2020.1.2 Plugin version: 1.159.0

The Java process that hosts the Reactive client backend uses 3+ GB of RAM (screenshot attached).

Here are the details of the process (screenshot attached).

Memory info (screenshot attached).

The Reactive client is enabled in the plugin settings (screenshot attached).

Restarting Rider does nothing to the memory consumption: as soon as Rider starts, this process is spawned and already uses the same amount of memory. Also, enabling the reactive client does not speed up any VCS operations compared to the classic client or Visual Studio. Operations like get latest version, add files, or check-in are VERY slow.

ForNeVeR commented 4 years ago

This is very interesting. Could you please send your logs (Help → Collect Logs) to [REDACTED]@jetbrains.com? They could shed some light on what's happening.

It may be a very big TFS repository and/or too many requests from the IDE, but it may also just be a programming mistake.

Also, from the logs I'll be able to see which operations were used from the reactive client.

It definitely doesn't implement the "Add" operation (yet!), but it should improve some cases of "get latest version". Could you please clarify what you mean by "get latest version"?

streamstr commented 4 years ago

I've sent the logs to the email you provided. Hope they help. About "get latest version": in terms of the plugin, it is the "Update Project" command, which gets the latest version of the code from the server into the local workspace.

streamstr commented 4 years ago

@ForNeVeR, hello, any news on this? This issue is very important to me.

ForNeVeR commented 4 years ago

Okay, I've taken a look at the logs and the available information, and it is not yet clear what's happening here. The reactive client is being used as it should, and it works pretty fast.

Currently, I have a couple of hypotheses on what's happening:

  1. Your repository may be too big, so the client really does allocate a lot of memory
  2. You just have a huge amount of memory on your machine, and the JVM starts using a lot of it (since we haven't defined any limits in backend.cmd)

To validate these hypotheses, please do the following:

  1. Find the jmap command-line tool (it usually comes with the JDK) and execute jmap -dump:format=b,file=heap.bin <pid>, where <pid> is the process id of the Java process in question
  2. Please tell us how much RAM your computer has. By default, Java may use up to 1/4 of the total RAM for -Xmx
    • Try defining a global environment variable BACKEND_OPTS="-Xmx1024m", restart your machine, and then open the repository in the IDE: the reactive client will now be limited to 1 GiB of memory. (BACKEND_OPTS is passed down to the started JVM and will only be used by the reactive client.)
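
The heap-limit part of step 2 can be verified with a tiny standalone program (a sketch of my own, not part of the plugin; the class name HeapLimit is hypothetical). It prints the maximum heap the running JVM will use, so launching it with and without -Xmx1024m shows whether the flag takes effect:

```java
// Sketch: print the effective max-heap limit of the current JVM.
// Run as `java HeapLimit` and as `java -Xmx1024m HeapLimit` to compare;
// with the explicit flag the reported value stays close to 1024 MiB.
public class HeapLimit {
    public static void main(String[] args) {
        long maxMiB = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Effective max heap: " + maxMiB + " MiB");
    }
}
```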

If the latter works, then we'll add a setting to limit the backend memory consumption, and the problem will disappear.

streamstr commented 4 years ago

@ForNeVeR, thanks. I have 32 GB of RAM onboard. I tried your second suggestion, and at first sight it helped: the working set of the process is now about 800 MB and not growing. I performed Refresh VCS history and Refresh local changes several times, and after that the process consumes about 920 MB. And when I do nothing VCS-related for several minutes, the working set drops to about 800 MB or even lower.

This is very good news for me, but I want to monitor memory consumption and VCS action behavior for 1-2 work days (when VCS operations occur at a realistic volume and frequency) to see whether there will be any problems. Please do not close this issue until I report that everything is OK. Also, could this memory restriction potentially lead to OutOfMemory errors?

ForNeVeR commented 4 years ago

could this memory restriction potentially lead to OutOfMemory errors?

Yes, it could, but it could have anyway, because this is how the JVM works (even if there's no explicitly defined limit, it has to choose something as the default). After a brief investigation, I believe that 2048 MiB is a reasonable default for a TFS client, since this is what the TF Everywhere command-line client uses.
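
The "JVM picks a default anyway" point can be observed directly. Here is a hedged sketch (the class name DefaultHeapProbe is my own) that compares physical RAM with the heap ceiling the JVM chose; without an explicit -Xmx, JVM ergonomics usually land near 1/4 of RAM, which would explain multi-gigabyte consumption on a 32 GB machine:

```java
import java.lang.management.ManagementFactory;

// Sketch: compare total physical RAM with the JVM's chosen heap ceiling.
// Note: getTotalPhysicalMemorySize() is deprecated in newer JDKs in favor
// of getTotalMemorySize(), but it still works on JDK 8 through current.
public class DefaultHeapProbe {
    public static void main(String[] args) {
        com.sun.management.OperatingSystemMXBean os =
            (com.sun.management.OperatingSystemMXBean)
                ManagementFactory.getOperatingSystemMXBean();
        long ramMiB = os.getTotalPhysicalMemorySize() / (1024 * 1024);
        long heapMiB = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Physical RAM: " + ramMiB + " MiB");
        System.out.println("JVM max heap: " + heapMiB + " MiB");
    }
}
```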

Also, I have no plans to close the issue until we implement a corresponding setting and impose a default limit.

streamstr commented 4 years ago

OK, thank you. Also, I've noticed that the Refresh VCS history operation now performs a bit faster (still relatively slow, but faster than before setting the memory limit).

Besides, do you happen to know when a new release of the plugin is planned (approximately)? I'm really looking forward to the new version (server workspace support and improved add-files performance are critical for my everyday work).

ForNeVeR commented 4 years ago

All the changes planned for this release have been merged, so we're now working on quality assurance. I'd say the release will probably be published next week, if no critical issues are found.