fbricon opened 10 months ago
thanks @fbricon !
Seems reasonable. If JDT-LS could look like less of a memory hog, that'd be nice. After the initial project import, I would guess a lot of the heap space could be reclaimed.
While on the topic of trimming the heap, there's another set of flags (specific to JDT Core) that I tried out a while ago in https://github.com/eclipse-jdt/eclipse.jdt.core/issues/1526#issuecomment-1788150377 that seemed to have promising results. (update: ugh, but on the downside, they may slow down project import.. needs to be tried)
We can put it in the insiders first and then see if it affects the performance of project import and features such as code completion.
@rgrunber beware that the 2 features act on different fronts:

- `TrimNativeHeapInterval`: the `Arena`s used by the JIT free memory every 5 seconds, hardcoded (see https://github.com/openjdk/jdk/blob/jdk-21%2B35/src/hotspot/share/memory/arena.cpp#L119), and until that memory is NOT yet freed, autotrim doesn't have much to trim. Maybe I can speak with the JDK folks to make it tunable as well...
- G1 periodic GC: under load (e.g. project import), the heap grows from `xms` toward `xmx`. The same happens if there's some load for some time... Tuning G1 periodic GC will allow, if "idle" (to be defined/configured) for N seconds, to return back to the minimum heap occupation (down to `xms`) that will fit the amount of live data (which should be very little, at idle). This will make G1 able to return it back to the OS (lowering RSS).

I ran vscode-java with `-XX:NativeMemoryTracking=summary` & then experimented with/without `-XX:+UnlockExperimentalVMOptions -XX:TrimNativeHeapInterval=5000`. I just did an import of https://github.com/eclipse-jdtls/eclipse.jdt.ls/ (with vscode-pde installed). I didn't see any noticeable differences under any of the headings for `jcmd ${PID} VM.native_memory`.
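One way to try this in the insiders build would be for users to pass the flags through the existing `java.jdt.ls.vmargs` setting; a sketch of a `settings.json` entry (the 5000 ms interval is just the value from the experiment above, not a recommendation):

```jsonc
{
  // experimental: ask glibc to return freed native memory to the OS every 5s
  "java.jdt.ls.vmargs": "-XX:+UnlockExperimentalVMOptions -XX:TrimNativeHeapInterval=5000"
}
```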
We do use the parallel GC by default though. See below for the default options we set.
https://github.com/redhat-developer/vscode-java/blob/86bf3ae02f4f457184e6cc217f20240f9882dde9/package.json#L253 . These options were added as part of https://github.com/redhat-developer/vscode-java/pull/1262 . I'd be curious to hear your thoughts on something like `-XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10`, as those were 2 options we didn't adopt.
We could potentially detect if a user has set their own options that include the G1 GC, and if so, add our option dynamically prior to starting. We do add some options dynamically in https://github.com/redhat-developer/vscode-java/blob/86bf3ae02f4f457184e6cc217f20240f9882dde9/src/javaServerStarter.ts#L86 .
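A minimal sketch of that dynamic detection, assuming we only key off an explicit `-XX:+UseG1GC` in the user's vmargs (the helper name and the 60s interval are made up for illustration; this is not the extension's actual code):

```typescript
// Illustrative only: append a G1 periodic-GC flag when the user's own
// vmargs opt into the G1 collector; otherwise leave the args untouched.
function withPeriodicGcFlag(userVmargs: string[]): string[] {
  if (!userVmargs.includes("-XX:+UseG1GC")) {
    return userVmargs; // our default (parallel GC) stays as-is
  }
  const flag = "-XX:G1PeriodicGCInterval=60000"; // try a shrink when idle for 60s
  // don't duplicate the flag if the user already tuned it themselves
  const prefix = flag.split("=")[0];
  return userVmargs.some((a) => a.startsWith(prefix))
    ? userVmargs
    : [...userVmargs, flag];
}

console.log(withPeriodicGcFlag(["-Xmx1G", "-XX:+UseG1GC"]));
```

The same shape would extend to the `Min`/`MaxHeapFreeRatio` pair, with whatever values we settle on after trying them in insiders.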
> I didn't see any noticeable differences under any of the headings for `jcmd ${PID} VM.native_memory`.
In order to benefit from this one specifically, you should start an application which does something "interesting enough" at startup (e.g. Quarkus/Spring), triggering some native compilation. Later on, after >5 seconds, the JIT arenas can free some memory back to the OS; the trims run on a 5-second interval (interleaved or not, hence waiting >10 seconds is the safer choice), after which you can measure the process RSS as described in https://quarkus.io/guides/performance-measure#platform-specific-memory-reporting
Sadly, native memory tracking isn't currently able to report the actual RSS footprint, but just what the OS allocator believes to be its live data (malloc'd, not yet freed), not the actual footprint underlying it. @tstuefe can confirm.
Worse, NMT only tells you how much the hotspot has allocated currently, leaving out native memory consumption from the rest of the process.
Before the very first trim, you can find out how much memory is retained by the glibc by doing:

```
thomas@starfish$ jcmd xxxx VM.info | grep retained
C-Heap outstanding allocations: 46526K, retained: 2104565K (may have wrapped)
```
This info comes from the glibc itself.
You can also find out RSS via jcmd:

```
thomas@starfish$ jcmd xxx VM.info | grep -i resident
Resident Set Size: 2681496K (peak: 2683284K) (anon: 2658664K, file: 22832K, shmem: 0K)
```
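For tooling on top of this, the RSS line above is easy to parse; a small sketch (the regex assumes exactly the `Resident Set Size: <n>K` format shown in the sample output, which may differ across JDK builds):

```typescript
// Pull the resident set size (in KB) out of `jcmd <pid> VM.info` output.
function parseRssKb(vmInfo: string): number | undefined {
  const m = vmInfo.match(/Resident Set Size:\s*(\d+)K/);
  return m ? Number(m[1]) : undefined;
}

const sample =
  "Resident Set Size: 2681496K (peak: 2683284K) (anon: 2658664K, file: 22832K, shmem: 0K)";
console.log(parseRssKb(sample)); // → 2681496
```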
Many thanks @tstuefe! I believe the option to get RSS out of `jcmd` would be a great addition, because it works the same regardless of whether you're in a container or not!
And adding https://github.com/quarkusio/quarkus/discussions/36691#discussioncomment-7752514 as a positive data point of using trimming, coming from a user.
@maxandersen and @franz1981 taught me Java 17 has support for the JVM reclaiming memory back to the OS:
OpenJDK supports `-XX:+IgnoreUnrecognizedVMOptions`, in case users run an older JDK 17 that doesn't support the new flags, and OpenJ9 simply ignores unknown flags, so it's probably safe to try it out on insider builds.

@rgrunber @testforstephen @jdneo WDYT?