eclipse-openj9 / openj9

Eclipse OpenJ9: A Java Virtual Machine for OpenJDK that's optimized for small footprint, fast start-up, and high throughput. Builds on Eclipse OMR (https://github.com/eclipse/omr) and combines with the Extensions for OpenJDK for OpenJ9 repo.

JIT Code Cache committed value too high #5375

Closed PhilWakelin closed 5 years ago

PhilWakelin commented 5 years ago

After testing issue 4674 I have found that the JIT Code Cache committed value reported by the MemoryPool MBean is the maximum limit (i.e. 256M by default) rather than what is actually allocated. This also affects the Java core JIT Code Cache allocation summary data.

With a default 256M JIT Code Cache, the value 268,435,456 bytes is reported in both cases even though only ~12M was actually allocated.
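
For reference, the committed/used values above can be read at runtime with a minimal sketch like the following (an illustration, not OpenJ9 code; the assumption that the pool name contains "code cache" comes from the MXBean data quoted later in this thread):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class CodeCachePool {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // Assumption: the JIT code cache pool name contains "code cache"
            // (it appears as "JIT code cache" in the MXBean data in this issue).
            if (pool.getName().toLowerCase().contains("code cache")) {
                MemoryUsage u = pool.getUsage();
                System.out.printf("%s: init=%dK max=%dK used=%dK committed=%dK%n",
                        pool.getName(), u.getInit() / 1024, u.getMax() / 1024,
                        u.getUsed() / 1024, u.getCommitted() / 1024);
            }
        }
    }
}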

pshipton commented 5 years ago

@LinHu2016 can you please look at this regression with priority. Note the testing was done on z/OS.

pshipton commented 5 years ago

@PhilWakelin I removed mention of FP35 since this refers to an IBM release and doesn't apply in the context of this open source project.

What do you mean by "This also affects the Java core JIT Code Cache allocation summary data"?

The MXBean changes don't affect the javacore, which shouldn't have changed behavior. If it did, there must be something else that changed. The NATIVEMEMINFO has always shown the amount allocated and not the amount in use.

pshipton commented 5 years ago

@joransiu

pshipton commented 5 years ago

Phil shared the following data

new MXBean data: JIT code cache: init=2048K max=262144K used=0K committed=262144K

corresponding javacore data:

3MEMUSER       |  +--JIT Code Cache: 268,435,456 bytes / 128 allocations

1STSEGTOTAL    Total memory:                   268435456 (0x0000000010000000)
1STSEGINUSE    Total memory in use:             14002896 (0x0000000000D5AAD0)
1STSEGFREE     Total memory free:              254432560 (0x000000000F2A5530)
1STSEGLIMIT    Allocation limit:               268435456 (0x0000000010000000)

old MXBean data (from an early Dec build): JIT code cache: init=2048K max=0K used=0K committed=12288K

pshipton commented 5 years ago

@mpirvu did something change related to how the code cache is allocated?

LinHu2016 commented 5 years ago

@pshipton Sure

joransiu commented 5 years ago

On V2R3 with RMODE64, we allocate a 256MB contiguous region for the code cache to avoid the use of trampolines. The question comes down to whether the committed value should reflect the memory in use, since RSM only backs pages with real memory once they are touched.

mpirvu commented 5 years ago

Right. There was no change in how the code cache is allocated. However, in some instances (on V2R3 with RMODE64) z/OS started to make use of the "code cache consolidation" feature, which allocates a 256M "code cache repository" from which individual 2MB code caches are carved.

pshipton commented 5 years ago

I didn't include it above, but I note the code cache is in 31-bit memory according to the segments. Also running on z/OS 02.02.00

1STSEGMENT     0x000000509AFA9B38 0x0000000050500000 0x0000000050521D00 0x0000000050700000 0x00000068 0x0000000000200000
...
1STSEGMENT     0x0000005094AFD678 0x0000000037A00000 0x0000000037A0F338 0x0000000037C00000 0x00000068 0x0000000000200000
PhilWakelin commented 5 years ago

> @PhilWakelin I removed mention of FP35 since this refers to an IBM release and doesn't apply in the context of this open source project.
>
> What do you mean by "This also affects the Java core JIT Code Cache allocation summary data"?
>
> The MXBean changes don't affect the javacore, which shouldn't have changed behavior. If it did, there must be something else that changed. The NATIVEMEMINFO has always shown the amount allocated and not the amount in use.

What I was referring to was the fact that the Java core stated the following:

2MEMUSER       +--JIT: 291,343,423 bytes / 2217 allocations
2MEMUSER       |  |
3MEMUSER       |  +--JIT Code Cache: 268,435,456 bytes / 128 allocations
2MEMUSER       |  |
3MEMUSER       |  +--JIT Data Cache: 8,388,864 bytes / 4 allocations
2MEMUSER       |  |
3MEMUSER       |  +--Other: 14,519,103 bytes / 2085 allocations

This indicates to me that 256M of storage is allocated, whereas previously I would see the cache allocated on demand in 2M chunks. Since this is in 31-bit storage (as we are on V2R2), this is a huge chunk of what is free.

I may be wrong, but it looks like the new V2R3 64-bit behaviour of allocating the full 256M code cache storage area has crept into V2R2.

If you want me to get some SVCDUMPs just let me know.

LinHu2016 commented 5 years ago

for "committed" memory usage of JIT Code Cache and JIT Data Cache in MXBeans reporting, we use segment->size, it has not been changed for a while, in java core it is the same. but for "used" of memoryUsage currently we use seg->heapAlloc - seg->heapBase, in java core it reports differently (due to The JIT code cache grows from both ends of the segment), in javadump.cpp line3475 writeSegments(), Should we change MXBeans code to match the behavior?

mpirvu commented 5 years ago

Looking at the javacore, at least it's a 64-bit JVM, because we don't allow the code cache consolidation feature on 31/32 bit due to address space limitations.

Maybe the code that is supposed to detect whether the repository can be allocated at higher addresses is not working correctly. @fjeremic may know more about the implementation.

fjeremic commented 5 years ago

@PhilWakelin can you confirm you are running this on V2R2, and that this is why you're expecting the code caches to be allocated at 2MB granularity? That is how it's intended to behave, per the code in [1].

[1] https://github.com/eclipse/omr/blob/4a8f75903244b2c3464ff4acc48240b34061f32a/compiler/control/OMROptions.cpp#L3700-L3716

PhilWakelin commented 5 years ago

@fjeremic Yes, and this is the iplinfo


 IEE254I  16.01.46 IPLINFO DISPLAY 743                      
  SYSTEM IPLED AT 06.58.45 ON 03/18/2019                    
  RELEASE z/OS 02.02.00    LICENSE = z/OS                   
  USED LOADP2 IN SYS1.PARMLIB ON 01243                      
  ARCHLVL = 2   MTLSHARE = N                                
  IEASYM LIST = (00,2C)                                     
  IEASYS LIST = 2C (OP)                                     
  IODF DEVICE: ORIGINAL(01243) CURRENT(01243)               
 IPL DEVICE: ORIGINAL(01240) CURRENT(01240) VOLUME(P2SY05)
PhilWakelin commented 5 years ago

This is what a code cache looks like using the prior fix pack of Java8, running on the same machine

NULL           segment            start              alloc              end                type       size
1STSEGMENT     0x00000050CB479D78 0x0000000065800000 0x0000000065832908 0x0000000065A00000 0x00000068 0x0000000000200000
1STSEGMENT     0x00000050CF05C5F8 0x0000000064300000 0x00000000644FAA78 0x0000000064500000 0x00000068 0x0000000000200000
1STSEGMENT     0x00000050CF05C538 0x0000000063F00000 0x00000000640FEEF8 0x0000000064100000 0x00000068 0x0000000000200000
1STSEGMENT     0x00000050CF05C478 0x0000000063B00000 0x0000000063CFEBA8 0x0000000063D00000 0x00000068 0x0000000000200000
1STSEGMENT     0x00000050CBF43FF8 0x0000000063800000 0x00000000639FFFD8 0x0000000063A00000 0x00000068 0x0000000000200000
1STSEGMENT     0x00000050CBF43F38 0x0000000063100000 0x00000000632FFC58 0x0000000063300000 0x00000068 0x0000000000200000
1STSEGMENT     0x00000050CBF43E78 0x0000000062B00000 0x0000000062CFFFC0 0x0000000062D00000 0x00000068 0x0000000000200000
1STSEGMENT     0x00000050C90009F8 0x0000000062500000 0x00000000626FFDC8 0x0000000062700000 0x00000068 0x0000000000200000
1STSEGMENT     0x00000050C9000938 0x0000000061A00000 0x0000000061BFFD08 0x0000000061C00000 0x00000068 0x0000000000200000
1STSEGMENT     0x00000050C9000878 0x0000000060500000 0x00000000606FFC98 0x0000000060700000 0x00000068 0x0000000000200000
NULL
1STSEGTOTAL    Total memory:                    20971520 (0x0000000001400000)
1STSEGINUSE    Total memory in use:             19046776 (0x000000000122A178)
1STSEGFREE     Total memory free:                1924744 (0x00000000001D5E88)
1STSEGLIMIT    Allocation limit:                33554432 (0x0000000002000000)

Note that this example used -Xcodecachetotal32M, but the only difference between this and the default 256M setting is that the Allocation limit is lower.

fjeremic commented 5 years ago

I just checked the master branch built on top of z/OS V2R2 and I see the following by running:

bin/java -Xdump:java:events=vmstop -Xcodecachetotal32M -version
1STSEGTYPE     JIT Code Cache
NULL           segment            start              alloc              end                type       size
1STSEGMENT     0x00000050192F32F8 0x0000000035590000 0x0000000035596790 0x0000000035790000 0x00000068 0x0000000000200000
NULL
1STSEGTOTAL    Total memory:                     2097152 (0x0000000000200000)
1STSEGINUSE    Total memory in use:                26512 (0x0000000000006790)
1STSEGFREE     Total memory free:                2070640 (0x00000000001F9870)
1STSEGLIMIT    Allocation limit:                33554432 (0x0000000002000000)

The detection logic should work as intended on z/OS V2R2. As far as RMODE64 is concerned we seem to be doing exactly what we're supposed to, i.e. if RMODE64 is not supported we revert back to allocating 2MB chunks for each code cache.

It seems in your original javacore you had 128 of these 2MB chunks and we reached the upper limit, yet the amount of JIT code cache actually consumed is very low. I'm not sure how this is possible as I'm not familiar with this area of the JIT. @mpirvu I'm going to forward you the original javacore to take a look at this.

fjeremic commented 5 years ago

Looking at the original javacore, it seems we did not allocate the entire code cache in one go and then slice it up into 2MB chunks; instead we allocated each individual 2MB chunk one at a time. We can see this because the distance between the first and last code cache segment is ~414MB, which is well over our limit of 256MB. You can also see there are "breaks" between some of the code cache segments, i.e. the end address of one segment is not the start address of the next one. This indicates there was other "stuff" allocated in between, which would not be the case if a single large 256MB internal segment had been allocated.

mpirvu commented 5 years ago

Indeed, there are 128 individual code caches that have little code in them. We have a theory about that: to deal with multithreading a JIT compilation thread reserves a code cache before laying down the code. At the end of the compilation the code cache is un-reserved. If that doesn't happen due to a bug, then the code cache will remain reserved and nobody can write to it even though it may have free space.

Setting the env var export TR_PrintCompMem=1 will make the JIT write some diagnostics to stderr at JVM shutdown. Example:

cache 0x7f76f40ba9b0 has 2080096 bytes empty
Code Cache @0x7f76f40ba9b0 flags=0x0 almostFull=0
   cold-warm hole size        =  2080096 bytes
   warmCodeAlloc=0x7f76de202f40 coldCodeAlloc=0x7f76de3feca0
   trampoline free space = 0 (temp=0)

If caches remain in reserved state the printout above will clearly indicate it. @PhilWakelin Phil, assuming the problem is reproducible, could you please run the diagnostic suggested above?

PhilWakelin commented 5 years ago

> Indeed, there are 128 individual code caches that have little code in them. We have a theory about that: to deal with multithreading a JIT compilation thread reserves a code cache before laying down the code. At the end of the compilation the code cache is un-reserved. If that doesn't happen due to a bug, then the code cache will remain reserved and nobody can write to it even though it may have free space.
>
> Setting the env var export TR_PrintCompMem=1 will make the JIT write some diagnostics to stderr at JVM shutdown. Example:
>
> cache 0x7f76f40ba9b0 has 2080096 bytes empty
> Code Cache @0x7f76f40ba9b0 flags=0x0 almostFull=0
>    cold-warm hole size        =  2080096 bytes
>    warmCodeAlloc=0x7f76de202f40 coldCodeAlloc=0x7f76de3feca0
>    trampoline free space = 0 (temp=0)
>
> If caches remain in reserved state the printout above will clearly indicate it. @PhilWakelin Phil, assuming the problem is reproducible, could you please run the diagnostic suggested above?

@mpirvu I reran with the diagnostics, and have attached the output stderr.txt

For reference, the JIT Code Cache only grows to 154M today, not to the full 256M; here are the stats from my MemoryPool MBean:

JIT code cache: init=2048K max=262144K used=0K committed=157696K

PhilWakelin commented 5 years ago

I thought I'd check whether this problem causes JIT compilation to stop once the Code Cache becomes full.

I ran with a 64M cache, as this would normally be plenty big enough for the workload: -Xjit:noResumableTrapHandler,verbose -Xcodecachetotal64M

and saw the following: #INFO: <WARNING: JIT CACHES FULL> Disable further compilation

mpirvu commented 5 years ago

Indeed, most caches appear to be held in a reserved state by compThread 0, so there is a bug where the reservation is not cancelled at the end of a compilation. Could you please do another run with -Xjit:verbose={compilationPerformance},verbose={codecache},vlog=yourvlog? Maybe this will show some unexpected behavior in your environment.

PhilWakelin commented 5 years ago

Here is the jitlog z32a.jitlog.20190408.145539.83952145.txt

mpirvu commented 5 years ago

Looking at the verbose log I see that every time the JIT allocates a new code cache we have an AOT failure of type compilationAOTNoSupportForAOTFailure

Example:
! java/lang/String.toLowerCaseCore(Ljava/lang/String;)Ljava/lang/String; time=78us compilationAOTNoSupportForAOTFailure memLimit=262144 KB freePhysicalMemory=16866 MB mem=[region=320 system=16384]KB

I am not familiar with this error so it must be something new. I'll look into it.

fjeremic commented 5 years ago

> Looking at the verbose log I see that every time the JIT allocates a new code cache we have an AOT failure of type compilationAOTNoSupportForAOTFailure

@dsouzai FYI. Is this something related to SVM?

dsouzai commented 5 years ago

> @dsouzai FYI. Is this something related to SVM?

I doubt it; there are 6 places where J9::AOTNoSupportForAOTFailure is thrown: 4 of them are related to field watch, 1 is thrown when relocating an inlined method, and 1 is in the constructor of TR_VMFieldsInfo. None of these locations has anything to do with whether the SVM is active or not.

dsouzai commented 5 years ago

> Looking at the verbose log I see that every time the JIT allocates a new code cache we have an AOT failure of type compilationAOTNoSupportForAOTFailure

I'm not sure that's necessarily true, for example:

#CODECACHE:  The code cache repository was allocated between addresses 0000000000000000 and 000000007FFFFFFF
#CODECACHE:  allocated code cache segment of size 2097152
#CODECACHE:  CodeCache allocated 000000509669BD30 @ 0x0000000047D00000-0x0000000047F00000 HelperBase:0x0000000047F00000
+ (cold) java/lang/Object.toString()Ljava/lang/String; @ 0x0000000047D000FA-0x0000000047D00244 OrdinaryMethod - Q_SZ=3 Q_SZI=3 QW=5 j9m=00000050923C7E90 bcsz=36 time=1783us mem=[region=1280 system=16384]KB compThread=0
 (cold) Compiling org/eclipse/osgi/internal/framework/BundleContextImpl.addServiceListener(Lorg/osgi/framework/ServiceListener;Ljava/lang/String;)V  OrdinaryMethod j9m=000000509421C070 t=3102 compThread=0 memLimit=262144 KB freePhysicalMemory=16590 MB
+ (AOT load) org/eclipse/osgi/internal/framework/BundleContextImpl.addServiceListener(Lorg/osgi/framework/ServiceListener;Ljava/lang/String;)V @ 0x0000000047D002D6-0x0000000047D00A34 Q_SZ=2 Q_SZI=2 QW=4 j9m=000000509421C070 bcsz=30 time=83us compThread=0
 (cold) Compiling java/util/ComparableTimSort.mergeHi(IIII)V  OrdinaryMethod j9m=000000509317C920 t=3102 compThread=0 memLimit=262144 KB freePhysicalMemory=16590 MB
+ (AOT load) java/util/ComparableTimSort.mergeHi(IIII)V @ 0x0000000047D00ABA-0x0000000047D03C6C Q_SZ=1 Q_SZI=1 QW=3 j9m=000000509317C920 bcsz=729 time=181us compThread=0
 (AOT warm) Compiling java/io/ExpiringCache.entryFor(Ljava/lang/String;)Ljava/io/ExpiringCache$Entry;  OrdinaryMethod j9m=000000509294EC40 t=3102 compThread=0 memLimit=262144 KB freePhysicalMemory=16590 MB
+ (AOT warm) java/io/ExpiringCache.entryFor(Ljava/lang/String;)Ljava/io/ExpiringCache$Entry; @ 0x0000000047D03CF4-0x0000000047D040E4 OrdinaryMethod - Q_SZ=1 Q_SZI=1 QW=3 j9m=000000509294EC40 bcsz=57 time=1944us mem=[region=2112 system=16384]KB compThread=0

It might just be a coincidence.

mpirvu commented 5 years ago

We should look at what happened before the allocation

! java/lang/Object.toString()Ljava/lang/String; time=174us compilationAOTNoSupportForAOTFailure memLimit=262144 KB freePhysicalMemory=16590 MB mem=[region=320 system=16384]KB
 (cold) Compiling java/lang/Object.toString()Ljava/lang/String;  OrdinaryMethod j9m=00000050923C7E90 t=3102 compThread=0 memLimit=262144 KB freePhysicalMemory=16590 MB
#CODECACHE:  The code cache repository was allocated between addresses 0000000000000000 and 000000007FFFFFFF
#CODECACHE:  allocated code cache segment of size 2097152
#CODECACHE:  CodeCache allocated 000000509669BD30 @ 0x0000000047D00000-0x0000000047F00000 HelperBase:0x0000000047F00000
fjeremic commented 5 years ago

@PhilWakelin are you able to provide us with an environment to easily reproduce this? Or a trimmed-down test case that reproduces the behaviour? I think it may speed up the back-and-forth in the investigation, given we are in different timezones.

dsouzai commented 5 years ago

      // Unreserve the code cache used for this compilation
      //
      // NOTE: Although unintuitive, it is possible to reach here without an allocated
      // CodeGenerator.  This can happen during AOT loads.  See the OMR::Compilation
      // constructor for details.
      //
      if (_compiler->cg() && _compiler->cg()->getCodeCache())
         {
         _compiler->cg()->getCodeCache()->unreserve();
         _compiler->cg()->setCodeCache(0);
         }
...

I wonder if this is the problem

DanHeidinga commented 5 years ago

This is in the 0.14 release plan which is targeted to ship within 48 hours of OpenJDK (April 16).

@dsouzai @fjeremic @mpirvu What's the current status of this? We're running out of time to get this into the release.

pshipton commented 5 years ago

Note if this is z/OS specific it can be removed from the 0.14 plan. However if it is the cause of the known footprint regression, it needs to be fixed.

mpirvu commented 5 years ago

Irwin has a hunch of what's going on and he will test his hypothesis soon.

dsouzai commented 5 years ago

I was discussing this with @mpirvu yesterday and realized that there is a scenario where we might end up not unreserving a code cache. As noted above in https://github.com/eclipse/openj9/issues/5375#issuecomment-480883622, the compiler will only unreserve a code cache if it has a codegen object. That would mean AOT load failures never unreserve the cache, yet the caches do get unreserved. Digging into the code, I found that it happens here: https://github.com/eclipse/openj9/blob/9b025659d317ad1a912640e4d40e07a4de683530/runtime/compiler/runtime/RelocationRuntime.cpp#L357-L359

Thanks to @mpirvu's observation that the JIT reserves a new code cache whenever we fail with compilationAOTNoSupportForAOTFailure, I looked at where we throw the exception that generates that error code (described in https://github.com/eclipse/openj9/issues/5375#issuecomment-480861790). The compilation abort here https://github.com/eclipse/openj9/blob/9b025659d317ad1a912640e4d40e07a4de683530/runtime/compiler/runtime/RelocationRecord.cpp#L946 is not consistent with the way error handling normally occurs in the relocation infrastructure. The reason is something I won't go into here, but basically because of this, we end up not going through the code path for an AOT load failure that unreserves the code cache.

This should be relatively easy to fix. I'll open a PR soon.
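
For illustration, here is a toy Java sketch of the reserve/unreserve pattern and the bug class described above (the names CodeCache, reserve(), unreserve(), relocate() are hypothetical; the actual compilation-thread code is the C++ shown earlier). Without a construct that guarantees the unreserve on every exit path, an exception thrown from an unexpected place leaves the cache reserved:

class CodeCache {
    private boolean reserved;
    void reserve()   { reserved = true; }
    void unreserve() { reserved = false; }
    boolean isReserved() { return reserved; }
}

public class ReserveLeakDemo {
    static void compileBuggy(CodeCache cache) {
        cache.reserve();
        relocate();            // throws before the cleanup below is reached
        cache.unreserve();     // never executed on the exceptional path
    }

    static void compileFixed(CodeCache cache) {
        cache.reserve();
        try {
            relocate();
        } finally {
            cache.unreserve(); // runs on both the normal and failure paths
        }
    }

    static void relocate() { throw new RuntimeException("relocation failed"); }

    public static void main(String[] args) {
        CodeCache c = new CodeCache();
        try { compileBuggy(c); } catch (RuntimeException e) { }
        System.out.println("buggy path leaves cache reserved: " + c.isReserved());
        try { compileFixed(c); } catch (RuntimeException e) { }
        System.out.println("fixed path leaves cache reserved: " + c.isReserved());
    }
}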

dsouzai commented 5 years ago

I was able to reproduce this by consistently throwing an exception in TR_RelocationRecordWithInlinedSiteIndex::ignore:

 (cold) Compiling java/lang/String.lengthInternal()I  OrdinaryMethod j9m=0000000001B51D00 t=133 compThread=0 memLimit=262144 KB freePhysicalMemory=5306 MB
! java/lang/String.lengthInternal()I time=117us compilationAOTNoSupportForAOTFailure memLimit=262144 KB freePhysicalMemory=5306 MB mem=[region=64 system=16384]KB
 (cold) Compiling java/lang/String.lengthInternal()I  OrdinaryMethod j9m=0000000001B51D00 t=133 compThread=0 memLimit=262144 KB freePhysicalMemory=5306 MB
#CODECACHE:  carved size=2097152 range: 00007F7138C00000-00007F7138E00000
#CODECACHE:  CodeCache allocated 00007F71340086A0 @ 00007F7138C00000-00007F7138E00000 HelperBase:00007F7138DFF220
...
 (cold) Compiling java/lang/String.charAt(I)C  OrdinaryMethod j9m=0000000001B518E0 t=153 compThread=0 memLimit=262144 KB freePhysicalMemory=5306 MB
! java/lang/String.charAt(I)C time=74us compilationAOTNoSupportForAOTFailure memLimit=262144 KB freePhysicalMemory=5306 MB mem=[region=64 system=16384]KB
 (cold) Compiling java/lang/String.charAt(I)C  OrdinaryMethod j9m=0000000001B518E0 t=153 compThread=0 memLimit=262144 KB freePhysicalMemory=5306 MB
#CODECACHE:  carved size=2097152 range: 00007F7138E00000-00007F7139000000
#CODECACHE:  CodeCache allocated 00007F7134012E20 @ 00007F7138E00000-00007F7139000000 HelperBase:00007F7138FFF220
+ (cold) java/lang/String.charAt(I)C @ 00007F7138E00060-00007F7138E00180 OrdinaryMethod - Q_SZ=3 Q_SZI=3 QW=20 j9m=0000000001B518E0 bcsz=66 GCR time=821us mem=[region=1088 system=16384]KB compThread=0 CpuLoad=16%(4%avg) JvmCpu=0%
 (cold) Compiling com/ibm/jit/JITHelpers.getCharFromArrayByIndex(Ljava/lang/Object;I)C  OrdinaryMethod j9m=0000000001B55360 t=153 compThread=0 memLimit=262144 KB freePhysicalMemory=5306 MB
! com/ibm/jit/JITHelpers.getCharFromArrayByIndex(Ljava/lang/Object;I)C time=41us compilationAOTNoSupportForAOTFailure memLimit=262144 KB freePhysicalMemory=5306 MB mem=[region=64 system=16384]KB
 (cold) Compiling com/ibm/jit/JITHelpers.getCharFromArrayByIndex(Ljava/lang/Object;I)C  OrdinaryMethod j9m=0000000001B55360 t=153 compThread=0 memLimit=262144 KB freePhysicalMemory=5306 MB
#CODECACHE:  carved size=2097152 range: 00007F7139000000-00007F7139200000
#CODECACHE:  CodeCache allocated 00007F7134008870 @ 00007F7139000000-00007F7139200000 HelperBase:00007F71391FF220
dsouzai commented 5 years ago

With the fix in https://github.com/eclipse/openj9/pull/5445 the problem goes away. I forced failures (the "Returning INTERNAL ERROR" lines below) to verify that we don't reserve a new code cache:

 (cold) Compiling java/lang/String.charAt(I)C  OrdinaryMethod j9m=0000000001BD48E0 t=149 compThread=0 memLimit=262144 KB freePhysicalMemory=5268 MB
#INFO:  Returning INTERNAL ERROR
! java/lang/String.charAt(I)C time=63us compilationAotClassReloFailure memLimit=262144 KB freePhysicalMemory=5268 MB mem=[region=64 system=16384]KB
 (cold) Compiling java/lang/String.charAt(I)C  OrdinaryMethod j9m=0000000001BD48E0 t=149 compThread=0 memLimit=262144 KB freePhysicalMemory=5268 MB
+ (cold) java/lang/String.charAt(I)C @ 00007F977CA02940-00007F977CA02A60 OrdinaryMethod - Q_SZ=2 Q_SZI=2 QW=8 j9m=0000000001BD48E0 bcsz=66 GCR time=657us mem=[region=1088 system=16384]KB compThread=0 CpuLoad=16%(4%avg) JvmCpu=0%
 (cold) Compiling com/ibm/jit/JITHelpers.getCharFromArrayByIndex(Ljava/lang/Object;I)C  OrdinaryMethod j9m=0000000001BD8360 t=149 compThread=0 memLimit=262144 KB freePhysicalMemory=5268 MB
#INFO:  Returning INTERNAL ERROR
! com/ibm/jit/JITHelpers.getCharFromArrayByIndex(Ljava/lang/Object;I)C time=155us compilationAotClassReloFailure memLimit=262144 KB freePhysicalMemory=5268 MB mem=[region=64 system=16384]KB
PhilWakelin commented 5 years ago

Tested on FP36, and the original testcase now shows committed=20480K during normal operation, which is as expected for the JIT code cache.

fzyzcjy commented 2 years ago

Hi, I also see committed being >200MB even though only <10MB is used. Are there any updates?

In addition, I wonder whether it really uses that much memory, or whether just the metric is wrong.


dsouzai commented 2 years ago

@fzyzcjy can you run with -Xjit:verbose={compilationPerformance},verbose={codecache},vlog=yourvlog and attach the vlog? Also can you provide the JVM version?

Might also be worth doing a kill -3 and getting the generated javacore file.

fzyzcjy commented 2 years ago

Thanks, I will try it later.

mpirvu commented 2 years ago

@fzyzcjy On all 64-bit platforms the JIT allocates a 256MB repository for the code cache. While the memory is allocated, it won't become part of the resident set (physical memory) unless pages are touched. One easy test could be with -Xjit:disableCodeCacheConsolidation. If you still see a large committed value for the code cache, then we have a bug. Otherwise, the behavior is normal.
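
As a rough way to see that committed memory need not be resident, here is a small Linux-only sketch (an illustration under the assumptions that /proc is available and that the pool name contains "code cache"; not an OpenJ9 API) comparing the MXBean committed value with the process RSS:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.nio.file.Files;
import java.nio.file.Paths;

public class CommittedVsResident {
    public static void main(String[] args) throws Exception {
        // Committed size reported for the code cache pool(s) via the MXBean.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getName().toLowerCase().contains("code cache")) {
                System.out.println(pool.getName() + ": committed="
                        + pool.getUsage().getCommitted() / 1024 + "K");
            }
        }
        // Actual resident set of the whole process (Linux only). If this stays
        // well below the sum of committed sizes, the committed pages have not
        // been touched and are not consuming physical memory.
        for (String line : Files.readAllLines(Paths.get("/proc/self/status"))) {
            if (line.startsWith("VmRSS:")) {
                System.out.println(line.trim());
            }
        }
    }
}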