Closed zakkak closed 2 months ago
The issue is possibly related to https://github.com/oracle/graal/issues/3697
After some more runs the difference looks closer to 270s vs 220s, which is still significant. The issue is reproducible with Mandrel 21.0.0.1-Final as well.
Note the difference in heap space. Shorter runs use more heap.
After some more experimentation it looks like this is related to the fact that Mandrel is not using libgraal.
To explore that, I performed a number of runs with GraalVM CE 21.2, disabling libgraal (`-XX:-UseJVMCINativeLibrary`) and the Graal JIT (`-XX:-UseJVMCICompiler`).
The results are listed in the table below (3 runs of each configuration):
Run | Mandrel 21.2.0.0-Final | GraalVM CE 21.2.0 | GraalVM CE `-XX:-UseJVMCINativeLibrary` | GraalVM CE `-XX:-UseJVMCINativeLibrary -XX:-UseJVMCICompiler`
---|---|---|---|---
1 | 270,740.46 ms, 7.75 GB | 219,455.65 ms, 7.65 GB | 299,634.69 ms, 7.63 GB | 251,934.78 ms, 7.89 GB
2 | 268,983.27 ms, 7.41 GB | 219,422.77 ms, 7.48 GB | 293,751.67 ms, 7.48 GB | 248,699.72 ms, 7.81 GB
3 | 258,601.58 ms, 7.57 GB | 221,123.44 ms, 7.77 GB | 303,444.17 ms, 7.64 GB | 254,998.79 ms, 7.44 GB
It's clear that when GraalVM CE is run with `-XX:-UseJVMCINativeLibrary -XX:-UseJVMCICompiler`, i.e., it uses C1/C2 as the JIT compiler for the `native-image` Java process and graal.jar only as the backend for generating the native binary, it behaves more similarly to Mandrel. It still performs slightly better, but since the measurements were done on a laptop with only 3 runs, I don't trust them enough to draw conclusions.
This behavior is reproducible with prior Mandrel versions, so maybe we should consider whether Mandrel should start using libgraal as well.
> ... so maybe we should consider whether Mandrel should start using libgraal as well.
Using libgraal in Mandrel for Java 11 seems to be a no-go, as libgraal depends heavily on JVMCI changes that are not available in OpenJDK 11. Once we move to Java 17 we could reconsider this.
Yes, I've reproduced this. ~20% slower build times.
On the flip side, I've seen libgraal making a difference in memory consumption in constrained environments. Running with `-XX:-UseJVMCINativeLibrary -XX:-UseJVMCICompiler` no longer gets the build killed (in a container with limits).
GraalVM CE 21.2 (via `quay.io/quarkus/ubi-quarkus-native-image:21.2-java11`):
$ sudo podman run --cpu-quota=300000 --cpu-period=100000 --memory=4G --memory-swap=4G --userns=keep-id --user=$(id -u) --entrypoint /bin/bash -v $(pwd):/project:z 253554a3d3e0 -c 'bash run_build.sh'
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:23] classlist: 18,946.87 ms, 1.68 GB 1.25 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:23] (cap): 490.97 ms, 1.68 GB 1.27 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:23] setup: 2,306.70 ms, 1.68 GB 1.28 GB (VmRSS)
The bundle named: messages, has not been found. If the bundle is part of a module, verify the bundle name is a fully qualified class name. Otherwise verify the bundle path is accessible in the classpath.
17:03:32,458 INFO [org.hib.Version] HHH000412: Hibernate ORM core version 5.5.7.Final
17:03:32,536 INFO [org.hib.ann.com.Version] HCANN000001: Hibernate Commons Annotations {5.1.2.Final}
17:03:32,815 INFO [org.hib.dia.Dialect] HHH000400: Using dialect: io.quarkus.hibernate.orm.runtime.dialect.QuarkusPostgreSQL10Dialect
17:04:41,837 INFO [org.jbo.threads] JBoss Threads version 3.4.2.Final
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:23] (clinit): 1,303.60 ms, 2.87 GB 3.79 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:23] (typeflow): 100,846.47 ms, 2.87 GB 3.80 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:23] (objects): 70,522.13 ms, 2.87 GB 3.80 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:23] (features): 1,378.58 ms, 2.87 GB 3.80 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:23] analysis: 179,092.17 ms, 2.87 GB 3.80 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:23] universe: 6,512.90 ms, 2.88 GB 3.79 GB (VmRSS)
Error: Image build request failed with exit status 137
$ sudo podman images | grep 253554a3d3e0
quay.io/quarkus/ubi-quarkus-native-image 21.2-java11 253554a3d3e0 2 weeks ago 1.25 GB
GraalVM CE 21.2 with `-XX:-UseJVMCINativeLibrary -XX:-UseJVMCICompiler`:
$ sudo podman run --cpu-quota=300000 --cpu-period=100000 --memory=4G --memory-swap=4G --userns=keep-id --user=$(id -u) --entrypoint /bin/bash -v $(pwd):/project:z 253554a3d3e0 -c 'bash run_build.sh'
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:23] classlist: 15,552.22 ms, 1.69 GB 0.97 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:23] (cap): 436.54 ms, 1.69 GB 1.00 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:23] setup: 2,429.24 ms, 1.69 GB 1.01 GB (VmRSS)
The bundle named: messages, has not been found. If the bundle is part of a module, verify the bundle name is a fully qualified class name. Otherwise verify the bundle path is accessible in the classpath.
17:18:04,546 INFO [org.hib.Version] HHH000412: Hibernate ORM core version 5.5.7.Final
17:18:04,641 INFO [org.hib.ann.com.Version] HCANN000001: Hibernate Commons Annotations {5.1.2.Final}
17:18:04,739 INFO [org.hib.dia.Dialect] HHH000400: Using dialect: io.quarkus.hibernate.orm.runtime.dialect.QuarkusPostgreSQL10Dialect
17:19:14,645 INFO [org.jbo.threads] JBoss Threads version 3.4.2.Final
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:23] (clinit): 1,355.11 ms, 2.90 GB 3.52 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:23] (typeflow): 98,272.00 ms, 2.90 GB 3.52 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:23] (objects): 84,651.15 ms, 2.90 GB 3.52 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:23] (features): 1,488.91 ms, 2.90 GB 3.52 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:23] analysis: 188,123.33 ms, 2.90 GB 3.52 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:23] universe: 6,611.26 ms, 2.90 GB 3.52 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:23] (parse): 14,570.44 ms, 2.85 GB 3.75 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:23] (inline): 18,789.42 ms, 2.85 GB 3.76 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:23] (compile): 148,249.41 ms, 3.04 GB 3.73 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:23] compile: 184,890.07 ms, 3.04 GB 3.73 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:23] image: 10,247.45 ms, 2.90 GB 3.77 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:23] write: 1,514.47 ms, 2.90 GB 3.77 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:23] [total]: 409,815.01 ms, 2.90 GB 3.77 GB (VmRSS)
# Printing build artifacts to: /project/spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner.build_artifacts.txt
Mandrel 21.2 (via `quay.io/quarkus/ubi-quarkus-mandrel:21.2-java11`):
$ sudo podman run --cpu-quota=300000 --cpu-period=100000 --memory=4G --memory-swap=4G --userns=keep-id --user=$(id -u) --entrypoint /bin/bash -v $(pwd):/project:z bcb7fd481b9d -c 'bash run_build.sh'
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:47] classlist: 15,819.63 ms, 1.19 GB 0.71 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:47] (cap): 691.90 ms, 1.19 GB 0.73 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:47] setup: 2,464.19 ms, 1.19 GB 0.74 GB (VmRSS)
The bundle named: messages, has not been found. If the bundle is part of a module, verify the bundle name is a fully qualified class name. Otherwise verify the bundle path is accessible in the classpath.
17:07:37,727 INFO [org.hib.Version] HHH000412: Hibernate ORM core version 5.5.7.Final
17:07:37,812 INFO [org.hib.ann.com.Version] HCANN000001: Hibernate Commons Annotations {5.1.2.Final}
17:07:38,044 INFO [org.hib.dia.Dialect] HHH000400: Using dialect: io.quarkus.hibernate.orm.runtime.dialect.QuarkusPostgreSQL10Dialect
17:09:02,211 INFO [org.jbo.threads] JBoss Threads version 3.4.2.Final
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:47] (clinit): 1,169.83 ms, 3.02 GB 3.49 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:47] (typeflow): 118,958.38 ms, 3.02 GB 3.49 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:47] (objects): 91,161.64 ms, 3.02 GB 3.49 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:47] (features): 1,574.82 ms, 3.02 GB 3.49 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:47] analysis: 215,031.84 ms, 3.02 GB 3.49 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:47] universe: 7,198.35 ms, 3.02 GB 3.53 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:47] (parse): 23,000.72 ms, 2.84 GB 3.76 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:47] (inline): 21,371.34 ms, 2.83 GB 3.70 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:47] (compile): 214,593.69 ms, 3.04 GB 3.74 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:47] compile: 261,936.51 ms, 3.04 GB 3.74 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:47] image: 9,250.12 ms, 2.98 GB 3.78 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:47] write: 1,459.14 ms, 2.98 GB 3.77 GB (VmRSS)
[spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner:47] [total]: 513,565.72 ms, 2.98 GB 3.77 GB (VmRSS)
# Printing build artifacts to: /project/spring-to-quarkus-todo-0.0.1-SNAPSHOT-runner.build_artifacts.txt
For reference, `run_build.sh` is a tiny script that reads VmRSS from `/proc/<pid>/status` while the image build runs, driven via the native-sources feature (`native-image.args` file) of Quarkus:

```shell
native-image $(cat native-image.args) | while read line; do
  echo -n "$line"
  # Extract the build process PID from lines like "[name:PID] phase: ..."
  pid=$(echo "$line" | grep '^\[' | awk '{ print $1 }' | sed 's/\[//g' | sed 's/\]//g' | cut -d':' -f2)
  if [ "${pid}_" != "_" ]; then
    # Convert the VmRSS value (kB) to GB and append it to the build line
    grep VmRSS /proc/$pid/status | awk '{ print $2 }' | awk '{ val=$1 / 1024; val=val/1024; printf "%5.2f GB (VmRSS)\n", val }'
  else
    echo
  fi
done
```
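For illustration, the kB-to-GB conversion that the awk stages perform could equally be written as a small Java helper (the class and method names here are made up, not part of the script):

```java
// Hypothetical Java equivalent of the awk pipeline above: parse a
// "VmRSS:  1234 kB" line from /proc/<pid>/status and convert it to GB.
public class VmRss {
    // Returns the VmRSS value in GB for a status line such as "VmRSS:  3981234 kB"
    static double rssGb(String statusLine) {
        String[] parts = statusLine.trim().split("\\s+");
        long kb = Long.parseLong(parts[1]); // second field is the value in kB
        return kb / 1024.0 / 1024.0;        // kB -> MB -> GB
    }

    public static void main(String[] args) {
        System.out.printf("%5.2f GB (VmRSS)%n", rssGb("VmRSS:\t 3981234 kB"));
    }
}
```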
Resident set size is sometimes lower than the reported heap capacity due to lazy commit by the kernel. `-XX:+AlwaysPreTouch` removes that issue in reporting.
> Resident set size is sometimes lower than reported Heap capacity due to lazy commit of the kernel. `-XX:+AlwaysPreTouch` removes that issue in reporting.
Correction: the current implementation of memory reporting uses `Runtime.getRuntime().totalMemory()`, which amounts to the heap capacity (as opposed to heap usage). Once it is switched to heap usage, this issue isn't present. `-XX:+AlwaysPreTouch` is a work-around that makes the GC commit the full heap capacity at the operating-system level, hence the change in reporting.
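To make the capacity/usage distinction concrete, here is a minimal sketch using only the standard `Runtime` API (the class name is made up for illustration):

```java
// totalMemory() is the heap *capacity* reserved by the GC, while capacity
// minus freeMemory() is the actual heap *usage*. With -XX:+AlwaysPreTouch
// the capacity is also committed (touched) at the OS level, so VmRSS
// catches up with it.
public class HeapReport {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long capacity = rt.totalMemory();       // what capacity-based reporting prints
        long used = capacity - rt.freeMemory(); // what usage-based reporting prints
        System.out.printf("capacity: %.2f GB%n", capacity / 1024.0 / 1024.0 / 1024.0);
        System.out.printf("used:     %.2f GB%n", used / 1024.0 / 1024.0 / 1024.0);
    }
}
```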
Running similar experiments with 22.3, I see that the difference is now smaller (~6% when comparing the average execution time of 5 runs). Over these 5 runs Mandrel also shows a slightly higher peak RSS (~200 MB on average).
I think we can close this issue at this point.
The ~6% difference still stands for 22.3.2
Raw numbers from 10 runs of 5 configurations:
- `graal-default` uses the graalvmce builder image without additional options
- `graal-nolib-graal` passes `-J-XX:-UseJVMCINativeLibrary`
- `graal-nolib-nograal` passes `-J-XX:-UseJVMCINativeLibrary` and `-J-XX:-UseJVMCICompiler`
- `mandrel-default` uses the mandrel builder image without additional options
- `mandrel-graaljit` passes `-J-XX:+UseJVMCICompiler`
run | configuration | total_time (ms) | total_memory (bytes) |
---|---|---|---|
0 | graal-default | 55538 | 7521435648 |
1 | graal-default | 56685 | 7219445760 |
2 | graal-default | 56161 | 7413432320 |
3 | graal-default | 55923 | 7400325120 |
4 | graal-default | 55290 | 7597981696 |
5 | graal-default | 55738 | 7566000128 |
6 | graal-default | 57507 | 7573864448 |
7 | graal-default | 55233 | 7501512704 |
8 | graal-default | 54862 | 7529824256 |
9 | graal-default | 56027 | 7438598144 |
10 | graal-nolib-graal | 60765 | 7885291520 |
11 | graal-nolib-graal | 60352 | 6984040448 |
12 | graal-nolib-graal | 59214 | 7720140800 |
13 | graal-nolib-graal | 59976 | 7832338432 |
14 | graal-nolib-graal | 58724 | 7773093888 |
15 | graal-nolib-graal | 59305 | 7204765696 |
16 | graal-nolib-graal | 59855 | 7347896320 |
17 | graal-nolib-graal | 58220 | 7459045376 |
18 | graal-nolib-graal | 58151 | 7366246400 |
19 | graal-nolib-graal | 59452 | 6779043840 |
20 | graal-nolib-nograal | 57023 | 7424442368 |
21 | graal-nolib-nograal | 58114 | 7389839360 |
22 | graal-nolib-nograal | 56685 | 7326924800 |
23 | graal-nolib-nograal | 57912 | 7593263104 |
24 | graal-nolib-nograal | 58215 | 7597981696 |
25 | graal-nolib-nograal | 57905 | 7150239744 |
26 | graal-nolib-nograal | 57399 | 7499415552 |
27 | graal-nolib-nograal | 58418 | 7571243008 |
28 | graal-nolib-nograal | 58627 | 7696547840 |
29 | graal-nolib-nograal | 59014 | 7319584768 |
30 | mandrel-default | 60614 | 6977224704 |
31 | mandrel-default | 63803 | 7253000192 |
32 | mandrel-default | 58275 | 7184318464 |
33 | mandrel-default | 59046 | 7400325120 |
34 | mandrel-default | 59313 | 7155482624 |
35 | mandrel-default | 58466 | 7033847808 |
36 | mandrel-default | 58093 | 7178027008 |
37 | mandrel-default | 58489 | 7192707072 |
38 | mandrel-default | 57934 | 7217348608 |
39 | mandrel-default | 58059 | 7178551296 |
40 | mandrel-graaljit | 57836 | 7087849472 |
41 | mandrel-graaljit | 59676 | 7119306752 |
42 | mandrel-graaljit | 58819 | 7343702016 |
43 | mandrel-graaljit | 58393 | 7308050432 |
44 | mandrel-graaljit | 58283 | 7231504384 |
45 | mandrel-graaljit | 58914 | 7267680256 |
46 | mandrel-graaljit | 58166 | 7297040384 |
47 | mandrel-graaljit | 57023 | 7320633344 |
48 | mandrel-graaljit | 58759 | 7326924800 |
49 | mandrel-graaljit | 58859 | 6896484352 |
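As a sanity check on the ~6% figure, the `total_time` means of the `graal-default` and `mandrel-default` rows above can be compared; this hypothetical snippet just averages the numbers copied from the table:

```java
// Averages the total_time column for the graal-default and mandrel-default
// rows of the table above and prints the relative slowdown.
import java.util.stream.LongStream;

public class BuildTimeDelta {
    public static void main(String[] args) {
        long[] graalDefault = {55538, 56685, 56161, 55923, 55290,
                               55738, 57507, 55233, 54862, 56027};
        long[] mandrelDefault = {60614, 63803, 58275, 59046, 59313,
                                 58466, 58093, 58489, 57934, 58059};
        double graalMean = LongStream.of(graalDefault).average().orElse(0);
        double mandrelMean = LongStream.of(mandrelDefault).average().orElse(0);
        System.out.printf("graal-default mean:   %.1f ms%n", graalMean);
        System.out.printf("mandrel-default mean: %.1f ms%n", mandrelMean);
        System.out.printf("slowdown: %.1f%%%n", (mandrelMean / graalMean - 1) * 100);
    }
}
```

With the numbers above this works out to a slowdown of roughly 6%, matching the observation.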
In order to do "fair" comparisons, there might be other flags that have some effect. Using `-XX:+PrintFlagsFinal` on the jlinked GraalVM `java` and the Mandrel `java` (which is a stock Temurin JDK), I see a few differences:
$ diff -u graal_ce_javaflags.txt mandrel_javaflags.txt
--- graal_ce_javaflags.txt 2023-05-23 12:04:00.545041883 +0200
+++ mandrel_javaflags.txt 2023-05-23 12:16:28.181482629 +0200
@@ -71,7 +71,7 @@
intx CompileThreshold = 10000 {pd product} {default}
double CompileThresholdScaling = 1.000000 {product} {default}
intx CompilerThreadPriority = -1 {product} {default}
- intx CompilerThreadStackSize = 2048 {pd product} {default}
+ intx CompilerThreadStackSize = 1024 {pd product} {default}
size_t CompressedClassSpaceSize = 1073741824 {product} {default}
uint ConcGCThreads = 3 {product} {ergonomic}
intx ConditionalMoveLimit = 3 {C2 pd product} {default}
@@ -94,7 +94,6 @@
bool DumpReplayDataOnError = true {product} {default}
bool DumpSharedSpaces = false {product} {default}
bool DynamicDumpSharedSpaces = false {product} {default}
- bool EagerJVMCI = false {JVMCI product} {default}
bool EagerXrunInit = false {product} {default}
intx EliminateAllocationArraySizeLimit = 64 {C2 product} {default}
bool EliminateAllocations = true {C2 product} {default}
@@ -103,8 +102,6 @@
bool EliminateNestedLocks = true {C2 product} {default}
bool EnableContended = true {product} {default}
bool EnableDynamicAgentLoading = true {product} {default}
- bool EnableJVMCI = true {JVMCI product} {default}
- bool EnableJVMCIProduct = true {JVMCI product} {jimage}
size_t ErgoHeapSizeLimit = 0 {product} {default}
ccstr ErrorFile = {product} {default}
bool ErrorFileToStderr = false {product} {default}
@@ -178,23 +175,10 @@
uintx InitiatingHeapOccupancyPercent = 45 {product} {default}
bool Inline = true {product} {default}
ccstr InlineDataFile = {product} {default}
- intx InlineSmallCode = 1000 {C2 pd product} {default}
+ intx InlineSmallCode = 2500 {C2 pd product} {default}
bool InlineSynchronizedMethods = true {C1 product} {default}
intx InteriorEntryAlignment = 16 {C2 pd product} {default}
intx InterpreterProfilePercentage = 33 {product} {default}
- uint JVMCICompilerIdleDelay = 1000 {JVMCI product} {default}
- intx JVMCICounterSize = 0 {JVMCI product} {default}
- bool JVMCICountersExcludeCompiler = true {JVMCI product} {default}
- intx JVMCIEventLogLevel = 1 {JVMCI product} {default}
- ccstr JVMCILibDumpJNIConfig = {JVMCI product} {default}
- ccstr JVMCILibPath = {JVMCI product} {default}
- intx JVMCINMethodSizeLimit = 655360 {JVMCI product} {default}
- ccstr JVMCINativeLibraryErrorFile = {JVMCI product} {default}
- double JVMCINativeLibraryThreadFraction = 0.330000 {JVMCI product} {default}
- bool JVMCIPrintProperties = false {JVMCI product} {default}
- intx JVMCIThreads = 1 {JVMCI product} {default}
- uint JVMCIThreadsPerNativeLibraryRuntime = 1 {JVMCI product} {jimage}
- intx JVMCITraceLevel = 0 {JVMCI product} {default}
bool JavaMonitorsInStackTrace = true {product} {default}
intx JavaPriority10_To_OSPriority = -1 {product} {default}
intx JavaPriority1_To_OSPriority = -1 {product} {default}
@@ -279,8 +263,8 @@
size_t NewSizeThreadIncrease = 5320 {pd product} {default}
intx NmethodSweepActivity = 10 {product} {default}
intx NodeLimitFudgeFactor = 2000 {C2 product} {default}
- uintx NonNMethodCodeHeapSize = 6979620 {pd product} {ergonomic}
- uintx NonProfiledCodeHeapSize = 122339310 {pd product} {ergonomic}
+ uintx NonNMethodCodeHeapSize = 5839372 {pd product} {ergonomic}
+ uintx NonProfiledCodeHeapSize = 122909434 {pd product} {ergonomic}
intx NumberOfLoopInstrToAlign = 4 {C2 product} {default}
intx ObjectAlignmentInBytes = 8 {product lp64_product} {default}
size_t OldPLABSize = 1024 {product} {default}
@@ -352,7 +336,7 @@
uintx ProcessDistributionStride = 4 {product} {default}
bool ProfileInterpreter = true {pd product} {default}
intx ProfileMaturityPercentage = 20 {product} {default}
- uintx ProfiledCodeHeapSize = 122339310 {pd product} {ergonomic}
+ uintx ProfiledCodeHeapSize = 122909434 {pd product} {ergonomic}
uintx PromotedPadding = 3 {product} {default}
uintx QueuedAllocationWarningCount = 0 {product} {default}
int RTMRetryCount = 5 {ARCH product} {default}
@@ -386,6 +370,8 @@
size_t SharedBaseAddress = 34359738368 {product} {default}
ccstr SharedClassListFile = {product} {default}
uintx SharedSymbolTableBucketSize = 4 {product} {default}
+ ccstr ShenandoahGCHeuristics = adaptive {product} {default}
+ ccstr ShenandoahGCMode = satb {product} {default}
bool ShowCodeDetailsInExceptionMessages = true {manageable} {default}
bool ShowMessageBoxOnError = false {product} {default}
bool ShrinkHeapInSteps = true {product} {default}
@@ -419,7 +405,7 @@
uintx TenuredGenerationSizeIncrement = 20 {product} {default}
uintx TenuredGenerationSizeSupplement = 80 {product} {default}
uintx TenuredGenerationSizeSupplementDecay = 2 {product} {default}
- intx ThreadPriorityPolicy = 1 {product} {jimage}
+ intx ThreadPriorityPolicy = 0 {product} {default}
bool ThreadPriorityVerbose = false {product} {default}
intx ThreadStackSize = 1024 {pd product} {default}
uintx ThresholdTolerance = 10 {product} {default}
@@ -456,10 +442,10 @@
bool TrapBasedNullChecks = false {pd product} {default}
bool TrapBasedRangeChecks = false {C2 pd product} {default}
intx TypeProfileArgsLimit = 2 {product} {default}
- uintx TypeProfileLevel = 0 {pd product} {default}
+ uintx TypeProfileLevel = 111 {pd product} {default}
intx TypeProfileMajorReceiverPercent = 90 {C2 product} {default}
intx TypeProfileParmsLimit = 2 {product} {default}
- intx TypeProfileWidth = 8 {product} {default}
+ intx TypeProfileWidth = 2 {product} {default}
intx UnguardOnExecutionViolation = 0 {product} {default}
bool UseAES = true {product} {default}
intx UseAVX = 2 {ARCH product} {default}
@@ -504,8 +490,6 @@
bool UseHugeTLBFS = false {product} {default}
bool UseInlineCaches = true {product} {default}
bool UseInterpreter = true {product} {default}
- bool UseJVMCICompiler = true {JVMCI product} {default}
- bool UseJVMCINativeLibrary = true {JVMCI product} {default}
bool UseJumpTables = true {C2 product} {default}
bool UseLargePages = false {pd product} {default}
bool UseLargePagesIndividualAllocation = false {pd product} {default}
@@ -574,5 +558,5 @@
uintx ZUncommitDelay = 300 {product} {default}
bool ZeroTLAB = false {product} {default}
openjdk 17.0.7 2023-04-18
-OpenJDK Runtime Environment GraalVM CE 22.3.2 (build 17.0.7+7-jvmci-22.3-b18)
-OpenJDK 64-Bit Server VM GraalVM CE 22.3.2 (build 17.0.7+7-jvmci-22.3-b18, mixed mode, sharing)
+OpenJDK Runtime Environment Temurin-17.0.7+7 (build 17.0.7+7)
+OpenJDK 64-Bit Server VM Temurin-17.0.7+7 (build 17.0.7+7, mixed mode, sharing)
> I think we can close this issue at this point.
Closing this as the difference is in the ballpark of 5% and we haven't had any complaints from users.
Description
Mandrel 21.2.0.0-Final appears to be slower than GraalVM 21.2.0 (311s vs 229s)
How To Reproduce
Steps to reproduce the behavior:
1.
2.
Configuration:
Expected behavior
Compilation times should be similar.
Additional context
The output of the first command (Mandrel) is:
The output of the second command (GraalVM) is: