gbrail opened 1 month ago
Sounds to me like there's not much point in having all the optimization levels if we can figure out what optLevel 0 is for and then conclude we don't need it anymore.
Going forward, I think feature-flagging an experimental optimization and eventually making it standard is better/simpler in the long run than having different optLevels that users/embedders have to choose from.
Is there really often a case where one level fails while another passes?
What do you think about having the normal merge check run only at level 9, and maybe also only for one Java version, while a daily GitHub Action checks all optimization levels and Java versions?
You could add build badges to the README.md so that everyone can see which builds are successful.
@rPraml so you're not in favor of merging all optLevels >= 0 into a single 'compiled' mode?
My previous post was actually about how to speed up the build pipeline. Currently, 3 optimization levels are tested against 3 platforms for each commit, which takes about 45 minutes.
My question was more: is that really necessary? For most commits¹, wouldn't it be enough to test just one or two of the 9 combinations (e.g. Java 17 + compiled, and maybe another JVM in interpreted mode)? There would only be 6 combinations anyway if we agreed on 2 compile levels. This could reduce build time to under 10 minutes and save GitHub Actions time (I don't know whether you are on a plan where you have to pay for GH Actions).
A daily GitHub Action would then test all JVMs in both compiled and interpreted mode.
¹) For example: when I change something in JavaAdapter, the chance is very low that I break only ONE optimization level, but commits like "Begin to use invokedynamic in the bytecode" might.
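A daily full-matrix run could look roughly like this in GitHub Actions (a sketch only; the Java versions, cron time, Gradle task, and `RHINO_OPT_LEVEL` variable are assumptions, not the project's actual workflow):

```yaml
# Sketch of a scheduled workflow that tests the full JVM x opt-level matrix
# once a day, so the per-commit check can stay small.
name: Nightly full matrix
on:
  schedule:
    - cron: '0 3 * * *'   # once a day
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        java: [11, 17, 21]        # assumed JVM versions
        opt_level: [-1, 0, 9]     # interpreted, compiled, compiled + optimizations
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: 'temurin'
          java-version: ${{ matrix.java }}
      - run: ./gradlew check      # assumed build task
        env:
          RHINO_OPT_LEVEL: ${{ matrix.opt_level }}  # hypothetical switch
```

The per-commit workflow would then keep only one or two of these matrix entries, as suggested above.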
> @rPraml so you're not in favor of merging all optLevels >= 0 into a single 'compiled' mode?
Sorry, I expressed myself in a confusing way. In my opinion, the API only needs to provide a way to switch between interpreted mode (-1) and compiled mode with all optimizations (9), as you suggest. I see no reason why an end user would crank the level up to only e.g. 0 or 4 when they could just as well use 9. I would also say that 9 should be the default if the platform allows it (autodetect?).
The only reason I can think of is that when fixing or enhancing low-level code, such as the bytecode generation or the IR, a developer might want to turn off certain optimization steps to make debugging easier. (But a developer would probably change some flags or just comment out the relevant code temporarily.)
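The two-mode API described here could look something like the following. This is a hypothetical sketch, not existing Rhino code: `ContextSketch`, `setInterpretedMode`, and the deprecated shim are illustrative names.

```java
// Hypothetical sketch of a two-mode replacement for the -1..9 range,
// with the old numeric setter kept as a deprecated compatibility shim.
public class ContextSketch {
    private boolean interpretedMode = false; // compiled with all optimizations is the default

    public void setInterpretedMode(boolean interpreted) {
        this.interpretedMode = interpreted;
    }

    public boolean isInterpretedMode() {
        return interpretedMode;
    }

    /** Deprecated shim: any level >= 0 now means "compiled with all optimizations". */
    @Deprecated
    public void setOptimizationLevel(int optLevel) {
        setInterpretedMode(optLevel < 0);
    }
}
```

With a shim like this, existing embedders calling `setOptimizationLevel(0)` or `setOptimizationLevel(9)` would all land in the same compiled mode, while `-1` still selects the interpreter.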
Rhino supports an optimization level in the Context which goes from -1 to 9. From inspecting the code, I have seen that we use this as follows:
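As far as I can tell, the numeric range really selects between only three behaviors, which can be sketched like this (the `ExecutionMode` enum and `modeFor` helper are illustrative, not Rhino API):

```java
// Illustrative only: maps Rhino's numeric optimization level onto the
// three behavioral groups it selects between.
public class OptLevels {
    enum ExecutionMode { INTERPRETED, COMPILED, COMPILED_OPTIMIZED }

    static ExecutionMode modeFor(int optLevel) {
        if (optLevel < -1 || optLevel > 9) {
            throw new IllegalArgumentException("Optimization level must be between -1 and 9: " + optLevel);
        }
        if (optLevel == -1) {
            return ExecutionMode.INTERPRETED;        // run in the interpreter
        }
        if (optLevel == 0) {
            return ExecutionMode.COMPILED;           // generate JVM classfiles, no extra optimizations
        }
        return ExecutionMode.COMPILED_OPTIMIZED;     // 1..9: classfiles plus optimizations
    }
}
```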
Proposal: Should we combine all the compiled optimization levels into one, so that we have only two modes -- interpreted and compiled?
I think that this is low-risk because the optimizations enabled at level 1 above have been in the codebase for a decade or more and seem pretty stable by now. The result would be less complexity to test, and the test matrix would shrink by a third. Also, JavaScript engines and Java runtimes these days tend not to offer many optimization-level choices.
I can think of two reasons against this:
Here is some data on the effects of the current optimizations, using the SunSpider benchmark suite. You'll see the differences -- sometimes dramatic -- between the levels.
Optimization level 9:
Optimization level 0:
Optimization level -1 (interpreted mode):