tlaplus / vscode-tlaplus

TLA+ language support for Visual Studio Code
MIT License

Add Wiki documentation for TLC options #297

Open CIPop opened 1 year ago

CIPop commented 1 year ago

(I don't know how to send a PR towards the Wiki portion.)

I found the following two resources useful for making better use of the machine running my model:

Also, there is a broken link on the wiki page https://github.com/tlaplus/vscode-tlaplus/wiki/Settings: "TLC Model Checker: Statistics Sharing allows to opt-in / opt-out sharing of TLC usage statistics. More details." The "More details" link should point to: https://github.com/tlaplus/tlaplus/blob/master/tlatools/org.lamport.tlatools/src/util/ExecutionStatisticsCollector.md

lemmy commented 1 year ago

I fixed the broken link and added a new link to https://github.com/tlaplus/tlaplus/blob/master/general/docs/current-tools.md#command-line-options. This current-tools.md document covers various aspects, including the -workers option. However, I'm a bit hesitant to add performance tuning tips since optimizing a VM can be quite challenging. Maybe we should consider referring to official Java documentation on performance tuning instead?

CIPop commented 1 year ago

Thank you @lemmy - I didn't find the command-line options MD file in my searches earlier this week... much better than just the code comment!

> Maybe we should consider referring to official Java documentation on performance tuning instead?

I unexpectedly spent considerable time figuring out how to use all the RAM and CPUs available on my machine. TLC should automatically scale up to all available resources unless it is constrained through configuration. Instead of providing the options, I would prefer that the extension (or TLC) automatically use all resources on my dev box, with an opt-in restriction mechanism.

In the absence of that, I would document how that can be achieved.

Also, -fpmem > 1 gives this warning, which was ambiguous to me:

> Using -fpmem with an absolute memory value has been deprecated. Please allocate memory for the TLC process via the JVM mechanisms and use -fpmem to set the fraction to be used for fingerprint storage.

I had to spend some time figuring out what "Please allocate memory for the TLC process via the JVM mechanisms" really means, since I'm not familiar with JVM memory limits. What's more, since I was running in a Hyper-V VM with dynamic RAM allocation (the VM starts with <1 GB actually allocated, as reported by htop), I originally thought that was the issue.
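For the record, what the warning is asking for is roughly the following invocation. This is a sketch, assuming tla2tools.jar sits in the working directory; the heap size, fraction, and spec name are illustrative values, not recommendations:

```shell
# Give the JVM an explicit heap (here 8 GB) via -Xmx, then tell TLC to
# use a fraction of that heap (here 25%) for fingerprint storage and
# to spawn one worker per available core.
java -Xmx8g -cp tla2tools.jar tlc2.TLC -workers auto -fpmem 0.25 MySpec.tla
```

With this shape, -fpmem no longer names an absolute amount of memory; the absolute budget comes from -Xmx, and -fpmem only splits it.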

To make things more confusing, the official Oracle JVM documentation (https://docs.oracle.com/cd/E13150_01/jrockit_jvm/jrockit/jrdocs/refman/optionX.html) points out that -Xmx does not cap the total memory a Java process can use:

> Note: -Xmx does not limit the total amount of memory that the JVM can use.
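That note makes sense once you see what -Xmx actually bounds: the Java heap, not the whole process. A small self-contained check (nothing TLC-specific assumed, class name is mine):

```java
public class HeapInfo {
    public static void main(String[] args) {
        // maxMemory() reports the heap ceiling derived from -Xmx (or
        // the JVM's default when -Xmx is absent). Native allocations
        // such as metaspace, thread stacks, and direct buffers are
        // not counted against this limit, which is why the process
        // as a whole can exceed -Xmx.
        long maxHeap = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap (bytes): " + maxHeap);
    }
}
```

Running this with different -Xmx values shows the reported ceiling move accordingly, while tools like htop report a larger resident size for the process.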

lemmy commented 1 year ago

> I spent considerable time figuring out how to use all RAM and CPUs available on my machine which was unexpected. TLC should automatically scale-up to all resources if it needs to, unless constrained through configuration. Instead of providing the options, I would prefer if the extension (or TLC) would magically use all resources on my dev box with an opt-in restriction mechanism.
>
> In the absence of that, I would document how that can be achieved.
>
> Also, -fpmem > 1 gives this warning which was ambiguous to me: "Using -fpmem with an abolute memory value has been deprecated. Please allocate memory for the TLC process via the JVM mechanisms and use -fpmem to set the fraction to be used for fingerprint storage."
>
> I had to spend some time figuring out what "Please allocate memory for the TLC process via the JVM mechanisms" really means since I'm not familiar with JVM memory limits. Even more, since I was running in a HyperV VM with dynamic RAM allocation (VM starts with <1GB actually allocated and reported by htop), I originally thought that was the issue.

@CIPop The TLA+ Toolbox implements what you described here, except it uses an opt-out mechanism instead of an opt-in one. One option is to configure a TLC run in the Toolbox and then reuse its generated VM and TLC parameters in VSCode, until someone implements better support for large-scale model checking in VSCode.

> To make things more confusing, official Oracle JVM documents (https://docs.oracle.com/cd/E13150_01/jrockit_jvm/jrockit/jrdocs/refman/optionX.html) point out that Java processes should not be limited by Xmx:

Are you sure that the JRockit documentation applies to your local JVM? I believe JRockit was discontinued a while ago and partially merged into HotSpot. However, I haven't kept up with JVM development in recent years; it's possible that the art of VM tuning has completely changed.