IGV runs on Java, and the default JVM heap size is 2 GB. For IGV jobs, the job's memory allocation must be communicated to the JVM. We also can't give 100% of the allocation to the heap without risking the job being killed for running out of memory; some must be left free for the JVM itself. If the allocation is more than 5 GB, 512 MB is reserved for the JVM and the rest goes to the heap; if it is 5 GB or less, 90% of it goes to the heap. These are assumed-good values that have not been tested against IGV, so 512 MB may not be large enough to be robust and we may see errors in the future.

See here for a more complete explanation: https://docs.uabgrid.uab.edu/wiki/Java#Xmx
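The computation amounts to something like the following (a minimal sketch; MEM_MB is a placeholder for however the job's memory allocation is exposed to the script, and the 5 GB / 512 MB / 90% values are the assumed-good ones described above):

```bash
#!/bin/bash
# Sketch of the heap-size computation. MEM_MB is a placeholder for the job's
# memory allocation in megabytes; how it is obtained depends on the scheduler.
MEM_MB=${MEM_MB:-4096}

if [ "$MEM_MB" -gt 5120 ]; then
    # Allocation above 5 GB: reserve 512 MB for the JVM, rest goes to the heap.
    HEAP_MB=$(( MEM_MB - 512 ))
else
    # Allocation of 5 GB or less: give 90% of it to the heap.
    HEAP_MB=$(( MEM_MB * 90 / 100 ))
fi

echo "-Xmx${HEAP_MB}m"
```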
Currently the computation is done in a modified igv.sh. The modification process is not automated and so is prone to regression; we received an issue report from a user about this following the most recent update of IGV. The immediate issue has been resolved, but not the root cause.
This pull request moves the memory computation up into the OOD job creation script and passes the appropriate JVM flag to igv.sh, which should pass it on to the JVM. This should make OOD IGV robust against updates to IGV.
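Concretely, the job creation script would end up doing something along these lines (a sketch only; HEAP_MB comes from the computation above, and whether igv.sh forwards the flag unchanged to the JVM is the assumption the "should" above refers to):

```bash
# Sketch: hand the computed JVM heap flag to igv.sh, which is expected to
# forward it to the JVM. That forwarding is an assumption of this change.
igv.sh "-Xmx${HEAP_MB}m"
```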
There is some follow-on work to be done:
We may need to revert the modified igv.sh scripts in each version of IGV available in lmod. It isn't clear which flag would "win" or whether both would be passed to the JVM, and if both are passed, it isn't clear how the JVM will handle it (see the check sketched after this list).
The script needs to be tested in a dev cluster, and it doesn't make sense for me to set this up. I have independently verified that the computation works as expected, but NOT that IGV behaves as expected.
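On the first item, assuming both -Xmx values end up on the same java command line, one way to check which one the JVM honors (without involving IGV at all) is to pass two conflicting values and inspect the resolved flag:

```bash
# Pass two conflicting heap settings and print the value the JVM settles on.
# MaxHeapSize is reported in bytes.
java -Xmx2g -Xmx4g -XX:+PrintFlagsFinal -version | grep MaxHeapSize
```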