Closed bernt-matthias closed 5 years ago
ping @chufz
OK - thanks for the issue.
We have experienced memory issues before and dynamically assigned the memory based on the maximum number of peaks in a single spectrum within the MSP file.
e.g.
<destination id="destination_metfrag" runner="dynamic">
    <param id="type">python</param>
    <param id="function">metfrag_msp_peak_count</param>
</destination>
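(For context, this destination goes in job_conf.xml; the tool is then mapped to it in the <tools> section with something like <tool id="metfrag" destination="destination_metfrag" />, where the tool id is an assumption.)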
@RJMW when you get a chance, could you copy and paste the Python function we ended up using please?
OK. This is also an option.
What I would suggest is to add the following to the tool XML:
<stdio>
    <regex match="Cannot allocate memory"
           source="stderr"
           level="fatal_oom"
           description="Out of memory error occurred" />
</stdio>
Then Galaxy can resubmit the job to a destination with more memory.
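The resubmission side can be configured in job_conf.xml with a <resubmit> tag on the destination. A minimal sketch, assuming a SLURM runner; the destination ids and memory values are made up:

<destination id="metfrag_small" runner="slurm">
    <param id="nativeSpecification">--mem=4G</param>
    <!-- a fatal_oom match in the tool's stdio should trigger this condition -->
    <resubmit condition="memory_limit_reached" destination="metfrag_big" />
</destination>
<destination id="metfrag_big" runner="slurm">
    <param id="nativeSpecification">--mem=16G</param>
</destination>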
This is now implemented.
For reference, if a dynamic runner is to be assigned, the function needs to be added to galaxy/lib/galaxy/jobs/rules/destinations_input_type.py.
e.g. a function called metfrag_msp_peak_count needs to be in `destinations_input_type.py` that checks the MSP peak count and then assigns memory (see the sketch below).
At our local bham Galaxy instance we currently check the precursor m/z size to assign levels of memory. (However, I think there is probably a better predictor of memory usage than the precursor m/z or the number of peaks.)
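A minimal sketch of what such a rule function might look like. This is not the function we used; the input lookup, peak thresholds, destination ids and runner parameters are all assumptions:

from galaxy.jobs import JobDestination
from galaxy.jobs.mapper import JobMappingException

def metfrag_msp_peak_count(job):
    """Pick a destination from the largest 'Num Peaks' value of any
    spectrum in the input MSP file."""
    # Assumes the tool has a single MSP input dataset.
    msp_path = None
    for input_assoc in job.input_datasets:
        if input_assoc.dataset is not None:
            msp_path = input_assoc.dataset.file_name
            break
    if msp_path is None:
        raise JobMappingException("MetFrag job has no MSP input to inspect")

    # Scan the MSP for the largest per-spectrum peak count.
    max_peaks = 0
    with open(msp_path) as handle:
        for line in handle:
            if line.lower().startswith("num peaks:"):
                max_peaks = max(max_peaks, int(line.split(":", 1)[1]))

    # Map the peak count to a memory request (tiers are made up).
    if max_peaks < 1000:
        mem = "4G"
    elif max_peaks < 10000:
        mem = "8G"
    else:
        mem = "16G"

    return JobDestination(
        id="metfrag_mem_%s" % mem,
        runner="slurm",  # assumption: a SLURM runner
        params={"nativeSpecification": "--mem=%s" % mem},
    )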
Wonderful. Just installed the released version on our local Galaxy (ping @chufz).
When submitting with too little memory I get `Cannot allocate memory` in stderr. Maybe detect `Cannot allocate memory` as an out-of-memory error, then Galaxy can react to this.