spitfiredd opened 1 year ago
I am seeing similar behavior with Cromwell. I give a task 64 GB. In AWS Batch, I see the following warning next to the memory information:
Configuration conflict
This value was submitted using containerOverrides.memory which has been deprecated and was not used as an override. Instead, the MEMORY value found in the job definition’s resourceRequirements key was used instead. More information about the deprecated key can be found in the AWS Batch API documentation.
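For context, the non-deprecated way to carry the memory request is the resourceRequirements list, either in the job definition's containerProperties or in containerOverrides at submit time. A minimal sketch of a job definition fragment (values are illustrative, sized for the 64 GB request above; MEMORY is expressed in MiB):

```json
{
  "containerProperties": {
    "resourceRequirements": [
      { "type": "VCPU",   "value": "8" },
      { "type": "MEMORY", "value": "65536" }
    ]
  }
}
```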
I see an "Essential container in task exited" error. However, when I click on the job definition, it appears to have 8 GB of allocated memory. Is there a different way to specify memory?
Thanks for reporting this issue. Is this an issue with the 1.5.2 release as well?
It is still an issue with v1.5.2 (Cromwell).
@spitfiredd The child processes are spawned with a default of 1 vCPU and 1024 MiB of memory. If tasks need more memory or CPU, you would typically request them with the cpus (https://www.nextflow.io/docs/latest/process.html#cpus) and memory (https://www.nextflow.io/docs/latest/process.html#memory) process directives.
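The directives above can be sketched like this (the process name, values, and script body are hypothetical; these directives are what Nextflow translates into the AWS Batch job's resource requests):

```groovy
// Hypothetical Nextflow process requesting more than the
// 1 vCPU / 1024 MiB defaults.
process align_reads {
    cpus 8
    memory '64 GB'

    script:
    """
    echo "running with ${task.cpus} CPUs and ${task.memory}"
    """
}
```

The same values can also be set globally in nextflow.config (e.g. `process.memory = '64 GB'`) rather than per process.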
@biofilos AGC currently uses an older version of Cromwell, which makes the deprecated call to AWS Batch, hence the warning. Our next release will update the version of Cromwell used.
As a possible workaround, you might consider deploying a miniwdl context to run the WDL.
Describe the Bug
Worker processes are not spawned with enough memory and do not scale; as a result, Nextflow errors with exit status 137 (out of memory).
Steps to Reproduce
Child processes are spawned with 1 vCPU and 1024 MiB of memory.
Relevant Logs
Main Process
Child Process
Expected Behavior
Processes are spawned with enough memory, or scale as needed.
Actual Behavior
Container ran out of memory
Screenshots
Additional Context
Ran the workflow with the following command:
`agc workflow run foo --context dev`
Operating System: Linux
AGC Version: 1.5.1
Was AGC set up with a custom bucket: no
Was AGC set up with a custom VPC: no