diegodelemos closed this issue 3 years ago
Ideally this would be something that the engine understands on a per-job basis, not only per-workflow.
I.e. the engine parses the workflow description and submits a suitable job definition for each job. If the engine cannot do job-level granularity, it could fall back to a workflow-wide setting, but if it knows that some jobs are lighter than others, that would be useful to exploit.
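For concreteness, on a Kubernetes backend this per-job granularity would mean the engine setting a different `resources.limits.memory` on each submitted job. A minimal sketch of the relevant part of such a Job manifest, where the job name, image, and the `256Mi` value are all hypothetical:

```yaml
# Sketch only: names and values are illustrative, not REANA's actual manifests.
apiVersion: batch/v1
kind: Job
metadata:
  name: reana-run-job-light-step   # hypothetical per-step job name
spec:
  template:
    spec:
      containers:
        - name: job
          image: docker.io/library/python:3.8
          resources:
            limits:
              memory: "256Mi"      # a lighter job gets a smaller limit
      restartPolicy: Never
```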
Yep, that's exactly the direction we're taking. This issue aims at allowing the user to set something like `kubernetes_memory_limit: 8Gi` per job.
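For example, in a serial workflow the per-step override could look roughly like this in `reana.yaml` (a sketch: the step names, images, and commands are made up, and the exact placement of the key is precisely what this issue would decide):

```yaml
workflow:
  type: serial
  specification:
    steps:
      - name: fit
        environment: 'docker.io/library/python:3.8'
        kubernetes_memory_limit: '8Gi'    # heavy step: assumed per-step override
        commands:
          - python fit.py
      - name: plot
        environment: 'docker.io/library/python:3.8'
        kubernetes_memory_limit: '256Mi'  # light step: assumed per-step override
        commands:
          - python plot.py
```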
In https://github.com/reanahub/reana/issues/490 we introduced cluster-wide memory limits.
The goal of this task would be to make these memory limits overridable by users. Use case:
Expected behavior:

Users can add e.g. `kubernetes_memory_limit: 1.5Gi` to their `reana.yaml` to set a memory limit for jobs. This would be done similarly to the HTCondor limits.

TODO:

- Add `reana-client` schema validation of `kubernetes_memory_size` values (see the sketch below).
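A possible shape for that validation, written as a JSON-Schema-style fragment in YAML (the schema location in `reana-client`, the key name — the thread uses both `kubernetes_memory_limit` and `kubernetes_memory_size` — and the simplified pattern, which omits Kubernetes' exponent notation, are all assumptions):

```yaml
kubernetes_memory_limit:
  type: string
  # Simplified Kubernetes memory quantity: an integer or decimal number with an
  # optional binary (Ki..Ei) or decimal (k..E) suffix, e.g. '8Gi', '1.5Gi', '512Mi'.
  pattern: '^[0-9]+(\.[0-9]+)?(Ki|Mi|Gi|Ti|Pi|Ei|k|M|G|T|P|E)?$'
```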