recast-hep / recast-atlas

CLI for ATLAS RECAST contributors
https://recast.docs.cern.ch/
Apache License 2.0

How to assign CPU cores and memory limits to a job when running via docker as the backend #102

Open LittlePawer opened 1 year ago

LittlePawer commented 1 year ago

Dear experts,

I was wondering whether there is a way to manually set CPU core and memory limits for a job running with docker as the backend, so that I can better tailor the resources for my jobs.
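
For example, when I run an image directly with Docker I can limit resources with the standard docker run flags (the values and image below are just placeholders):

  docker run --cpus=4 --memory=8g <image> <command>

It would be great to have an equivalent knob when recast runs a step through the docker backend.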

Many thanks!

lukasheinrich commented 1 year ago

Hi @LittlePawer

I think adding something like this to your steps.yml should also work with the recast backend, right?

  stages:
    - name: reana_demo_helloworld_memory_limit
      dependencies: [init]
      scheduler:
        scheduler_type: 'singlestep-stage'
        parameters:
          helloworld: {step: init, output: helloworld}
        step:
          process:
            process_type: 'string-interpolated-cmd'
            cmd: 'python "{helloworld}"'
          environment:
            environment_type: 'docker-encapsulated'
            image: 'python'
            imagetag: '2.7-slim'
            resources:
              - compute_backend: kubernetes
              - kubernetes_memory_limit: '8Gi'
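
(For context: compute_backend: kubernetes and kubernetes_memory_limit are resource hints understood by REANA's Kubernetes backend, so they should at least take effect when the workflow is submitted to REANA.)
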
LittlePawer commented 1 year ago

Hi @lukasheinrich,

Thanks for the reply, but does that help when running via REANA, or will it also help when running locally with the docker backend? And how would I configure the CPU resources then?