Until now, runner pods could only use fixed resource requirements: we had to pick a value large enough to avoid OOM kills in the vast majority of cases while not being wasteful in what was requested. This led to a few limitations:
- Not all runner pods have equal compute needs (inline runs may need more or less memory, importing large libraries can push some workloads close to the limit, etc.).
- Anything requiring a secret could not be executed inline. Inline (i.e. non-standalone) execution should still be reserved for lightweight tasks, but those tasks do sometimes need secrets (e.g. a func that makes a single quick API call).
- It wasn't possible to pursue cluster resource efficiency/reliability goals, such as scheduling runner pods onto non-preemptible nodes.
This PR adds the ability to customize runner pod resource requirements and secrets, removing these limitations.
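To illustrate the idea (a hedged sketch only, not the code in this PR), the snippet below shows how caller-supplied compute requirements, a secret, and a node selector could be wired into a runner pod spec using the official `kubernetes` Python client. The function name and parameters are hypothetical.

```python
# Sketch: building a runner pod spec with custom compute, a secret exposed as
# env vars, and an optional node selector. `build_runner_pod` is a hypothetical
# helper, not part of this repo.
from typing import Optional

from kubernetes import client


def build_runner_pod(
    name: str,
    image: str,
    requests: dict,                         # e.g. {"cpu": "500m", "memory": "2Gi"}
    limits: dict,                           # e.g. {"cpu": "1", "memory": "4Gi"}
    secret_name: Optional[str] = None,      # Kubernetes Secret to expose to the runner
    node_selector: Optional[dict] = None,   # e.g. {"preemptible": "false"}
) -> client.V1Pod:
    container = client.V1Container(
        name="runner",
        image=image,
        # Custom compute requirements instead of a single fixed value.
        resources=client.V1ResourceRequirements(requests=requests, limits=limits),
        # Expose every key of the secret as an environment variable, if given.
        env_from=(
            [client.V1EnvFromSource(secret_ref=client.V1SecretEnvSource(name=secret_name))]
            if secret_name
            else None
        ),
    )
    return client.V1Pod(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1PodSpec(
            containers=[container],
            node_selector=node_selector,
            restart_policy="Never",
        ),
    )
```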
## Testing
- Executed the test pipeline with custom resource requirements and inspected the pod spec to confirm they were actually applied.
- Cloned a pipeline run that used custom resources and confirmed the cloned run also used them.
In both cases I confirmed that the custom compute settings (e.g. CPU and memory) were used and that secrets were properly mounted to the runner.
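For reference, a hedged sketch of that manual check with the `kubernetes` Python client: read back the runner pod and verify that the custom compute and secret made it into the spec. Pod and namespace names are placeholders.

```python
# Sketch: verify the runner pod spec reflects the custom resources and secret.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pod = core.read_namespaced_pod(name="runner-pod-name", namespace="default")
container = pod.spec.containers[0]

print("requests:     ", container.resources.requests)  # expect the custom CPU/memory requests
print("limits:       ", container.resources.limits)
print("env_from:     ", container.env_from)            # expect the secret reference here
print("node selector:", pod.spec.node_selector)
```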
Closes #1095