FNNDSC / pman

A process management system written in python
MIT License

Configurable tolerations #194

Open jennydaman opened 2 years ago

jennydaman commented 2 years ago

GPU nodes are tainted with PreferNoSchedule. They can still be scheduled to if a pod specs its containers with resources.limits['nvidia.com/gpu'] = 1, but it would be better if pman could be configured to conditionally set tolerations on jobs.

https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/

p.s. Per-compute env config is starting to get unwieldy (e.g. with swarm and kubernetes); how can we manage this more concisely?
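A minimal sketch of what the requested behavior might look like, assuming a hypothetical `TOLERATION` environment variable as the config knob (the function and setting names here are made up for illustration, not pman's actual API): when the job requests GPUs, both the resource limit and a configurable toleration are added to the pod spec.

```python
import os


def add_gpu_scheduling(pod_spec: dict, gpu_limit: int) -> dict:
    """Conditionally add a GPU resource limit and a toleration to a pod spec.

    Hypothetical helper: reads the toleration from an environment variable,
    e.g. TOLERATION='nvidia.com/gpu=true:PreferNoSchedule', so that the
    behavior is configurable per deployment rather than hard-coded.
    """
    if gpu_limit <= 0:
        return pod_spec

    # Request the GPU via the device plugin resource name.
    container = pod_spec["spec"]["containers"][0]
    limits = container.setdefault("resources", {}).setdefault("limits", {})
    limits["nvidia.com/gpu"] = gpu_limit

    # Assumed config knob, not a real pman setting.
    toleration = os.environ.get("TOLERATION")
    if toleration:
        key_value, effect = toleration.split(":")
        key, value = key_value.split("=")
        pod_spec["spec"].setdefault("tolerations", []).append(
            {"key": key, "operator": "Equal", "value": value, "effect": effect}
        )
    return pod_spec
```

With `TOLERATION` unset, the helper only adds the resource limit, so non-tainted clusters are unaffected.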

qxprakash commented 2 years ago

@jennydaman So you want pman to set tolerations on pods instead of adding them in pod-definition.yml? I hope I'm getting this right.

jennydaman commented 2 years ago

To rephrase my question in the p.s.:

pman is supposed to be a common interface over kubernetes, swarm, SLURM, ... so scheduler-specific configuration is antithetical to its intention.
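One way to keep per-compute-env configuration from sprawling could be a single scheduler-agnostic job model, with each backend translating the common fields into its own primitives. This is purely a sketch of that idea under assumed names, not anything pman implements today:

```python
from dataclasses import dataclass


@dataclass
class JobConstraints:
    """A scheduler-agnostic placement model (hypothetical, not pman's API)."""
    gpu_limit: int = 0


def to_kubernetes(c: JobConstraints) -> dict:
    """Kubernetes backend: express GPUs as a container resource limit."""
    if not c.gpu_limit:
        return {}
    return {"resources": {"limits": {"nvidia.com/gpu": c.gpu_limit}}}


def to_slurm(c: JobConstraints) -> list:
    """SLURM backend: express GPUs as an sbatch generic-resource flag."""
    return [f"--gres=gpu:{c.gpu_limit}"] if c.gpu_limit else []
```

The point of the design is that callers (and pfcon) only ever see `JobConstraints`; scheduler-specific details stay inside each backend's translator.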

jennydaman commented 2 years ago

@Prakashh21 what/where is pod-definition.yml?

qxprakash commented 2 years ago

I used pod-definition.yml just as a reference name. Yes, what I meant was scheduler-specific configuration for setting tolerations; here we want pman to set tolerations on pods, right?

qxprakash commented 2 years ago
jennydaman commented 2 years ago

I still don't understand what you mean by pod-definition.yml but moving on...

  • Q1) yes*
  • Q2) yes
  • Q3) I think so?

Closely related issue: being able to configure pman with a set of affinity labels. Using tolerations and affinities, we can deploy multiple pman instances which correspond to different configurations, e.g. one pman will prefer low-CPU, high-memory nodes; another pman will prefer high-CPU, high-memory nodes; ...

*GPU-intensive does not necessarily mean graphically intensive, e.g. machine learning
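The affinity-labels idea above could be sketched like this, assuming a hypothetical per-instance setting such as `NODE_AFFINITY_LABELS='cpu=low,memory=high'` (the variable name and format are made up for illustration): each pman deployment parses its own labels into a preferred node affinity, steering its jobs toward a different node pool.

```python
def affinity_from_labels(labels: str) -> dict:
    """Parse 'key1=val1,key2=val2' into a preferred Kubernetes node affinity.

    Hypothetical helper: 'labels' would come from a per-instance setting
    such as NODE_AFFINITY_LABELS; combined with tolerations, each pman
    instance can then prefer a different set of nodes.
    """
    expressions = []
    for pair in labels.split(","):
        key, value = pair.split("=")
        expressions.append(
            {"key": key.strip(), "operator": "In", "values": [value.strip()]}
        )
    # 'preferred' (soft) rather than 'required' (hard), matching the idea
    # that each instance prefers, but is not restricted to, its node pool.
    return {"nodeAffinity": {
        "preferredDuringSchedulingIgnoredDuringExecution": [
            {"weight": 1, "preference": {"matchExpressions": expressions}}
        ]}}
```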

qxprakash commented 2 years ago

@jennydaman pod-definition.yml is the configuration/manifest/specification of the pods which are to be scheduled on the cluster nodes. Tolerations are set on pods in their manifests; that is what I was saying, and pod-definition.yml was just an example name for such a manifest. I hope that's clear.

qxprakash commented 2 years ago

> I still don't understand what you mean by pod-definition.yml but moving on...
>
>   • Q1) yes*
>   • Q2) yes
>   • Q3) I think so?
>
> Closely related issue: being able to configure pman with a set of affinity labels. Using tolerations and affinities, we can deploy multiple pman instances which correspond to different configurations, e.g. one pman will prefer low-CPU, high-memory nodes; another pman will prefer high-CPU, high-memory nodes; ...
>
> *GPU-intensive does not necessarily mean graphically intensive, e.g. machine learning

So what you're saying is: we'll have multiple instances of pman, each preferring to schedule pods on a different set of nodes (catering to different types of workloads) through its tolerations and affinities. That sounds cool, but tell me this: if we have multiple instances of pman, how would pfcon know which pman instance it should send the job description to? Will this be defined in the job description itself, or...?