Serverless computing platform with process-based lightweight function execution and container-based application isolation. Works with Knative and bare metal/VM environments.
Currently, when deploying with the Helm charts, the resource limits for the KNIX components are fixed at deployment time.
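The snippet below is an illustrative sketch of what such a fixed limit looks like in the chart's values; the key names are assumptions and may not match the actual `values.yaml` layout of the KNIX charts:

```yaml
# Illustrative excerpt from a KNIX Helm values file (hypothetical key names):
# CPU and memory limits are baked in when the chart is installed and cannot
# be adjusted per workflow afterwards.
sandbox:
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: "1"
      memory: 2Gi
```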
In addition, GPU support should be configurable per workflow at deployment time, so that a workflow's requirement to run on GPUs instead of CPUs can be defined dynamically, and so that KNIX can schedule the workflow onto a node that still has sufficient GPUs available.
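A minimal sketch of what such a per-workflow setting could look like is shown below. The field names and the workflow name are hypothetical; the idea is that the GPU request would map onto the standard Kubernetes extended resource `nvidia.com/gpu`, so the scheduler only places the workflow's sandbox on a node with enough free GPUs:

```yaml
# Hypothetical per-workflow deployment spec (field names are illustrative,
# not the actual KNIX API). The GPU request is expressed as the standard
# Kubernetes extended resource nvidia.com/gpu.
workflow:
  name: my-gpu-workflow        # illustrative name
  resources:
    requests:
      cpu: "1"
      memory: 2Gi
      nvidia.com/gpu: 1        # run on a GPU instead of CPU only
    limits:
      nvidia.com/gpu: 1
```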