Closed: ldx closed this issue 4 years ago.
Kip has a loose binding of pods to nodes. A node that was created for one pod can be used by a different pod if the original pod goes away. This is probably a bit more complex than it needs to be but can be helpful in a very dynamic environment.
`BindingNodeScaler.podMatchesNode` needs to be updated to match on the GPU spec in addition to `instanceType`: https://github.com/elotl/kip/blob/bcox-debian-image/pkg/server/nodemanager/node_scaler.go#L68-L75. I think the function only needs to check that the `resources.GPU` spec strings are equal.
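Roughly, the proposed check could look like the sketch below (simplified, hypothetical types and signature; the real structs and `podMatchesNode` in `node_scaler.go` differ):

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for kip's API types; these only
// illustrate the comparison being proposed, not the real pkg/api structs.
type ResourceSpec struct {
	InstanceType string
	GPU          string // e.g. "1" or "2 nvidia-tesla-k80"
}

type PodSpec struct{ Resources ResourceSpec }
type NodeSpec struct{ Resources ResourceSpec }

// podMatchesNode: a node can be reused for a pod only if the instance type
// matches and the GPU spec strings are equal.
func podMatchesNode(pod PodSpec, node NodeSpec) bool {
	return pod.Resources.InstanceType == node.Resources.InstanceType &&
		pod.Resources.GPU == node.Resources.GPU
}

func main() {
	pod := PodSpec{Resources: ResourceSpec{InstanceType: "n1-standard-4", GPU: "2 nvidia-tesla-k80"}}
	node := NodeSpec{Resources: ResourceSpec{InstanceType: "n1-standard-4", GPU: "1 nvidia-tesla-k80"}}
	fmt.Println(podMatchesNode(pod, node)) // false: GPU specs differ
}
```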
Thanks, I added this.
I added a new field to the instance type data in the instance selector, `supportedGPUTypes`, that holds a map of available GPU types and the maximum number of them that can be attached to the instance.
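For illustration, an entry in the instance data could look roughly like this (the field names other than `supportedGPUTypes` and the values are placeholders, not the real generated data):

```json
{
  "instanceType": "n1-standard-4",
  "cpu": 4,
  "memory": 15.0,
  "gpu": 0,
  "supportedGPUTypes": {
    "nvidia-tesla-k80": 8,
    "nvidia-tesla-p100": 4,
    "nvidia-tesla-t4": 4,
    "nvidia-tesla-v100": 8
  }
}
```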
I also made the script that generates the instance data update the `gpu` field, though users are expected to specify a GPU type on GCE.

I changed the GPU field in the resource spec to optionally hold the GPU type, for example "2 nvidia-tesla-k80". Specifying just the number of GPUs is still valid, and will result in a default GPU type being attached to the instance.
When converting a k8s pod to a kip pod, I added some code to take the node selector into account, which is the current way in k8s to select a node with a particular GPU type. We handle the GKE-style node selector, e.g. `cloud.google.com/gke-accelerator: nvidia-tesla-v100`, but this is a bit limited, since, as a virtual node, kip can support multiple different GPU types. So I also added support for another node selector label, where the GPU type is encoded in the node label key, for example `node.elotl.co/gpu-tesla-k80`. This way a) the operator can expose via kip any number of GPU types they would like to support on the same virtual node, and b) the user can still select a particular type. Note: using the GKE-style label will automatically start an nvidia-driver daemonset on GKE, which we don't need.

I added the necessary logic to the GCE backend to attach the required number of GPUs to the instance.
I changed the provider config file so the operator can expose GPUs by adding extra node labels, and can specify the number of GPUs they would like to enable. Example:
```yaml
kubelet:
  capacity:
    cpu: "100"
    memory: "512Gi"
    pods: "200"
    nvidia.com/gpu: "100"
  labels:
    node.elotl.co/gpu-nvidia-tesla-p4: ""
    node.elotl.co/gpu-nvidia-tesla-t4: ""
    node.elotl.co/gpu-nvidia-tesla-k80: ""
    node.elotl.co/gpu-nvidia-tesla-p100: ""
    node.elotl.co/gpu-nvidia-tesla-v100: ""
```
This also means that to configure capacity, the user will have to use `kubelet.capacity` instead of `kubelet`.
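To make the migration concrete, here is a sketch of the old layout (assumed from the description above, not taken from the actual docs):

```yaml
kubelet:
  cpu: "100"
  memory: "512Gi"
  pods: "200"
```

and the new one:

```yaml
kubelet:
  capacity:
    cpu: "100"
    memory: "512Gi"
    pods: "200"
```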