I'm running a single-node K8s DeepOps stack on a GPU machine (4x NVIDIA RTX A6000). Meanwhile, a lot of workloads are running on the system that don't need GPU support, so my plan is to add 2 additional master nodes and some worker nodes later. I know the documentation for adding nodes: https://github.com/NVIDIA/deepops/tree/master/docs/k8s-cluster#adding-nodes
Nevertheless, I'm wondering whether I need to set up an HAProxy load balancer machine in order to reach the kube-apiserver afterwards. Does anyone have experience with that?
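For context, the kind of HAProxy setup I have in mind would be a TCP frontend in front of the three control-plane nodes, roughly like this (hostnames and IPs are placeholders, not from my actual cluster):

```
frontend kube-apiserver
    bind *:6443
    mode tcp
    option tcplog
    default_backend kube-apiserver

backend kube-apiserver
    mode tcp
    option tcp-check
    balance roundrobin
    server master1 10.0.0.11:6443 check
    server master2 10.0.0.12:6443 check
    server master3 10.0.0.13:6443 check
```

Kubeconfigs and worker kubelets would then point at the load balancer address instead of a single master.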