Hey @tgubeli thanks for the heads up. We've engaged the RHPDS team to help get this resolved. I'll report back here when we have a fix in place. As a workaround, you could manually increase the number of replicas on your worker node MachineSet after the cluster is provisioned.
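For reference, a minimal sketch of that workaround, assuming the default `openshift-machine-api` namespace; the MachineSet name is a placeholder, so substitute one from your own cluster:

```sh
# List the worker MachineSets and their current replica counts
oc get machinesets -n openshift-machine-api

# Scale the chosen MachineSet up (replace the name with one from the output above);
# new worker Machines are provisioned and join the cluster automatically
oc scale machineset <worker-machineset-name> --replicas=4 -n openshift-machine-api
```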
I just deployed a 40-user workshop using a modified user-to-worker-node ratio and there are 8 worker nodes - hopefully that should suffice! 😄 Thanks again for bringing this to our attention @tgubeli
Hey Andy, no problem at all. Thank you!
I've set up this workshop for 40 participants, so 40 service mesh deployments. By default, OpenShift limits the number of pods per worker node to 250. That limit was reached very quickly (the OCP "large" cluster instance has only 2 worker nodes), which meant no more applications could be deployed in the cluster.
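As a quick check, something like the following (a sketch, not part of the workshop scripts) shows how close each worker node is to that 250-pod ceiling:

```sh
# Per-node pod capacity as reported by the kubelet (default 250)
oc get nodes -o custom-columns=NAME:.metadata.name,PODS:.status.allocatable.pods

# Count the pods currently running on each node across all namespaces
oc get pods --all-namespaces -o wide --field-selector=status.phase=Running \
  | awk 'NR>1 {count[$8]++} END {for (n in count) print n, count[n]}'
```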
You must consider the number of pods the deployment script creates relative to the number of participants (the pods per Service Mesh instance plus the pods the Jupyter spawner creates). This could be addressed either by changing the maxPods-per-worker-node parameter to 500 (which is supported; the default is 250 pods per worker node) or by adding more worker nodes to the OpenShift deployment/cluster. Ref: https://docs.openshift.com/container-platform/4.10/scalability_and_performance/planning-your-environment-according-to-object-maximums.html
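A rough sketch of the maxPods change, assuming the standard `worker` MachineConfigPool; the `set-max-pods` name and the `custom-kubelet=large-pods` label are just illustrative:

```sh
# Label the worker MachineConfigPool so the KubeletConfig below can select it
oc label machineconfigpool worker custom-kubelet=large-pods

# Raise the per-node pod limit from the default 250 to 500
cat <<EOF | oc apply -f -
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: large-pods
  kubeletConfig:
    maxPods: 500
EOF
```

Keep in mind that applying a KubeletConfig rolls out a new machine config, so the worker nodes drain and reboot in sequence, which would briefly disrupt a running workshop.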