digitalocean / DOKS

Managed Kubernetes designed for simple and cost effective container orchestration.
https://www.digitalocean.com/products/kubernetes/
Apache License 2.0

Scale node pool to zero throws HTTP 500 #16

Closed tedmiston closed 4 years ago

tedmiston commented 5 years ago
$ doctl kubernetes cluster node-pool list my-cluster
ID                                      Name                       Size           Count    Tags                                                       Nodes
xxx    my-cluster-default-pool    s-1vcpu-2gb    1        k8s,k8s:xxx,k8s:worker    [my-cluster-default-pool-bvay]

$ doctl kubernetes cluster node-pool update my-cluster my-cluster-default-pool --count=0
Error: PUT https://api.digitalocean.com/v2/kubernetes/clusters/xxx/node_pools/xxx: 500 Server Error

(Actual IDs xxx'd out.)

I'm not sure what I expected scaling the default worker node pool to zero to do, but an HTTP 500 response feels like a bug.

Update: The same error occurs when scaling a manually created node pool to zero as well.

Versions:

$ doctl version
doctl version 1.31.2-release
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-21T15:34:26Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:50Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
timoreimann commented 5 years ago

Hey there @tedmiston. It looks like you have only one node pool running in that cluster. For now, we require at least one node to exist per cluster and otherwise return an error. If you add another node pool and then retry scaling that first pool down to a count of zero, the operation should work.
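As a sketch, the workaround could look like this with doctl (the pool name temp-pool and the droplet size are illustrative, not required values):

```shell
# Add a temporary second node pool so the cluster keeps at least one node.
doctl kubernetes cluster node-pool create my-cluster \
  --name temp-pool \
  --size s-1vcpu-2gb \
  --count 1

# With another pool present, scaling the original pool to zero should succeed.
doctl kubernetes cluster node-pool update my-cluster my-cluster-default-pool \
  --count 0
```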

Fully agree that the HTTP 500 response is not very helpful. Thanks for pointing it out; we'll get that addressed shortly and return a meaningful error message. I'll keep the ticket open at least until an improvement has shipped.

timoreimann commented 4 years ago

We should be shipping a meaningful error code and message by now. Our documentation should also describe the constraint of having at least one non-empty node pool.

I'll be closing the issue, but don't hesitate to comment again or file a new bug report if you feel the way we handle this could still be improved.