Status: Closed (Jawshua closed this issue 5 years ago)
Hi @Jawshua, this issue was fixed with this commit.
See if you can redeploy the CCM using the latest image. I would recommend deleting the DaemonSet (only the DaemonSet) from `kube-system` and re-applying the manifest found in terraform-linode-k8s:
https://github.com/linode/terraform-linode-k8s/blob/master/modules/masters/manifests/ccm-linode.yaml
Please let me know if you run into any issues with this approach.
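A sketch of that redeploy, assuming the DaemonSet is named `ccm-linode` as in the linked manifest (the raw URL below is my guess at the raw.githubusercontent.com form of that blob link; adjust names to match your cluster):

```shell
# Delete only the CCM DaemonSet from kube-system
kubectl -n kube-system delete daemonset ccm-linode

# Re-apply the manifest from terraform-linode-k8s
kubectl apply -f https://raw.githubusercontent.com/linode/terraform-linode-k8s/master/modules/masters/manifests/ccm-linode.yaml
```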
Unfortunately this is the case. A sufficiently old version is being referenced by the Terraform module:

```shell
root@localhost:~# docker inspect -f '{{ .Created }}' linode/linode-cloud-controller-manager
2018-11-30T13:46:27.144258049Z
```

Working on pushing a new one now.
Done. Please try redeploying the DaemonSet and let me know if you run into any issues. In fact, you should be able to simply delete the Pods; the DaemonSet will redeploy them after pulling the new image:

```shell
root@localhost:~# docker inspect -f '{{ .Created }}' linode/linode-cloud-controller-manager
2019-01-30T18:18:10.1684702Z
```
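A sketch of the pod-deletion route, assuming the CCM Pods carry an `app: ccm-linode` label (the label selector is my assumption, not confirmed from the manifest; check with `kubectl -n kube-system get pods --show-labels`):

```shell
# Delete the CCM Pods; the DaemonSet controller recreates them,
# pulling the freshly pushed image in the process
kubectl -n kube-system delete pods -l app=ccm-linode

# Verify the new image's creation timestamp on the node
docker inspect -f '{{ .Created }}' linode/linode-cloud-controller-manager
```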
This is almost certainly a related problem: if a node is shut down long enough for it to be removed from the NodeBalancer, it is not added back to the NodeBalancer when the node is brought back up.
General: Bug Reporting

#### Expected Behavior
When adding/removing nodes in a cluster, existing NodeBalancer endpoints should be updated to reflect the change.

#### Actual Behavior
NodeBalancers will only ever point to the nodes that were present when the k8s LoadBalancer was created.
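In other words, the CCM is missing a reconciliation pass when the node set changes. The set arithmetic such a pass needs can be sketched in shell (node names here are hypothetical; a real implementation would query the cluster's current nodes and the NodeBalancer's configured backends via their APIs):

```shell
# Nodes currently in the cluster (the desired backends),
# and backends currently configured on the NodeBalancer
printf 'node-a\nnode-b\nnode-c\n' | sort > desired.txt
printf 'node-a\nnode-d\n' | sort > configured.txt

# Backends to add: desired but not yet configured
to_add=$(comm -23 desired.txt configured.txt)

# Backends to remove: configured but no longer desired
to_remove=$(comm -13 desired.txt configured.txt)

echo "add: $to_add"
echo "remove: $to_remove"
```

Running this prints `node-b` and `node-c` as backends to add and `node-d` as the backend to remove, which is exactly the update the NodeBalancer never receives today.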
#### Steps to Reproduce the Problem
Create a service of the `LoadBalancer` type. Our specific use case is nginx-ingress (`helm install stable/nginx-ingress`).

#### Environment Specifications
`g6-standard-4` nodes

#### Screenshots, Code Blocks, and Logs
`kubectl describe service nginx-ingress` spits out error events:

#### Additional Notes
For general help or discussion, join the Kubernetes Slack team channel #linode. To sign up, use the Kubernetes Slack inviter. The Linode Community is a great place to get additional support.