Describe the bug
I'm not sure if this is a bug, but we're experiencing an issue where worker nodes are deleted when we delete pods. After deleting some pods, I noticed there wasn't enough CPU and memory to create new ones. I checked the number of nodes with kubectl get node and saw that only two worker nodes were left. I opened a ticket with AWS support, and they responded with logs showing that the worker nodes were deleted by my user via the node-fetch user agent. This seems strange, because I only deleted pods, not worker nodes.
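For reference, this is roughly how the delete calls could be looked up on the AWS side. It is only a sketch, assuming an EKS cluster with control-plane audit logging enabled and the default CloudWatch log group name; <cluster-name> is a placeholder:

```sh
# Sketch only: assumes EKS audit logging is enabled and shipped to the default
# CloudWatch log group; <cluster-name> is a placeholder for the real cluster name.
CLUSTER_NAME="<cluster-name>"

# Search the kube-apiserver audit stream for node deletions; the userAgent field
# in the returned entries should show which client (e.g. node-fetch) made the call.
aws logs filter-log-events \
  --log-group-name "/aws/eks/${CLUSTER_NAME}/cluster" \
  --log-stream-name-prefix kube-apiserver-audit \
  --filter-pattern '{ ($.verb = "delete") && ($.objectRef.resource = "nodes") }' \
  --query 'events[].message' \
  --output text
```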
To Reproduce
Steps to reproduce the behavior:
1. Delete some pods.
2. See an error that new pods cannot be created because of low CPU and memory.
3. Check the number of nodes with kubectl get node and see that only 2 worker nodes remain (rough commands below).
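Roughly the commands involved in step 3 (a sketch; the events query is only an extra check and may not show anything, depending on how recently the nodes were removed):

```sh
# Check how many worker nodes the cluster still reports
kubectl get nodes -o wide

# Extra check: recent events attached to Node objects (may show RemovingNode activity)
kubectl get events --all-namespaces --field-selector involvedObject.kind=Node
```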
Expected behavior
Deleting pods should remove only the pods, not the worker nodes.
Screenshots
Environment (please complete the following information):
Lens Version: 2024.8.291605-latest
OS: Linux, Ubuntu
Installation method (e.g. snap or AppImage in Linux): Debian package (.deb)
Logs:
When you run the application executable from the command line you will see some logging output. Please paste it here:
Your logs go here...
Kubeconfig:
Quite often the problems are caused by a malformed kubeconfig which the application tries to load. Please share your kubeconfig; remember to remove any secrets and sensitive information.
your kubeconfig here
Additional context
As mentioned above, the EC2 instance in the Auto Scaling group is still not terminated, but kubectl sees only 2 worker nodes.
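A sketch of how the two views can be compared; the Auto Scaling group name is a placeholder and will differ per setup:

```sh
ASG_NAME="<node-group-asg-name>"   # placeholder

# Instances the Auto Scaling group still tracks (these are not terminated)
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names "${ASG_NAME}" \
  --query 'AutoScalingGroups[0].Instances[].{Id:InstanceId,State:LifecycleState}' \
  --output table

# Nodes the Kubernetes API still knows about (only 2 worker nodes show up here)
kubectl get nodes
```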
I also saw this advisory, but I'm not sure whether my issue is related: https://github.com/lensapp/lens/security/advisories/GHSA-x8mv-qr7w-4fm9