Open itays-chase opened 3 years ago

Describe the bug
I have a microservices environment with multiple nodes and pods. Some of the nodes are full, with little or no free resources, but others still have room, and I also have the Kubernetes cluster autoscaler running. When I try to work in isolation mode, the proxy agent pod is scheduled onto an already full node; it has no awareness of which node can actually fit it, and it does not trigger any node autoscale action either. As a result, Kubernetes reports an OutOfcpu status on the pod and I cannot use the Bridge to Kubernetes feature.

Any idea about this?

Hi @itays-chase! This is probably due to a bug on our side. We are not specifying our container resource requirements (CPU, memory), so the scheduler has trouble placing our new pod correctly. We have a work item on our side tracking this, and as soon as it's fixed we'll let you know here.
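For reference, the missing resource specification the maintainers describe would normally be a `resources` stanza in the agent's pod spec. The snippet below is an illustrative sketch, not the actual Bridge to Kubernetes agent manifest; the pod name, image, and values are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-proxy-agent      # hypothetical name, for illustration
spec:
  containers:
    - name: agent
      image: example/agent:latest   # placeholder image
      resources:
        requests:                # what the scheduler uses to pick a node
          cpu: "100m"
          memory: "128Mi"
        limits:                  # hard caps enforced at runtime
          cpu: "500m"
          memory: "256Mi"
```

With `requests` set, the scheduler only binds the pod to a node with enough free CPU and memory, and a pod left Pending as unschedulable is exactly what triggers the cluster autoscaler to add a node — which matches both symptoms reported above.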