michaelburch opened this issue 6 years ago
Pods fail scheduling due to insufficient CPU:
```
Warning  FailedScheduling  2m (x125 over 37m)  default-scheduler  0/6 nodes are available: 1 PodToleratesNodeTaints, 5 Insufficient cpu.
```
The autoscaler logs show that it thinks the pods will fit:

```
2018-03-13 22:17:09,634 - autoscaler.cluster - INFO - ++++ Running Scaling Loop ++++++
2018-03-13 22:17:09,873 - autoscaler.cluster - INFO - Pods to schedule: 2
2018-03-13 22:17:09,873 - autoscaler.cluster - INFO - ++++ Scaling Up Begins ++++++
2018-03-13 22:17:09,874 - autoscaler.cluster - INFO - Nodes: 5
2018-03-13 22:17:09,874 - autoscaler.cluster - INFO - To schedule: 2
2018-03-13 22:17:09,874 - autoscaler.cluster - INFO - KubePod(default, auditlogwriter-86d9755f76-l25rm) fits on k8s-agentpool1-36914057-0
2018-03-13 22:17:09,874 - autoscaler.cluster - INFO - KubePod(default, auditlogwriter-86d9755f76-lbngw) fits on k8s-agentpool1-36914057-1
2018-03-13 22:17:09,874 - autoscaler.cluster - INFO - Pending pods: 0
2018-03-13 22:17:09,875 - autoscaler.cluster - INFO - ++++ Scaling Up Ends ++++++
```
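For reference, here is a minimal sketch of the kind of per-pod fit check that would print "fits on ..." as in the loop above, assuming the check compares the pod's CPU request against the node's raw capacity rather than its allocatable CPU. The function name and all numbers are illustrative assumptions, not this project's actual code:

```python
# Hypothetical sketch of a capacity-based fit check; not the project's code.

def pod_fits(pod_request_m, node_capacity_m, node_committed_m):
    """Return True when the pod's CPU request fits within raw node capacity.

    A check like this over-commits nodes that set --system-reserved,
    because the scheduler compares requests against the smaller
    *allocatable* figure, not capacity.
    """
    return node_committed_m + pod_request_m <= node_capacity_m

# Illustrative numbers: 4000m node, 2870m already committed, 700m pod.
print(pod_fits(700, 4000, 2870))  # True -> "KubePod(...) fits on ..."
```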
The nodes are clearly short on CPU:

```
k8s-agentpool1-36914057-0   2870m   82%    3588Mi   63%
k8s-agentpool1-36914057-1   3622m   103%   3742Mi   66%
```
All nodes are configured with `"--system-reserved": "cpu=500m,memory=1.5Gi"`.
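That reserve is likely the source of the disagreement: the kubelet computes a node's allocatable resources as capacity minus system-reserved, kube-reserved, and hard eviction thresholds, and the scheduler fits pod requests against allocatable, not capacity. The percentages above are consistent with 4-core nodes and a 3500m allocatable budget (2870m/3500m ≈ 82%, 3622m/3500m ≈ 103%). A worked sketch, where the 4-core capacity is inferred from those percentages and the 700m pod request is hypothetical:

```python
# Worked example, assuming 4-core nodes (consistent with the 82% / 103%
# figures above) and the cpu=500m reserve from the kubelet config.

MILLI_CORES = 1000

capacity = 4 * MILLI_CORES                 # 4000m raw node CPU
system_reserved = 500                      # "--system-reserved": "cpu=500m"
allocatable = capacity - system_reserved   # 3500m, the scheduler's budget

committed = 2870     # the reported 2870m, used here as a stand-in for the
                     # node's summed requests (the scheduler sums requests,
                     # not live usage)
pod_request = 700    # hypothetical request of one pending pod

print(committed + pod_request <= capacity)     # True:  fits by capacity
print(committed + pod_request <= allocatable)  # False: Insufficient cpu
```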