kubernetes / autoscaler

Autoscaling components for Kubernetes
Apache License 2.0

Failed to scale up: Could not compute total resources: No node info #6579

Closed: amrap030 closed this issue 1 month ago

amrap030 commented 7 months ago

Which component are you using?: cluster-autoscaler

What version of the component are you using?: Juju charmed Kubernetes Autoscaler

Component version: Revision 33 (latest/stable), which bundles the latest upstream version internally

What k8s version are you using (kubectl version)?: v1.28.7

kubectl version Output
$ kubectl version

Client Version: v1.29.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.7

What environment is this in?: OpenStack

What did you expect to happen?: I would expect that the autoscaler can successfully scale up and scale down nodes.

What happened instead?: I am getting the error "Failed to scale up: Could not compute total resources: No node info for: juju-16b47904-b5c7-4b6e-86a3-1aa6f6714dad-kubernetes-worker"
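For context, the log below shows this error coming out of the scale-up path (static_autoscaler.go), where the autoscaler totals up resources across node groups from a per-group NodeInfo map; a group whose target size is positive but which has no entry in that map aborts the computation. The following is a minimal, self-contained Go sketch of that kind of check, not the actual cluster-autoscaler source; all names here (nodeGroup, nodeInfo, totalResources) are illustrative assumptions.

```go
package main

import "fmt"

// nodeInfo is a stand-in for the per-node-group template the autoscaler
// builds from a ready node (or from the cloud provider's template).
type nodeInfo struct {
	cores    int64
	memoryMB int64
}

type nodeGroup struct {
	id         string
	targetSize int
}

// totalResources mirrors, in simplified form, the step that failed here:
// every group with a positive target size must have node info, otherwise
// the whole resource computation is abandoned.
func totalResources(groups []nodeGroup, infos map[string]nodeInfo) (cores, memMB int64, err error) {
	for _, g := range groups {
		if g.targetSize <= 0 {
			continue
		}
		info, ok := infos[g.id]
		if !ok {
			return 0, 0, fmt.Errorf("could not compute total resources: no node info for: %s", g.id)
		}
		cores += info.cores * int64(g.targetSize)
		memMB += info.memoryMB * int64(g.targetSize)
	}
	return cores, memMB, nil
}

func main() {
	groups := []nodeGroup{{id: "juju-...-kubernetes-worker", targetSize: 1}}
	// An empty map reproduces the symptom: the group is known, but no
	// NodeInfo was ever built for it.
	if _, _, err := totalResources(groups, map[string]nodeInfo{}); err != nil {
		fmt.Println("Failed to scale up:", err)
	}
}
```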

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

2024-02-28T14:41:35.595Z [pebble] HTTP API server listening on ":38813".
2024-02-28T14:41:35.596Z [pebble] Started daemon.
2024-02-28T14:41:42.401Z [pebble] GET /v1/files?action=list&path=%2F&pattern=cluster-autoscaler%2A 667.285µs 200
2024-02-28T14:41:42.414Z [pebble] POST /v1/layers 874.577µs 200
2024-02-28T14:41:42.426Z [pebble] POST /v1/files 8.386538ms 200
2024-02-28T14:41:43.614Z [pebble] POST /v1/services 11.782016ms 202
2024-02-28T14:41:43.621Z [pebble] Service "juju-autoscaler" starting: /cluster-autoscaler --namespace my-autoscaler --cloud-provider=juju --cloud-config=/config/cloud-config.yaml --nodes 1:3:16b47904-b5c7-4b6e-86a3-1aa6f6714dad:kubernetes-worker
2024-02-28T14:41:43.739Z [juju-autoscaler] I0228 14:41:43.737997      14 leaderelection.go:248] attempting to acquire leader lease my-autoscaler/cluster-autoscaler...
2024-02-28T14:41:43.755Z [juju-autoscaler] I0228 14:41:43.755230      14 leaderelection.go:258] successfully acquired lease my-autoscaler/cluster-autoscaler
2024-02-28T14:41:43.789Z [juju-autoscaler] W0228 14:41:43.789000      14 reflector.go:324] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
2024-02-28T14:41:43.789Z [juju-autoscaler] E0228 14:41:43.789164      14 reflector.go:138] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
2024-02-28T14:41:43.940Z [juju-autoscaler] W0228 14:41:43.940051      14 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource
2024-02-28T14:41:43.940Z [juju-autoscaler] E0228 14:41:43.940354      14 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource
2024-02-28T14:41:43.983Z [juju-autoscaler] I0228 14:41:43.983627      14 juju_manager.go:42] creating manager
2024-02-28T14:41:44.006Z [juju-autoscaler] I0228 14:41:44.005989      14 node_instances_cache.go:156] Start refreshing cloud provider node instances cache
2024-02-28T14:41:44.006Z [juju-autoscaler] I0228 14:41:44.006017      14 node_instances_cache.go:168] Refresh cloud provider node instances cache finished, refresh took 3.35µs
2024-02-28T14:41:44.634Z [pebble] GET /v1/changes/1/wait?timeout=4.000s 1.018728047s 200
2024-02-28T14:41:44.642Z [pebble] POST /v1/services 6.448403ms 202
2024-02-28T14:41:44.770Z [pebble] Service "juju-autoscaler" stopped
2024-02-28T14:41:44.782Z [pebble] Service "juju-autoscaler" starting: /cluster-autoscaler --namespace my-autoscaler --cloud-provider=juju --cloud-config=/config/cloud-config.yaml --nodes 1:3:16b47904-b5c7-4b6e-86a3-1aa6f6714dad:kubernetes-worker
2024-02-28T14:41:44.875Z [juju-autoscaler] I0228 14:41:44.875553      21 leaderelection.go:248] attempting to acquire leader lease my-autoscaler/cluster-autoscaler...
2024-02-28T14:41:44.889Z [juju-autoscaler] I0228 14:41:44.889331      21 leaderelection.go:258] successfully acquired lease my-autoscaler/cluster-autoscaler
2024-02-28T14:41:44.916Z [juju-autoscaler] W0228 14:41:44.916475      21 reflector.go:324] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
2024-02-28T14:41:44.916Z [juju-autoscaler] E0228 14:41:44.916596      21 reflector.go:138] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
2024-02-28T14:41:45.017Z [juju-autoscaler] W0228 14:41:45.016992      21 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource
2024-02-28T14:41:45.017Z [juju-autoscaler] E0228 14:41:45.017373      21 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource
2024-02-28T14:41:45.045Z [juju-autoscaler] I0228 14:41:45.045184      21 juju_manager.go:42] creating manager
2024-02-28T14:41:45.065Z [juju-autoscaler] I0228 14:41:45.065509      21 node_instances_cache.go:156] Start refreshing cloud provider node instances cache
2024-02-28T14:41:45.065Z [juju-autoscaler] I0228 14:41:45.065542      21 node_instances_cache.go:168] Refresh cloud provider node instances cache finished, refresh took 2.713µs
2024-02-28T14:41:45.794Z [pebble] GET /v1/changes/2/wait?timeout=4.000s 1.151287335s 200
2024-02-28T14:41:46.095Z [juju-autoscaler] W0228 14:41:46.095532      21 reflector.go:324] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
2024-02-28T14:41:46.095Z [juju-autoscaler] E0228 14:41:46.095574      21 reflector.go:138] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
2024-02-28T14:41:46.494Z [juju-autoscaler] W0228 14:41:46.494699      21 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource
2024-02-28T14:41:46.494Z [juju-autoscaler] E0228 14:41:46.494741      21 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource
2024-02-28T14:41:47.019Z [pebble] GET /v1/files?action=list&path=%2F&pattern=cluster-autoscaler%2A 333.327µs 200
2024-02-28T14:41:47.029Z [pebble] POST /v1/layers 226.612µs 200
2024-02-28T14:41:47.045Z [pebble] POST /v1/files 6.08591ms 200
2024-02-28T14:41:48.299Z [pebble] POST /v1/services 8.647082ms 202
2024-02-28T14:41:48.313Z [pebble] GET /v1/changes/3/wait?timeout=4.000s 12.406274ms 200
2024-02-28T14:41:48.322Z [pebble] POST /v1/services 6.065554ms 202
2024-02-28T14:41:48.344Z [pebble] Service "juju-autoscaler" stopped
2024-02-28T14:41:48.357Z [pebble] Service "juju-autoscaler" starting: /cluster-autoscaler --namespace my-autoscaler --cloud-provider=juju --cloud-config=/config/cloud-config.yaml --nodes 1:3:16b47904-b5c7-4b6e-86a3-1aa6f6714dad:kubernetes-worker
2024-02-28T14:41:48.426Z [juju-autoscaler] I0228 14:41:48.426438      28 leaderelection.go:248] attempting to acquire leader lease my-autoscaler/cluster-autoscaler...
2024-02-28T14:41:48.439Z [juju-autoscaler] I0228 14:41:48.439354      28 leaderelection.go:258] successfully acquired lease my-autoscaler/cluster-autoscaler
2024-02-28T14:41:48.489Z [juju-autoscaler] W0228 14:41:48.485709      28 reflector.go:324] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
2024-02-28T14:41:48.495Z [juju-autoscaler] E0228 14:41:48.495649      28 reflector.go:138] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
2024-02-28T14:41:48.572Z [juju-autoscaler] W0228 14:41:48.569205      28 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource
2024-02-28T14:41:48.572Z [juju-autoscaler] E0228 14:41:48.569248      28 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource
2024-02-28T14:41:48.584Z [juju-autoscaler] I0228 14:41:48.584359      28 juju_manager.go:42] creating manager
2024-02-28T14:41:48.615Z [juju-autoscaler] I0228 14:41:48.614729      28 node_instances_cache.go:156] Start refreshing cloud provider node instances cache
2024-02-28T14:41:48.615Z [juju-autoscaler] I0228 14:41:48.614812      28 node_instances_cache.go:168] Refresh cloud provider node instances cache finished, refresh took 5.421µs
2024-02-28T14:41:49.371Z [pebble] GET /v1/changes/4/wait?timeout=4.000s 1.047874313s 200
2024-02-28T14:41:49.855Z [juju-autoscaler] W0228 14:41:49.854994      28 reflector.go:324] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
2024-02-28T14:41:49.855Z [juju-autoscaler] E0228 14:41:49.855133      28 reflector.go:138] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
2024-02-28T14:41:50.077Z [juju-autoscaler] W0228 14:41:50.077620      28 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource
2024-02-28T14:41:50.077Z [juju-autoscaler] E0228 14:41:50.077873      28 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource
2024-02-28T14:41:52.060Z [juju-autoscaler] W0228 14:41:52.060652      28 reflector.go:324] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
2024-02-28T14:41:52.060Z [juju-autoscaler] E0228 14:41:52.060816      28 reflector.go:138] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
2024-02-28T14:41:52.288Z [pebble] GET /v1/files?action=list&path=%2F&pattern=cluster-autoscaler%2A 1.85472ms 200
2024-02-28T14:41:52.298Z [pebble] POST /v1/layers 282.012µs 200
2024-02-28T14:41:52.309Z [pebble] POST /v1/files 6.492836ms 200
2024-02-28T14:41:52.502Z [juju-autoscaler] W0228 14:41:52.502187      28 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource
2024-02-28T14:41:52.502Z [juju-autoscaler] E0228 14:41:52.502223      28 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource
2024-02-28T14:41:53.592Z [pebble] POST /v1/services 7.436407ms 202
2024-02-28T14:41:53.618Z [pebble] GET /v1/changes/5/wait?timeout=4.000s 23.043136ms 200
2024-02-28T14:41:53.628Z [pebble] POST /v1/services 6.774242ms 202
2024-02-28T14:41:53.649Z [pebble] Service "juju-autoscaler" stopped
2024-02-28T14:41:53.671Z [pebble] Service "juju-autoscaler" starting: /cluster-autoscaler --namespace my-autoscaler --cloud-provider=juju --cloud-config=/config/cloud-config.yaml --nodes 1:3:16b47904-b5c7-4b6e-86a3-1aa6f6714dad:kubernetes-worker
2024-02-28T14:41:53.762Z [juju-autoscaler] I0228 14:41:53.762463      35 leaderelection.go:248] attempting to acquire leader lease my-autoscaler/cluster-autoscaler...
2024-02-28T14:41:53.776Z [juju-autoscaler] I0228 14:41:53.776150      35 leaderelection.go:258] successfully acquired lease my-autoscaler/cluster-autoscaler
2024-02-28T14:41:53.819Z [juju-autoscaler] W0228 14:41:53.819540      35 reflector.go:324] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
2024-02-28T14:41:53.819Z [juju-autoscaler] E0228 14:41:53.819636      35 reflector.go:138] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
2024-02-28T14:41:53.894Z [juju-autoscaler] W0228 14:41:53.890060      35 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource
2024-02-28T14:41:53.894Z [juju-autoscaler] E0228 14:41:53.890154      35 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource
2024-02-28T14:41:53.949Z [juju-autoscaler] I0228 14:41:53.949844      35 juju_manager.go:42] creating manager
2024-02-28T14:41:53.978Z [juju-autoscaler] I0228 14:41:53.978869      35 node_instances_cache.go:156] Start refreshing cloud provider node instances cache
2024-02-28T14:41:53.978Z [juju-autoscaler] I0228 14:41:53.978906      35 node_instances_cache.go:168] Refresh cloud provider node instances cache finished, refresh took 4.276µs
2024-02-28T14:41:54.679Z [pebble] GET /v1/changes/6/wait?timeout=4.000s 1.048653015s 200
2024-02-28T14:41:54.888Z [juju-autoscaler] W0228 14:41:54.888912      35 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource
2024-02-28T14:41:54.889Z [juju-autoscaler] E0228 14:41:54.889100      35 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource
2024-02-28T14:41:55.056Z [juju-autoscaler] W0228 14:41:55.056833      35 reflector.go:324] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
2024-02-28T14:41:55.056Z [juju-autoscaler] E0228 14:41:55.056870      35 reflector.go:138] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
2024-02-28T14:41:57.569Z [juju-autoscaler] W0228 14:41:57.569552      35 reflector.go:324] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
2024-02-28T14:41:57.569Z [juju-autoscaler] E0228 14:41:57.569600      35 reflector.go:138] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
2024-02-28T14:41:57.819Z [juju-autoscaler] W0228 14:41:57.819600      35 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource
2024-02-28T14:41:57.819Z [juju-autoscaler] E0228 14:41:57.819803      35 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource
2024-02-28T14:42:01.979Z [juju-autoscaler] W0228 14:42:01.979400      35 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource
2024-02-28T14:42:01.979Z [juju-autoscaler] E0228 14:42:01.979454      35 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource
2024-02-28T14:42:02.158Z [juju-autoscaler] W0228 14:42:02.158331      35 reflector.go:324] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
2024-02-28T14:42:02.158Z [juju-autoscaler] E0228 14:42:02.158369      35 reflector.go:138] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
2024-02-28T14:42:03.979Z [juju-autoscaler] I0228 14:42:03.979406      35 juju_cloud_provider.go:156] refreshing node groups
2024-02-28T14:42:03.979Z [juju-autoscaler] I0228 14:42:03.979450      35 juju_cloud_provider.go:161] updating node group juju-16b47904-b5c7-4b6e-86a3-1aa6f6714dad-kubernetes-worker target
2024-02-28T14:42:04.005Z [juju-autoscaler] W0228 14:42:04.004947      35 clusterstate.go:432] AcceptableRanges have not been populated yet. Skip checking
2024-02-28T14:42:10.581Z [juju-autoscaler] W0228 14:42:10.581142      35 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource
2024-02-28T14:42:10.581Z [juju-autoscaler] E0228 14:42:10.581196      35 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource
2024-02-28T14:42:13.970Z [juju-autoscaler] W0228 14:42:13.970051      35 reflector.go:324] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
2024-02-28T14:42:13.970Z [juju-autoscaler] E0228 14:42:13.970112      35 reflector.go:138] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
2024-02-28T14:42:14.021Z [juju-autoscaler] I0228 14:42:14.021505      35 juju_cloud_provider.go:156] refreshing node groups
2024-02-28T14:42:14.021Z [juju-autoscaler] I0228 14:42:14.021546      35 juju_cloud_provider.go:161] updating node group juju-16b47904-b5c7-4b6e-86a3-1aa6f6714dad-kubernetes-worker target
[... log truncated ...]
2024-02-28T15:18:33.880Z [juju-autoscaler] I0228 15:18:33.880437      35 juju_cloud_provider.go:156] refreshing node groups
2024-02-28T15:18:33.880Z [juju-autoscaler] I0228 15:18:33.880671      35 juju_cloud_provider.go:161] updating node group juju-16b47904-b5c7-4b6e-86a3-1aa6f6714dad-kubernetes-worker target
2024-02-28T15:18:43.928Z [juju-autoscaler] I0228 15:18:43.928448      35 juju_cloud_provider.go:156] refreshing node groups
2024-02-28T15:18:43.928Z [juju-autoscaler] I0228 15:18:43.928652      35 juju_cloud_provider.go:161] updating node group juju-16b47904-b5c7-4b6e-86a3-1aa6f6714dad-kubernetes-worker target
2024-02-28T15:18:51.411Z [juju-autoscaler] W0228 15:18:51.411814      35 reflector.go:324] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
2024-02-28T15:18:51.412Z [juju-autoscaler] E0228 15:18:51.412111      35 reflector.go:138] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
2024-02-28T15:18:53.986Z [juju-autoscaler] I0228 15:18:53.986070      35 juju_cloud_provider.go:156] refreshing node groups
2024-02-28T15:18:53.986Z [juju-autoscaler] I0228 15:18:53.986107      35 juju_cloud_provider.go:161] updating node group juju-16b47904-b5c7-4b6e-86a3-1aa6f6714dad-kubernetes-worker target
2024-02-28T15:18:55.677Z [juju-autoscaler] W0228 15:18:55.677332      35 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource
2024-02-28T15:18:55.677Z [juju-autoscaler] E0228 15:18:55.677563      35 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource
2024-02-28T15:19:04.041Z [juju-autoscaler] I0228 15:19:04.041628      35 juju_cloud_provider.go:156] refreshing node groups
2024-02-28T15:19:04.041Z [juju-autoscaler] I0228 15:19:04.041887      35 juju_cloud_provider.go:161] updating node group juju-16b47904-b5c7-4b6e-86a3-1aa6f6714dad-kubernetes-worker target
2024-02-28T15:19:14.089Z [juju-autoscaler] I0228 15:19:14.089877      35 juju_cloud_provider.go:156] refreshing node groups
2024-02-28T15:19:14.089Z [juju-autoscaler] I0228 15:19:14.089903      35 juju_cloud_provider.go:161] updating node group juju-16b47904-b5c7-4b6e-86a3-1aa6f6714dad-kubernetes-worker target
2024-02-28T15:19:24.126Z [juju-autoscaler] I0228 15:19:24.126170      35 juju_cloud_provider.go:156] refreshing node groups
2024-02-28T15:19:24.126Z [juju-autoscaler] I0228 15:19:24.126200      35 juju_cloud_provider.go:161] updating node group juju-16b47904-b5c7-4b6e-86a3-1aa6f6714dad-kubernetes-worker target
2024-02-28T15:19:32.717Z [juju-autoscaler] W0228 15:19:32.717796      35 reflector.go:324] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
2024-02-28T15:19:32.717Z [juju-autoscaler] E0228 15:19:32.717859      35 reflector.go:138] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
2024-02-28T15:19:35.301Z [juju-autoscaler] I0228 15:19:35.301335      35 juju_cloud_provider.go:156] refreshing node groups
2024-02-28T15:19:35.301Z [juju-autoscaler] I0228 15:19:35.301360      35 juju_cloud_provider.go:161] updating node group juju-16b47904-b5c7-4b6e-86a3-1aa6f6714dad-kubernetes-worker target
2024-02-28T15:19:35.341Z [juju-autoscaler] E0228 15:19:35.340987      35 static_autoscaler.go:459] Failed to scale up: Could not compute total resources: No node info for: juju-16b47904-b5c7-4b6e-86a3-1aa6f6714dad-kubernetes-worker
2024-02-28T15:19:35.707Z [juju-autoscaler] W0228 15:19:35.707047      35 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource
2024-02-28T15:19:35.707Z [juju-autoscaler] E0228 15:19:35.707091      35 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource
2024-02-28T15:19:45.861Z [juju-autoscaler] I0228 15:19:45.861466      35 juju_cloud_provider.go:156] refreshing node groups
2024-02-28T15:19:45.861Z [juju-autoscaler] I0228 15:19:45.861500      35 juju_cloud_provider.go:161] updating node group juju-16b47904-b5c7-4b6e-86a3-1aa6f6714dad-kubernetes-worker target
2024-02-28T15:19:45.917Z [juju-autoscaler] E0228 15:19:45.917046      35 static_autoscaler.go:459] Failed to scale up: Could not compute total resources: No node info for: juju-16b47904-b5c7-4b6e-86a3-1aa6f6714dad-kubernetes-worker
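One possible reading of this log (an interpretation, not something stated above): the autoscaler normally builds the per-group node info from a ready node that the cloud provider maps back to the node group, or, failing that, from a template supplied by the provider; if neither is available for juju-16b47904-b5c7-4b6e-86a3-1aa6f6714dad-kubernetes-worker, the group ends up without node info and every scale-up attempt fails with the error above. The repeated *v1beta1.PodDisruptionBudget and *v1beta1.CSIStorageCapacity list failures also suggest the bundled cluster-autoscaler build is older than the v1.28 API server it is talking to, since those beta APIs are no longer served. Below is a rough, self-contained Go sketch of that node-info fallback, with purely hypothetical names (provider, templateFor, buildNodeInfos); it is not the juju provider's actual implementation.

```go
package main

import (
	"errors"
	"fmt"
)

var errNotImplemented = errors.New("template node info not implemented")

// provider is a hypothetical stand-in for a cloud provider integration;
// supplying a template for an empty group may legitimately be unsupported.
type provider struct {
	readyNodesByGroup map[string][]string // group id -> ready node names
}

func (p provider) templateFor(groupID string) (string, error) {
	return "", errNotImplemented
}

// buildNodeInfos sketches how a group can end up with no node info:
// prefer a real ready node from the group, fall back to a provider
// template, otherwise leave the group absent from the map (which later
// surfaces as "No node info for: <group>").
func buildNodeInfos(p provider, groupIDs []string) map[string]string {
	infos := map[string]string{}
	for _, id := range groupIDs {
		if nodes := p.readyNodesByGroup[id]; len(nodes) > 0 {
			infos[id] = "built from node " + nodes[0]
			continue
		}
		if tpl, err := p.templateFor(id); err == nil {
			infos[id] = tpl
			continue
		}
		// No ready node matched to the group and no template available.
	}
	return infos
}

func main() {
	p := provider{readyNodesByGroup: map[string][]string{}}
	infos := buildNodeInfos(p, []string{"juju-...-kubernetes-worker"})
	if _, ok := infos["juju-...-kubernetes-worker"]; !ok {
		fmt.Println("no node info available for the worker group")
	}
}
```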
k8s-triage-robot commented 3 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 1 month ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 1 month ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/autoscaler/issues/6579#issuecomment-2295321629):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.