Closed eltan-ing closed 6 months ago
Hey @eltan-ing,
Are you still running into this issue? I've not been able to recreate it on my own setup using viya4-iac-k8s:3.6.0:
```
$ kubectl top nodes
NAME                             CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
jarpat-k1-oss-cas-01             64m          0%     1163Mi          0%
jarpat-k1-oss-cas-02             62m          0%     1129Mi          0%
jarpat-k1-oss-cas-03             52m          0%     1290Mi          1%
jarpat-k1-oss-compute-01         66m          0%     1149Mi          0%
jarpat-k1-oss-control-plane-01   271m         13%    1788Mi          46%
jarpat-k1-oss-control-plane-02   208m         10%    1526Mi          40%
jarpat-k1-oss-control-plane-03   171m         8%     1337Mi          35%
jarpat-k1-oss-stateful-01        41m          0%     892Mi           2%
jarpat-k1-oss-stateless-01       55m          0%     896Mi           2%
jarpat-k1-oss-stateless-02       51m          0%     881Mi           2%
jarpat-k1-oss-system-01          97m          1%     1074Mi          6%
```
```
$ kubectl get all -l app.kubernetes.io/name=metrics-server -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
pod/metrics-server-84b8898677-6flpb   1/1     Running   0          10m
pod/metrics-server-84b8898677-vdmzg   1/1     Running   0          10m
pod/metrics-server-84b8898677-wkhhn   1/1     Running   0          10m

NAME                     TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/metrics-server   ClusterIP   10.43.94.64   <none>        443/TCP   10m

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/metrics-server   3/3     3            3           10m

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/metrics-server-84b8898677   3         3         3       10m
```
```
$ kubectl get nodes -o wide
NAME                             STATUS   ROLES           AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
jarpat-k1-oss-cas-01             Ready    <none>          11m   v1.26.7   10.12.32.239   <none>        Ubuntu 22.04.3 LTS   5.15.0-89-generic   containerd://1.6.20
jarpat-k1-oss-cas-02             Ready    <none>          10m   v1.26.7   10.12.34.153   <none>        Ubuntu 22.04.3 LTS   5.15.0-89-generic   containerd://1.6.20
jarpat-k1-oss-cas-03             Ready    <none>          11m   v1.26.7   10.12.38.230   <none>        Ubuntu 22.04.3 LTS   5.15.0-89-generic   containerd://1.6.20
jarpat-k1-oss-compute-01         Ready    <none>          10m   v1.26.7   10.12.33.151   <none>        Ubuntu 22.04.3 LTS   5.15.0-89-generic   containerd://1.6.20
jarpat-k1-oss-control-plane-01   Ready    control-plane   12m   v1.26.7   10.12.14.70    <none>        Ubuntu 22.04.3 LTS   5.15.0-89-generic   containerd://1.6.20
jarpat-k1-oss-control-plane-02   Ready    control-plane   11m   v1.26.7   10.12.34.22    <none>        Ubuntu 22.04.3 LTS   5.15.0-89-generic   containerd://1.6.20
jarpat-k1-oss-control-plane-03   Ready    control-plane   11m   v1.26.7   10.12.35.240   <none>        Ubuntu 22.04.3 LTS   5.15.0-89-generic   containerd://1.6.20
jarpat-k1-oss-stateful-01        Ready    <none>          11m   v1.26.7   10.12.14.138   <none>        Ubuntu 22.04.3 LTS   5.15.0-89-generic   containerd://1.6.20
jarpat-k1-oss-stateless-01       Ready    <none>          11m   v1.26.7   10.12.36.13    <none>        Ubuntu 22.04.3 LTS   5.15.0-89-generic   containerd://1.6.20
jarpat-k1-oss-stateless-02       Ready    <none>          10m   v1.26.7   10.12.14.73    <none>        Ubuntu 22.04.3 LTS   5.15.0-89-generic   containerd://1.6.20
jarpat-k1-oss-system-01          Ready    <none>          11m   v1.26.7   10.12.12.242   <none>        Ubuntu 22.04.3 LTS   5.15.0-89-generic   containerd://1.6.20
```
Marking as stale/inactive. If there are further questions, please open a new GitHub issue.
I have an issue with the deployment of a bare-metal cluster using oss-k8s.sh. The deployment completes successfully, but when I run `kubectl top nodes`, I get the error `error: metrics not available yet`.
Can you help me figure out what the issue might be? This could also cause problems for Pods deployed with HPA, since the autoscaler depends on these metrics.
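For context, on bare-metal clusters this error usually means metrics-server cannot scrape the kubelets, commonly because the kubelet serving certificates are not signed by the cluster CA (check `kubectl -n kube-system logs deploy/metrics-server` for TLS errors). A frequently used, less secure workaround is to pass `--kubelet-insecure-tls` to metrics-server. This is a minimal sketch of such a Deployment patch, assuming metrics-server runs as a Deployment named `metrics-server` in `kube-system`; the other args shown are illustrative and should match your actual deployment:

```yaml
# Hypothetical strategic-merge patch for the metrics-server Deployment.
# Apply with something like:
#   kubectl -n kube-system patch deployment metrics-server --patch-file patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
spec:
  template:
    spec:
      containers:
        - name: metrics-server
          args:
            - --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP
            - --kubelet-insecure-tls   # skips kubelet cert verification; fine for labs, not recommended for production
```

The safer long-term fix is to have the kubelet serving certificates signed by the cluster CA (e.g. kubelet `serverTLSBootstrap`) rather than disabling verification.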