madhuakula / kubernetes-goat

Kubernetes Goat is a "Vulnerable by Design" cluster environment to learn and practice Kubernetes security using an interactive hands-on playground 🚀
https://madhuakula.com/kubernetes-goat

pod(system-monitor-deployment) CrashLoopBackOff #66

Closed: marchanbbb closed this issue 1 year ago

marchanbbb commented 2 years ago

The pod (system-monitor-deployment) can't run. The error log:

runtime: failed to create new OS thread (have 2 already; errno=22)
fatal error: newosproc

runtime stack:
runtime.throw(0x37b713, 0x9)
    /usr/local/go/src/runtime/panic.go:566 +0x78
runtime.newosproc(0x10824000, 0x10833fe0)
    /usr/local/go/src/runtime/os_linux.go:160 +0x1b0
runtime.newm(0x3b3394, 0x0)
    /usr/local/go/src/runtime/proc.go:1572 +0x12c
runtime.main.func1()
    /usr/local/go/src/runtime/proc.go:126 +0x24
runtime.systemstack(0x4e6f00)
    /usr/local/go/src/runtime/asm_arm.s:247 +0x80
runtime.mstart()
    /usr/local/go/src/runtime/proc.go:1079

goroutine 1 [running]:
runtime.systemstack_switch()
    /usr/local/go/src/runtime/asm_arm.s:192 +0x4 fp=0x1081e7ac sp=0x1081e7a8
runtime.main()
    /usr/local/go/src/runtime/proc.go:127 +0x5c fp=0x1081e7d4 sp=0x1081e7ac
runtime.goexit()
    /usr/local/go/src/runtime/asm_arm.s:998 +0x4 fp=0x1081e7d4 sp=0x1081e7d4

madhuakula commented 2 years ago

Sorry for missing this. @marchanbbb could you please provide some info on the underlying infra/environment setup?

marchanbbb commented 2 years ago

MacBook Pro (M1 Pro chip), GoVersion: "go1.17.6", Kubernetes "v1.23.3", running in minikube.

madhuakula commented 2 years ago

Thanks for the info. Ah! Looks like some issue with the underlying infra. I haven't tested this setup yet, but let me double-check.

Could you please share which image was downloaded, along with the output of:

kubectl describe pod <podname>
kubectl get pod <podname> -o yaml

I just want to rule out architecture and mount issues.
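
For reference, one quick way to check which architecture a pulled image targets (a hedged sketch: it assumes minikube's Docker runtime, and uses the system-monitor image name from this thread as an example):

# Point the local docker CLI at minikube's Docker daemon
eval $(minikube docker-env)

# Show which image the pod is actually running
kubectl get pod <podname> -o jsonpath='{.spec.containers[*].image}'

# Inspect the pulled image's OS/architecture (e.g. linux/amd64 vs linux/arm)
docker image inspect madhuakula/k8s-goat-system-monitor --format '{{.Os}}/{{.Architecture}}'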

josephwhenry commented 2 years ago

@madhuakula I'm having a similar issue with the internal-proxy pod.

% kubectl describe pod internal-proxy-deployment-567dc6dcb5-bq457
Name:             internal-proxy-deployment-567dc6dcb5-bq457
Namespace:        default
Priority:         0
Service Account:  default
Node:             minikube/192.168.49.2
Start Time:       Wed, 07 Sep 2022 15:09:43 -0500
Labels:           app=internal-proxy
                  pod-template-hash=567dc6dcb5
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/internal-proxy-deployment-567dc6dcb5
Containers:
  internal-api:
    Container ID:   docker://dffcbfdd2ce823e3c40e4082a417455c44daab3bd6ed18972600bcada8f395dc
    Image:          madhuakula/k8s-goat-internal-api
    Image ID:       docker-pullable://madhuakula/k8s-goat-internal-api@sha256:e9ae791e8e418d693e603155085ee9fc92d56621c9b933f2875dc57961d96db9
    Port:           3000/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Wed, 07 Sep 2022 15:15:48 -0500
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137
      Started:      Wed, 07 Sep 2022 15:13:59 -0500
      Finished:     Wed, 07 Sep 2022 15:14:56 -0500
    Ready:          True
    Restart Count:  4
    Limits:
      cpu:     30m
      memory:  40Mi
    Requests:
      cpu:        30m
      memory:     40Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vg6gx (ro)
  info-app:
    Container ID:   docker://65d53e7db117af2287061d96f31edfc98b662f5d3da7697d096f597c54e33044
    Image:          madhuakula/k8s-goat-info-app
    Image ID:       docker-pullable://madhuakula/k8s-goat-info-app@sha256:dffd135e766d689ff6055f996e65df4d19f2962f9af8b6b52349ba3b469a8c25
    Port:           5000/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Wed, 07 Sep 2022 15:10:49 -0500
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     30m
      memory:  40Mi
    Requests:
      cpu:        30m
      memory:     40Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vg6gx (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-vg6gx:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Guaranteed
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  6m36s                  default-scheduler  Successfully assigned default/internal-proxy-deployment-567dc6dcb5-bq457 to minikube
  Normal   Pulled     5m50s                  kubelet            Successfully pulled image "madhuakula/k8s-goat-internal-api" in 43.91241604s
  Normal   Pulling    5m50s                  kubelet            Pulling image "madhuakula/k8s-goat-info-app"
  Normal   Pulled     5m31s                  kubelet            Successfully pulled image "madhuakula/k8s-goat-info-app" in 19.303302184s
  Normal   Started    5m30s                  kubelet            Started container info-app
  Normal   Created    5m30s                  kubelet            Created container info-app
  Normal   Pulled     4m53s                  kubelet            Successfully pulled image "madhuakula/k8s-goat-internal-api" in 620.053214ms
  Normal   Pulled     3m42s                  kubelet            Successfully pulled image "madhuakula/k8s-goat-internal-api" in 638.128011ms
  Normal   Pulling    2m21s (x4 over 6m34s)  kubelet            Pulling image "madhuakula/k8s-goat-internal-api"
  Normal   Started    2m20s (x4 over 5m50s)  kubelet            Started container internal-api
  Normal   Created    2m20s (x4 over 5m50s)  kubelet            Created container internal-api
  Normal   Pulled     2m20s                  kubelet            Successfully pulled image "madhuakula/k8s-goat-internal-api" in 626.234188ms
  Warning  BackOff    58s (x6 over 3m56s)    kubelet            Back-off restarting failed container

This was working fine yesterday, and I updated to Monterey 12.5.1 last night, so it's probably related. Unfortunately, I'm a kubernoob and don't know why beyond that.

BTW, my hardware specs:

MacBook Pro (16-inch, 2019)
Processor: 2.6 GHz 6-core Intel Core i7
Memory 64 GB 2667 MHz DDR4

josephwhenry commented 2 years ago

I added additional memory in scenarios/internal-proxy/deployment.yaml, and it seems to work now.

madhuakula commented 2 years ago

Thanks for the confirmation @josephwhenry. @marchanbbb, could you please confirm whether this fixes the issue for you as well?

If it does, I can update the Deployment and push the changes.

wo1f1ow commented 2 years ago

@josephwhenry I have the same issue. How much memory should I add, and where? I can't get that scenario (Container escape to the host system) to work because of this.

josephwhenry commented 2 years ago

@wo1f1ow In scenarios/internal-proxy/deployment.yaml, I set the memory fields to 100Mi for the internal-api and info-app containers (previously they were 40Mi). Then I rebooted the whole cluster and it worked fine.
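
For anyone following along, the change amounts to raising the memory values in each container's resources block. A minimal sketch of the fragment involved (the exact layout of the repo's deployment.yaml is assumed here; the 30m/40Mi starting values come from the describe output earlier in this thread):

# scenarios/internal-proxy/deployment.yaml -- per-container fragment (sketch)
resources:
  limits:
    cpu: 30m
    memory: 100Mi   # raised from 40Mi to stop the OOMKilled (exit code 137) restarts
  requests:
    cpu: 30m
    memory: 100Mi   # keeping requests == limits preserves the Guaranteed QoS class

Re-applying the manifest with kubectl apply -f scenarios/internal-proxy/deployment.yaml should roll the pods without rebooting the whole cluster.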

marchanbbb commented 2 years ago

It does not work for me.

marchanbbb commented 2 years ago

kubernetes-goat % kubectl get pod system-monitor-deployment-7b9574bf95-lwj7f -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2022-09-19T09:17:43Z"
  generateName: system-monitor-deployment-7b9574bf95-
  labels:
    app: system-monitor
    pod-template-hash: 7b9574bf95
  name: system-monitor-deployment-7b9574bf95-lwj7f
  namespace: default
  ownerReferences:

wo1f1ow commented 2 years ago

By adding memory I managed to solve the internal-proxy pod issue, but system-monitor still fails:


kubectl describe pod system-monitor-deployment-584ddbfc4d-j2sp9
Name:             system-monitor-deployment-584ddbfc4d-j2sp9
Namespace:        default
Priority:         0
Service Account:  default
Node:             minikube/192.168.49.2
Start Time:       Wed, 28 Sep 2022 14:02:00 +0200
Labels:           app=system-monitor
                  pod-template-hash=584ddbfc4d
Annotations:      kubectl.kubernetes.io/restartedAt: 2022-09-28T14:01:38+02:00
Status:           Running
IP:               192.168.49.2
IPs:
  IP:           192.168.49.2
Controlled By:  ReplicaSet/system-monitor-deployment-584ddbfc4d
Containers:
  system-monitor:
    Container ID:   docker://237171770eaea083fe606e1c628741f2e7c3212b7077dad411c66000f5a1f4d7
    Image:          madhuakula/k8s-goat-system-monitor
    Image ID:       docker-pullable://madhuakula/k8s-goat-system-monitor@sha256:06b58bd080201ea0d4048befdd2159f384b61ce457a5a96e3001db629b5caa40
    Port:           8080/TCP
    Host Port:      8080/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Wed, 28 Sep 2022 14:53:10 +0200
      Finished:     Wed, 28 Sep 2022 14:53:11 +0200
    Ready:          False
    Restart Count:  17
    Limits:
      cpu:     20m
      memory:  50Mi
    Requests:
      cpu:     20m
      memory:  50Mi
    Environment:
      K8S_GOAT_VAULT_KEY:  <set to the key 'k8sgoatvaultkey' in secret 'goatvault'>  Optional: false
    Mounts:
      /host-system from host-filesystem (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jjx8d (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  host-filesystem:
    Type:          HostPath (bare host directory volume)
    Path:          /
    HostPathType:
  kube-api-access-jjx8d:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Guaranteed
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Normal   Scheduled         52m                    default-scheduler  Successfully assigned default/system-monitor-deployment-584ddbfc4d-j2sp9 to minikube
  Warning  FailedScheduling  52m                    default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
  Normal   Pulled            51m                    kubelet            Successfully pulled image "madhuakula/k8s-goat-system-monitor" in 24.360870678s
  Normal   Pulled            51m                    kubelet            Successfully pulled image "madhuakula/k8s-goat-system-monitor" in 1.658600168s
  Normal   Created           51m (x3 over 51m)      kubelet            Created container system-monitor
  Normal   Pulled            51m                    kubelet            Successfully pulled image "madhuakula/k8s-goat-system-monitor" in 1.68063071s
  Normal   Started           51m (x3 over 51m)      kubelet            Started container system-monitor
  Warning  BackOff           50m (x4 over 51m)      kubelet            Back-off restarting failed container
  Normal   Pulling           50m (x4 over 52m)      kubelet            Pulling image "madhuakula/k8s-goat-system-monitor"
  Normal   Pulled            50m                    kubelet            Successfully pulled image "madhuakula/k8s-goat-system-monitor" in 1.752681209s
  Normal   Pulled            47m                    kubelet            Successfully pulled image "madhuakula/k8s-goat-system-monitor" in 1m59.819545221s
  Normal   Pulled            45m                    kubelet            Successfully pulled image "madhuakula/k8s-goat-system-monitor" in 1m41.638316088s
  Normal   Pulling           44m (x4 over 50m)      kubelet            Pulling image "madhuakula/k8s-goat-system-monitor"
  Normal   Started           43m (x4 over 50m)      kubelet            Started container system-monitor
  Normal   Created           43m (x4 over 50m)      kubelet            Created container system-monitor
  Normal   Pulled            43m                    kubelet            Successfully pulled image "madhuakula/k8s-goat-system-monitor" in 1m46.755828006s
  Normal   Pulled            18m                    kubelet            Successfully pulled image "madhuakula/k8s-goat-system-monitor" in 1m53.868322885s
  Warning  BackOff           9m54s (x134 over 50m)  kubelet            Back-off restarting failed container
  Normal   Pulled            7m35s                  kubelet            Successfully pulled image "madhuakula/k8s-goat-system-monitor" in 3.506930169s
  Normal   Pulled            5m26s                  kubelet            Successfully pulled image "madhuakula/k8s-goat-system-monitor" in 1m50.509048508s
  Normal   Pulled            3m18s                  kubelet            Successfully pulled image "madhuakula/k8s-goat-system-monitor" in 1m37.570606628s
  Normal   Pulling           2m35s (x4 over 7m38s)  kubelet            Pulling image "madhuakula/k8s-goat-system-monitor"
  Normal   Created           53s (x4 over 7m35s)    kubelet            Created container system-monitor
  Normal   Pulled            53s                    kubelet            Successfully pulled image "madhuakula/k8s-goat-system-monitor" in 1m41.65322213s
  Normal   Started           52s (x4 over 7m33s)    kubelet            Started container system-monitor
  Warning  BackOff           12s (x10 over 7m31s)   kubelet            Back-off restarting failed container


linbil commented 2 years ago

Any update on that? Exact same issue here.

ravenium commented 1 year ago

I dug in a bit, and it looks like the version of gotty used (1.0.1) has a bad ARM binary, or at least it won't run (I tried a simple container run on my M1 Mac).

I didn't want to get too crazy, but I noticed 2.0.0-alpha.3 works well enough, so I switched up the Dockerfile and opened a PR: https://github.com/madhuakula/kubernetes-goat/pull/83

If you want to try it locally:
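
One way to do that (a sketch: the Dockerfile location inside the repo is an assumption, and gh pr checkout requires the GitHub CLI):

# Check out the PR branch and rebuild the system-monitor image locally
git clone https://github.com/madhuakula/kubernetes-goat.git
cd kubernetes-goat
gh pr checkout 83

# Build and run the image; <dockerfile-dir> stands for wherever the
# scenario's Dockerfile lives in the repo (hypothetical placeholder)
docker build -t madhuakula/k8s-goat-system-monitor:test <dockerfile-dir>
docker run --rm -p 8080:8080 madhuakula/k8s-goat-system-monitor:test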

m0riiiii commented 1 year ago

I had the same problem. I also ran it on my M1 Mac and got the error. I think the error occurs because the gotty binary system-monitor fetches is 32-bit, and the M1 chip does not support 32-bit.

I tried the new gotty binary from https://github.com/sorenisanerd/gotty/releases/download/v1.5.0/gotty_v1.5.0_linux_arm64.tar.gz and it worked well. The gotty repository system-monitor uses seems to have stopped updating about 5 years ago; sorenisanerd's repository is a fork of it that is still being maintained.
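
For illustration, a sketch of how the image's Dockerfile could fetch the maintained fork's binary instead. Only the arm64 URL above comes from this thread; the amd64 asset name is inferred from it, and the base image and entrypoint are assumptions rather than the project's actual setup:

# Hypothetical Dockerfile using gotty v1.5.0 from the maintained sorenisanerd fork
FROM alpine:3.16
# TARGETARCH (amd64, arm64, ...) is set automatically by docker buildx
ARG TARGETARCH=arm64
RUN wget -qO- "https://github.com/sorenisanerd/gotty/releases/download/v1.5.0/gotty_v1.5.0_linux_${TARGETARCH}.tar.gz" \
    | tar -xz -C /usr/local/bin
EXPOSE 8080
# Serve a writable shell over HTTP on port 8080, matching the scenario's port
ENTRYPOINT ["/usr/local/bin/gotty", "--port", "8080", "--permit-write", "sh"]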

madhuakula commented 1 year ago

The resources issue is fixed now with #94, and with #83 the gotty issue should be fixed as well. But I will look into the new gotty fork, since the original is an old, unmaintained project.

madhuakula commented 1 year ago

Pushed the fixes for the memory resources; it should work now.

Thank you!