keycloak / keycloak-benchmark

Keycloak Benchmark
https://www.keycloak.org/keycloak-benchmark/
Apache License 2.0

M1 macOS - minikube-based provisioning fails #261

Open kami619 opened 2 years ago

kami619 commented 2 years ago

Describe the bug

When we start the minikube-based provisioning on an M1 Mac with a Podman-driver-based VM, everything starts up, but cadvisor fails to start up properly.

Below are the messages I see when I query the cadvisor pod for logs:

❯ kubectl logs -n cadvisor cadvisor-72b6z
E1113 03:47:09.315373       1 info.go:114] Failed to get system UUID: open /etc/machine-id: no such file or directory
W1113 03:47:09.315541       1 machine.go:65] Cannot read vendor id correctly, set empty.
E1113 03:52:09.437671       1 info.go:114] Failed to get system UUID: open /etc/machine-id: no such file or directory
W1113 03:52:09.437819       1 machine.go:65] Cannot read vendor id correctly, set empty.
E1113 03:57:09.436942       1 info.go:114] Failed to get system UUID: open /etc/machine-id: no such file or directory
W1113 03:57:09.437030       1 machine.go:65] Cannot read vendor id correctly, set empty.
E1113 04:02:09.438632       1 info.go:114] Failed to get system UUID: open /etc/machine-id: no such file or directory
W1113 04:02:09.438735       1 machine.go:65] Cannot read vendor id correctly, set empty.
E1113 04:07:09.437281       1 info.go:114] Failed to get system UUID: open /etc/machine-id: no such file or directory
W1113 04:07:09.437450       1 machine.go:65] Cannot read vendor id correctly, set empty.
E1113 04:12:09.437069       1 info.go:114] Failed to get system UUID: open /etc/machine-id: no such file or directory
W1113 04:12:09.437166       1 machine.go:65] Cannot read vendor id correctly, set empty.
E1113 04:17:09.437095       1 info.go:114] Failed to get system UUID: open /etc/machine-id: no such file or directory
W1113 04:17:09.437180       1 machine.go:65] Cannot read vendor id correctly, set empty.
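
For what it's worth, the errors above point at a missing /etc/machine-id inside the minikube VM rather than at cadvisor itself. A possible workaround sketch (untested with the podman driver; the machine-id format here is my assumption) is to create the file inside the VM:

❯ minikube ssh
# inside the VM: generate a 32-hex-digit machine-id from the kernel's UUID source
$ sudo sh -c 'cat /proc/sys/kernel/random/uuid | tr -d - > /etc/machine-id'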

Version

keycloak-benchmark latest main

Expected behavior

I would expect the cadvisor pod to come up without issues.

Actual behavior

All the other pods seem to be coming up without errors. It looks like this setup is facing an issue with ingress (a few generic checks are sketched after the listing below).

❯ kubectl get pods -A -w
NAMESPACE       NAME                                                     READY   STATUS      RESTARTS       AGE
cadvisor        cadvisor-72b6z                                           1/1     Running     0              2m52s
ingress-nginx   ingress-nginx-admission-create-58g44                     0/1     Completed   0              3m33s
ingress-nginx   ingress-nginx-admission-patch-6wrvn                      0/1     Completed   1              3m33s
ingress-nginx   ingress-nginx-controller-5959f988fd-8d5lx                1/1     Running     0              3m32s
keycloak        cryostat-68f4c675d6-wmk8h                                2/3     Running     0              2m46s
keycloak        keycloak-0                                               1/1     Running     0              62s
keycloak        keycloak-operator-55c6bd5cd8-n66vb                       1/1     Running     0              2m47s
keycloak        postgres-7bf755846c-x9fsq                                1/1     Running     0              2m46s
keycloak        postgres-exporter-7f9c9dc98b-nm975                       1/1     Running     0              2m46s
keycloak        sqlpad-74cdc455d7-jv4vw                                  1/1     Running     0              2m46s
kube-system     coredns-565d847f94-gpw4k                                 1/1     Running     0              3m32s
kube-system     etcd-minikube                                            1/1     Running     0              3m44s
kube-system     kube-apiserver-minikube                                  1/1     Running     0              3m45s
kube-system     kube-controller-manager-minikube                         1/1     Running     0              3m44s
kube-system     kube-proxy-dh9n2                                         1/1     Running     0              3m32s
kube-system     kube-scheduler-minikube                                  1/1     Running     0              3m45s
kube-system     storage-provisioner                                      1/1     Running     1 (3m2s ago)   3m44s
kubebox         kubebox-698f46bdcd-dz7nv                                 1/1     Running     0              2m52s
monitoring      alertmanager-prometheus-kube-prometheus-alertmanager-0   2/2     Running     0              2m10s
monitoring      graphite-exporter-5686cd9d-wb78s                         1/1     Running     0              2m52s
monitoring      jaeger-654c69c4c7-x944t                                  1/1     Running     0              97s
monitoring      loki-0                                                   1/1     Running     0              2m48s
monitoring      loki-grafana-agent-operator-684b478b77-j2qjx             1/1     Running     0              2m48s
monitoring      loki-logs-rv6zh                                          2/2     Running     0              90s
monitoring      prometheus-grafana-58cc87bbfd-2l7dz                      2/2     Running     0              3m1s
monitoring      prometheus-kube-prometheus-operator-6f5798cb9c-kzl7g     1/1     Running     0              3m1s
monitoring      prometheus-kube-state-metrics-9bbb8b774-2ghx5            1/1     Running     0              3m1s
monitoring      prometheus-prometheus-kube-prometheus-prometheus-0       2/2     Running     0              2m9s
monitoring      prometheus-prometheus-node-exporter-7fjv2                1/1     Running     0              3m1s
monitoring      promtail-gksnp                                           1/1     Running     0              2m51s
keycloak        cryostat-68f4c675d6-wmk8h                                2/3     Running     0              2m52s
keycloak        cryostat-68f4c675d6-wmk8h                                3/3     Running     0              2m52s
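
To narrow down the suspected ingress issue, a couple of generic checks might help (plain kubectl commands, nothing specific to this repo):

❯ kubectl get ingress -A
❯ kubectl logs -n ingress-nginx deployment/ingress-nginx-controller --tail=50
❯ kubectl describe pod -n cadvisor cadvisor-72b6z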

How to Reproduce?

  1. Set up Podman.
  2. Run podman machine init --cpus 6 --memory 16000 --rootful && podman machine start.
  3. Run ./rebuild.sh with DRIVER=podman instead of DRIVER=hyperkit (see the consolidated commands below).
  4. Once the VM is fully up, run the task from within provision/minikube.
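
For reference, steps 2-4 consolidated into commands (the sed line assumes rebuild.sh contains a literal DRIVER=hyperkit assignment and uses the macOS/BSD sed -i syntax):

❯ podman machine init --cpus 6 --memory 16000 --rootful && podman machine start
❯ sed -i '' 's/DRIVER=hyperkit/DRIVER=podman/' rebuild.sh
❯ ./rebuild.sh
❯ cd provision/minikube && task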

Anything else?

M1 macOS - Hardware Overview:

Model Name: MacBook Pro
Model Identifier: MacBookPro18,3
Model Number: Z15G001X2LL/A
Chip: Apple M1 Pro
Total Number of Cores: 10 (8 performance and 2 efficiency)
Memory: 32 GB
System Firmware Version: 8419.60.31
OS Loader Version: 8419.60.31

System Software Overview:

System Version: macOS 13.1 (22C5033e)
Kernel Version: Darwin 22.2.0
Boot Volume: Macintosh HD
Boot Mode: Normal
Secure Virtual Memory: Enabled
System Integrity Protection: Enabled

xiaoanne commented 1 year ago

I have the same issue. When following the installation guide at https://www.keycloak.org/keycloak-benchmark/kubernetes-guide/latest/installation, the installation gets stuck while executing the task that runs ./isup.sh:

....
task: [keycloak] helm upgrade --install keycloak --set hostname=192.168.49.2.nip.io --set dbPoolInitialSize=5 --set dbPoolMinSize=5 --set dbPoolMaxSize=10 --set storage= --set database=postgres --set keycloakImage= keycloak

[keycloak] Release "keycloak" has been upgraded. Happy Helming!
[keycloak] NAME: keycloak
[keycloak] LAST DEPLOYED: Thu Jan  5 11:03:58 2023
[keycloak] NAMESPACE: default
[keycloak] STATUS: deployed
[keycloak] REVISION: 2
[keycloak] TEST SUITE: None
[keycloak] NOTES:
[keycloak] Keycloak will be available on https://keycloak.192.168.49.2.nip.io
[keycloak]   user: admin
[keycloak]   password: admin
[keycloak] sqlpad will be available on http://sqlpad.192.168.49.2.nip.io
[keycloak]   user: admin
[keycloak]   password: admin
[keycloak] Cryostat will be available on https://cryostat.192.168.49.2.nip.io
[keycloak] Connect to PostgreSQL on 192.168.49.2.nip.io:30009
[keycloak]   user: keycloak
[keycloak]   password: pass
[keycloak]   JDBC URL: jdbc:postgresql://192.168.49.2.nip.io:30009/keycloak
task: [keycloak] bash -c ' if [ "postgres" == "cockroach-operator" ]; then kubectl -n keycloak -o yaml get crdbclusters.crdb.cockroachlabs.com/cockroach > .task/status-keycloak-db.json; elif [ "postgres" == "cockroach-single" ]; then kubectl get deployment/cockroach -n keycloak -o=jsonpath="{.spec}" > .task/status-keycloak-db.json; elif [ "postgres" != "none" ]; then kubectl get deployment/postgres -n keycloak -o=jsonpath="{.spec}" > .task/status-keycloak-db.json; else echo "none" > .task/status-keycloak-db.json; fi'

task: [keycloak] bash -c 'kubectl get pods -A | grep -E "(BackOff|Error)" | tr -s " " | cut -d" " -f1-2 | xargs -r -L 1 kubectl delete pod -n'
task: [keycloak] bash -c 'sleep 1'
task: [keycloak] ./init-database.sh postgres
task: [keycloak] ./isup.sh

^Ctask: Signal received: "interrupt"
[keycloak] .......................................................................................................................................
task: Failed to run task "keycloak": exit status 130
.....

As a result, the installation cannot be completed on an M1 Mac.
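
For anyone hitting the same hang, a quick manual check of what ./isup.sh might still be waiting on (these are plain kubectl commands, not what the script itself runs):

❯ kubectl get pods -A
❯ kubectl wait --for=condition=Ready pod --all -n keycloak --timeout=300s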

ahus1 commented 1 year ago

@kami619 - there is now PR #283, which might fix it. The cause might have been Cryostat, which in your log is running but not ready for all containers (2/3).

Please give the PR a try when you have the time. Thanks!
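
To confirm whether Cryostat is the blocker, inspecting the pod from the listing above may help (using the pod name from kami619's output):

❯ kubectl describe pod -n keycloak cryostat-68f4c675d6-wmk8h
❯ kubectl logs -n keycloak cryostat-68f4c675d6-wmk8h --all-containers --tail=50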

ahus1 commented 1 year ago

As described in #283, the best option currently seems to be running minikube with the qemu2 driver plus socket_vmnet.
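
A sketch of that setup, in case someone wants to try it (flag names as in current minikube releases; the resource sizes are just the ones used earlier in this thread):

❯ brew install qemu socket_vmnet
❯ minikube start --driver=qemu2 --network=socket_vmnet --cpus=6 --memory=16g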

As a prerequisite, we would need an ARM image of the Keycloak operator.

Also, Cryostat doesn't seem to have an ARM image yet AFAIK, although it could be excluded from the checks when running on ARM. https://github.com/cryostatio/cryostat/issues/1329