Closed — sschober closed this issue 1 year ago
I just deactivated the "new virtualisation framework" option and retried, with the same result.
I'm not sure if k8s.gcr.io/echoserver:1.10 has any arm64 variant; it looks like amd64-only?
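One way to check whether a registry image offers an arm64 variant is `docker manifest inspect <image>`, which lists the platforms in its manifest list. A minimal parsing sketch over such output (the embedded JSON is an illustrative sample, not the real registry response for echoserver):

```shell
# Illustrative manifest-list JSON, shaped like `docker manifest inspect` output
# for a multi-arch image. Against a real registry you would run:
#   docker manifest inspect k8s.gcr.io/echoserver:1.10
manifest='{"manifests":[{"platform":{"architecture":"amd64","os":"linux"}},{"platform":{"architecture":"arm64","os":"linux"}}]}'

# Pull out every "architecture" value from the manifest list.
archs=$(printf '%s' "$manifest" | grep -o '"architecture":"[^"]*"' | cut -d'"' -f4)
echo "$archs"
```

If `arm64` is missing from the list, the image will not run natively on an M1 host.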
Thanks for the hint! Might this be worth a note in the docs? It would have saved me some time. :) I could do that as well, if this would be welcome.
That would be very welcome. I'm not sure if there is an alternative image available, but a note in the docs would be a start!
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/echoserver 1.10 365ec60129c5 3 years ago 95.4MB
k8s.gcr.io/echoserver 1.4 a90209bb39e3 4 years ago 140MB
https://minikube.sigs.k8s.io/docs/start/ https://kubernetes.io/docs/tutorials/hello-minikube/
It seems to be based on the "nginx" image, so it should be possible to make a new version of it that also supports arm64.
$ docker run -it --entrypoint "" k8s.gcr.io/echoserver:1.4 nginx -V
nginx version: nginx/1.10.0
$ cat /etc/os-release
PRETTY_NAME="Ubuntu 16.04 LTS"
$ docker run -it --entrypoint "" k8s.gcr.io/echoserver:1.10 nginx -V
nginx version: nginx/1.13.3
$ cat /etc/os-release
PRETTY_NAME="Ubuntu 16.04.2 LTS"
I found it: it lived in the k8s "ingress-nginx" repo, but it died in 2018.
REPOSITORY TAG IMAGE ID CREATED SIZE
gcr.io/google_containers/echoserver 1.10 365ec60129c5 3 years ago 95.4MB
Then it got bumped to 2.1, but nobody bothered updating the docs.
https://github.com/kubernetes/ingress-nginx/commit/77b922aa00ae0a731b8b73bd141022b374c6eb49 | https://github.com/kubernetes/kubernetes/commit/a2d94d9a3f50c47f2a000f54941c4a83b291c1b4
REPOSITORY TAG IMAGE ID CREATED SIZE
gcr.io/kubernetes-e2e-test-images/echoserver 2.2 4081d9a83108 2 years ago 21.7MB
gcr.io/kubernetes-e2e-test-images/echoserver 2.1 76232f30d3c7 2 years ago 21.7MB
2.1: nginx version: nginx/1.12.2, PRETTY_NAME="Alpine Linux v3.7"
https://github.com/kubernetes/kubernetes/commit/5e84dfbbc54aed940ea42d003b67a37cd185d690 "Multiple same headers got wrong result"
2.2: nginx version: nginx/1.12.2, PRETTY_NAME="Alpine Linux v3.7"
kubectl create deployment hello-minikube --image=gcr.io/kubernetes-e2e-test-images/echoserver:2.2
Hey thank you for all this information! I am trying to digest what this means for me currently...
I tried deploying echoserver:2.2, but only got the following result:
Generating self-signed cert
Generating a 2048 bit RSA private key
..............................................+++
......+++
writing new private key to '/certs/privateKey.key'
-----
Starting nginx
PANIC: unprotected error in call to Lua API (bad light userdata pointer)
echoserver:2.1 yields the same.
I am currently cloning the Kubernetes repo and trying to build the echoserver image on my machine... and yes, this seems to be successful:
git clone <kubernetes-repo>
cd kubernetes/test/images/echoserver
vim Dockerfile # get rid of all that cross-build stuff I do not know :)
docker build .
docker tag <image id> echoserver:v2.2
minikube image load echoserver:v2.2
kubectl create deployment hello-minikube --image=echoserver:v2.2
# success :)
This way, I can now continue with the getting started guide. :)
I can add a note to the docs if you like, with a reference to this issue maybe?
My bad, I should have tested the image rather than assuming that it would work just because it is used in the k8s test.
It worked OK on amd64, but not on arm64...
Crashes the same way on Linux. Apparently they "forgot" to release a version of the echoserver with arm64 support...
https://github.com/kubernetes/ingress-nginx/issues/2802
ubuntu@ubuntu:~$ docker run gcr.io/kubernetes-e2e-test-images/echoserver:2.2
Generating self-signed cert
Generating a 2048 bit RSA private key
...............................+++
.............................................................................+++
writing new private key to '/certs/privateKey.key'
-----
Starting nginx
PANIC: unprotected error in call to Lua API (bad light userdata pointer)
Like you say, minikube doesn't need the LuaJIT support anyway.
But upgrading nginx to something less ancient would fix it too...
The weird thing is that the git repository says nginx 1.15, but the image says 1.12? Anything newer than 2018 would work; the nginx-ingress-controller used 1.15.6.
+1, I've spent about 2 days running hello-minikube. I couldn't start hello-minikube on an Apple MacBook Pro with the M1 chip (macOS 11.4 Big Sur (20F71)). I always end up in the same situation:
hello-minikube-6ddfcc9757-5x8bp 0/1 CrashLoopBackOff
@awarus Have you solved this problem?
No, as far as I understand the hello-minikube image is not available for M1 at the moment.
What we're going to end up doing here is creating an image that support multiple archs and using that instead of echoserver for all these demos and guides.
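A multi-arch image like that is typically produced by building the same Dockerfile for each platform and pushing a single manifest list. A minimal sketch, assuming Docker buildx is available; the repository name and nginx.conf are placeholders, not the actual kicbase/echo-server build:

```dockerfile
# Sketch of a multi-arch-friendly echo server image. The official nginx base
# image publishes both amd64 and arm64 variants, so per-platform builds work.
FROM nginx:1.21-alpine
# Hypothetical config that echoes request details back to the client.
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 8080
```

Both architectures can then be built and pushed in one step with, e.g., `docker buildx build --platform linux/amd64,linux/arm64 -t <repo>/<name>:<tag> --push .`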
> I am currently cloning the Kubernetes repo and trying to build the echoserver image on my machine... and yes, this seems to be successful.
Hi, can you share the Dockerfile you changed with us?
> @awarus Have you solved this problem?
> No, as far I understand hello-minikube image is not available now for M1
e2eteam/echoserver:2.2-linux-arm64 worked for me on m1
Thanks for finding that image @zhyyu! This issue is open to anyone that wants to take it.
/assign
For anyone else still struggling, just use a node echo server: polyverse/node-echo-server
@ollydixon Thanks! It works for me! @zhyyu Sorry, it doesn't work for me on an M1/Monterey system.
> @awarus Have you solved this problem?
> No, as far I understand hello-minikube image is not available now for M1
> e2eteam/echoserver:2.2-linux-arm64 worked for me on m1
This does not work for me on M1. Yes, the pod seems healthy on the dashboard. However, I get the error below when running `kubectl logs hello-minikube-bc84b9485-x6p5h`:
2021/12/31 00:49:26 [alert] 10#10: detected a LuaJIT version which is not OpenResty's; many optimizations will be disabled and performance will be compromised (see https://github.com/openresty/luajit2 for OpenResty's LuaJIT or, even better, consider using the OpenResty releases from https://openresty.org/en/download.html)
nginx: [alert] detected a LuaJIT version which is not OpenResty's; many optimizations will be disabled and performance will be compromised (see https://github.com/openresty/luajit2 for OpenResty's LuaJIT or, even better, consider using the OpenResty releases from https://openresty.org/en/download.html)
2021/12/31 00:49:26 [error] 10#10: lua_load_resty_core failed to load the resty.core module from https://github.com/openresty/lua-resty-core; ensure you are using an OpenResty release from https://openresty.org/en/download.html (rc: 2, reason: module 'resty.core' not found:
    no field package.preload['resty.core']
    no file './resty/core.lua'
    no file '/usr/share/luajit-2.1.0-beta3/resty/core.lua'
    no file '/usr/local/share/lua/5.1/resty/core.lua'
    no file '/usr/local/share/lua/5.1/resty/core/init.lua'
    no file '/usr/share/lua/5.1/resty/core.lua'
    no file '/usr/share/lua/5.1/resty/core/init.lua'
    no file '/usr/share/lua/common/resty/core.lua'
    no file '/usr/share/lua/common/resty/core/init.lua'
    no file './resty/core.so'
    no file '/usr/local/lib/lua/5.1/resty/core.so'
    no file '/usr/lib/lua/5.1/resty/core.so'
    no file '/usr/local/lib/lua/5.1/loadall.so'
    no file './resty.so'
    no file '/usr/local/lib/lua/5.1/resty.so'
    no file '/usr/lib/lua/5.1/resty.so'
    no file '/usr/local/lib/lua/5.1/loadall.so')
> For anyone else still struggling, just use a node echo server polyverse/node-echo-server
This works as of today. Thanks ollydixon.
I am trying the following on my M1 MacBook and getting the error below:
kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment hello-minikube --type=NodePort --port=8080
minikube service hello-minikube (I hit the error below while executing this command)
|-----------|----------------|-------------|---------------------------|
| NAMESPACE | NAME           | TARGET PORT | URL                       |
|-----------|----------------|-------------|---------------------------|
| default   | hello-minikube | 9080        | http://192.168.49.2:30635 |
|-----------|----------------|-------------|---------------------------|
🏃 Starting tunnel for service hello-minikube.
|-----------|----------------|-------------|------------------------|
| NAMESPACE | NAME           | TARGET PORT | URL                    |
|-----------|----------------|-------------|------------------------|
| default   | hello-minikube |             | http://127.0.0.1:54812 |
|-----------|----------------|-------------|------------------------|
🎉 Opening service default/hello-minikube in default browser...
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
I0213 18:59:29.932197 78927 out.go:176] ✋ Stopping tunnel for service hello-minikube.
I0213 18:59:29.951144 78927 out.go:176]
W0213 18:59:29.951392 78927 out.go:241] ❌ Exiting due to SVC_TUNNEL_STOP: stopping ssh tunnel: os: process already finished
I am just getting introduced to Kubernetes. Kindly suggest.
Hi @ysheikh245, as commented above there are a couple different images that work for M1.
https://github.com/kubernetes/minikube/issues/11107#issuecomment-919742761 https://github.com/kubernetes/minikube/issues/11107#issuecomment-981830947
None of the images cited above worked for me, so I created a small, similar image myself, which aims to test getting deployed on minikube.
Here's how to use it:
kubectl create deployment hello-minikube --image=preslavmihaylov/kubehelloworld:latest
kubectl expose deployment hello-minikube --type=NodePort --port=3000
Then, expose it on localhost & go to http://localhost:3000:
kubectl port-forward service/hello-minikube 3000:3000
Tested on Macbook Air M1
@preslavmihaylov Thanks.
@preslavmihaylov thanks!
I have the following service:
$ kubectl -n traefik get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ggtips ClusterIP 10.96.41.7 <none> 80/TCP 33m
traefik-ingress-service NodePort 10.110.178.12 <none> 80:32194/TCP,8080:31621/TCP 33m
whoami ClusterIP 10.103.52.177 <none> 80/TCP 33m
$ minikube ip
192.168.49.2
Can anyone explain why it is available only after port forwarding at http://localhost:8081 and http://localhost:8080:
kubectl -n traefik port-forward service/traefik-ingress-service 8081:80 8080:8080
instead of 127.0.0.1:32194, 127.0.0.1:31621 or 192.168.49.2:32194, 192.168.49.2:31621 ?
And why does minikube not return any URL?
$ minikube --namespace=traefik service traefik-ingress-service --url
🏃 Starting tunnel for service traefik-ingress-service.
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
@Tazovsky I believe this is because the addresses you see there are part of minikube's internal network, and you have to expose them for this to work locally.
When using AWS or similar as a backend, you'll usually get a hostname you can connect to directly.
kubectl port-forward service/hello-minikube 3000:3000
Thank you so much! It works in my M1 macbook.
> @awarus Have you solved this problem?
> No, as far I understand hello-minikube image is not available now for M1
> e2eteam/echoserver:2.2-linux-arm64 worked for me on m1
This is showing all green, but when running minikube service hello-minikube it fails with:
panic: runtime error: index out of range [0] with length 0
goroutine 1 [running]:
k8s.io/minikube/cmd/minikube/cmd.startKicServiceTunnel({0x14000213450, 0x1, 0x1}, {0x0, 0x0, 0x0}, {0x1045f0cf3, 0x8}, {0x14000d87b1a, 0x6})
/app/cmd/minikube/cmd/service.go:205 +0x340
k8s.io/minikube/cmd/minikube/cmd.glob..func35(0x106109aa0, {0x14000213450, 0x1, 0x1})
/app/cmd/minikube/cmd/service.go:143 +0x594
github.com/spf13/cobra.(*Command).execute(0x106109aa0, {0x14000213420, 0x1, 0x1})
/go/pkg/mod/github.com/spf13/cobra@v1.3.0/command.go:860 +0x640
github.com/spf13/cobra.(*Command).ExecuteC(0x1061095a0)
/go/pkg/mod/github.com/spf13/cobra@v1.3.0/command.go:974 +0x410
github.com/spf13/cobra.(*Command).Execute(...)
/go/pkg/mod/github.com/spf13/cobra@v1.3.0/command.go:902
k8s.io/minikube/cmd/minikube/cmd.Execute()
/app/cmd/minikube/cmd/root.go:157 +0xdb8
main.main()
/app/cmd/minikube/main.go:86 +0x2a0
Hi @sschober, did the comments above help resolve this issue in your case?
We see that @drinkbeer was able to get things working as well.
I ended up forking echoserver as https://github.com/silasb/echoserver, and an ARM64 docker image is located here: https://hub.docker.com/r/silasb/echoserver
+1 to the request for a note in the docs that this image might not work for everyone. I only happened to recognize the error log of exec user process caused: exec format error as being related to architecture issues from previous experience; we shouldn't assume newbies would be able to infer that.
(FWIW, I found that echoserver-arm:1.8 worked fine on my Raspberry Pi 4 64-bit.)
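For context, exec format error means the binary's architecture doesn't match the host's. A small pure-shell sketch (the mapping below is the conventional uname-to-Docker naming, nothing minikube-specific) for finding which image architecture a host runs natively:

```shell
#!/bin/sh
# Map the kernel's machine name to the Docker/GOARCH architecture label,
# to spot mismatches behind "exec user process caused: exec format error".
machine=$(uname -m)
case "$machine" in
  x86_64)        docker_arch=amd64 ;;
  aarch64|arm64) docker_arch=arm64 ;;
  armv7l)        docker_arch=arm ;;
  *)             docker_arch=$machine ;;
esac
echo "this host natively runs $docker_arch images"
```

If the image's architecture (from its manifest) differs from this value and no emulation is set up, the container will fail with exactly that error.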
If anyone is interested in creating their own pod and service to use with the tutorial on Mac M1/ARM64, this comment might be helpful. Let me know if any questions.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After further inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After further inactivity once lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
> I ended up forking echoserver as https://github.com/silasb/echoserver and a ARM64 docker image is located here https://hub.docker.com/r/silasb/echoserver
Thank you for pushing an image that works for the architecture. Would love to see this repo added to the k8s docs/registry.
We've pushed an image kicbase/echo-server:1.0 that works on both amd64 & arm64; we will make a PR shortly to update the documentation.
Trying out minikube on my Mac mini (Apple silicon/M1), but step 4 from minikube start is not working for me.
I am using Docker Desktop 3.3.1 (63152). I have activated the experimental feature: "Use new virtualisation framework"
Please let me know if I can provide further information!
Steps to reproduce the issue:
Full output of failed command:
Full output of minikube start command used, if not already included:

😄 minikube v1.19.0 on Darwin 11.2.3 (arm64)
✨ Automatically selected the docker driver
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
    > gcr.io/k8s-minikube/kicbase...: 357.67 MiB / 357.67 MiB 100.00% 1.82 MiB
🔥 Creating docker container (CPUs=2, Memory=4000MB) ...
🔎 Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
Optional: Full output of minikube logs command: minikube.log