Closed — andreariba closed this issue 1 year ago
Did you run minikube tunnel? Did you apply the ingress.yaml? Did you edit the /etc/hosts file?
In the terminal do the following:
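For example (the exact manifest path is illustrative):
$ minikube tunnel
$ kubectl apply -f ingress.yaml
$ grep rabbitmq-manager.com /etc/hosts   # should print something like: 127.0.0.1 rabbitmq-manager.com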
I already ran everything and checked that all the files are correctly set. I followed the video exactly, and the output of minikube tunnel is different from the one in the course, as I posted in my first comment:
Status:
machine: minikube
pid: 238475
route: 10.96.0.0/12 -> 192.168.49.2
minikube: Running
services: []
errors:
minikube: no errors
router: no errors
loadbalancer emulator: no errors
My /etc/hosts includes rabbitmq-manager.com:
I also tried "http://rabbitmq-manager.com/#/" and your port-forwarding command "kubectl port-forward rabbitmq-0 15672:15672", but I got "This site can't be reached", so I do not get any login prompt at all.
Since I'm working on Ubuntu, I suspect it is some system setting of minikube, but since I'm still learning I don't have a clear idea ... I was hoping for some suggestions.
My guess is that it is an issue with your ingress, but there is not enough information here for me to tell. Please provide as many details as possible, and provide evidence of what you've already tried. For example, you say you tried the suggestion that @donwany gave but provided no evidence to show that the attempts were unsuccessful, so it is difficult to help.
127.0.0.1 localhost
127.0.1.1 andrea-ThinkPad-S430
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
# Added by Docker Desktop
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
127.0.0.1 mp3converter.com
127.0.0.1 rabbitmq-manager.com
# End of section
$ sudo minikube start --driver=docker
minikube v1.28.0 on Ubuntu 22.04
Using the docker driver based on user configuration
The "docker" driver should not be used with root privileges. If you wish to continue as root, use --force.
If you are running minikube within a VM, consider using --driver=none:
  https://minikube.sigs.k8s.io/docs/reference/drivers/none/
Exiting due to DRV_AS_ROOT: The "docker" driver should not be used with root privileges.
I can start it without issues without sudo:
$ minikube start
minikube v1.28.0 on Ubuntu 22.04
Using the docker driver based on existing profile
Starting control plane node minikube in cluster minikube
Pulling base image ...
Restarting existing docker container for "minikube" ...
Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
Verifying Kubernetes components...
  ▪ Using image docker.io/kubernetesui/dashboard:v2.7.0
  ▪ Using image k8s.gcr.io/metrics-server/metrics-server:v0.6.1
  ▪ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
  ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
  ▪ Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
  ▪ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
  ▪ Using image k8s.gcr.io/ingress-nginx/controller:v1.2.1
Verifying ingress addon...
Some dashboard features require the metrics-server addon. To enable all features please run:
minikube addons enable metrics-server
Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard, ingress
Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
3. This is the output of kubectl get all:
$ kubectl get all
NAME                        READY   STATUS    RESTARTS       AGE
pod/auth-74d4fd787c-6kfrc   1/1     Running   5              9d
pod/auth-74d4fd787c-hnjr8   1/1     Running   6 (7m3s ago)   9d
pod/rabbitmq-0              1/1     Running   3              34h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/auth ClusterIP 10.105.73.133
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/auth      2/2     2            2           9d
deployment.apps/gateway   0/0     0            0           7d3h
NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/auth-74d4fd787c     2         2         2       9d
replicaset.apps/auth-7ccbc789cd     0         0         0       9d
replicaset.apps/gateway-b65f59f8d   0         0         0       7d3h
NAME                        READY   AGE
statefulset.apps/rabbitmq   1/1     34h
4. Applying the gateway-ingress manifest tells me it is unchanged:
$ kubectl apply -f gateway/manifests/gateway-ingress.yaml
ingress.networking.k8s.io/gateway-ingress unchanged
Is this information useful? I'm totally new to Kubernetes, so if you need some more specific information let me know.
Thank you very much for your help
Hey bro, thanks for providing the additional details.
I cannot run minikube start with sudo because of an error. I can apparently force it, but it does not look like I should do that.
I don't understand why you want to run it with sudo. It is clearly saying that the "docker" driver should not be used with root privileges (for good reason). I suggest looking at the man page for sudo to understand what it's doing, instead of using sudo for anything and everything. Far too many videos out there default to using sudo for every command, teaching beginners bad habits.
Also, remove the --driver flag and let it auto-detect, just to reduce fluff while debugging. I don't think you need it unless you were having trouble with it auto-detecting another driver on your system.
Ok, I'm not noticing anything crazy so far. But you aren't tunneling the ingress in the explanation above. If you don't tunnel the ingress, it simply won't work. So maybe that is your problem? Look for the part in the video where I run minikube tunnel. I explain that the ingress won't work unless you do so (and you have to leave that tunnel open, e.g. in another tab or as a background process, while you do other things). Also, don't forget to enable the ingress addon in minikube; I think the command is minikube addons enable ingress. I suggest going over that whole section of the video again and not skipping anything. Let me know how it goes.
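For example, something along these lines (a minimal sketch; the tunnel has to stay running in its own terminal while you test):
$ minikube addons enable ingress   # one-time: installs the nginx ingress controller
$ minikube tunnel                  # leave running; it may ask for your sudo password
$ kubectl get ingress              # in another tab: your ingress host should be listed here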
I tried with sudo because of the comment from @donwany; the ingress is already enabled, as shown in my previous comment. The only difference I see from the video is that the output of minikube tunnel is different, below:
Status:
machine: minikube
pid: 69723
route: 10.96.0.0/12 -> 192.168.49.2
minikube: Running
services: []
errors:
minikube: no errors
router: no errors
loadbalancer emulator: no errors
Please show me your ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: rabbitmq-ingress
spec:
rules:
- host: rabbitmq-manager.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: rabbitmq
port:
number: 15672
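(As an aside, a quick way to check that the controller has picked up this ingress is kubectl get ingress; the output below is illustrative, with the ADDRESS taken from the minikube route shown elsewhere in this thread:)
$ kubectl get ingress rabbitmq-ingress
NAME               CLASS    HOSTS                  ADDRESS        PORTS   AGE
rabbitmq-ingress   <none>   rabbitmq-manager.com   192.168.49.2   80      34h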
Can you show me the service also?
apiVersion: v1
kind: Service
metadata:
name: rabbitmq
spec:
type: ClusterIP
selector:
app: rabbitmq
ports:
- name: http
protocol: TCP
port: 15672
targetPort: 15672
- name: amqp
protocol: TCP
port: 5672
targetPort: 5672
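(As an aside, a quick sanity check that this service's app: rabbitmq selector actually matches the pod is kubectl get endpoints; the output below is illustrative and the pod IP will differ:)
$ kubectl get endpoints rabbitmq
NAME       ENDPOINTS                          AGE
rabbitmq   172.17.0.5:15672,172.17.0.5:5672   34h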
Please try to delete and recreate the resources and then run the tunnel. When you run the tunnel there should be some sort of output indicating which ingress will be created, e.g.:
Although it will look different on Ubuntu, it should still give some indication. Also, I know you said you are already doing this, but DO NOT CLOSE the tunnel while trying to access the management console. The tunnel needs to be running.
Unfortunately, it does not change anything:
(base) andrea@andrea-ThinkPad-S430:~/MEGA/Microservices/system_design/python/src/rabbit$ kubectl delete -f ./manifests/
configmap "rabbitmq-configmap" deleted
ingress.networking.k8s.io "rabbitmq-ingress" deleted
persistentvolumeclaim "rabbitmq-pvc" deleted
secret "rabbitmq-secret" deleted
service "rabbitmq" deleted
statefulset.apps "rabbitmq" deleted
(base) andrea@andrea-ThinkPad-S430:~/MEGA/Microservices/system_design/python/src/rabbit$ kubectl apply -f ./manifests/
configmap/rabbitmq-configmap created
ingress.networking.k8s.io/rabbitmq-ingress created
persistentvolumeclaim/rabbitmq-pvc created
secret/rabbitmq-secret created
service/rabbitmq created
statefulset.apps/rabbitmq created
(base) andrea@andrea-ThinkPad-S430:~/MEGA/Microservices/system_design/python/src/rabbit$ minikube tunnel
[sudo] password for andrea:
Status:
machine: minikube
pid: 127277
route: 10.96.0.0/12 -> 192.168.49.2
minikube: Running
services: []
errors:
minikube: no errors
router: no errors
loadbalancer emulator: no errors
Can you show me the output of minikube addons enable ingress?
Also try running:
kubectl get pods -n ingress-nginx
The output of the above should have a pod with a name that starts with ingress-nginx-controller, and its status should be Running.
Take that pod name and run:
kubectl describe -n ingress-nginx pod name-of-pod-from-above
and post the output here please.
After that run this:
kubectl port-forward rabbitmq-0 15672:15672
Your output should look like this:
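Forwarding from 127.0.0.1:15672 -> 15672
Forwarding from [::1]:15672 -> 15672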
Leave that running and then go to: http://127.0.0.1:15672/ and tell me if you see this page
(base) andrea@andrea-ThinkPad-S430:~$ minikube addons enable ingress
ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
  ▪ Using image k8s.gcr.io/ingress-nginx/controller:v1.2.1
  ▪ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
  ▪ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
Verifying ingress addon...
The 'ingress' addon is enabled
(base) andrea@andrea-ThinkPad-S430:~$ kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-p67q2 0/1 Completed 0 8d
ingress-nginx-admission-patch-42lgw 0/1 Completed 1 8d
ingress-nginx-controller-5959f988fd-xm78j 1/1 Running 10 8d
(base) andrea@andrea-ThinkPad-S430:~$ kubectl describe -n ingress-nginx pod ingress-nginx-controller-5959f988fd-xm78j
Name: ingress-nginx-controller-5959f988fd-xm78j
Namespace: ingress-nginx
Priority: 0
Service Account: ingress-nginx
Node: minikube/192.168.49.2
Start Time: Wed, 23 Nov 2022 17:20:37 +0100
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
gcp-auth-skip-secret=true
pod-template-hash=5959f988fd
Annotations: <none>
Status: Running
IP: 172.17.0.7
IPs:
IP: 172.17.0.7
Controlled By: ReplicaSet/ingress-nginx-controller-5959f988fd
Containers:
controller:
Container ID: docker://24e9c029fbb6d047b98e73cf02a5afefea7546cca9695999519e76e14c8533f1
Image: k8s.gcr.io/ingress-nginx/controller:v1.2.1@sha256:5516d103a9c2ecc4f026efbd4b40662ce22dc1f824fb129ed121460aaa5c47f8
Image ID: docker-pullable://k8s.gcr.io/ingress-nginx/controller@sha256:5516d103a9c2ecc4f026efbd4b40662ce22dc1f824fb129ed121460aaa5c47f8
Ports: 80/TCP, 443/TCP, 8443/TCP
Host Ports: 80/TCP, 443/TCP, 0/TCP
Args:
/nginx-ingress-controller
--election-id=ingress-controller-leader
--controller-class=k8s.io/ingress-nginx
--watch-ingress-without-class=true
--configmap=$(POD_NAMESPACE)/ingress-nginx-controller
--tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
--udp-services-configmap=$(POD_NAMESPACE)/udp-services
--validating-webhook=:8443
--validating-webhook-certificate=/usr/local/certificates/cert
--validating-webhook-key=/usr/local/certificates/key
State: Running
Started: Thu, 01 Dec 2022 21:51:21 +0100
Ready: True
Restart Count: 10
Requests:
cpu: 100m
memory: 90Mi
Liveness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
Readiness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Environment:
POD_NAME: ingress-nginx-controller-5959f988fd-xm78j (v1:metadata.name)
POD_NAMESPACE: ingress-nginx (v1:metadata.namespace)
LD_PRELOAD: /usr/local/lib/libmimalloc.so
Mounts:
/usr/local/certificates/ from webhook-cert (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bg5tj (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
webhook-cert:
Type: Secret (a volume populated by a Secret)
SecretName: ingress-nginx-admission
Optional: false
kube-api-access-bg5tj:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
minikube.k8s.io/primary=true
Tolerations: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SandboxChanged 6h9m kubelet Pod sandbox changed, it will be killed and re-created.
Warning Unhealthy 6h7m (x3 over 6h8m) kubelet Liveness probe failed: Get "http://172.17.0.6:10254/healthz": dial tcp 172.17.0.6:10254: connect: connection refused
Warning RELOAD 6h7m nginx-ingress-controller Error reloading NGINX: exit status 1
2022/12/01 14:46:06 [warn] 34#34: the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:143
nginx: [warn] the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:143
2022/12/01 14:46:06 [warn] 34#34: the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:144
nginx: [warn] the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:144
2022/12/01 14:46:06 [warn] 34#34: the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /etc/nginx/nginx.conf:145
nginx: [warn] the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /etc/nginx/nginx.conf:145
2022/12/01 14:46:06 [notice] 34#34: signal process started
2022/12/01 14:46:06 [error] 34#34: invalid PID number "" in "/tmp/nginx/nginx.pid"
nginx: [error] invalid PID number "" in "/tmp/nginx/nginx.pid"
Warning RELOAD 6h7m nginx-ingress-controller Error reloading NGINX: exit status 1
2022/12/01 14:46:06 [warn] 32#32: the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:143
nginx: [warn] the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:143
2022/12/01 14:46:06 [warn] 32#32: the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:144
nginx: [warn] the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:144
2022/12/01 14:46:06 [warn] 32#32: the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /etc/nginx/nginx.conf:145
nginx: [warn] the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /etc/nginx/nginx.conf:145
2022/12/01 14:46:06 [notice] 32#32: signal process started
2022/12/01 14:46:06 [error] 32#32: invalid PID number "" in "/tmp/nginx/nginx.pid"
nginx: [error] invalid PID number "" in "/tmp/nginx/nginx.pid"
Normal RELOAD 6h7m nginx-ingress-controller NGINX reload triggered due to a change in configuration
Warning Unhealthy 6h7m (x2 over 6h7m) kubelet Liveness probe failed: HTTP probe failed with statuscode: 500
Warning Unhealthy 6h7m (x3 over 6h7m) kubelet Readiness probe failed: HTTP probe failed with statuscode: 500
Normal Killing 6h7m kubelet Container controller failed liveness probe, will be restarted
Normal Pulled 6h7m (x2 over 6h8m) kubelet Container image "k8s.gcr.io/ingress-nginx/controller:v1.2.1@sha256:5516d103a9c2ecc4f026efbd4b40662ce22dc1f824fb129ed121460aaa5c47f8" already present on machine
Normal Created 6h7m (x2 over 6h8m) kubelet Created container controller
Warning Unhealthy 6h7m (x7 over 6h8m) kubelet Readiness probe failed: Get "http://172.17.0.6:10254/healthz": dial tcp 172.17.0.6:10254: connect: connection refused
Normal Started 6h7m (x2 over 6h8m) kubelet Started container controller
Normal RELOAD 6h7m nginx-ingress-controller NGINX reload triggered due to a change in configuration
Normal SandboxChanged 5h41m kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 5h40m kubelet Container image "k8s.gcr.io/ingress-nginx/controller:v1.2.1@sha256:5516d103a9c2ecc4f026efbd4b40662ce22dc1f824fb129ed121460aaa5c47f8" already present on machine
Normal Created 5h40m kubelet Created container controller
Normal Started 5h40m kubelet Started container controller
Normal RELOAD 4h55m (x4 over 5h40m) nginx-ingress-controller NGINX reload triggered due to a change in configuration
Normal SandboxChanged 3m30s kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 2m58s kubelet Container image "k8s.gcr.io/ingress-nginx/controller:v1.2.1@sha256:5516d103a9c2ecc4f026efbd4b40662ce22dc1f824fb129ed121460aaa5c47f8" already present on machine
Normal Created 2m43s kubelet Created container controller
Normal Started 2m31s kubelet Started container controller
Warning Unhealthy 2m (x3 over 2m20s) kubelet Liveness probe failed: Get "http://172.17.0.7:10254/healthz": dial tcp 172.17.0.7:10254: connect: connection refused
Warning Unhealthy 2m (x4 over 2m20s) kubelet Readiness probe failed: Get "http://172.17.0.7:10254/healthz": dial tcp 172.17.0.7:10254: connect: connection refused
Warning RELOAD 114s nginx-ingress-controller Error reloading NGINX: exit status 1
2022/12/01 20:51:59 [warn] 32#32: the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:143
nginx: [warn] the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:143
2022/12/01 20:51:59 [warn] 32#32: the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:144
nginx: [warn] the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:144
2022/12/01 20:51:59 [warn] 32#32: the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /etc/nginx/nginx.conf:145
nginx: [warn] the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /etc/nginx/nginx.conf:145
2022/12/01 20:51:59 [notice] 32#32: signal process started
2022/12/01 20:51:59 [error] 32#32: invalid PID number "" in "/tmp/nginx/nginx.pid"
nginx: [error] invalid PID number "" in "/tmp/nginx/nginx.pid"
Warning RELOAD 113s nginx-ingress-controller Error reloading NGINX: exit status 1
2022/12/01 20:51:59 [warn] 34#34: the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:143
nginx: [warn] the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:143
2022/12/01 20:51:59 [warn] 34#34: the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:144
nginx: [warn] the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:144
2022/12/01 20:51:59 [warn] 34#34: the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /etc/nginx/nginx.conf:145
nginx: [warn] the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /etc/nginx/nginx.conf:145
2022/12/01 20:51:59 [notice] 34#34: signal process started
2022/12/01 20:51:59 [error] 34#34: invalid PID number "" in "/tmp/nginx/nginx.pid"
nginx: [error] invalid PID number "" in "/tmp/nginx/nginx.pid"
Warning Unhealthy 110s kubelet Liveness probe failed: HTTP probe failed with statuscode: 500
Warning Unhealthy 110s kubelet Readiness probe failed: HTTP probe failed with statuscode: 500
Normal RELOAD 110s nginx-ingress-controller NGINX reload triggered due to a change in configuration
(base) andrea@andrea-ThinkPad-S430:~$ kubectl port-forward rabbitmq-0 15672:15672
Forwarding from 127.0.0.1:15672 -> 15672
Forwarding from [::1]:15672 -> 15672
Now it works, but even without running minikube tunnel, so I'm really not sure what's happening ... Anyway, thanks a lot :)
When you use kubectl port-forward rabbitmq-0 15672:15672 you are circumventing the ingress. In other words, you aren't going through the ingress, you are going directly to the pod. So you still need to figure out why your ingress isn't working.
When you create the ingress resource and then run minikube tunnel, that should create an entry point (via the ingress) that allows you to connect to the rabbitmq-manager via localhost, which in your /etc/hosts file you've mapped to rabbitmq-manager.com. I hope that makes sense.
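To make the difference concrete (a sketch; sending an explicit Host header is just a handy way to exercise the ingress rule from the command line):
$ # through the nginx ingress (minikube tunnel must be running):
$ curl -H "Host: rabbitmq-manager.com" http://127.0.0.1/
$ # straight to the pod, bypassing the ingress entirely:
$ kubectl port-forward rabbitmq-0 15672:15672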
On Stack Overflow I found that the problem could be related to not having any EXTERNAL-IP:
(base) andrea@andrea-ThinkPad-S430:~/MEGA/Microservices/system_design/python/src/converter$ kubectl get service -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.111.81.207 <none> 80:31942/TCP,443:31132/TCP 8d
ingress-nginx-controller-admission ClusterIP 10.98.134.246 <none> 443/TCP 8d
Might that be the case?
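(For what it's worth, with a NodePort controller you can take the tunnel out of the equation and hit the node directly, using the minikube IP and the HTTP NodePort from the output above, e.g.:)
$ curl -H "Host: rabbitmq-manager.com" http://192.168.49.2:31942/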
I did not get into the details, but apparently it's an issue with Docker that can be fixed by reinstalling it and deploying all the pods again in a clean minikube. Indeed, the ingress works now :) Thanks for the help
Awesome! Good research. Nice work finding and solving the problem. Glad you got it figured out!
Hello @andreariba, actually I am facing the same issue as you on Arch Linux. It would be very helpful if you could elaborate on how you solved it. I am a beginner at this stuff myself, so please give as much detail as possible. Sorry for the disturbance, but please do help!
Edit: I have already followed all the above-mentioned steps and checked my manifest files thoroughly.
Hi @andreariba, I am facing the same issues that you faced, with the same output, and also running on Ubuntu. I tried the workaround that @selikapro suggested, which lets me access RabbitMQ, but not via the ingress that is defined. I have followed the steps exactly... still with no success.
I wonder if you could share the source that helped you fix the issue. Did you uninstall Docker and minikube to get the whole thing working? Or did you do a full clean/reset of your machine?
Thanks a lot in advance for any help!
Cheers!
Ok...
Here was the answer: "Minikube supports ingress differently on the Mac and Linux.
On Linux the ingress is fully supported and therefore does not need the use of minikube tunnel.
On Mac there is an open issue due to a network issue. The documentation states that the minikube ingress addon is not supported, but I argue that that is highly misleading if not incorrect. It's just supported differently (and not as well).
On both Mac and Linux minikube addons enable ingress is required. Enabling the ingress addon on Mac shows that the ingress will be available on 127.0.0.1 as shown in the screenshot, whereas Linux will make it available by the minikube ip. Then, when we start minikube tunnel on the Mac it will connect to the ingress like any other exposed service. "
Which you can find here: https://stackoverflow.com/questions/70961901/ingress-with-minikube-working-differently-on-mac-vs-ubuntu-when-to-set-etc-host
I could indeed access the RabbitMQ manager dashboard without using "minikube tunnel" and without the workaround solution, just by specifying the minikube IP in the /etc/hosts file instead of 127.0.0.1.
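Concretely, that looks something like this (check the IP on your own machine with minikube ip; the value below matches the route shown earlier in this thread):
$ minikube ip
192.168.49.2
# then in /etc/hosts, instead of the 127.0.0.1 mapping:
192.168.49.2 rabbitmq-manager.com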
Cheers!
Hi,
I'm following the video on YouTube and successfully ran all the commands until the third hour, where you access RabbitMQ running on localhost.
I cannot access it, and my minikube tunnel output looks like:
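Status:
machine: minikube
pid: 238475
route: 10.96.0.0/12 -> 192.168.49.2
minikube: Running
services: []
errors:
minikube: no errors
router: no errors
loadbalancer emulator: no errors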
Connecting to http://rabbitmq-manager.com/ returns "This site can't be reached". Any idea why this is happening? I'm new to Kubernetes and quite stuck on this issue.
Thanks a lot
Andrea