istio / istio

Connect, secure, control, and observe services.
https://istio.io
Apache License 2.0

RabbitMQ Crashes with Mutual TLS Enabled #6828

Closed kevincvlam closed 5 years ago

kevincvlam commented 6 years ago

Describe the bug
RabbitMQ fails to start on a Kubernetes cluster hosted on GKE at version 1.9.6-gke.1 with Istio installed at v0.8.0.

It crashes with an error:

ERROR: epmd error for host rabbitmq-596f977747-k58bb: timeout (timed out)'

Note that we were able to set up and test mutual TLS in a namespace as outlined in the demo at https://istio.io/docs/tasks/security/authn-policy/#enable-mutual-tls-for-all-services-in-a-namespace, and things worked as expected.

Also note that RabbitMQ doesn't crash when mTLS is disabled.

Expected behavior
RabbitMQ initializes and runs correctly.

Steps to reproduce the bug
Create a GKE cluster at 1.9.6-gke.1 and install Istio with automatic sidecar injection, per the instructions here:

https://istio.io/docs/setup/kubernetes/helm-install/#option-1-install-with-helm-via-helm-template
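
For reference, the flow on that page amounts to rendering the chart and applying it (a minimal sketch, assuming the Istio 0.8.0 release archive is downloaded and you are in its root directory):

# Sketch of the helm-template install flow referenced above (assumes the
# Istio 0.8.0 release archive; the exact chart path may differ by release).
kubectl create namespace istio-system
helm template install/kubernetes/helm/istio --name istio --namespace istio-system > $HOME/istio.yaml
kubectl apply -f $HOME/istio.yaml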

Create a namespace and enable Istio sidecar injection:

kubectl create ns foo && kubectl label namespace foo istio-injection=enabled

Create policies and destination rules for this namespace as per: https://istio.io/docs/tasks/security/authn-policy/#enable-mutual-tls-for-all-services-in-a-namespace

cat <<EOF | istioctl create -f -
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "example-1"
  namespace: "foo"
spec:
  peers:
  - mtls:
EOF
cat <<EOF | istioctl create -f -
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "example-1"
  namespace: "foo"
spec:
  host: "*.foo.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
EOF
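
(Not part of the original report, but since both objects are plain CRDs, one way to confirm they were created is:)

# Hedged verification step: list the Policy and DestinationRule by their CRD names.
kubectl get policies.authentication.istio.io -n foo
kubectl get destinationrules.networking.istio.io -n foo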

Then deploy the following YAML:

apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  selector:
    app: rabbitmq
  ports:
  - name: node
    protocol: TCP
    port: 5672
    targetPort: node
  - name: management
    protocol: TCP
    port: 15672
    targetPort: management
  - name: epmd
    port: 4369
    protocol: TCP
    targetPort: epmd
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      restartPolicy: Always
      containers:
      - name: rabbitmq
        image: rabbitmq:3-management
        ports:
        - name: management
          containerPort: 15672
        - name: node
          containerPort: 5672
        - name: epmd
          containerPort: 4369
        env:
        - name: RABBITMQ_DEFAULT_USER
          value: *****
        - name: RABBITMQ_DEFAULT_PASS
          value: *****
        - name: RABBITMQ_DEFAULT_VHOST
          value: *****

Observe RabbitMQ crash.
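
A way to observe it (a sketch; the pod name is whatever the Deployment generates, and this assumes the workload was deployed into the foo namespace):

# Hedged observation step: <rabbitmq-pod> is a placeholder for the generated pod name.
kubectl get pods -n foo -l app=rabbitmq
kubectl logs -n foo <rabbitmq-pod> -c rabbitmq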

Version
GKE 1.9.6-gke.1, Istio 0.8.0

Is Istio Auth enabled or not? Not enabled; installed with helm template install/kubernetes/helm/istio --name istio --namespace istio-system > $HOME/istio.yaml

Environment
GKE 1.9.6-gke.1, Istio 0.8.0

quanjielin commented 6 years ago

Thanks for reporting the issue. I was able to reproduce it and have added more failure logs below; it looks similar to the error mentioned in https://github.com/istio/istio/issues/5989.

2018-07-03 22:38:39.085 [info] <0.33.0> Application lager started on node 'rabbit@rabbitmq-6775975bc5-l4qw9'
2018-07-03 22:38:39.562 [info] <0.33.0> Application jsx started on node 'rabbit@rabbitmq-6775975bc5-l4qw9'
2018-07-03 22:38:39.720 [info] <0.33.0> Application mnesia started on node 'rabbit@rabbitmq-6775975bc5-l4qw9'
2018-07-03 22:38:39.721 [info] <0.33.0> Application crypto started on node 'rabbit@rabbitmq-6775975bc5-l4qw9'
2018-07-03 22:38:39.721 [info] <0.33.0> Application recon started on node 'rabbit@rabbitmq-6775975bc5-l4qw9'
2018-07-03 22:38:39.721 [info] <0.33.0> Application cowlib started on node 'rabbit@rabbitmq-6775975bc5-l4qw9'
2018-07-03 22:38:39.729 [info] <0.33.0> Application os_mon started on node 'rabbit@rabbitmq-6775975bc5-l4qw9'
2018-07-03 22:38:39.729 [info] <0.33.0> Application xmerl started on node 'rabbit@rabbitmq-6775975bc5-l4qw9'
2018-07-03 22:38:39.832 [info] <0.33.0> Application inets started on node 'rabbit@rabbitmq-6775975bc5-l4qw9'
2018-07-03 22:38:39.833 [info] <0.33.0> Application asn1 started on node 'rabbit@rabbitmq-6775975bc5-l4qw9'
2018-07-03 22:38:39.833 [info] <0.33.0> Application public_key started on node 'rabbit@rabbitmq-6775975bc5-l4qw9'
2018-07-03 22:38:39.927 [info] <0.33.0> Application ssl started on node 'rabbit@rabbitmq-6775975bc5-l4qw9'
2018-07-03 22:38:39.935 [info] <0.33.0> Application ranch started on node 'rabbit@rabbitmq-6775975bc5-l4qw9'
2018-07-03 22:38:39.939 [info] <0.33.0> Application cowboy started on node 'rabbit@rabbitmq-6775975bc5-l4qw9'
2018-07-03 22:38:39.939 [info] <0.33.0> Application ranch_proxy_protocol started on node 'rabbit@rabbitmq-6775975bc5-l4qw9'
2018-07-03 22:38:39.939 [info] <0.33.0> Application rabbit_common started on node 'rabbit@rabbitmq-6775975bc5-l4qw9'
2018-07-03 22:38:39.954 [info] <0.197.0> 
 Starting RabbitMQ 3.7.6 on Erlang 20.3.8
 Copyright (C) 2007-2018 Pivotal Software, Inc.
 Licensed under the MPL.  See http://www.rabbitmq.com/

  ##  ##
  ##  ##      RabbitMQ 3.7.6. Copyright (C) 2007-2018 Pivotal Software, Inc.
  ##########  Licensed under the MPL.  See http://www.rabbitmq.com/
  ######  ##
  ##########  Logs: <stdout>

              Starting broker...
2018-07-03 22:38:39.973 [info] <0.197.0> 
 node           : rabbit@rabbitmq-6775975bc5-l4qw9
 home dir       : /var/lib/rabbitmq
 config file(s) : /etc/rabbitmq/rabbitmq.conf
 cookie hash    : LChiOgQ/vlEku9FPookApA==
 log(s)         : <stdout>
 database dir   : /var/lib/rabbitmq/mnesia/rabbit@rabbitmq-6775975bc5-l4qw9
2018-07-03 22:38:43.290 [info] <0.205.0> Memory high watermark set to 2994 MiB (3139492249 bytes) of 7485 MiB (7848730624 bytes) total
2018-07-03 22:38:43.296 [info] <0.207.0> Enabling free disk space monitoring
2018-07-03 22:38:43.296 [info] <0.207.0> Disk free limit set to 50MB
2018-07-03 22:38:43.301 [info] <0.209.0> Limiting to approx 1048476 file handles (943626 sockets)
2018-07-03 22:38:43.301 [info] <0.210.0> FHC read buffering:  OFF
2018-07-03 22:38:43.302 [info] <0.210.0> FHC write buffering: ON
2018-07-03 22:38:43.303 [info] <0.197.0> Node database directory at /var/lib/rabbitmq/mnesia/rabbit@rabbitmq-6775975bc5-l4qw9 is empty. Assuming we need to join an existing cluster or initialise from scratch...
2018-07-03 22:38:43.303 [info] <0.197.0> Configured peer discovery backend: rabbit_peer_discovery_classic_config
2018-07-03 22:38:43.303 [info] <0.197.0> Will try to lock with peer discovery backend rabbit_peer_discovery_classic_config
2018-07-03 22:38:43.303 [info] <0.197.0> Peer discovery backend does not support locking, falling back to randomized delay
2018-07-03 22:38:43.303 [info] <0.197.0> Peer discovery backend rabbit_peer_discovery_classic_config does not support registration, skipping randomized startup delay.
2018-07-03 22:38:43.303 [info] <0.197.0> All discovered existing cluster peers: 
2018-07-03 22:38:43.303 [info] <0.197.0> Discovered no peer nodes to cluster with
2018-07-03 22:38:43.306 [info] <0.33.0> Application mnesia exited with reason: stopped
2018-07-03 22:38:43.326 [info] <0.33.0> Application mnesia started on node 'rabbit@rabbitmq-6775975bc5-l4qw9'
2018-07-03 22:38:43.422 [info] <0.197.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2018-07-03 22:38:43.458 [info] <0.197.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2018-07-03 22:38:43.494 [info] <0.197.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2018-07-03 22:38:43.495 [info] <0.197.0> Peer discovery backend rabbit_peer_discovery_classic_config does not support registration, skipping registration.
2018-07-03 22:38:43.496 [info] <0.197.0> Priority queues enabled, real BQ is rabbit_variable_queue
2018-07-03 22:38:43.498 [error] <0.372.0> CRASH REPORT Process <0.372.0> with 0 neighbours crashed with reason: no function clause matching rabbit_epmd_monitor:init_handle_port_please({error,{garbage_from_epmd,[21,3,1,0,2,2,70]}}, erl_epmd, "rabbit", "rabbitmq-6775975bc5-l4qw9") line 59
2018-07-03 22:38:43.498 [error] <0.371.0> Supervisor rabbit_epmd_monitor_sup had child rabbit_epmd_monitor started with rabbit_epmd_monitor:start_link() at undefined exit with reason no function clause matching rabbit_epmd_monitor:init_handle_port_please({error,{garbage_from_epmd,[21,3,1,0,2,2,70]}}, erl_epmd, "rabbit", "rabbitmq-6775975bc5-l4qw9") line 59 in context start_error
2018-07-03 22:38:43.499 [error] <0.196.0> CRASH REPORT Process <0.196.0> with 0 neighbours exited with reason: {error,{{shutdown,{failed_to_start_child,rabbit_epmd_monitor,{function_clause,[{rabbit_epmd_monitor,init_handle_port_please,[{error,{garbage_from_epmd,[21,3,1,0,2,2,70]}},erl_epmd,"rabbit","rabbitmq-6775975bc5-l4qw9"],[{file,"src/rabbit_epmd_monitor.erl"},{line,59}]},{gen_server,init_it,2,[{file,"gen_server.erl"},{line,365}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,333}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,247}]}]}}},{child,undefined,rabbit_epmd_monitor_sup,...}}} in application_master:init/4 line 134
2018-07-03 22:38:43.500 [info] <0.33.0> Application rabbit exited with reason: {error,{{shutdown,{failed_to_start_child,rabbit_epmd_monitor,{function_clause,[{rabbit_epmd_monitor,init_handle_port_please,[{error,{garbage_from_epmd,[21,3,1,0,2,2,70]}},erl_epmd,"rabbit","rabbitmq-6775975bc5-l4qw9"],[{file,"src/rabbit_epmd_monitor.erl"},{line,59}]},{gen_server,init_it,2,[{file,"gen_server.erl"},{line,365}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,333}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,247}]}]}}},{child,undefined,rabbit_epmd_monitor_sup,...}}}
{"Kernel pid terminated",application_controller,"{application_start_failure,rabbit,{bad_return,{{rabbit,start,[normal,[]]},{'EXIT',{error,{{shutdown,{failed_to_start_child,rabbit_epmd_monitor,{function_clause,[{rabbit_epmd_monitor,init_handle_port_please,[{error,{garbage_from_epmd,[21,3,1,0,2,2,70]}},erl_epmd,\"rabbit\",\"rabbitmq-6775975bc5-l4qw9\"],[{file,\"src/rabbit_epmd_monitor.erl\"},{line,59}]},{gen_server,init_it,2,[{file,\"gen_server.erl\"},{line,365}]},{gen_server,init_it,6,[{file,\"gen_server.erl\"},{line,333}]},{proc_lib,init_p_do_apply,3,[{file,\"proc_lib.erl\"},{line,247}]}]}}},{child,undefined,rabbit_epmd_monitor_sup,{rabbit_restartable_sup,start_link,[rabbit_epmd_monitor_sup,{rabbit_epmd_monitor,start_link,[]},false]},transient,infinity,supervisor,[rabbit_restartable_sup]}}}}}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,rabbit,{bad_return,{{rabbit,start,[normal,[]]},{'EXIT',{error,{{shutdown,{failed_to_start_child,rabbit_epmd_monitor,{function

Crash dump is being written to: /var/log/rabbitmq/erl_crash.dump...done
kevincvlam commented 6 years ago

Thanks for looking into it and reproducing the error, @quanjielin. Note that RabbitMQ doesn't crash when mutual TLS is disabled (e.g. after removing the policy and destination rule).
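
For completeness, "removing the policy and destination rule" here just means deleting the two objects created in the repro steps (a sketch, assuming the names and namespace from the original report):

# Hedged sketch: delete the mTLS Policy and DestinationRule created earlier.
kubectl delete policies.authentication.istio.io example-1 -n foo
kubectl delete destinationrules.networking.istio.io example-1 -n foo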

Do you have any ideas about what the issue is? Is it on the Istio side?

quanjielin commented 6 years ago

Yes, RabbitMQ didn't crash with mTLS disabled. The issue also reproduces with the latest master build. Removing the readiness and liveness probes (as mentioned in https://github.com/istio/istio/issues/5989) doesn't help in my environment.

Still looking; meanwhile, adding @ymesika and @aneslozo, who may have more context from https://github.com/istio/istio/issues/5989.

aneslozo commented 6 years ago

@quanjielin It doesn't matter whether I use Istio with or without mTLS; I still have an issue with RabbitMQ. If I uncomment the livenessProbe and readinessProbe, the pod cannot start at all. If I comment them out, all 3 RabbitMQ pods start, but I still have the issue.

These three pods are not joined into a cluster; each of them runs in standalone mode. I'm still looking into what could be blocking them from clustering.

rabbitmq-0                                       2/2       Running   0          1d
rabbitmq-1                                       2/2       Running   0          1d
rabbitmq-2                                       2/2       Running   0          1d

####################

2018-07-05 14:59:07.643 [info] <0.201.0> Configured peer discovery backend: rabbit_peer_discovery_k8s
2018-07-05 14:59:07.643 [info] <0.201.0> Will try to lock with peer discovery backend rabbit_peer_discovery_k8s
2018-07-05 14:59:07.643 [info] <0.201.0> Peer discovery backend does not support locking, falling back to randomized delay
2018-07-05 14:59:07.643 [info] <0.201.0> Peer discovery backend rabbit_peer_discovery_k8s does not support registration, skipping randomized startup delay.
2018-07-05 14:59:07.780 [info] <0.201.0> All discovered existing cluster peers: rabbit@10.60.4.178, rabbit@10.60.3.134, rabbit@10.60.5.150
2018-07-05 14:59:07.780 [info] <0.201.0> Peer nodes we can cluster with: rabbit@10.60.3.134, rabbit@10.60.5.150
2018-07-05 14:59:07.789 [warning] <0.201.0> Could not auto-cluster with node rabbit@10.60.3.134: {badrpc,nodedown}
2018-07-05 14:59:07.790 [warning] <0.201.0> Could not auto-cluster with node rabbit@10.60.5.150: {badrpc,nodedown}
2018-07-05 14:59:07.790 [warning] <0.201.0> Could not successfully contact any node of: rabbit@10.60.3.134,rabbit@10.60.5.150 (as in Erlang distribution). Starting as a blank standalone node...
2018-07-05 14:59:07.795 [info] <0.33.0> Application mnesia exited with reason: stopped
2018-07-05 14:59:07.943 [info] <0.33.0> Application mnesia started on node 'rabbit@10.60.4.178'

Please take a look at #5037.

sdake commented 6 years ago

Not sure if this is helpful, but EPMD has caused me all sorts of hassle in the past.

See: https://github.com/openstack/kolla-ansible/blob/master/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2#L7-L17
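
In that spirit, pinning the EPMD and inter-node distribution ports so they can be declared explicitly tends to look roughly like this (a hedged sketch using container environment variables on a RabbitMQ Deployment, not the linked template itself):

        # Sketch only: add to a RabbitMQ container spec (e.g. the Deployment earlier
        # in this issue). ERL_EPMD_PORT and RABBITMQ_DIST_PORT are standard
        # RabbitMQ/Erlang settings, but this placement is an illustration.
        env:
        - name: ERL_EPMD_PORT
          value: "4369"
        - name: RABBITMQ_DIST_PORT
          value: "25672"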

klarose commented 6 years ago

I ran into two issues today related to this.

  1. RabbitMQ didn't work with mTLS disabled. Fixed this by adding the epmd port to RabbitMQ's Service.
  2. RabbitMQ didn't work with mTLS enabled, even with the epmd port in the Service.

I think the root cause of both is the same: some of RabbitMQ's communication with epmd uses the pod IP, not 127.0.0.1. Why does this matter? Because the iptables rules that redirect traffic to Envoy exclude "to localhost" traffic leaving via the loopback interface, but not traffic coming from the pod interface.

The following iptables-save output shows this:

:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [10:600]
:OUTPUT ACCEPT [2705:254663]
:POSTROUTING ACCEPT [2811:261023]
:ISTIO_INBOUND - [0:0]
:ISTIO_OUTPUT - [0:0]
:ISTIO_REDIRECT - [0:0]
-A PREROUTING -p tcp -j ISTIO_INBOUND
-A OUTPUT -p tcp -j ISTIO_OUTPUT
-A ISTIO_INBOUND -p tcp -m tcp --dport 5672 -j ISTIO_REDIRECT
-A ISTIO_OUTPUT ! -d 127.0.0.1/32 -o lo -j ISTIO_REDIRECT
-A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN
-A ISTIO_OUTPUT -j ISTIO_REDIRECT
-A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001
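
The two ISTIO_OUTPUT rules that matter for this analysis are (annotated copy; the comments are added, not part of the output):

# Anything leaving via the loopback interface that is NOT destined to 127.0.0.1
# (e.g. traffic addressed to the pod's own IP) is redirected to Envoy on 15001:
-A ISTIO_OUTPUT ! -d 127.0.0.1/32 -o lo -j ISTIO_REDIRECT
# Traffic explicitly destined to 127.0.0.1 is left alone:
-A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN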

The following dump from a capture shows this. Note how the capture starts with communication to and from 127.0.0.1. This works.

Next, however, a connection is initiated from 192.168.1.134 (the RabbitMQ pod IP) back to 127.0.0.1 (see rabbit.txt). It eventually resets. This isn't good! When the port isn't added to the Service, Istio rightly locks it down and prevents communication. When mTLS is enabled, I suspect the connection fails because the TLS handshake fails, since half of the connection isn't actually terminated by an Envoy proxy.

If I disable mTLS for the epmd port on RabbitMQ, it ends up working:

apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "disable-mtls-epmd"
spec:
  targets:
  - name: rabbitmq
    ports:
    - number: 4369 
  peers:
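
To apply it (a usage sketch: the filename is hypothetical and <rabbitmq-namespace> is a placeholder for wherever RabbitMQ runs; it could equally be piped through istioctl create as in the earlier steps):

kubectl apply -n <rabbitmq-namespace> -f disable-mtls-epmd.yaml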

However, that's not a very elegant solution. While I understand that RabbitMQ is doing something weird here, is it really doing something that Istio should be breaking?

Should Istio's iptables rules be updated to exclude this traffic from redirection?

(Note that I may be slightly incorrect in my analysis of exactly why the traffic is redirected to Envoy, but that redirect clearly leads to RabbitMQ failing, so I suspect the fix is to prevent it from happening.)
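
For what it's worth, later Istio releases expose pod annotations that exclude specific ports from the sidecar's iptables redirection; whether they exist in the version discussed here would need checking, but the shape is roughly:

# Hedged sketch (newer Istio releases; availability in 0.8 not confirmed):
# keep the epmd port out of the sidecar's iptables redirection via pod annotations.
  template:
    metadata:
      labels:
        app: rabbitmq
      annotations:
        traffic.sidecar.istio.io/excludeInboundPorts: "4369"
        traffic.sidecar.istio.io/excludeOutboundPorts: "4369"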

stale[bot] commented 6 years ago

This issue has been automatically marked as stale because it has not had activity in the last 90 days. It will be closed in the next 30 days unless it is tagged "help wanted" or other activity occurs. Thank you for your contributions.

stale[bot] commented 5 years ago

This issue has been automatically closed because it has not had activity in the last month and a half. If this issue is still valid, please ping a maintainer and ask them to label it as "help wanted". Thank you for your contributions.

arielb135 commented 5 years ago

(Quoted @klarose's analysis above.)

Something puzzles me, and I'd love an explanation if you have one: I've excluded epmd from the headless service, and everything works. But when running istioctl authn tls-check carabbitmq-0.arielrabbit against one of the rabbit pods, I see that there is indeed an mTLS connection to port 4369 from the pod to itself (and to the other pods in the cluster):

Yet to the headless service (carabbitmq-discovery) the connection is plain HTTP (as mTLS was disabled there). How is this possible?


HOST:PORT                                                                                  STATUS     SERVER     CLIENT     AUTHN POLICY                                 DESTINATION RULE
carabbitmq-0.carabbitmq-discovery.arielrabbit.svc.cluster.local:4369                       OK         mTLS       mTLS       default/arielrabbit                          carabbitmq-mtls-per-pod/arielrabbit
carabbitmq-0.carabbitmq-discovery.arielrabbit.svc.cluster.local:5671                       OK         mTLS       mTLS       default/arielrabbit                          carabbitmq-mtls-per-pod/arielrabbit
carabbitmq-0.carabbitmq-discovery.arielrabbit.svc.cluster.local:5672                       OK         mTLS       mTLS       default/arielrabbit                          carabbitmq-mtls-per-pod/arielrabbit
carabbitmq-0.carabbitmq-discovery.arielrabbit.svc.cluster.local:9419                       OK         mTLS       mTLS       default/arielrabbit                          carabbitmq-mtls-per-pod/arielrabbit
carabbitmq-0.carabbitmq-discovery.arielrabbit.svc.cluster.local:15672                      OK         mTLS       mTLS       default/arielrabbit                          carabbitmq-mtls-per-pod/arielrabbit
carabbitmq-0.carabbitmq-discovery.arielrabbit.svc.cluster.local:25672                      OK         mTLS       mTLS       default/arielrabbit                          carabbitmq-mtls-per-pod/arielrabbit
carabbitmq-1.carabbitmq-discovery.arielrabbit.svc.cluster.local:4369                       OK         mTLS       mTLS       default/arielrabbit                          carabbitmq-mtls-per-pod/arielrabbit
carabbitmq-1.carabbitmq-discovery.arielrabbit.svc.cluster.local:5671                       OK         mTLS       mTLS       default/arielrabbit                          carabbitmq-mtls-per-pod/arielrabbit
carabbitmq-1.carabbitmq-discovery.arielrabbit.svc.cluster.local:5672                       OK         mTLS       mTLS       default/arielrabbit                          carabbitmq-mtls-per-pod/arielrabbit
carabbitmq-1.carabbitmq-discovery.arielrabbit.svc.cluster.local:9419                       OK         mTLS       mTLS       default/arielrabbit                          carabbitmq-mtls-per-pod/arielrabbit
carabbitmq-1.carabbitmq-discovery.arielrabbit.svc.cluster.local:15672                      OK         mTLS       mTLS       default/arielrabbit                          carabbitmq-mtls-per-pod/arielrabbit
carabbitmq-1.carabbitmq-discovery.arielrabbit.svc.cluster.local:25672                      OK         mTLS       mTLS       default/arielrabbit                          carabbitmq-mtls-per-pod/arielrabbit
carabbitmq-discovery.arielrabbit.svc.cluster.local:4369                                    OK         HTTP       HTTP       carabbitmq-disable-mtls/arielrabbit          carabbitmq-mtls-discovery/arielrabbit
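
The output itself hints at the answer: the per-pod hostnames are matched by one DestinationRule (carabbitmq-mtls-per-pod) while the headless service hostname is matched by a different one (carabbitmq-mtls-discovery), so the two can legitimately report different modes. A sketch of the kind of rule the last entry implies, reconstructed from the names in the output rather than taken from an actual manifest in this thread:

apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "carabbitmq-mtls-discovery"   # name taken from the tls-check output above
  namespace: "arielrabbit"
spec:
  host: "carabbitmq-discovery.arielrabbit.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
    portLevelSettings:
    - port:
        number: 4369                  # epmd: mTLS disabled for this port only
      tls:
        mode: DISABLE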
dhanvi commented 5 years ago

This issue has been automatically closed because it has not had activity in the last month and a half. If this issue is still valid, please ping a maintainer and ask them to label it as "help wanted". Thank you for your contributions.

@sdake can you please reopen this issue by marking it as "help wanted"?

daudn commented 5 years ago

Help wanted! My RabbitMQ was running just fine; all of a sudden it crashed, and now I'm getting the error: ERROR: epmd error for host rabbitmq-0: timeout (timed out)'
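
A hedged first check for anyone landing here: confirm whether mTLS is expected on the epmd port (4369), reusing the tls-check command shown earlier in this thread (<pod>, <namespace>, and the container name are placeholders):

kubectl logs <pod> -n <namespace> -c rabbitmq           # look for the garbage_from_epmd / epmd timeout errors
istioctl authn tls-check <pod>.<namespace> | grep 4369  # is port 4369 supposed to use mTLS?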