NBISweden / LocalEGA

Please go to https://github.com/EGA-archive/LocalEGA instead
Apache License 2.0

[Bug][Floating] RabbitMQ queues fail within Docker on some host OS #225

Closed dtitov closed 6 years ago

dtitov commented 6 years ago

Floating bug: on some platforms (I got it on Ubuntu and CentOS) the Docker deployment is unstable: the mq_swe1 container doesn't start all of its internal queues. This, obviously, breaks the ingestion.

Correct RabbitMQ:

Failing RabbitMQ:

Log with the error:

Attaching to ega_mq_swe1
mq_swe1_1             | + [[ -z swe1 ]]
mq_swe1_1             | + [[ -z HYgqHd1fvVqTTplL ]]
mq_swe1_1             | + cat
mq_swe1_1             | + chown rabbitmq:rabbitmq /etc/rabbitmq/defs-cega.json
mq_swe1_1             | + chmod 640 /etc/rabbitmq/defs-cega.json
mq_swe1_1             | + chown -R rabbitmq /var/lib/rabbitmq
mq_swe1_1             | + exec rabbitmq-server
mq_swe1_1             | + sleep 5
mq_swe1_1             | 2017-12-18 21:14:10.803 [info] <0.33.0> Application lager started on node rabbit@ega_mq
mq_swe1_1             | 2017-12-18 21:14:11.811 [info] <0.33.0> Application crypto started on node rabbit@ega_mq
mq_swe1_1             | 2017-12-18 21:14:12.029 [info] <0.33.0> Application mnesia started on node rabbit@ega_mq
mq_swe1_1             | 2017-12-18 21:14:12.124 [info] <0.33.0> Application inets started on node rabbit@ega_mq
mq_swe1_1             | 2017-12-18 21:14:12.124 [info] <0.33.0> Application amqp10_common started on node rabbit@ega_mq
mq_swe1_1             | 2017-12-18 21:14:12.126 [info] <0.33.0> Application recon started on node rabbit@ega_mq
mq_swe1_1             | 2017-12-18 21:14:12.127 [info] <0.33.0> Application cowlib started on node rabbit@ega_mq
mq_swe1_1             | 2017-12-18 21:14:12.127 [info] <0.33.0> Application xmerl started on node rabbit@ega_mq
mq_swe1_1             | 2017-12-18 21:14:12.127 [info] <0.33.0> Application jsx started on node rabbit@ega_mq
mq_swe1_1             | 2017-12-18 21:14:12.135 [info] <0.33.0> Application os_mon started on node rabbit@ega_mq
mq_swe1_1             | 2017-12-18 21:14:12.135 [info] <0.33.0> Application asn1 started on node rabbit@ega_mq
mq_swe1_1             | 2017-12-18 21:14:12.136 [info] <0.33.0> Application public_key started on node rabbit@ega_mq
mq_swe1_1             | 2017-12-18 21:14:12.224 [info] <0.33.0> Application ssl started on node rabbit@ega_mq
mq_swe1_1             | 2017-12-18 21:14:12.231 [info] <0.33.0> Application ranch started on node rabbit@ega_mq
mq_swe1_1             | 2017-12-18 21:14:12.231 [info] <0.33.0> Application ranch_proxy_protocol started on node rabbit@ega_mq
mq_swe1_1             | 2017-12-18 21:14:12.234 [info] <0.33.0> Application cowboy started on node rabbit@ega_mq
mq_swe1_1             | 2017-12-18 21:14:12.234 [info] <0.33.0> Application rabbit_common started on node rabbit@ega_mq
mq_swe1_1             | 2017-12-18 21:14:12.245 [info] <0.189.0> 
mq_swe1_1             |  Starting RabbitMQ 3.7.0 on Erlang 20.1.7
mq_swe1_1             |  Copyright (C) 2007-2017 Pivotal Software, Inc.
mq_swe1_1             |  Licensed under the MPL.  See http://www.rabbitmq.com/
mq_swe1_1             | 
mq_swe1_1             |   ##  ##
mq_swe1_1             |   ##  ##      RabbitMQ 3.7.0. Copyright (C) 2007-2017 Pivotal Software, Inc.
mq_swe1_1             |   ##########  Licensed under the MPL.  See http://www.rabbitmq.com/
mq_swe1_1             |   ######  ##
mq_swe1_1             |   ##########  Logs: <stdout>
mq_swe1_1             | 
mq_swe1_1             |               Starting broker...
mq_swe1_1             | 2017-12-18 21:14:12.259 [info] <0.189.0> 
mq_swe1_1             |  node           : rabbit@ega_mq
mq_swe1_1             |  home dir       : /var/lib/rabbitmq
mq_swe1_1             |  config file(s) : /etc/rabbitmq/rabbitmq.config
mq_swe1_1             |  cookie hash    : Zs4Fs6vjJS1dRvmM86efkw==
mq_swe1_1             |  log(s)         : <stdout>
mq_swe1_1             |  database dir   : /var/lib/rabbitmq/mnesia/rabbit@ega_mq
mq_swe1_1             | + nc -z 127.0.0.1 15672
mq_swe1_1             | + sleep 1
mq_swe1_1             | + nc -z 127.0.0.1 15672
mq_swe1_1             | + sleep 1
mq_swe1_1             | + nc -z 127.0.0.1 15672
mq_swe1_1             | + sleep 1
mq_swe1_1             | + nc -z 127.0.0.1 15672
mq_swe1_1             | + sleep 1
mq_swe1_1             | + nc -z 127.0.0.1 15672
mq_swe1_1             | + sleep 1
mq_swe1_1             | + nc -z 127.0.0.1 15672
mq_swe1_1             | + sleep 1
mq_swe1_1             | + nc -z 127.0.0.1 15672
mq_swe1_1             | + sleep 1
mq_swe1_1             | 2017-12-18 21:14:20.868 [info] <0.197.0> Memory high watermark set to 3094 MiB (3244384256 bytes) of 7735 MiB (8110960640 bytes) total
mq_swe1_1             | 2017-12-18 21:14:20.917 [info] <0.199.0> Enabling free disk space monitoring
mq_swe1_1             | 2017-12-18 21:14:20.917 [info] <0.199.0> Disk free limit set to 1000MB
mq_swe1_1             | 2017-12-18 21:14:20.920 [info] <0.201.0> Limiting to approx 1048476 file handles (943626 sockets)
mq_swe1_1             | 2017-12-18 21:14:20.920 [info] <0.202.0> FHC read buffering:  OFF
mq_swe1_1             | 2017-12-18 21:14:20.920 [info] <0.202.0> FHC write buffering: ON
mq_swe1_1             | 2017-12-18 21:14:20.952 [info] <0.189.0> Node database directory at /var/lib/rabbitmq/mnesia/rabbit@ega_mq is empty. Assuming we need to join an existing cluster or initialise from scratch...
mq_swe1_1             | 2017-12-18 21:14:20.952 [info] <0.189.0> Configured peer discovery backend: rabbit_peer_discovery_classic_config
mq_swe1_1             | 2017-12-18 21:14:20.952 [info] <0.189.0> Will try to lock with peer discovery backend rabbit_peer_discovery_classic_config
mq_swe1_1             | 2017-12-18 21:14:20.953 [info] <0.189.0> Peer discovery backend rabbit_peer_discovery_classic_config does not support registration, skipping randomized startup delay.
mq_swe1_1             | 2017-12-18 21:14:20.953 [info] <0.189.0> All discovered existing cluster peers: 
mq_swe1_1             | 2017-12-18 21:14:20.953 [info] <0.189.0> Discovered no peer nodes to cluster with
mq_swe1_1             | 2017-12-18 21:14:20.955 [info] <0.33.0> Application mnesia exited with reason: stopped
mq_swe1_1             | 2017-12-18 21:14:20.978 [info] <0.33.0> Application mnesia started on node rabbit@ega_mq
mq_swe1_1             | 2017-12-18 21:14:21.194 [info] <0.189.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
mq_swe1_1             | 2017-12-18 21:14:21.362 [info] <0.189.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
mq_swe1_1             | 2017-12-18 21:14:21.416 [info] <0.189.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
mq_swe1_1             | 2017-12-18 21:14:21.417 [info] <0.189.0> Peer discovery backend rabbit_peer_discovery_classic_config does not support registration, skipping registration.
mq_swe1_1             | 2017-12-18 21:14:21.418 [info] <0.189.0> Priority queues enabled, real BQ is rabbit_variable_queue
mq_swe1_1             | 2017-12-18 21:14:21.520 [info] <0.376.0> Starting rabbit_node_monitor
mq_swe1_1             | 2017-12-18 21:14:21.574 [info] <0.189.0> message_store upgrades: 1 to apply
mq_swe1_1             | 2017-12-18 21:14:21.574 [info] <0.189.0> message_store upgrades: Applying rabbit_variable_queue:move_messages_to_vhost_store
mq_swe1_1             | 2017-12-18 21:14:21.574 [info] <0.189.0> message_store upgrades: No durable queues found. Skipping message store migration
mq_swe1_1             | 2017-12-18 21:14:21.574 [info] <0.189.0> message_store upgrades: Removing the old message store data
mq_swe1_1             | 2017-12-18 21:14:21.575 [info] <0.189.0> message_store upgrades: All upgrades applied successfully
mq_swe1_1             | 2017-12-18 21:14:21.716 [info] <0.189.0> Management plugin: using rates mode 'basic'
mq_swe1_1             | 2017-12-18 21:14:21.719 [info] <0.189.0> Applying definitions from: /etc/rabbitmq/defs.json
mq_swe1_1             | 2017-12-18 21:14:21.719 [info] <0.189.0> Asked to import definitions. Acting user: <<"rmq-internal">>
mq_swe1_1             | 2017-12-18 21:14:21.720 [info] <0.189.0> Importing users...
mq_swe1_1             | 2017-12-18 21:14:21.720 [info] <0.189.0> Creating user 'guest'
mq_swe1_1             | 2017-12-18 21:14:21.736 [info] <0.189.0> Setting user tags for user 'guest' to [administrator]
mq_swe1_1             | 2017-12-18 21:14:21.741 [info] <0.189.0> Importing vhosts...
mq_swe1_1             | 2017-12-18 21:14:21.742 [info] <0.189.0> Adding vhost '/'
mq_swe1_1             | + nc -z 127.0.0.1 15672
mq_swe1_1             | + sleep 1
mq_swe1_1             | 2017-12-18 21:14:21.817 [info] <0.421.0> Making sure data directory '/var/lib/rabbitmq/mnesia/rabbit@ega_mq/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L' for vhost '/' exists
mq_swe1_1             | 2017-12-18 21:14:21.872 [info] <0.421.0> Starting message stores for vhost '/'
mq_swe1_1             | 2017-12-18 21:14:21.873 [info] <0.425.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_transient": using rabbit_msg_store_ets_index to provide index
mq_swe1_1             | 2017-12-18 21:14:21.894 [info] <0.421.0> Started message store of type transient for vhost '/'
mq_swe1_1             | 2017-12-18 21:14:21.895 [info] <0.428.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent": using rabbit_msg_store_ets_index to provide index
mq_swe1_1             | 2017-12-18 21:14:21.898 [warning] <0.428.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent": rebuilding indices from scratch
mq_swe1_1             | 2017-12-18 21:14:21.901 [info] <0.421.0> Started message store of type persistent for vhost '/'
mq_swe1_1             | 2017-12-18 21:14:21.905 [info] <0.189.0> Importing user permissions...
mq_swe1_1             | 2017-12-18 21:14:21.905 [info] <0.189.0> Setting permissions for 'guest' in '/' to '.*', '.*', '.*'
mq_swe1_1             | 2017-12-18 21:14:21.917 [info] <0.189.0> Importing topic permissions...
mq_swe1_1             | 2017-12-18 21:14:21.917 [info] <0.189.0> Importing paramteres...
mq_swe1_1             | 2017-12-18 21:14:21.917 [info] <0.189.0> Importing global parameters...
mq_swe1_1             | 2017-12-18 21:14:21.922 [info] <0.189.0> Importing policies...
mq_swe1_1             | 2017-12-18 21:14:21.922 [info] <0.189.0> Importing queues...
mq_swe1_1             | 2017-12-18 21:14:21.989 [info] <0.189.0> Importing exchanges...
mq_swe1_1             | 2017-12-18 21:14:21.995 [info] <0.189.0> Importing bindings...
mq_swe1_1             | 2017-12-18 21:14:22.343 [warning] <0.476.0> Default virtual host '/' not found; exchange 'amq.rabbitmq.log' disabled
mq_swe1_1             | 2017-12-18 21:14:22.345 [info] <0.491.0> started TCP Listener on [::]:5672
mq_swe1_1             | 2017-12-18 21:14:22.350 [info] <0.189.0> Setting up a table for connection tracking on this node: tracked_connection_on_node_rabbit@ega_mq
mq_swe1_1             | 2017-12-18 21:14:22.358 [info] <0.189.0> Setting up a table for per-vhost connection counting on this node: tracked_connection_per_vhost_on_node_rabbit@ega_mq
mq_swe1_1             | 2017-12-18 21:14:22.358 [info] <0.33.0> Application rabbit started on node rabbit@ega_mq
mq_swe1_1             | 2017-12-18 21:14:22.358 [info] <0.33.0> Application rabbitmq_web_dispatch started on node rabbit@ega_mq
mq_swe1_1             | 2017-12-18 21:14:22.359 [info] <0.33.0> Application amqp_client started on node rabbit@ega_mq
mq_swe1_1             | 2017-12-18 21:14:22.359 [info] <0.33.0> Application rabbitmq_federation started on node rabbit@ega_mq
mq_swe1_1             | 2017-12-18 21:14:22.359 [info] <0.33.0> Application rabbitmq_amqp1_0 started on node rabbit@ega_mq
mq_swe1_1             | 2017-12-18 21:14:22.362 [info] <0.33.0> Application rabbitmq_management_agent started on node rabbit@ega_mq
mq_swe1_1             | 2017-12-18 21:14:22.363 [info] <0.33.0> Application amqp10_client started on node rabbit@ega_mq
mq_swe1_1             | 2017-12-18 21:14:22.364 [info] <0.33.0> Application rabbitmq_shovel started on node rabbit@ega_mq
mq_swe1_1             | 2017-12-18 21:14:22.412 [info] <0.569.0> Management plugin started. Port: 15672
mq_swe1_1             | 2017-12-18 21:14:22.412 [info] <0.675.0> Statistics database started.
mq_swe1_1             | + nc -z 127.0.0.1 15672
mq_swe1_1             | + ROUND=30
mq_swe1_1             | + curl -X POST -u guest:guest -H 'Content-Type: application/json' --data @/etc/rabbitmq/defs-cega.json http://127.0.0.1:15672/api/definitions
mq_swe1_1             | 2017-12-18 21:14:23.049 [warning] <0.32.0> lager_error_logger_h dropped 17 messages in the last second that exceeded the limit of 100 messages/sec
mq_swe1_1             |  completed with 8 plugins.
mq_swe1_1             | 2017-12-18 21:14:23.072 [info] <0.5.0> Server startup complete; 8 plugins started.
mq_swe1_1             |  * rabbitmq_shovel_management
mq_swe1_1             |  * rabbitmq_federation_management
mq_swe1_1             |  * rabbitmq_management
mq_swe1_1             |  * rabbitmq_shovel
mq_swe1_1             |  * rabbitmq_management_agent
mq_swe1_1             |  * rabbitmq_amqp1_0
mq_swe1_1             |  * rabbitmq_federation
mq_swe1_1             |  * rabbitmq_web_dispatch
mq_swe1_1             |   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
mq_swe1_1             |                                  Dload  Upload   Total   Spent    Left  Speed
mq_swe1_1             | 2017-12-18 21:14:24.101 [info] <0.686.0> Asked to import definitions. Acting user: <<"guest">>
mq_swe1_1             | 2017-12-18 21:14:24.101 [info] <0.686.0> Importing users...
mq_swe1_1             | 2017-12-18 21:14:24.101 [info] <0.686.0> Importing vhosts...
mq_swe1_1             | 2017-12-18 21:14:24.101 [info] <0.686.0> Importing user permissions...
mq_swe1_1             | 2017-12-18 21:14:24.101 [info] <0.686.0> Importing topic permissions...
mq_swe1_1             | 2017-12-18 21:14:24.101 [info] <0.686.0> Importing paramteres...
100  1177    0     0  100  1177      0   1173  0:00:01  0:00:01 --:--:--  1174
mq_swe1_1             | 2017-12-18 21:14:24.102 [error] <0.686.0> CRASH REPORT Process <0.686.0> with 0 neighbours crashed with reason: bad argument in call to lists:keyfind(<<"src-protocol">>, 1, #{<<"ack-mode">> => <<"on-confirm">>,<<"add-forward-headers">> => false,<<"delete-after">> => <<"...">>,...}) in rabbit_shovel_parameters:protocols/1 line 439
mq_swe1_1             | 2017-12-18 21:14:24.103 [error] <0.685.0> Ranch listener rabbit_web_dispatch_sup_15672, connection process <0.685.0>, stream 1 had its request process <0.686.0> exit with reason badarg and stacktrace [{lists,keyfind,[<<"src-protocol">>,1,#{<<"ack-mode">> => <<"on-confirm">>,<<"add-forward-headers">> => false,<<"delete-after">> => <<"never">>,<<"dest-exchange">> => <<"localega.v1">>,<<"dest-exchange-key">> => <<"swe1.errors">>,<<"dest-uri">> => <<"amqp://cega_swe1:HYgqHd1fvVqTTplL@cega_mq:5672/swe1">>,<<"src-exchange">> => <<"lega">>,<<"src-exchange-key">> => <<"lega.error.user">>,<<"src-uri">> => <<"amqp://">>}],[]},{rabbit_shovel_parameters,protocols,1,[{file,"src/rabbit_shovel_parameters.erl"},{line,439}]},{rabbit_shovel_parameters,src_validation,2,[{file,"src/rabbit_shovel_parameters.erl"},{line,111}]},{rabbit_shovel_parameters,validate,5,[{file,"src/rabbit_shovel_parameters.erl"},{line,44}]},{rabbit_runtime_parameters,set_any0,5,[{file,"src/rabbit_runtime_parameters.erl"},{line,152}]},{rabbit_runtime_parameters,set_any,5,[{file,"src/rabbit_runtime_parameters.erl"},{line,143}]},{rabbit_mgmt_wm_definitions,add_parameter,2,[{file,"src/rabbit_mgmt_wm_definitions.erl"},{line,338}]},{rabbit_mgmt_wm_definitions,'-for_all/4-lc$^0/1-0-',3,[{file,"src/rabbit_mgmt_wm_definitions.erl"},{line,316}]}]
mq_swe1_1             | + echo 'Central EGA connections loaded'
mq_swe1_1             | Central EGA connections loaded
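One thing the log above makes visible: the entrypoint echoes 'Central EGA connections loaded' even though the POST to /api/definitions crashed server-side. A sketch of a stricter import step follows; the credentials, path and port are taken from the log, and `curl -f` is used so HTTP errors become a non-zero exit code (adjust to the real entrypoint script):

```shell
# Sketch: fail loudly when the definitions import is rejected, instead of
# unconditionally echoing success. Credentials/path/port are assumptions
# copied from the log above.
load_cega_definitions() {
  local defs="${1:-/etc/rabbitmq/defs-cega.json}"
  if curl -fsS -X POST -u guest:guest \
       -H 'Content-Type: application/json' \
       --data "@${defs}" \
       http://127.0.0.1:15672/api/definitions; then
    echo 'Central EGA connections loaded'
  else
    echo 'Failed to load Central EGA connections' >&2
    return 1
  fi
}
```

With this in place, the crash seen at 21:14:24 would have stopped the entrypoint rather than letting the container report a healthy startup.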

I can provide a CentOS instance in the cloud where this bug is consistently reproducible, in case someone can't reproduce it.

silverdaz commented 6 years ago

A small comment that came to mind while I was checking the picture.

The definitions connecting CentralEGA and LocalEGA don't seem to be loaded, so retrieving messages from CentralEGA for files to ingest naturally fails. That bit is obvious. What I did notice is that the user is guest; if I recall correctly, we are past that, and a user/password pair is generated even for the local broker. I don't believe it is the problem, because a fresh deployment does bootstrap properly, but it may be worth looking into. It might give another angle on "Why are the definitions failing to load?".

juhtornr commented 6 years ago

I wasn't able to reproduce the problem. I created a CentOS VM in cPouta, did git clone, make images and docker-compose up, and got:

[cloud-user@86 docker]$ cat /etc/redhat-release 
CentOS Linux release 7.4.1708 (Core) 
[cloud-user@86 docker]$ docker-compose ps
         Name                       Command               State                                      Ports                            
--------------------------------------------------------------------------------------------------------------------------------------
cega-mq                  docker-entrypoint.sh rabbi ...   Up      15671/tcp, 0.0.0.0:15670->15672/tcp, 25672/tcp, 4369/tcp, 5671/tcp, 
cega-users               /cega/server.py                  Up      0.0.0.0:9100->80/tcp                                                
ega-db-fin1              docker-entrypoint.sh postgres    Up      5432/tcp                                                            
ega-db-swe1              docker-entrypoint.sh postgres    Up      5432/tcp                                                            
ega-elasticsearch-fin1   /usr/local/bin/docker-entr ...   Up      9200/tcp, 9300/tcp                                                  
ega-elasticsearch-swe1   /usr/local/bin/docker-entr ...   Up      9200/tcp, 9300/tcp                                                  
ega-frontend-fin1        ega-frontend                     Up      0.0.0.0:9001->80/tcp                                                
ega-frontend-swe1        ega-frontend                     Up      0.0.0.0:9000->80/tcp                                                
ega-inbox-fin1           entrypoint.sh                    Up      0.0.0.0:2223->22/tcp                                                
ega-inbox-swe1           entrypoint.sh                    Up      0.0.0.0:2222->22/tcp                                                
ega-keys-fin1            entrypoint.sh                    Up      9010/tcp, 9011/tcp                                                  
ega-keys-swe1            entrypoint.sh                    Up      9010/tcp, 9011/tcp                                                  
ega-kibana-fin1          /bin/bash /usr/local/bin/k ...   Up      0.0.0.0:5602->5601/tcp                                              
ega-kibana-swe1          /bin/bash /usr/local/bin/k ...   Up      0.0.0.0:5601->5601/tcp                                              
ega-logstash-fin1        /usr/local/bin/docker-entr ...   Up      5044/tcp, 9600/tcp                                                  
ega-logstash-swe1        /usr/local/bin/docker-entr ...   Up      5044/tcp, 9600/tcp                                                  
ega-mq-fin1              /usr/bin/ega-entrypoint.sh ...   Up      15671/tcp, 0.0.0.0:15673->15672/tcp, 25672/tcp, 4369/tcp, 5671/tcp, 
ega-mq-swe1              /usr/bin/ega-entrypoint.sh ...   Up      15671/tcp, 0.0.0.0:15672->15672/tcp, 25672/tcp, 4369/tcp, 5671/tcp, 
ega-vault-fin1           entrypoint.sh                    Up                                                                          
ega-vault-swe1           entrypoint.sh                    Up                                                                          
ega_ingest-fin1_1        entrypoint.sh                    Up                                                                          
ega_ingest-swe1_1        entrypoint.sh                    Up                          

Can you @dtitov provide access to a VM where you have this problem?

juhtornr commented 6 years ago

We noticed with @silverdaz that the CEGA definitions are not loaded into the ega-mq-* containers, even though these containers are up:

ega-mq-swe1           | 2018-01-05 10:39:29.919 [error] <0.710.0> CRASH REPORT Process <0.710.0> with 0 neighbours crashed with reason: bad arg
 in call to lists:keyfind(<<"src-protocol">>, 1, #{<<"ack-mode">> => <<"on-confirm">>,<<"add-forward-headers">> => false,<<"delete-after">> => 
.">>,...}) in rabbit_shovel_parameters:protocols/1 line 439
ega-mq-swe1           | 2018-01-05 10:39:29.919 [error] <0.709.0> Ranch listener rabbit_web_dispatch_sup_15672, connection process <0.709.0>, s
 1 had its request process <0.710.0> exit with reason badarg and stacktrace [{lists,keyfind,[<<"src-protocol">>,1,#{<<"ack-mode">> => <<"on-con
>>,<<"add-forward-headers">> => false,<<"delete-after">> => <<"never">>,<<"dest-exchange">> => <<"localega.v1">>,<<"dest-exchange-key">> => <<"
s">>,<<"dest-uri">> => <<"amqp://cega_swe1:LwAVtMZzWaXSRWdL@cega-mq:5672/swe1">>,<<"src-exchange">> => <<"lega">>,<<"src-exchange-key">> => <<"
error.user">>,<<"src-uri">> => <<"amqp://">>}],[]},{rabbit_shovel_parameters,protocols,1,[{file,"src/rabbit_shovel_parameters.erl"},{line,439}]
bbit_shovel_parameters,src_validation,2,[{file,"src/rabbit_shovel_parameters.erl"},{line,111}]},{rabbit_shovel_parameters,validate,5,[{file,"sr
bit_shovel_parameters.erl"},{line,44}]},{rabbit_runtime_parameters,set_any0,5,[{file,"src/rabbit_runtime_parameters.erl"},{line,152}]},{rabbit_
me_parameters,set_any,5,[{file,"src/rabbit_runtime_parameters.erl"},{line,143}]},{rabbit_mgmt_wm_definitions,add_parameter,2,[{file,"src/rabbit
_wm_definitions.erl"},{line,338}]},{rabbit_mgmt_wm_definitions,'-for_all/4-lc$^0/1-0-',3,[{file,"src/rabbit_mgmt_wm_definitions.erl"},{line,316
ega-mq-swe1           | Central EGA connections loaded

However, the same setup works when we downgrade RabbitMQ to version 3.6. The current RabbitMQ version is 3.7, so perhaps there has been a change in the configuration options (or their format).
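A quick way to confirm whether the definitions actually landed, rather than trusting "Up" in docker-compose ps, is to ask the management API for the shovel parameters. A sketch; the port, credentials and the `/api/parameters/shovel` endpoint are assumptions based on the default management plugin setup:

```shell
# Sketch: count the dynamic shovels known to the broker. Zero shovels means
# the CEGA definitions import did not take effect, whatever the container
# status says.
check_shovels() {
  local count
  count=$(curl -s -u guest:guest \
          "http://127.0.0.1:15672/api/parameters/shovel" \
          | grep -o '"name"' | wc -l)
  count=$((count))   # normalize any whitespace wc may emit
  if [ "$count" -eq 0 ]; then
    echo 'No shovels defined: the CEGA definitions were not loaded' >&2
    return 1
  fi
  echo "$count shovel(s) defined"
}
```

Running this against a broker in the failing state should return 1, which makes the bug detectable from the host without grepping logs.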

juhtornr commented 6 years ago

@silverdaz hard-coded the RabbitMQ version to 3.6 and everything works like a charm. I propose that we stick to that and upgrade when we have a reason to.
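As a sketch, the pin could look like this in the mq image's Dockerfile; the exact base image and tag are assumptions about how the image is built, not the actual change:

```dockerfile
# Pin the broker to the 3.6 line instead of floating on the latest 3.7.x.
# (Assumed base image; match whatever LocalEGA's mq Dockerfile actually uses.)
FROM rabbitmq:3.6-management
```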

dtitov commented 6 years ago

Great!

silverdaz commented 6 years ago

Resolved by PR #229