ansible-middleware / amq

A collection to manage AMQ brokers
Apache License 2.0

There are two static connectors created in here and only one of them is valid (node0), the other one is not configured. #62

Closed RobertFloor closed 10 months ago

RobertFloor commented 1 year ago
SUMMARY

The Ansible code generates an incorrect <static-connectors> list. Two static connectors are created, but only one of them (node0) is valid; the other is not configured. In a normal active/passive cluster, only one node (the other broker) should be listed in the <static-connectors> section.

    <cluster-connections>
      <cluster-connection name="my-cluster">
        <connector-ref>artemis</connector-ref>
        <message-load-balancing>ON_DEMAND</message-load-balancing>
        <max-hops>1</max-hops>
        <static-connectors>
          <connector-ref>node0</connector-ref>
          <connector-ref>node1</connector-ref>
        </static-connectors>
      </cluster-connection>
    </cluster-connections>
    <ha-policy>
ISSUE TYPE
ANSIBLE VERSION
ansible [core 2.14.3]
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/robert/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/linuxbrew/.linuxbrew/Cellar/ansible/7.3.0/libexec/lib/python3.11/site-packages/ansible
  ansible collection location = /home/robert/.ansible/collections:/usr/share/ansible/collections
  executable location = /home/linuxbrew/.linuxbrew/bin/ansible
  python version = 3.11.2 (main, Feb  7 2023, 13:52:42) [GCC 11.3.0] (/home/linuxbrew/.linuxbrew/Cellar/ansible/7.3.0/libexec/bin/python3.11)
  jinja version = 3.1.2
  libyaml = True
COLLECTION VERSION
# /home/linuxbrew/.linuxbrew/Cellar/ansible/7.3.0/libexec/lib/python3.11/site-packages/ansible_collections
Collection                    Version
----------------------------- -------
amazon.aws                    5.2.0
ansible.netcommon             4.1.0
ansible.posix                 1.5.1
ansible.utils                 2.9.0
ansible.windows               1.13.0
arista.eos                    6.0.0
awx.awx                       21.12.0
azure.azcollection            1.14.0
check_point.mgmt              4.0.0
chocolatey.chocolatey         1.4.0
cisco.aci                     2.4.0
cisco.asa                     4.0.0
cisco.dnac                    6.6.3
cisco.intersight              1.0.23
cisco.ios                     4.3.1
cisco.iosxr                   4.1.0
cisco.ise                     2.5.12
cisco.meraki                  2.15.1
cisco.mso                     2.2.1
cisco.nso                     1.0.3
cisco.nxos                    4.1.0
cisco.ucs                     1.8.0
cloud.common                  2.1.2
cloudscale_ch.cloud           2.2.4
community.aws                 5.2.0
community.azure               2.0.0
community.ciscosmb            1.0.5
community.crypto              2.11.0
community.digitalocean        1.23.0
community.dns                 2.5.1
community.docker              3.4.2
community.fortios             1.0.0
community.general             6.4.0
community.google              1.0.0
community.grafana             1.5.4
community.hashi_vault         4.1.0
community.hrobot              1.7.0
community.libvirt             1.2.0
community.mongodb             1.5.1
community.mysql               3.6.0
community.network             5.0.0
community.okd                 2.3.0
community.postgresql          2.3.2
community.proxysql            1.5.1
community.rabbitmq            1.2.3
community.routeros            2.7.0
community.sap                 1.0.0
community.sap_libs            1.4.0
community.skydive             1.0.0
community.sops                1.6.1
community.vmware              3.4.0
community.windows             1.12.0
community.zabbix              1.9.2
containers.podman             1.10.1
cyberark.conjur               1.2.0
cyberark.pas                  1.0.17
dellemc.enterprise_sonic      2.0.0
dellemc.openmanage            6.3.0
dellemc.os10                  1.1.1
dellemc.os6                   1.0.7
dellemc.os9                   1.0.4
dellemc.powerflex             1.5.0
dellemc.unity                 1.5.0
f5networks.f5_modules         1.22.1
fortinet.fortimanager         2.1.7
fortinet.fortios              2.2.2
frr.frr                       2.0.0
gluster.gluster               1.0.2
google.cloud                  1.1.2
grafana.grafana               1.1.1
hetzner.hcloud                1.10.0
hpe.nimble                    1.1.4
ibm.qradar                    2.1.0
ibm.spectrum_virtualize       1.11.0
infinidat.infinibox           1.3.12
infoblox.nios_modules         1.4.1
inspur.ispim                  1.3.0
inspur.sm                     2.3.0
junipernetworks.junos         4.1.0
kubernetes.core               2.4.0
lowlydba.sqlserver            1.3.1
mellanox.onyx                 1.0.0
netapp.aws                    21.7.0
netapp.azure                  21.10.0
netapp.cloudmanager           21.22.0
netapp.elementsw              21.7.0
netapp.ontap                  22.3.0
netapp.storagegrid            21.11.1
netapp.um_info                21.8.0
netapp_eseries.santricity     1.4.0
netbox.netbox                 3.11.0
ngine_io.cloudstack           2.3.0
ngine_io.exoscale             1.0.0
ngine_io.vultr                1.1.3
openstack.cloud               1.10.0
openvswitch.openvswitch       2.1.0
ovirt.ovirt                   2.4.1
purestorage.flasharray        1.17.0
purestorage.flashblade        1.10.0
purestorage.fusion            1.3.0
sensu.sensu_go                1.13.2
splunk.es                     2.1.0
t_systems_mms.icinga_director 1.32.0
theforeman.foreman            3.9.0
vmware.vmware_rest            2.2.0
vultr.cloud                   1.7.0
vyos.vyos                     4.0.0
wti.remote                    1.0.4

# /home/robert/.ansible/collections/ansible_collections
Collection                                Version
----------------------------------------- -------
ansible.posix                             1.4.0
community.general                         6.0.1
middleware_automation.amq                 1.1.1
middleware_automation.common              1.0.2
middleware_automation.redhat_csp_download 1.2.2
STEPS TO REPRODUCE

Run the playbook like this:

    ansible-playbook -e "activemq_version=7.10.2" -e "activemq_sa_password=redhat" \
      -i hostfiles/AMQdev.yml playbooks/mount_and_deploy.yml -vv

With the default playbook and this host file:

all:
  children:
    amq:
      children:
        ha1:
          hosts: amq1
          vars:
            artemis: "amq1"
            node0: "amq2"
        ha2:
          hosts: amq2
          vars:
            artemis: "amq2"
            node0: "amq1"
      vars:
        activemq_configure_firewalld: True
        activemq_prometheus_enabled: False
        activemq_cors_strict_checking: False
        activemq_ha_enabled: true
        activemq_shared_storage: true
        activemq_shared_storage_path: /data/amq-broker/shared
        ansible_user: ansible
        activemq_offline_install: True
        activemq_version: 7.10.2
        activemq_dest: /opt/amq
        activemq_archive: "amq-broker-{{ activemq_version }}-bin.zip"
        activemq_installdir: "{{ activemq_dest }}/amq-broker-{{ activemq_version }}"
        activemq_shared_storage_mounted: true
        activemq_port: 61616
        nfs_mount_source: "192.168.2.221:/"
        activemq_sa_password: "asb-sa-test-password"
        activemq_address_settings:
        - match: "#"
          parameters:
            dead_letter_address: DLQ
            expiry_address: ExpiryQueue
            redelivery_delay: 2000
            max_size_bytes: -1
            message_counter_history_day_limit: 10
            max_delivery_attempts: -1
            max_redelivery_delay: 300000
            redelivery_delay_multiplier: 2
            address_full_policy: PAGE
            auto_create_queues: true
            auto_create_addresses: true
            auto_create_jms_queues: true
            auto_create_jms_topics: true 
        activemq_users:
        - user: "{{ activemq_instance_username }}"
          password: "{{ activemq_instance_password }}"
          roles: [ amq ]
        - user: "asb-sa"
          password: "{{ activemq_sa_password }}"
          roles: [ amq ]
        activemq_roles:
        - name: amq
          match: '#'
          permissions: [ createDurableQueue, deleteDurableQueue, createAddress, deleteAddress, consume, browse, send, manage ]   
        activemq_acceptors:
          - name: amqp
            bind_address: "0.0.0.0"
            bind_port: "{{ activemq_port }}"
            parameters:
              tcpSendBufferSize: 1048576
              tcpReceiveBufferSize: 1048576
              protocols: CORE,AMQP,OPENWIRE
              useEpoll: true
              verifyHost: False
        activemq_connectors:
        - name: artemis
          address: "{{ artemis }}"
          port: "{{ activemq_port }}"
          parameters:
            tcpSendBufferSize: 1048576
            tcpReceiveBufferSize: 1048576
            protocols: CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE
            useEpoll: true
            amqpMinLargeMessageSize: 102400
            amqpCredits: 1000
            amqpLowCredits: 300
            amqpDuplicateDetection: true
            supportAdvisory: False
            suppressInternalManagementObjects: False

        - name: node0
          address: "{{ node0 }}"
          port: "{{ activemq_port }}"
          parameters:
            tcpSendBufferSize: 1048576
            tcpReceiveBufferSize: 1048576
            protocols: CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE
            useEpoll: true
            amqpMinLargeMessageSize: 102400
            amqpCredits: 1000
            amqpLowCredits: 300
            amqpDuplicateDetection: true
            supportAdvisory: False
            suppressInternalManagementObjects: False
EXPECTED RESULTS

I would like to have only node0 in the <static-connectors> section of broker.xml.
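For an active/passive pair, the expected section would look something like the following sketch, based on the inventory above, where node0 points at the other broker:

```xml
<cluster-connections>
  <cluster-connection name="my-cluster">
    <connector-ref>artemis</connector-ref>
    <message-load-balancing>ON_DEMAND</message-load-balancing>
    <max-hops>1</max-hops>
    <static-connectors>
      <!-- only the other broker of the pair is listed -->
      <connector-ref>node0</connector-ref>
    </static-connectors>
  </cluster-connection>
</cluster-connections>
```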

ACTUAL RESULTS

These two connectors appeared in broker.xml:

    <cluster-connections>
      <cluster-connection name="my-cluster">
        <connector-ref>artemis</connector-ref>
        <message-load-balancing>ON_DEMAND</message-load-balancing>
        <max-hops>1</max-hops>
        <static-connectors>
          <connector-ref>node0</connector-ref>
          <connector-ref>node1</connector-ref>
        </static-connectors>
      </cluster-connection>
    </cluster-connections>
    <ha-policy>

This results in the following errors in the log:

2023-03-24 08:42:42,787 INFO  [org.apache.activemq.artemis.core.server] AMQ221006: Waiting to obtain live lock
2023-03-24 08:42:43,047 INFO  [org.apache.activemq.artemis.core.server] AMQ221012: Using AIO Journal
2023-03-24 08:42:43,272 INFO  [org.apache.activemq.artemis.core.server] AMQ221057: Global Max Size is being adjusted to 1/2 of the JVM max size (-Xmx). being defined as 1,073,741,824
2023-03-24 08:42:43,293 WARN  [org.apache.activemq.artemis.core.server] AMQ222226: Connection configuration is null for connectorName node1
2023-03-24 08:42:43,481 INFO  [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-server]. Adding protocol support for: CORE
2023-03-24 08:42:43,482 INFO  [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-amqp-protocol]. Adding protocol support for: AMQP
2023-03-24 08:42:43,483 INFO  [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-hornetq-protocol]. Adding protocol support for: HORNETQ
2023-03-24 08:42:43,484 INFO  [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-mqtt-protocol]. Adding protocol support for: MQTT
2023-03-24 08:42:43,485 INFO  [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-openwire-protocol]. Adding protocol support for: OPENWIRE
2023-03-24 08:42:43,485 INFO  [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-stomp-protocol]. Adding protocol support for: STOMP
2023-03-24 08:42:43,947 WARN  [org.apache.activemq.artemis.core.server] AMQ222226: Connection configuration is null for connectorName node1
2023-03-24 08:42:43,952 ERROR [org.apache.activemq.artemis.core.server] AMQ224087: Error announcing backup: backupServerLocator is null. org.apache.activemq.artemis.core.server.cluster.BackupManager$BackupConnector$1@40583b01
2023-03-24 08:42:44,050 INFO  [org.apache.activemq.artemis.core.server] AMQ221034: Waiting indefinitely to obtain live lock
[ansible@amq1 log]$
guidograzioli commented 1 year ago

Hey, thanks for reporting; I need to check the output of artemis create because I might be mistaken, but I believe it generates two static connectors when passed two nodes. Can you please compare your config with this test, which deploys essentially the same configuration? https://github.com/ansible-middleware/amq/blob/main/molecule/static_cluster/converge.yml

The only difference is that it defines parameters for both connectors (each being 'static' with its own name, rather than reusing one connector name to mean 'the other node').

RobertFloor commented 1 year ago

I attached the full log file here: amq-log.txt

The specific create part:

TASK [middleware_automation.amq.activemq : Create instance amq-broker of activemq] ***********************************************************************************************************************
task path: /home/robert/.ansible/collections/ansible_collections/middleware_automation/amq/roles/activemq/tasks/systemd.yml:47
[WARNING]: Module remote_tmp /opt/amq/amq-broker-7.10.2/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the
remote_tmp dir with the correct permissions manually
changed: [amq2] => {"changed": true, "cmd": ["/opt/amq/amq-broker-7.10.2/bin/artemis", "create", "/opt/amq/amq-broker", "--name", "amq-broker", "--clustered", "--cluster-user", "amq-cluster-user", "--cluster-password", "amq-cluster-pass", "--max-hops", "1", "--message-load-balancing", "ON_DEMAND", "--failover-on-shutdown", "--staticCluster", "tcp://10.0.2.15:61616,tcp://10.0.2.15:61616", "--require-login", "--user", "amq-broker", "--password", "amq-broker", "--host", "localhost", "--http-host", "0.0.0.0", "--no-autocreate", "--queues", "queue.in,queue.out", "--shared-store", "--data", "/data/amq-broker/shared"], "delta": "0:00:15.491361", "end": "2023-03-27 07:00:08.859388", "msg": "", "rc": 0, "start": "2023-03-27 06:59:53.368027", "stderr": "", "stderr_lines": [], "stdout": "Creating ActiveMQ Artemis instance at: /opt/amq/amq-broker\n\nAuto tuning journal ...\ndone! Your system can make 0.22 writes per millisecond, your journal-buffer-timeout will be 4480000\n\nYou can now start the broker by executing:  \n\n   \"/opt/amq/amq-broker/bin/artemis\" run\n\nOr you can run the broker in the background using:\n\n   \"/opt/amq/amq-broker/bin/artemis-service\" start", "stdout_lines": ["Creating ActiveMQ Artemis instance at: /opt/amq/amq-broker", "", "Auto tuning journal ...", "done! Your system can make 0.22 writes per millisecond, your journal-buffer-timeout will be 4480000", "", "You can now start the broker by executing:  ", "", "   \"/opt/amq/amq-broker/bin/artemis\" run", "", "Or you can run the broker in the background using:", "", "   \"/opt/amq/amq-broker/bin/artemis-service\" start"]}
changed: [amq1] => {"changed": true, "cmd": ["/opt/amq/amq-broker-7.10.2/bin/artemis", "create", "/opt/amq/amq-broker", "--name", "amq-broker", "--clustered", "--cluster-user", "amq-cluster-user", "--cluster-password", "amq-cluster-pass", "--max-hops", "1", "--message-load-balancing", "ON_DEMAND", "--failover-on-shutdown", "--staticCluster", "tcp://10.0.2.15:61616,tcp://10.0.2.15:61616", "--require-login", "--user", "amq-broker", "--password", "amq-broker", "--host", "localhost", "--http-host", "0.0.0.0", "--no-autocreate", "--queues", "queue.in,queue.out", "--shared-store", "--data", "/data/amq-broker/shared"], "delta": "0:00:15.542293", "end": "2023-03-27 07:00:08.909747", "msg": "", "rc": 0, "start": "2023-03-27 06:59:53.367454", "stderr": "", "stderr_lines": [], "stdout": "Creating ActiveMQ Artemis instance at: /opt/amq/amq-broker\n\nAuto tuning journal ...\ndone! Your system can make 0.17 writes per millisecond, your journal-buffer-timeout will be 5744000\n\nYou can now start the broker by executing:  \n\n   \"/opt/amq/amq-broker/bin/artemis\" run\n\nOr you can run the broker in the background using:\n\n   \"/opt/amq/amq-broker/bin/artemis-service\" start", "stdout_lines": ["Creating ActiveMQ Artemis instance at: /opt/amq/amq-broker", "", "Auto tuning journal ...", "done! Your system can make 0.17 writes per millisecond, your journal-buffer-timeout will be 5744000", "", "You can now start the broker by executing:  ", "", "   \"/opt/amq/amq-broker/bin/artemis\" run", "", "Or you can run the broker in the background using:", "", "   \"/opt/amq/amq-broker/bin/artemis-service\" start"]}

The problem seems to be this part:

"--staticCluster", "tcp://10.0.2.15:61616,tcp://10.0.2.15:61616"

guidograzioli commented 1 year ago
TASK [middleware_automation.amq.activemq : Create broker cluster node members] ***************************************************************************************************************************
task path: /home/robert/.ansible/collections/ansible_collections/middleware_automation/amq/roles/activemq/tasks/configure.yml:2
ok: [amq1] => (item=amq1) => {"ansible_facts": {"activemq_cluster_nodes": [{"address": "amq1", "inventory_host": "amq1", "name": "amq-broker", "value": "tcp://10.0.2.15:61616"}]}, "ansible_loop_var": "item", "changed": false, "item": "amq1"}
ok: [amq2] => (item=amq1) => {"ansible_facts": {"activemq_cluster_nodes": [{"address": "amq1", "inventory_host": "amq1", "name": "amq-broker", "value": "tcp://10.0.2.15:61616"}]}, "ansible_loop_var": "item", "changed": false, "item": "amq1"}
ok: [amq1] => (item=amq2) => {"ansible_facts": {"activemq_cluster_nodes": [{"address": "amq1", "inventory_host": "amq1", "name": "amq-broker", "value": "tcp://10.0.2.15:61616"}, {"address": "amq2", "inventory_host": "amq2", "name": "amq-broker", "value": "tcp://10.0.2.15:61616"}]}, "ansible_loop_var": "item", "changed": false, "item": "amq2"}
ok: [amq2] => (item=amq2) => {"ansible_facts": {"activemq_cluster_nodes": [{"address": "amq1", "inventory_host": "amq1", "name": "amq-broker", "value": "tcp://10.0.2.15:61616"}, {"address": "amq2", "inventory_host": "amq2", "name": "amq-broker", "value": "tcp://10.0.2.15:61616"}]}, "ansible_loop_var": "item", "changed": false, "item": "amq2"}

It seems from the logs that both nodes' default IP address (as found by the ansible_default_ipv4.address fact) resolves to 10.0.2.15; do you have multiple NICs, with the nodes supposed to communicate on a non-default one?
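One quick way to confirm which address each node would advertise is a debug task over the gathered facts. This is only a sketch; the interface name enp0s8 is taken from the route output later in this thread and would differ per environment:

```yaml
- name: Compare default vs. per-interface IPv4 address on each broker
  hosts: amq
  gather_facts: true
  tasks:
    - ansible.builtin.debug:
        msg: >-
          default={{ ansible_default_ipv4.address }}
          enp0s8={{ ansible_enp0s8.ipv4.address | default('n/a') }}
```

If both hosts print the same default address (as with 10.0.2.15 here), the generated --staticCluster list will contain duplicates.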

RobertFloor commented 1 year ago

At the moment we are testing from a local machine (as the control host) towards two VMs. The VMs run on another machine on the same LAN, using Vagrant. We hardcoded the IPs in our hosts file:

192.168.2.211 amq1
192.168.2.212 amq2
RobertFloor commented 1 year ago

You are right, the cluster network is not on the default interface:

[ansible@amq1 ~]$ route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         _gateway        0.0.0.0         UG    100    0        0 enp0s3
10.0.2.0        0.0.0.0         255.255.255.0   U     100    0        0 enp0s3
192.168.2.0     0.0.0.0         255.255.255.0   U     101    0        0 enp0s8
RobertFloor commented 1 year ago

So this works for us, but I am not sure it is applicable to all users. In many cloud environments the IP you would like to use is not on the default interface:

vars:
  iface: enp0s8

- name: Create broker cluster node members
  ansible.builtin.set_fact:
    activemq_cluster_nodes: >
      {{ activemq_cluster_nodes | default([]) + [
           {
             "name": activemq.instance_name,
             "address": item,
             "inventory_host": item,
             "value": "tcp://" + vars['ansible_'+iface]['ipv4']['address']  + ":" + (((activemq_port | int + activemq_ports_offset | int) | abs) | string)
           }
         ] }}
  loop: "{{ ansible_play_batch }}"
guidograzioli commented 1 year ago

That's true; I'll need to add a variable or two to allow more customisation of the static address list.
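A later inventory in this thread already sets such a variable (activemq_cluster_iface: ens192); a minimal sketch of how it would be used, with the variable semantics and interface name treated as assumptions:

```yaml
all:
  children:
    amq:
      vars:
        # NIC whose IPv4 address is used for the static cluster address list
        activemq_cluster_iface: enp0s8
```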

RobertFloor commented 1 year ago

Unfortunately this is not fixed for us; it creates two problems.

The most important problem is that we get:

    <cluster-connections>
      <cluster-connection name="activemq">
        <connector-ref/>
        <message-load-balancing>ON_DEMAND</message-load-balancing>
        <max-hops>1</max-hops>
        <static-connectors>
          <connector-ref>xxxx</connector-ref>
          <connector-ref>yyyyl</connector-ref>
        </static-connectors>
      </cluster-connection>
    </cluster-connections>

This is invalid config and prevents systemd from starting the broker.

 ./artemis run
           __  __  ____    ____            _
     /\   |  \/  |/ __ \  |  _ \          | |
    /  \  | \  / | |  | | | |_) |_ __ ___ | | _____ _ __
   / /\ \ | |\/| | |  | | |  _ <| '__/ _ \| |/ / _ \ '__|
  / ____ \| |  | | |__| | | |_) | | | (_) |   <  __/ |
 /_/    \_\_|  |_|\___\_\ |____/|_|  \___/|_|\_\___|_|

 Red Hat AMQ Broker 7.11.0.GA

java.lang.IllegalArgumentException: AMQ229038: connector-ref must neither be null nor empty
        at org.apache.activemq.artemis.core.config.impl.Validators$2.validate(Validators.java:55)
        at org.apache.activemq.artemis.utils.XMLConfigurationUtil.getString(XMLConfigurationUtil.java:64)
        at org.apache.activemq.artemis.core.deployers.impl.FileConfigurationParser.parseClusterConnectionConfiguration(FileConfigurationParser.java:2177)
        at org.apache.activemq.artemis.core.deployers.impl.FileConfigurationParser.parseMainConfig(FileConfigurationParser.java:623)
        at org.apache.activemq.artemis.core.config.impl.FileConfiguration.parse(FileConfiguration.java:56)
        at org.apache.activemq.artemis.core.config.FileDeploymentManager.readConfiguration(FileDeploymentManager.java:81)
        at org.apache.activemq.artemis.integration.FileBroker.createComponents(FileBroker.java:120)
        at org.apache.activemq.artemis.cli.commands.Run.execute(Run.java:119)
        at org.apache.activemq.artemis.cli.Artemis.internalExecute(Artemis.java:212)
        at org.apache.activemq.artemis.cli.Artemis.execute(Artemis.java:162)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at org.apache.activemq.artemis.boot.Artemis.execute(Artemis.java:144)
        at org.apache.activemq.artemis.boot.Artemis.main(Artemis.java:61)
RobertFloor commented 1 year ago

If I read the documentation here: https://activemq.apache.org/components/artemis/documentation/1.0.0/clusters.html, it should be the broker itself.

RobertFloor commented 1 year ago

Probably due to this code: https://github.com/ansible-middleware/amq/blob/2cab837e62b434f94e934778aa5abad82b8c07cc/roles/activemq/templates/cluster_connections.broker.xml.j2#L2

RobertFloor commented 1 year ago
all:
  children:
    amq:
      children:
        ha1:
          hosts: zzztyyy1001.onead.xxxuuuu.nl
          vars:
            artemis: "zzztyyy1001.onead.xxxuuuu.nl"
            node0:  "zzztyyy4001.onead.xxxuuuu.nl"
        ha2:
          hosts: "zzztyyy4001.onead.xxxuuuu.nl"
          vars:
            artemis: "zzztyyy4001.onead.xxxuuuu.nl"
            node0:  "zzztyyy1001.onead.xxxuuuu.nl"
      vars:
        activemq_cluster_iface: ens192
        ansible_ssh_common_args: '-o StrictHostKeyChecking=no'
        activemq_local_archive_repository: /binaries/
        activemq_configure_firewalld: True
        activemq_prometheus_enabled: True
        amq_broker_enable: True
        activemq_cors_strict_checking: False
        activemq_disable_hornetq_protocol: true
        activemq_disable_mqtt_protocol: true
        activemq_ha_enabled: true
        activemq_shared_storage: true
        activemq_shared_storage_path: /mnt/yyyf-tst-internal
        activemq_journal_type: NIO
        activemq_nio_enabled: true
        ansible_user: cicd
        activemq_offline_install: True
        activemq_java_home: /etc/alternatives/jre_11_openjdk
        activemq_version: 7.11.0
        activemq_dest: /opt/amq
        activemq_archive: "amq-broker-{{ activemq_version }}-bin.zip"
        activemq_installdir: "{{ activemq_dest }}/amq-broker-{{ activemq_version }}"
        activemq_shared_storage_mounted: true
        activemq_port: 61616
        activemq_instance_username: amq-admin
        activemq_enable_audit: true
        activemq_hawtio_role: "amq,g-ap-com-amq-acc-internal"
        activemq_addresses:
          - name: external_queue
            anycast:
              - name: external_queue
        activemq_address_settings:
        - match: "#"
          parameters:
            dead_letter_address: DLQ
            expiry_address: ExpiryQueue
            redelivery_delay: 2000
            max_size_bytes: 104857600
            message_counter_history_day_limit: 10
            max_delivery_attempts: -1
            max_redelivery_delay: 300000
            redelivery_delay_multiplier: 2
            address_full_policy: PAGE
            auto_create_queues: true
            auto_create_addresses: true
            auto_create_jms_queues: true
            auto_create_jms_topics: true
        activemq_users:
        - user: "{{ activemq_instance_username }}"
          password: "{{ activemq_instance_password }}"
          roles: [ amq ]
        - user: "yyy-application-sa"
          password: "{{ activemq_sa_password }}"
          roles: [ all_permissions_non_precreated_queues , external_customer_role ]
        - user: "yyy-testers-sa"
          password: "{{ activemq_testers_password }}"
          roles: [ all_permissions_non_precreated_queues ]
        - user: "external_customer"
          password: "external"
          roles: [ external_customer_role ]
        activemq_roles:
        - name: amq
          match: '#'
          permissions: [ createDurableQueue, deleteDurableQueue, createNonDurableQueue, deleteNonDurableQueue, createAddress, deleteAddress, consume, browse, send, manage ]
        - name: g-ap-com-amq-acc-internal
          match: '#'
          permissions: [ createDurableQueue, deleteDurableQueue, createNonDurableQueue, deleteNonDurableQueue, createAddress, deleteAddress, consume, browse, send, manage ]
        - name: all_permissions_non_precreated_queues
          match: '#'
          permissions: [ createDurableQueue, deleteDurableQueue, createNonDurableQueue, deleteNonDurableQueue, createAddress, deleteAddress, consume, browse, send, manage ]
        - name: external_customer_role
          match: 'external_queue'
          permissions: [ consume, browse, send ]
        activemq_acceptors:
          - name: amqp
            bind_address: "0.0.0.0"
            bind_port: "{{ activemq_port }}"
            parameters:
              tcpSendBufferSize: 1048576
              tcpReceiveBufferSize: 1048576
              protocols: CORE,AMQP,OPENWIRE
              useEpoll: true
              verifyHost: False
        activemq_connectors:
        - name: zzztyyy1001.onead.xxxuuuu.nl
          address: zzztyyy1001.onead.xxxuuuu.nl
          port: "{{ activemq_port }}"
          parameters:
            tcpSendBufferSize: 1048576
            tcpReceiveBufferSize: 1048576
            protocols: CORE,AMQP,OPENWIRE
            useEpoll: true
            amqpMinLargeMessageSize: 102400
            amqpCredits: 1000
            amqpLowCredits: 300
            amqpDuplicateDetection: true
            supportAdvisory: False
            suppressInternalManagementObjects: False
        - name: zzztyyy4001.onead.xxxuuuu.nl
          address: zzztyyy4001.onead.xxxuuuu.nl
          port: "{{ activemq_port }}"
          parameters:
            tcpSendBufferSize: 1048576
            tcpReceiveBufferSize: 1048576
            protocols: CORE,AMQP,OPENWIRE
            useEpoll: true
            amqpMinLargeMessageSize: 102400
            amqpCredits: 1000
            amqpLowCredits: 300
            amqpDuplicateDetection: true
            supportAdvisory: False
            suppressInternalManagementObjects: False

This is our settings file; it was working with Galaxy version 1.3.7 but not with 1.3.9.

guidograzioli commented 1 year ago

Hello; when you define:

        activemq_connectors:
        - name: zzztyyy1001.onead.xxxuuuu.nl
          address: zzztyyy1001.onead.xxxuuuu.nl
          port: "{{ activemq_port }}"
          parameters:
            [..]
        - name: zzztyyy4001.onead.xxxuuuu.nl
          address: zzztyyy4001.onead.xxxuuuu.nl
          port: "{{ activemq_port }}"
          parameters:
            [..]

you are "removing" the default connector defined here: https://github.com/ansible-middleware/amq/blob/main/roles/activemq/defaults/main.yml#L240C1-L244C32

You should add the same item to your activemq_connectors variable, or define your connectors in an alternate variable and redefine it as:

additional_connectors:
        - name: zzztyyy1001.onead.xxxuuuu.nl
          address: zzztyyy1001.onead.xxxuuuu.nl
          port: "{{ activemq_port }}"
          parameters:
            [..]
        - name: zzztyyy4001.onead.xxxuuuu.nl
          address: zzztyyy4001.onead.xxxuuuu.nl
          port: "{{ activemq_port }}"
          parameters:
            [..]

activemq_connectors: "{{ activemq_connectors + additional_connectors }}"

Alternatively, since your connector parameters are all defaults, you can try removing the variable override completely, and see if the connectors are automatically generated correctly.

RobertFloor commented 1 year ago

But do you always need the localhost connector to be the connector-ref in AMQ, or can you select one of the other connectors? If I read this part of the documentation:

connector-ref. This specifies the connector and optional backup connector that will be broadcasted (see [Configuring the Transport](https://activemq.apache.org/components/artemis/documentation/1.0.0/configuring-transports.html) for more information on connectors). The connector to be broadcasted is specified by the connector-name attribute.

It suggests that this name should be recognizable on the network and not be localhost?

guidograzioli commented 1 year ago

But do you always need the localhost connector to be the connector-ref in AMQ, or can you select one of the other connectors? If I read this part of the documentation:

connector-ref. This specifies the connector and optional backup connector that will be broadcasted (see [Configuring the Transport](https://activemq.apache.org/components/artemis/documentation/1.0.0/configuring-transports.html) for more information on connectors). The connector to be broadcasted is specified by the connector-name attribute.

It suggests that this name should be recognizable on the network and not be localhost?

My understanding is that the definition of connector-ref you posted applies in the context of a broadcast-group; while that is not (yet!) supported in the collection, we have a static_connectors list of connector-refs instead, which completely defines the topology. In our case I think the connector-ref could even be implied, but the XSD schema defines it as mandatory and the artemis create output includes it, so we decided to keep it in the collection-generated broker.xml.

RobertFloor commented 1 year ago

Hi, thanks for the response. About this remark:

Alternatively, since your connector parameters are all defaults, you can try removing the variable override completely, and see if the connectors are automatically generated correctly.

We would need to use SSL for the connectors, so we need at least some config for them. How would you do this?
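For reference, an SSL-enabled connector override might look something like this sketch; the keystore path and password are placeholders, and the parameter names are standard Artemis netty transport options rather than anything confirmed in this thread:

```yaml
activemq_connectors:
  - name: artemis
    address: "{{ artemis }}"
    port: "{{ activemq_port }}"
    parameters:
      sslEnabled: true
      trustStorePath: /opt/amq/etc/client.ts   # placeholder path
      trustStorePassword: changeit             # placeholder password
      verifyHost: true
```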

RobertFloor commented 1 year ago

When I try this solution:

msg: 'An unhandled exception occurred while templating ''{{ activemq_connectors + additional_connectors }}''. Error was a <class ''ansible.errors.AnsibleError''>, original message: An unhandled exception occurred while templating ''{{ activemq_connectors + additional_connectors }}''

guidograzioli commented 1 year ago

Hi, thanks for the response. About this remark:

Alternatively, since your connector parameters are all defaults, you can try removing the variable override completely, and see if the connectors are automatically generated correctly.

We would need to use SSL for the connectors, so we need at least some config for them. How would you do this?

They would be merged by name; check the static_cluster molecule example here: https://github.com/ansible-middleware/amq/blob/main/molecule/static_cluster/converge.yml#L44

When I try this solution:

activemq_connectors: "{{ activemq_connectors + additional_connectors }}"
fatal: [host1001.cloud.nl]: FAILED! => 

Well, it was a pseudo-code suggestion; you'd have to run a set_fact before the main role starts, in pre_tasks or something similar.
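A sketch of that approach, assuming additional_connectors is defined in inventory. Using set_fact avoids the recursive templating error because the merge is evaluated once, before the role reads the variable; note that role defaults are not visible in pre_tasks, so the default connector may need to be restated in inventory:

```yaml
- hosts: amq
  pre_tasks:
    - name: Merge connectors into a single list before the role runs
      ansible.builtin.set_fact:
        activemq_connectors: "{{ activemq_connectors | default([]) + additional_connectors }}"
  roles:
    - middleware_automation.amq.activemq
```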