gdsmith opened this issue 5 years ago
Here's what I ended up needing to do as a workaround:
data "template_file" "mq-cli" {
  count    = var.broker_count
  template = <<EOF
aws mq update-broker \
  --broker-id ${element(aws_mq_broker.amq-cluster-broker.*.id, count.index)} \
  --configuration Id=${element(aws_mq_configuration.amq-cluster-config.*.id, count.index)},Revision=${element(aws_mq_configuration.amq-cluster-config.*.latest_revision, count.index) ~}
EOF
}

resource "null_resource" "associate-configuration" {
  triggers = {
    cluster_config_ids = join(
      ",",
      aws_mq_configuration.amq-cluster-config.*.latest_revision,
    )
  }

  provisioner "local-exec" {
    command = join(" ; ", data.template_file.mq-cli.*.rendered)
  }
}
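On Terraform 0.12+ the deprecated template_file data source isn't needed; the same local-exec trick can be sketched with an inline for expression. This is a sketch against the resource names used above, not a tested drop-in:

```hcl
resource "null_resource" "associate-configuration" {
  triggers = {
    cluster_config_ids = join(",", aws_mq_configuration.amq-cluster-config.*.latest_revision)
  }

  provisioner "local-exec" {
    # Build one update-broker command per broker and run them in sequence.
    command = join(" ; ", [
      for i, broker in aws_mq_broker.amq-cluster-broker :
      "aws mq update-broker --broker-id ${broker.id} --configuration Id=${aws_mq_configuration.amq-cluster-config[i].id},Revision=${aws_mq_configuration.amq-cluster-config[i].latest_revision}"
    ])
  }
}
```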
I've found a couple of workarounds for this that I felt I'd share.
The first is to create DNS CNAMEs for the brokers, with a map of broker names/indices to those brokers; the config can then reference the DNS alias rather than the actual broker endpoint, something like:
<networkConnector name="connector_mq2_to_mq1" uri="static:(ssl://mq1.somedomain:61617?socket.verifyHostName=false)" userName="cluster"/>
Note that you'll need verifyHostName turned off, otherwise the mismatched certificate will cause trouble.
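The DNS side of this workaround could look something like the following sketch, assuming a Route 53 hosted zone; the variable names (var.somedomain_zone_id, var.broker_count) and the mqN.somedomain naming are made up for illustration:

```hcl
resource "aws_route53_record" "mq_alias" {
  count   = var.broker_count
  zone_id = var.somedomain_zone_id
  name    = "mq${count.index + 1}.somedomain"
  type    = "CNAME"
  ttl     = 300

  # Broker endpoints look like "ssl://b-xxxx.mq.region.amazonaws.com:61617";
  # strip the scheme and port to get a bare hostname for the CNAME target.
  records = [split(":", trimprefix(aws_mq_broker.amq-cluster-broker[count.index].instances[0].endpoints[0], "ssl://"))[0]]
}
```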
If you don't mind a two-apply setup, you can bounce the replication endpoints off SSM like this:
resource "aws_mq_broker" "mq" {
  count       = 2
  broker_name = "mq${count.index}"
  ...

  configuration {
    # only bound on the second apply
    id       = aws_ssm_parameter.mq-replication-endpoint[count.index].value != "not-set" ? aws_mq_configuration.mq[count.index].id : null
    revision = aws_ssm_parameter.mq-replication-endpoint[count.index].value != "not-set" ? aws_mq_configuration.mq[count.index].latest_revision : null
  }
}
resource "aws_mq_configuration" "mq" {
  count = 2
  ...
  data = templatefile("someconfig.xml.tpl", {
    broker_endpoints = { for k, r in aws_ssm_parameter.mq-replication-endpoint : k => r.value }
  })
}
resource "aws_ssm_parameter" "mq-replication-endpoint" {
  count = 2
  name  = "/mq/mq${count.index}/replication_endpoint"
  value = "not-set"
  type  = "String"

  lifecycle {
    ignore_changes = [value]
  }
}
# References the same SSM params as above, but is created after the brokers exist.
resource "aws_ssm_parameter" "mq-replication-endpoint-setter" {
  for_each  = { for b in aws_mq_broker.mq : b.broker_name => b }
  name      = "/mq/${each.value.broker_name}/replication_endpoint"
  value     = each.value.instances[0].endpoints[0]
  type      = "String"
  overwrite = true
}
This is super ugly and runs against Terraform's preference for determinism, but it solves the problem in much the same way CloudFormation does, although it requires two applies per broker added.
Just thought I'd leave the workaround here in case it helps anyone else who doesn't want to shell out.
It appears that the AWS Go SDK doesn't support a configuration association; it looks like it's CloudFormation-only, which sucks. Seeing as this issue was created 2 years ago, something tells me this isn't going to be implemented soon. In fact, it doesn't look like it's available in the API at all, which means the SDK can't even support it.
I think it could be synthetically supported by creating a broker with no configuration specified, then having Terraform enforce a singleton config-association resource that updates the broker to add the config pointer. It's possible to leave the broker's configuration unspecified, and it'll just use the default config.
Essentially that's what's happening with CloudFormation anyway: the broker can't come up without some config, so all it's doing is bringing the broker up with the default config, then internally associating the desired config and rebooting the broker again.
I don't know enough about TF's internals to know whether it can understand that a single underlying AWS resource is composed of two TF resources that need to be merged when diffing, but this feels similar to how security group rules work...
To set up multiple instances in a mesh configuration you need to reference the other instances using connection information in the config. For us, the most idiomatic way to work around this limitation was to invoke the CloudFormation AWS::AmazonMQ::Configuration and AWS::AmazonMQ::ConfigurationAssociation resources from Terraform, so that the configs are associated with each node after it becomes available. For a 3-node HA mesh:
resource "aws_mq_broker" "signal_emitter_broker_mesh" {
  for_each = {
    "broker1" : var.backend_subnet_a.id
    "broker2" : var.backend_subnet_b.id
    "broker3" : var.backend_subnet_c.id
  }

  broker_name         = "signal_emitter_mesh-${each.key}"
  publicly_accessible = false
  engine_type         = "ActiveMQ"
  engine_version      = "5.17.6"
  deployment_mode     = "SINGLE_INSTANCE"
  host_instance_type  = "mq.t3.micro"
  apply_immediately   = true

  logs {
    general = true
    audit   = false
  }

  user {
    username       = module.common_env.secrets["aws_mq_broker.username_console"]
    password       = module.common_env.secrets["aws_mq_broker.password_console"]
    console_access = true
  }

  user {
    username       = module.common_env.secrets["aws_mq_broker.username_replication"]
    password       = module.common_env.secrets["aws_mq_broker.password_replication"]
    console_access = false
  }

  subnet_ids      = [each.value]
  security_groups = [aws_security_group.messaging_broker_mesh.id]
}
// AWS can return nodes in any ordering
locals {
  mq_mesh_replication_topology = {
    "broker1" : toset(["broker2", "broker3"]),
    "broker2" : toset(["broker1", "broker3"]),
    "broker3" : toset(["broker1", "broker2"]),
  }

  mq_mesh_replication_endpoints = {
    for node in aws_mq_broker.signal_emitter_broker_mesh :
    split("-", node.broker_name)[1] => {
      id : node.id,
      endpoint : node.instances[0].endpoints[0]
    }
  }
}
resource "aws_cloudformation_stack" "node_config" {
  depends_on = [aws_mq_broker.signal_emitter_broker_mesh]

  for_each = {
    for broker_name, neighbors in local.mq_mesh_replication_topology :
    broker_name => [
      for n in neighbors :
      local.mq_mesh_replication_endpoints[n].endpoint
    ]
  }

  name = "mesh-configuration-stack-${each.key}"

  parameters = {
    EngineVersion   = "5.17.6"
    ReplicationUser = module.common_env.secrets["aws_mq_broker.username_replication"]
    NeighborNodes   = join(",", each.value)
    BrokerId        = local.mq_mesh_replication_endpoints[each.key].id
    ConfigName      = "ha-mesh-${each.key}"
  }

  template_body = file("${path.module}/node_config-cf.yaml")
}
There's another gotcha here: mesh nodes are created asynchronously, and their state info comes back in no particular order, so we must make sure the replication topology is fixed explicitly (hence the static local.mq_mesh_replication_topology map).
The CF template:
---
Description: "Create an Amazon MQ for an ActiveMQ HA mesh configuration"
Parameters:
  EngineVersion:
    Type: String
    MaxLength: 12
  ReplicationUser:
    Type: String
    MaxLength: 32
  NeighborNodes:
    Type: CommaDelimitedList
  BrokerId:
    Type: String
    MaxLength: 255
  ConfigName:
    Type: String
    MaxLength: 32
Resources:
  BrokerConfig:
    Type: "AWS::AmazonMQ::Configuration"
    Properties:
      Data:
        "Fn::Base64": !Sub
          - |
            <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
            <broker schedulePeriodForDestinationPurge="10000" xmlns="http://activemq.apache.org/schema/core">
              <destinationInterceptors>
              </destinationInterceptors>
              <destinationPolicy>
                <policyMap>
                  <policyEntries>
                    <policyEntry gcInactiveDestinations="true" inactiveTimoutBeforeGC="600000" topic=">">
                      <pendingMessageLimitStrategy>
                        <constantPendingMessageLimitStrategy limit="1000"/>
                      </pendingMessageLimitStrategy>
                    </policyEntry>
                    <policyEntry gcInactiveDestinations="true" inactiveTimoutBeforeGC="600000" queue=">"/>
                  </policyEntries>
                </policyMap>
              </destinationPolicy>
              <plugins>
              </plugins>
              <networkConnectors>
                <networkConnector conduitSubscriptions="false" consumerTTL="1" messageTTL="-1" name="QueueConnectorConnectingToNeighbor1" uri="static:(${Neighbor1})" userName="${ReplicationUser}">
                  <excludedDestinations>
                    <topic physicalName=">"/>
                  </excludedDestinations>
                </networkConnector>
                <networkConnector conduitSubscriptions="true" consumerTTL="1" messageTTL="-1" name="TopicConnectorConnectingToNeighbor1" uri="static:(${Neighbor1})" userName="${ReplicationUser}">
                  <excludedDestinations>
                    <queue physicalName=">"/>
                  </excludedDestinations>
                </networkConnector>
                <networkConnector conduitSubscriptions="false" consumerTTL="1" messageTTL="-1" name="QueueConnectorConnectingToNeighbor2" uri="static:(${Neighbor2})" userName="${ReplicationUser}">
                  <excludedDestinations>
                    <topic physicalName=">"/>
                  </excludedDestinations>
                </networkConnector>
                <networkConnector conduitSubscriptions="true" consumerTTL="1" messageTTL="-1" name="TopicConnectorConnectingToNeighbor2" uri="static:(${Neighbor2})" userName="${ReplicationUser}">
                  <excludedDestinations>
                    <queue physicalName=">"/>
                  </excludedDestinations>
                </networkConnector>
              </networkConnectors>
            </broker>
          - ReplicationUser:
              Ref: ReplicationUser
            Neighbor1: !Select [0, !Ref NeighborNodes]
            Neighbor2: !Select [1, !Ref NeighborNodes]
      EngineType: ACTIVEMQ
      EngineVersion: { Ref: EngineVersion }
      Name: { Ref: ConfigName }
  MeshNodeConfigurationAssociation:
    Type: AWS::AmazonMQ::ConfigurationAssociation
    Properties:
      Broker: { Ref: BrokerId }
      Configuration:
        Id: { Ref: BrokerConfig }
        Revision: { "Fn::GetAtt": [BrokerConfig, Revision] }
Perhaps there's a simpler way to achieve this? Anyway, hope this helps.
Description
It seems impossible to create any AMQ configuration that requires broker instance information in the configuration. For example, to set up multiple instances in a mesh configuration you need to reference the other instances using connection information in the config.
The only place to set the configuration is on the broker itself, and the broker's connection details can't be referenced until after it's been created, so currently this leads to a circular dependency.
CloudFormation seems to solve this with an AWS::AmazonMQ::ConfigurationAssociation entry, which associates a config with a broker and thus lets you: 1) create broker(s), 2) create config(s), 3) apply config(s) to broker(s). That breaks the circular dependency. The only other workarounds I can think of are to either: 1) manually run terraform in stages, altering the config as you go, or 2) break out the aws cli to update the config after creation.
New or Affected Resource(s)
Potential Terraform Configuration
Example AMQ config, where the primary and backup instance IDs would come from the output of the aws_mq_broker resource:
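A hypothetical shape for such a resource, if the provider (and the underlying API) ever supported a standalone association step; the resource type and attribute names below are invented for illustration and do not exist in the AWS provider:

```hcl
# Hypothetical resource type; not implemented in the AWS provider today.
resource "aws_mq_configuration_association" "example" {
  broker_id = aws_mq_broker.example.id

  configuration {
    id       = aws_mq_configuration.example.id
    revision = aws_mq_configuration.example.latest_revision
  }
}
```

Because the association would be a separate resource, the config could reference broker endpoints without creating a cycle, mirroring the CloudFormation flow described above.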