drmanderson opened this issue 5 years ago
Hi @drmanderson, MSK requires at least 1 subnet in 3 different AZs, so all three of your subnets must be in different AZs.
Hi @pasali, thanks for replying. I can confirm that all three subnets chosen are in different AZs.
I have the same issue when trying to run a single broker in a single subnet (az: eu-west-1a).
plan part:
```
+ aws_msk_cluster.cluster
      id:                                               <computed>
      arn:                                              <computed>
      bootstrap_brokers:                                <computed>
      bootstrap_brokers_tls:                            <computed>
      broker_node_group_info.#:                         "1"
      broker_node_group_info.0.az_distribution:         "DEFAULT"
      broker_node_group_info.0.client_subnets.#:        "1"
      broker_node_group_info.0.client_subnets.0:        "subnet-id"
      broker_node_group_info.0.ebs_volume_size:         "1000"
      broker_node_group_info.0.instance_type:           "kafka.m5.large"
      broker_node_group_info.0.security_groups.#:       "1"
      broker_node_group_info.0.security_groups.0:       "sg-id"
      cluster_name:                                     "cluster-test"
      current_version:                                  <computed>
      encryption_info.#:                                "1"
      encryption_info.0.encryption_at_rest_kms_key_arn: "arn:aws:kms:eu-west-1:account-id:key/key-id"
      enhanced_monitoring:                              "DEFAULT"
      kafka_version:                                    "2.2.1"
      number_of_broker_nodes:                           "1"
```
I obfuscated some sensitive data.
The issue disappears after changing the number of brokers from 1 to 3.
Same issue for me when attempting to create a single node cluster
```
# aws_msk_cluster.cicerone will be created
+ resource "aws_msk_cluster" "example" {
    + arn                      = (known after apply)
    + bootstrap_brokers        = (known after apply)
    + bootstrap_brokers_tls    = (known after apply)
    + cluster_name             = "example"
    + current_version          = (known after apply)
    + enhanced_monitoring      = "DEFAULT"
    + id                       = (known after apply)
    + kafka_version            = "1.1.1"
    + number_of_broker_nodes   = 1
    + zookeeper_connect_string = (known after apply)

    + broker_node_group_info {
        + az_distribution = "DEFAULT"
        + client_subnets  = [
            + "subnet-<redacted>",
          ]
        + ebs_volume_size = 10
        + instance_type   = "kafka.m5.large"
        + security_groups = (known after apply)
      }

    + encryption_info {
        + encryption_at_rest_kms_key_arn = (known after apply)

        + encryption_in_transit {
            + client_broker = "TLS_PLAINTEXT"
            + in_cluster    = true
          }
      }
  }
```
It seems to work if I change number_of_broker_nodes to 3 and add two more subnets in different AZs to the subnet list.
```
# aws_msk_cluster.cicerone will be created
+ resource "aws_msk_cluster" "example" {
    + arn                      = (known after apply)
    + bootstrap_brokers        = (known after apply)
    + bootstrap_brokers_tls    = (known after apply)
    + cluster_name             = "example"
    + current_version          = (known after apply)
    + enhanced_monitoring      = "DEFAULT"
    + id                       = (known after apply)
    + kafka_version            = "1.1.1"
    + number_of_broker_nodes   = 3
    + zookeeper_connect_string = (known after apply)

    + broker_node_group_info {
        + az_distribution = "DEFAULT"
        + client_subnets  = [
            + "subnet-<redacted>",
            + "subnet-<redacted>",
            + "subnet-<redacted>",
          ]
        + ebs_volume_size = 10
        + instance_type   = "kafka.m5.large"
        + security_groups = (known after apply)
      }

    + encryption_info {
        + encryption_at_rest_kms_key_arn = (known after apply)

        + encryption_in_transit {
            + client_broker = "TLS_PLAINTEXT"
            + in_cluster    = true
          }
      }
  }
```
I suspect this is an issue with AWS MSK and not Terraform: there doesn't seem to be a way of creating a single-node Kafka cluster in AWS. Following this article https://docs.aws.amazon.com/msk/latest/developerguide/msk-create-cluster.html#create-cluster-cli but changing the value of number-of-broker-nodes to 1 while leaving 3 subnets in the brokernodegroupinfo.json file, you get the following error:
```
$ aws kafka create-cluster --cluster-name "Test" --broker-node-group-info file://brokernodegroupinfo.json --kafka-version "2.2.1" --number-of-broker-nodes 1 --enhanced-monitoring "DEFAULT"

An error occurred (BadRequestException) when calling the CreateCluster operation: The number of broker nodes must be a multiple of Availability Zones in the BrokerAZDistribution parameter.
```
And if you reduce the number of subnets in the brokernodegroupinfo.json file to 1, you get the following error:
```
$ aws kafka create-cluster --cluster-name "Test" --broker-node-group-info file://brokernodegroupinfo.json --kafka-version "2.2.1" --number-of-broker-nodes 1 --enhanced-monitoring "DEFAULT"

An error occurred (BadRequestException) when calling the CreateCluster operation: The number of Availability Zones in the BrokerAZDistribution parameter must be equal to the number of client subnets.
```
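For reference, a brokernodegroupinfo.json that satisfies both checks pairs three subnets (one per AZ) with `--number-of-broker-nodes 3`. This is a sketch; the subnet and security-group IDs below are placeholders, not real resources:

```json
{
  "InstanceType": "kafka.m5.large",
  "ClientSubnets": [
    "subnet-aaaa1111",
    "subnet-bbbb2222",
    "subnet-cccc3333"
  ],
  "SecurityGroups": ["sg-dddd4444"]
}
```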
I contacted AWS support. At this time you can only create a cluster with a number of brokers that is a multiple of the number of AZs available in the selected region, and MSK is currently not available in regions with only 2 AZs. So for now the smallest cluster you can create is 3 nodes, with 1 node per AZ.
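Given that constraint, a minimal Terraform sketch of a cluster that creates would pin the broker count to a multiple of the subnet/AZ count. The subnet IDs and security group ID here are placeholders (assumptions), not values from this thread:

```hcl
# Minimal sketch: 3 brokers across 3 subnets in 3 distinct AZs,
# matching the "multiple of AZs" constraint described above.
resource "aws_msk_cluster" "example" {
  cluster_name           = "cluster-test"
  kafka_version          = "2.2.1"
  number_of_broker_nodes = 3 # must be a multiple of the number of client subnets

  broker_node_group_info {
    instance_type   = "kafka.m5.large"
    ebs_volume_size = 1000
    client_subnets = [
      "subnet-aaaa1111", # e.g. eu-west-1a
      "subnet-bbbb2222", # e.g. eu-west-1b
      "subnet-cccc3333", # e.g. eu-west-1c
    ]
    security_groups = ["sg-dddd4444"]
  }
}
```

(This uses the `ebs_volume_size` argument current at the time of this thread; newer provider versions move volume size under a `storage_info` block, as a later comment shows.)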
🎉 In eu-west-3 you can now create a 2-node cluster even though the region has 3 AZs.
Marking this issue as stale due to inactivity. This helps our maintainers find and focus on the active issues. If this issue receives no comments in the next 30 days it will automatically be closed. Maintainers can also remove the stale label.
If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thank you!
Hey y'all :wave: Thank you for taking the time to file this issue and for the additional discussion! Given that there's been a number of AWS provider releases since this was initially filed, can anyone confirm whether you're still experiencing this behavior?
Yes, I still have the problem with the newest version.
Has anyone figured out why this errors out with the same issue even when the number of nodes is set to 3 and the subnets being passed are in 3 different AZs within the same region? Here's a sample of my TF definition:
```hcl
resource "aws_msk_cluster" "msk_cluster" {
  cluster_name           = var.cluster_name
  kafka_version          = "3.3.2"
  number_of_broker_nodes = var.number_of_broker_nodes

  client_authentication {
    sasl {
      scram = true
      iam   = true
    }
  }

  # NOTE: MSK (by default) does not allow public access
  broker_node_group_info {
    instance_type   = var.instance_type
    client_subnets  = var.cluster_subnets
    security_groups = var.security_groups

    storage_info {
      ebs_storage_info {
        volume_size = var.volume_size
      }
    }
  }
}
```
I pass in the configs like so:
```hcl
module "msk_cluster" {
  source                 = "./modules/msk"
  enviroment             = local.enviroment
  cluster_name           = "CDLKafkaCluster"
  cluster_subnets        = concat(keys(module.cdl_vpc.vpc_public_subnets), keys(module.cdl_vpc.vpc_private_subnets))
  security_groups        = [module.msk_private_security_group.security_group_id]
  instance_type          = "kafka.t3.small"
  volume_size            = 100
  number_of_broker_nodes = 3
  kafka_sasl_username    = var.KAFKA_SASL_USERNAME
  kafka_sasl_password    = var.KAFKA_SASL_PASSWORD
}
```
And the subnets (public and private) are derived from:
```hcl
resource "aws_subnet" "public_subnet" {
  for_each = var.public_subnet_numbers

  vpc_id            = aws_vpc.vpc.id
  availability_zone = each.key
  cidr_block        = cidrsubnet(aws_vpc.vpc.cidr_block, 4, each.value)

  tags = {
    Name        = var.vpc_name
    Project     = var.project_name
    Role        = "public"
    Environment = var.environment
    ManagedBy   = "terraform"
    Subnet      = "${each.key}-${each.value}"
  }
}
```
I'm getting this error: `BadRequestException: Specify either two or three client subnets.` What if I have more than 3 subnets? Could that be an issue?
From what I remember, you can assign one subnet per broker node. Having more than 3 subnets across AZs should be fine as long as there is a matching number of nodes. If you have more than 3 subnets within the same AZ that you want to assign to brokers, then you will have to create that many nodes in each AZ (e.g. 3 nodes per AZ = 1 subnet per node, in addition to the nodes created in the other AZs).
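In the module call above, `concat()` of the public and private subnet keys yields six entries, which would trip the "two or three client subnets" check. One hypothetical fix, assuming the VPC module exposes the subnet IDs as map values and the private subnets span three AZs, is to pass only three of them:

```hcl
# Hypothetical sketch: pass exactly three subnet IDs (one per AZ).
# Assumes vpc_private_subnets is a map whose values are subnet IDs;
# if the IDs are the map keys instead, use keys() rather than values().
cluster_subnets = slice(values(module.cdl_vpc.vpc_private_subnets), 0, 3)
```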
Hi,
I'm trying to deploy a simple 3 node MSK cluster and I'm getting the following error message:
I've had a search and found https://github.com/terraform-providers/terraform-provider-aws/issues/8793.
Having read that ticket, my code doesn't match any of the examples that were causing issues.
Here is my main.tf
Terraform version information
I have confirmed that the VPC only has 3 AZs: use2-az1, use2-az2, use2-az3. So there is a match between the number of subnets provided and the number of AZs. While I have not shown the subnet IDs above, I have confirmed that the subnet IDs used are correct for the VPC and are not duplicated in the code.
I have also tried adding
to the broker_node_group_info section - to no avail.
I have also tried various versions of the provider and previous versions of terraform.
Any help would be much appreciated.
Thank you.