Closed — dusanu87 closed this 1 year ago
@angelabad can we merge and release this?
Hi @dusansusic, sorry for my late reply, I was very busy. I'll review your code.
Thanks for your work!
Hi @dusansusic, I tested your patch; it works fine with previously provisioned clusters, but if you provision a new cluster, your patch fails the same way my code does with old clusters. Could you confirm this issue?
Could you help me to test and solve this?
Thanks in advance!
I ran into the same issue as @dusanu87. I created a local module to 'fix' my current issue and will be watching here for updates.
I've made some tests and this PR seems to be ok.
test 1:
I've created a whole new cluster using this PR.
Creation was OK. Afterwards, I tried applying a tag change and Terraform didn't ask for a provisioned_throughput change.
test 2:
Created a whole new cluster without this PR (used release 0.5.1).
Creation was OK. Afterwards, I tried applying a tag change and Terraform didn't ask for a provisioned_throughput change.
So, for my use case, this PR could be approved.
Note: This PR is needed for me because the cluster that was throwing the provisioned_throughput error was created in 2021, and for some reason (an update to this module or to AWS), Terraform started asking for a change to provisioned_throughput.
module "aws_msk_core_cluster_dv_test" {
# source = "./modules/terraform-aws-msk-cluster/"
source = "angelabad/msk-cluster/aws"
version = "~> 0.5.1"
cluster_name = "devqa-core-dv-test"
instance_type = "kafka.t3.small"
number_of_nodes = 3
client_subnets = module.aws_vpc.private_subnets
kafka_version = "2.8.0"
volume_size = 100
...
}
We ran into the same issue; it was showing drift for an existing cluster, but apply was working fine.
However, after upgrading the aws provider and Terraform versions, terraform apply is simply failing with the error below.
terraform: 1.5.6, hashicorp/aws: 5.14.0
As per @dverzolla's comments, it seems to work. @angelabad, can you review the PR?
│ Error: updating MSK Cluster (arn:aws:kafka:<aws_region_name>:<aws_account_id>:cluster/<msk_cluster_name>) broker storage: BadRequestException: The request does not include any updates to the EBS volumes of the cluster. Verify the request, then try again.
│ {
│ RespMetadata: {
│ StatusCode: 400,
│ RequestID: "2a2f003e-c92e-4a6a-aa40-1084c11c69b4"
│ },
│ Message_: "The request does not include any updates to the EBS volumes of the cluster. Verify the request, then try again."
│ }
│
│ with module.msk-cluster.module.msk-cluster.aws_msk_cluster.this,
│ on .terraform/modules/msk-cluster.msk-cluster/main.tf line 108, in resource "aws_msk_cluster" "this":
│ 108: resource "aws_msk_cluster" "this" {
Thanks for your feedback, I'm reviewing it again.
Merged, thanks for your help.
Change released on v0.5.2
@angelabad @dverzolla
We are now facing the issue with newly created clusters on version 0.5.0.
The drift was removed from older clusters after we upgraded the module version to 0.5.2.
Environment: MSK cluster module v0.5.0; aws provider 4.57.0
After upgrading the module and aws provider versions, the plan for newly created clusters shows the drift below:
terraform plan
# module.msk_cluster_prod.module.msk-cluster.aws_msk_cluster.this will be updated in-place
~ resource "aws_msk_cluster" "this" {
id = "arn:aws:kafka:xxxxxxx:xxxxxxxx:cluster/msk-prod-/bxxxxxxxxxxxxxxxxxxxxxx"
tags = {
"Purpose" = "MSK cluster"
}
# (12 unchanged attributes hidden)
~ broker_node_group_info {
# (4 unchanged attributes hidden)
~ storage_info {
~ ebs_storage_info {
# (1 unchanged attribute hidden)
- provisioned_throughput {
- enabled = false -> null
- volume_throughput = 0 -> null
}
}
}
# (1 unchanged block hidden)
}
# (4 unchanged blocks hidden)
}
Error
╷
│ Error: updating MSK Cluster (arn:aws:kafka:xxxxxxx:xxxxxxxx:cluster/msk-prod-/bxxxxxxxxxxxxxxxxxxxxxx) broker storage: BadRequestException: The request does not include any updates to the EBS volumes of the cluster. Verify the request, then try again.
│ {
│ RespMetadata: {
│ StatusCode: 400,
│ RequestID: "68aefe34-97b4-4c98-89dd-99e13257704d"
│ },
│ Message_: "The request does not include any updates to the EBS volumes of the cluster. Verify the request, then try again."
│ }
│
│ with module.msk_cluster_prod.module.msk-cluster.aws_msk_cluster.this,
│ on .terraform/modules/msk_cluster_prod.msk-cluster/main.tf line 100, in resource "aws_msk_cluster" "this":
│ 100: resource "aws_msk_cluster" "this" {
│
╵
Try upgrading to v0.5.2.
@dverzolla
Apologies, I believe I was not clear in my earlier message.
We have already upgraded the module version to 0.5.2.
Initial environment: MSK cluster module v0.5.0; aws provider 4.57.0
Current environment (after upgrade): MSK cluster module v0.5.2; aws provider 5.14.0
The error above is blocking us from making any security updates to the MSK cluster using Terraform.
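For anyone blocked in the meantime, one possible (untested) workaround sketch is to suppress drift on the broker storage configuration. Note that a `lifecycle` block cannot be passed into a module from the caller, so this would mean forking the module and editing its `aws_msk_cluster` resource directly, along these lines:

```hcl
resource "aws_msk_cluster" "this" {
  # ... existing module configuration unchanged ...

  lifecycle {
    # Coarse workaround sketch: ignores ALL changes to broker_node_group_info,
    # including intentional ones such as volume_size increases, so it should
    # only be a stopgap until the module handles provisioned_throughput cleanly.
    ignore_changes = [broker_node_group_info]
  }
}
```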
I've upgraded to 0.5.2 and ran some tests with two clusters, one created in 2021 and another created earlier this year. Both worked properly without any drift.
In my case the aws provider version was v5.6.2.
This PR makes the provisioned_throughput block optional when the variable provisioned_volume_throughput is not specified. We ran into an issue where the module constantly reports this change in the Terraform state, because we did not specify the provisioned_volume_throughput variable:
provisioned_throughput { enabled = false }
If the variable is not specified, the provisioned_throughput block should not be initialized.
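If I understand the change correctly, the fix presumably amounts to guarding the nested block with a `dynamic` block, something like this sketch (variable and attribute names assumed from this thread, surrounding arguments elided):

```hcl
variable "provisioned_volume_throughput" {
  description = "EBS volume throughput in MiB/s; null omits the provisioned_throughput block entirely"
  type        = number
  default     = null
}

resource "aws_msk_cluster" "this" {
  # ... other cluster arguments elided ...

  broker_node_group_info {
    # ... instance type, subnets, etc. elided ...

    storage_info {
      ebs_storage_info {
        volume_size = var.volume_size

        # Render provisioned_throughput only when the variable is set,
        # so unconfigured clusters show no drift on this block.
        dynamic "provisioned_throughput" {
          for_each = var.provisioned_volume_throughput != null ? [var.provisioned_volume_throughput] : []
          content {
            enabled           = true
            volume_throughput = provisioned_throughput.value
          }
        }
      }
    }
  }
}
```

With `default = null`, callers that never set the variable get a configuration with no provisioned_throughput block at all, which avoids the `enabled = false -> null` plan noise shown above.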
Do you think this PR could be merged? Please feel free to add any comments.
Thanks in advance!