ranjitk-burwood closed this issue 1 year ago
Hi @ranjitk-burwood thanks for opening an issue. I just tried to reproduce this without success using the following config:
locals {
  namespaces = {
    "1.2.3.4" = "blue"
    "4.5.6.7" = "green"
  }
}

resource "helm_release" "test" {
  for_each   = local.namespaces
  namespace  = each.value
  name       = "redis"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "redis"
}
Output shows I get two releases in two namespaces:
$ terraform apply --auto-approve
helm_release.test["4.5.6.7"]: Creating...
helm_release.test["1.2.3.4"]: Creating...
helm_release.test["4.5.6.7"]: Creation complete after 1m8s [id=redis]
helm_release.test["1.2.3.4"]: Creation complete after 1m6s [id=redis]
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
$ helm ls --all-namespaces
NAME   NAMESPACE  REVISION  UPDATED                               STATUS    CHART         APP VERSION
redis  blue       1         2021-05-05 01:36:33.636342 -0400 EDT  deployed  redis-14.1.1  6.2.3
redis  green      1         2021-05-05 01:36:30.694217 -0400 EDT  deployed  redis-14.1.1  6.2.3
Can you share a debug log with TF_LOG=DEBUG and HELM_DEBUG=1? There could be an error that is being swallowed.
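For reference, one way to capture such a log from a local shell (the log file name here is just an example, not something prescribed by the provider):

TF_LOG=DEBUG TF_LOG_PATH=tf-debug.log HELM_DEBUG=1 terraform apply --auto-approve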
Are there any resources in your chart that are not namespaced? I'm able to produce a failure with a chart that has a ClusterRole in it, for example.
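To illustrate that failure mode with a hypothetical chart fragment: a cluster-scoped resource such as a ClusterRole has no namespace, and since both releases in this config are named "redis", both for_each instances render the same object and the second install hits an ownership conflict.

```yaml
# Hypothetical template from a chart (not from the Redis chart itself).
# ClusterRole is cluster-scoped, so installing the release twice into
# different namespaces still tries to create this same object twice.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: redis-reader   # no namespace field; collides across releases
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
```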
Hi @jrhouston, sorry for the delayed response. I figured out that this was caused by values I was passing into the chart from a configuration YAML. Specifically, for the JupyterHub chart, setting the two values below to false fixed my issue:

scheduling:
  userScheduler:
    enabled: false
  podPriority:
    enabled: false

If anybody comes across this and does need both of those fields enabled, my workaround was to run two terraform apply steps in succession in the Helm chart build step, so that all namespaces were set up properly.
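For anyone scripting that workaround, a minimal Cloud Build sketch of the two sequential applies could look like the following (the step image and args are assumptions for illustration, not the exact pipeline):

```yaml
steps:
  # First apply: with the cluster-scoped values enabled, one namespace
  # may fail to converge on this pass.
  - name: hashicorp/terraform
    args: ["apply", "-auto-approve"]
  # Second apply: picks up whatever the first pass left unprovisioned.
  - name: hashicorp/terraform
    args: ["apply", "-auto-approve"]
```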
If you want me to collect a debug log, let me know. I was trying this out through Cloud Build and wasn't sure how to configure that there, but I could try running it through Cloud Shell instead.
Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Terraform, Provider, Kubernetes and Helm Versions
Affected Resource(s)
Terraform Configuration Files Root Module
Terraform Configuration Files Child Module
Steps to Reproduce
terraform init
terraform plan
terraform apply
Expected Behavior
I am expecting to see the same JupyterHub helm chart provisioned into two GKE namespaces when my Cloud Build pipeline runs once.
Actual Behavior
I only see one namespace is provisioned properly. I need to re-run the Cloud Build pipeline in order for the second namespace to have Helm installed. My Cloud Build run does not fail or error out.
Important Factoids
I am using a for_each loop to read a map from my state file inside of the child module. The map has the form
{IP Address : Namespace}.
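A minimal sketch of that child-module wiring (variable and resource names here are illustrative, not my actual code):

```hcl
variable "namespaces" {
  type        = map(string)
  description = "Map of {IP address => namespace}, read from remote state"
}

resource "helm_release" "jupyterhub" {
  for_each   = var.namespaces
  namespace  = each.value
  name       = "jupyterhub"
  repository = "https://jupyterhub.github.io/helm-chart/"
  chart      = "jupyterhub"
}
```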
Community Note