FischlerA opened this issue 3 years ago
Thanks for opening @FischlerA. Did you try using the wait attribute? By default Helm will not wait for all pods to become ready; it just creates the API resources.
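For reference, a minimal sketch of what setting this explicitly on a release looks like; the release name, repository, and chart below are placeholders, not taken from this issue:

resource "helm_release" "example" {
  name       = "example"                    # placeholder release name
  repository = "https://charts.example.com" # placeholder repository
  chart      = "example-chart"              # placeholder chart

  wait    = true # ask the provider to wait for the release's resources to become ready
  timeout = 300  # seconds to wait before the install is treated as failed
}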
Per the documentation, the wait attribute defaults to true. But even after explicitly setting it to true, the behavior didn't change: the release was still reported as a success with a crashing pod.
Ah yep, you're right – I will try and reproduce this.
The provider itself doesn't do the waiting; it just passes the wait flag along to the install action in the helm package. Do you get the same issue if you run helm install --wait with your chart?
@jrhouston We deployed the chart again using Helm directly with helm install --wait, and the behaviour was as expected: after waiting for five minutes, we got the error message Error: timed out waiting for the condition.
I had the same experience using helm_release in Terraform. If something goes wrong and pods stay in "Pending", "Error", "CreateContainer", or some other unusual status for a while, the Helm Terraform provider does not wait until the pods are running; it exits and reports the release as completed. However, the Terraform state was updated as failed.
Saw the same behavior today when I deployed ingress-nginx and the very first job failed because it was rejected by another webhook. The terraform apply run waited for 5 minutes but reported a success, even though not a single resource was created successfully. In fact, the only resource was one job, and it was rejected.
@jrhouston were you able to take a look at this?
I'm running into this too. I pretty regularly have a successful terraform apply (everything shows successful and complete) and end up with helm_release resources that show ~ status = "failed" -> "deployed" on a second run.
I think we are hitting this as well, but I'm not entirely sure. We are seeing helm_release pass on the first run with wait = true even though not all of the pods come online, because of a Gatekeeper/PSP policy we have in the cluster. We are not sure how to get our helm_release to fail in that case.
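One mitigation that gets suggested (assuming the provider's wait logic itself behaves) is to pair wait with atomic, so a rollout that never becomes ready is rolled back and surfaced as an error instead of a silent success. A hedged sketch with placeholder chart details, not the reporter's actual config:

resource "helm_release" "guarded" {
  name       = "guarded-release"            # placeholder
  repository = "https://charts.example.com" # placeholder
  chart      = "example-chart"              # placeholder

  wait    = true # wait for pods to become ready
  atomic  = true # roll back and return an error if readiness is never reached
  timeout = 600  # allow extra time if admission controllers (e.g. Gatekeeper) hold pods back
}

As later comments in this thread show, this still depends on the provider actually honoring the wait.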
Hi all. I'm new to Terraform. I've had to split up my Terraform deployments and include a time_sleep because of this issue. Looking forward to an update here.
Same thing with a Helm job and wait_for_jobs = true. It waits out the timeout and then returns success. If I reapply, I get the following:
$ terraform apply -var image_tag=dev-ed4854d
helm_release.job_helm_release: Refreshing state... [id=api-migration]
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # module.job_helm_release.helm_release.job_helm_release will be updated in-place
  ~ resource "helm_release" "job_helm_release" {
        id     = "api-migration"
        name   = "api-migration"
      ~ status = "failed" -> "deployed"
        # (24 unchanged attributes hidden)
        # (22 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
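For context, wait_for_jobs extends the provider's wait behaviour to Job resources and is set alongside wait on the release. A minimal sketch with placeholder names, loosely modeled on the release above rather than its exact config:

resource "helm_release" "job_example" {
  name       = "api-migration-example"      # placeholder release name
  repository = "https://charts.example.com" # placeholder repository
  chart      = "migration-job"              # placeholder chart

  wait          = true # wait for non-Job resources to become ready
  wait_for_jobs = true # additionally wait for Jobs to complete
  timeout       = 300  # seconds before a failure should be reported
}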
I faced this issue too. The helm_release timeout option doesn't seem to work: the helm_release was reported as "successfully completed" within 5 seconds, even though the pods were still in the init stage.
Me too. The pod status stays at "Pending" when I use helm_release in Terraform, but it works fine with the Helm CLI.
Error: release nginx failed, and has been uninstalled due to atomic being set: timed out waiting for the condition
I don't know what happened, but it's back to working normally. In the past 6 hours I upgraded Kubernetes to 1.23.1.
resource "helm_release" "traefik" {
  name       = "traefik"
  repository = "https://helm.traefik.io/traefik"
  chart      = "traefik"
  version    = "10.3.2"

  # I just tried adding this line
  wait = false
}
Versions:
bash-5.1# terraform version
Terraform v1.0.9
on linux_amd64
+ provider registry.terraform.io/hashicorp/helm v2.4.1
# kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:41:01Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:34:54Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}
# helm version
version.BuildInfo{Version:"v3.7.0-rc.2", GitCommit:"4a7c306aa9dcbdeecf79c7517851921a21b72e56", GitTreeState:"clean", GoVersion:"go1.16.7"}
Is anyone still encountering this issue on the latest version of the provider? I think we fixed this in https://github.com/hashicorp/terraform-provider-helm/pull/727.
Just tried to reproduce this: I see the error in provider version v2.0.2, but I see the appropriate failure diagnostic in v2.6.0.
I can't speak for everyone, but we haven't seen this issue in a while.
This happens to me as well.
Haven't tried it with v2.6.0 yet, but I will do so and report back; it might take me a few days.
Reproduced on version 2.6.0 for me
Hello @enterdv! Are you able to include the config you used, so we can reproduce this issue? We'll want to look into it again if we're still seeing this bug.
Hello, I tried with a simple helm_release:
resource "helm_release" "redis" {
  name             = "${var.project}-redis"
  repository       = "https://charts.bitnami.com/bitnami"
  chart            = "redis"
  version          = "17.0.5"
  atomic           = true
  create_namespace = true
  namespace        = "${var.project}-infra"

  values = [
    file("${path.module}/values.yaml")
  ]

  set {
    name  = "fullnameOverride"
    value = "${var.project}-redis"
  }
  set {
    name  = "master.persistence.size"
    value = var.storage_size
  }
  set {
    name  = "master.resources.requests.memory"
    value = var.memory
  }
  set {
    name  = "master.resources.requests.cpu"
    value = var.cpu
  }
  set {
    name  = "master.resources.limits.memory"
    value = var.memory
  }
  set {
    name  = "master.resources.limits.cpu"
    value = var.cpu
  }
  set {
    name  = "replica.persistence.size"
    value = var.storage_size
  }
  set {
    name  = "replica.resources.requests.memory"
    value = var.memory
  }
  set {
    name  = "replica.resources.requests.cpu"
    value = var.cpu
  }
  set {
    name  = "replica.resources.limits.memory"
    value = var.memory
  }
  set {
    name  = "replica.resources.limits.cpu"
    value = var.cpu
  }
  set {
    name  = "replica.replicaCount"
    value = var.replica_count
  }
  set {
    name  = "sentinel.quorum"
    value = var.sentinel_quorum
  }
}
Hello @enterdv! Thank you for providing the TF config. Could you provide the output after running TF_LOG=debug terraform apply?
Regarding the earlier comment where the nginx release failed with atomic set and timed out waiting for the condition: have you fixed this problem?
Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!
up
up
I am still seeing this issue in 2.15.0. To note, on the first install neither wait nor wait_for_jobs achieves the desired result.
For my use case, I have a helm_release that generates self-signed certificates with cert-manager, and the release gets marked successful before the certificates are actually signed (causing downstream failures in other Terraform modules).
My workaround is to create an additional time_sleep resource for any helm_release that has a downstream dependency (but I have to "guess" how long to wait):
resource "time_sleep" "wait_for_signing" {
  depends_on      = [helm_release.cluster-issuer-self-signed]
  create_duration = "60s"
}

# Export Self-Signed TLS
data "kubernetes_secret" "self-signed-tls-certs" {
  ...
  depends_on = [time_sleep.wait_for_signing] # Can't read the secrets before they are created
}
Terraform, Provider, Kubernetes and Helm Versions
Terraform version: 0.14.4
Provider version: 2.0.2
Kubernetes version: AWS EKS 1.18
Helm version: 3
Affected Resource(s)
helm_release
Debug Output
https://gist.github.com/FischlerA/7930aff18d68a7b133ff22aadc021517
Steps to Reproduce
terraform apply
Expected Behavior
The Helm deployment should fail, since the pod being deployed runs an image that will always fail (a private image which I can't share).
Actual Behavior
The first time the Helm release is deployed, it always succeeds after reaching the timeout (5 min); any further deployments fail, as they are supposed to, after reaching the timeout (5 min).