Here is my cronjob:
Here's what I see after a few days of running:
The 1 job has turned into 4 jobs, and for each of those 4 jobs I see even more pods created.
I expected at most 3 successful and 5 failed jobs to be retained (so at most 8 pods in total), but I see 15 pods, presumably because instead of 1 job I somehow ended up with 4?
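For context, my "3 successful / 5 failed" expectation comes from the CronJob history fields. A minimal sketch of the part of the spec I believe the chart renders (the name, schedule, and exact values here are my assumptions for illustration, not confirmed from the chart):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: k8s-ecr-login-renew        # name assumed from the chart
spec:
  schedule: "0 */6 * * *"          # schedule assumed for illustration
  concurrencyPolicy: Forbid        # disallow overlapping runs
  successfulJobsHistoryLimit: 3    # retain at most 3 finished jobs
  failedJobsHistoryLimit: 5        # retain at most 5 failed jobs
  jobTemplate:
    spec:
      backoffLimit: 6              # a failed job may retry its pod up to 6 times
```

If limits like these are in effect, I'd expect at most 3 + 5 retained jobs, though I realize a single failed job can still own several pods because of backoffLimit retries.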
Deployment script for reference:
resource "helm_release" "k8s_ecr_login_renew" {
  repository      = "https://nabsul.github.io/helm"
  chart           = "k8s-ecr-login-renew"
  name            = "k8s-ecr-login-renew"
  namespace       = "default"
  version         = "v1.0.2"
  cleanup_on_fail = true
  force_update    = false

  set {
    name  = "awsRegion"
    value = var.aws_ecr_region
  }

  set {
    name  = "awsAccessKeyId"
    value = var.aws_ecr_access_key_id
  }

  set {
    name  = "awsSecretAccessKey"
    value = var.aws_ecr_secret_access_key
  }

  set {
    name  = "dockerSecretName"
    value = var.aws_ecr_docker_secret_name
  }

  set {
    name  = "registries"
    value = var.aws_ecr_registries
  }
}