emilchitas opened 3 years ago
I seem to have run into a similar situation as well. I am using a tag-filtered data source to look up one of my 4 application/network load balancers, but it always returns all 4 of them. Even if I leave the tags empty, it still returns 4. I tried adding a new tag, specific to the one load balancer I want to read as a data source, and it still returns all of them. I am currently using Terraform 0.12.29. Hopefully we can get a reply or something regarding this.
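A minimal sketch of the lookup described above, assuming a hypothetical `Environment` tag (all names here are placeholders, not from the original report). On the affected versions, the tag filter appears to be ignored and every load balancer is matched:

```hcl
# Hypothetical repro: the tag is applied to exactly one LB, so this
# lookup should match one result, but all four come back instead.
data "aws_lb" "by_tag" {
  tags = {
    Environment = "staging" # placeholder tag, specific to one LB
  }
}

output "selected_lb_arn" {
  value = data.aws_lb.by_tag.arn
}
```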
I could not reproduce the issue; this works fine for me with provider version 3.58.
Same issue with provider v3.63.0. Ran with TF_LOG=debug, and the response coming from AWS doesn't even contain the internal LBs. Had to do:
data "kubernetes_service" "mylb" {
  metadata {
    name      = "my-lb-name"
    namespace = "namespace"
  }
}

data "aws_elb" "mylb" {
  name = split("-", split(".", data.kubernetes_service.mylb.status.0.load_balancer.0.ingress.0.hostname).0).1
}
Which is crap, but at least it works :\
Edit: only have this problem for internal LBs
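For readability, the hostname parsing in the workaround above can be split into locals. This is only a sketch: it assumes the classic internal ELB hostname layout (`internal-<name>-<id>.<region>.elb.amazonaws.com`) and that the LB name itself contains no dashes.

```hcl
locals {
  # Full hostname reported by the Kubernetes service.
  lb_hostname = data.kubernetes_service.mylb.status.0.load_balancer.0.ingress.0.hostname

  # First dot-separated token is "internal-<name>-<id>"; the name is
  # the second dash-separated token of that (assumes no dashes in it).
  lb_name = split("-", split(".", local.lb_hostname)[0])[1]
}

data "aws_elb" "mylb" {
  name = local.lb_name
}
```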
@csabakollar I just ran into this issue - and though I agree with your assessment...at least you found something working. Thank you!
Basically everything worked as intended for me until I had two LBs in the same environment; then my tags, though specific to an individual LB, would bring back all the LBs rather than filtering by tag.
I can also reproduce this on v3.48.0.
any updates on this?
Seems to be working on the new 4.31.0 version. Version 3.75.0 is not working for me.
Got the same issue with aws provider v4.57.1 and kubernetes provider 2.13.0. I am using a local helm_release to deploy a Kubernetes Ingress which, via the AWS LB Controller, creates an Application Load Balancer.
The data query fails whether I use data "aws_lb" or data "kubernetes_ingress_v1":
data "aws_alb" "internal" {
  tags = {
    "elbv2.k8s.aws/cluster" = local.local_prefix_with_env_suffix
  }

  depends_on = [
    helm_release.eks_alb_ingress
  ]
}

data "kubernetes_ingress_v1" "example" {
  metadata {
    name      = "services"
    namespace = "istio-ingress"
  }

  depends_on = [helm_release.eks_alb_ingress]
}

output "test" {
  value = <<EOF
${data.kubernetes_ingress_v1.example.status.0.load_balancer.0.ingress.0.hostname},
${data.aws_alb.internal.dns_name}
EOF
}
│ Error: Search returned 0 results, please revise so only one is returned
│
│   with data.aws_alb.internal,

│ Error: Invalid index
│
│   on output.tf line 39, in output "test":
│   39: value = data.kubernetes_ingress_v1.example.status.0.load_balancer.0.ingress.0.hostname
The error seems to be caused by querying AWS/EKS right after ALB creation, when in fact the LB needs a delay before it becomes visible: terraform apply fails on the first run but succeeds on the second.
Our workaround was to introduce a 15-second delay after creation of the load balancer, before querying AWS or EKS, using the time provider and a time_sleep resource:
terraform {
  required_providers {
    time = {
      source  = "hashicorp/time"
      version = "0.9.1"
    }
  }
}

resource "time_sleep" "wait_15_seconds" {
  create_duration = "15s"

  depends_on = [helm_release.eks_alb_ingress]
}

data "aws_alb" "internal" {
  tags = {
    "elbv2.k8s.aws/cluster" = local.local_prefix_with_env_suffix
  }

  depends_on = [
    time_sleep.wait_15_seconds
  ]
}

data "kubernetes_ingress_v1" "example" {
  metadata {
    name      = "services"
    namespace = "istio-ingress"
  }

  depends_on = [time_sleep.wait_15_seconds]
}
If possible, I think adding a timeout or retry option to the aws_lb data source would be great.
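As a stopgap until such an option exists, a variation on the time_sleep workaround above is to key the delay on the helm release itself, so the wait re-runs whenever the release changes. This is a sketch and assumes the helm provider exposes the release revision via its metadata attribute:

```hcl
resource "time_sleep" "wait_for_lb" {
  create_duration = "15s"

  # Changing the triggers map recreates this resource, re-applying the
  # delay after each new helm release revision (attribute path assumed).
  triggers = {
    helm_revision = helm_release.eks_alb_ingress.metadata[0].revision
  }
}
```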
Terraform CLI and Terraform AWS Provider Version
Terraform v1.0.3 on linux_amd64
Affected Resource(s)
Kubernetes service configuration:
We have a LoadBalancer provisioned in AWS, created by the AWS LB Controller. The service is configured like this:
Terraform Configuration Files
Expected Behavior
Terraform identifies the LoadBalancer based on the applied tag.
Actual Behavior
Terraform fails with the following error:
even though, two lines above, the LoadBalancer ID appears in the log:
Steps to Reproduce
terraform apply
Additional notes:
This works with version v0.51.0 of the aws provider. As a workaround, we pinned the provider configuration to this version.
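A minimal sketch of that pin (the version string is quoted exactly as in the comment above; verify it against the registry before using):

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # Last known-good version as stated above.
      version = "0.51.0"
    }
  }
}
```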