lblackstone opened 5 years ago
This could be mitigated by completing #179
This can be worked around with the skipAwait annotation now, so removing the P1 label.
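For anyone landing here: a minimal sketch of what the skipAwait escape hatch looks like. It's just an annotation on the resource's metadata (the dict name below is illustrative); setting it tells the Pulumi Kubernetes provider to skip its readiness checks for that resource entirely:

```python
# Sketch: opting a resource out of Pulumi's await logic via annotation.
# Kubernetes annotation values are always strings, so the value is the
# string "true", not a boolean.
metadata_annotations = {
    "pulumi.com/skipAwait": "true",
}
```

This dict would be passed as the `annotations` field of the resource's `ObjectMetaArgs`.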
I'm a little concerned about what this means for other custom Ingress controller implementations like Contour, which lets you write rules that balance load even across clusters. We might decide not to address this until a user tells us it's a problem, but it's also something to think about.
I'm caught in a catch-22 currently with this same issue. Not sure how to force-ssl without hanging up my Pulumi deployment.
We're getting bitten by this as well: an Ingress we've defined isn't coming up in certain stacks (my dev stack at the moment), but it's working in other stacks. Correction, that isn't quite true: all stacks are seeing this issue, but for some we seem to have manually fixed Pulumi's state so it no longer waits for the Ingress to come "up".
I'm thinking Pulumi could correctly recognize that the Ingress is properly defined (it has a "SuccessfullyReconciled" event), rather than inferring from the rules that something is still missing.
So I have a workaround for this, which isn't super satisfying but does the trick: since k8s believes the service "ssl-redirect" doesn't exist, we create a "fake" service whose name lines up with the ingress's AWS LB annotation (one with a Pulumi-provided name suffix). That fulfills the k8s requirement, but still lets the AWS LBs use the annotations for the SSL redirect.
Here's roughly how we did that in Python:
import json

import pulumi
import pulumi_kubernetes as k8s

# (eks, domains, root_cert, ingress_sg, service_spec, hostname,
# alb_group_name, and SSL_POLICY come from elsewhere in our program.)

# This service exists so that k8s can resolve the "ssl-redirect"
# annotation on our ingress, which should help with
# https://github.com/pulumi/pulumi-kubernetes/issues/408:
fake_ssl_redirect_service = k8s.core.v1.Service(
    "fake-ssl-redirect-service",
    metadata=k8s.meta.v1.ObjectMetaArgs(
        namespace=eks.system_cluster.mz_system_namespace.metadata.name,
    ),
    # We reuse our "main" service's spec. It never sees traffic through
    # here, but just in case, it sends an HSTS header.
    spec=service_spec,
)

ingress_rules = [
    # One port 80 -> 443 redirect:
    k8s.networking.v1.IngressRuleArgs(
        host=hostname,
        http=k8s.networking.v1.HTTPIngressRuleValueArgs(
            paths=[
                k8s.networking.v1.HTTPIngressPathArgs(
                    path="/*",
                    path_type="ImplementationSpecific",
                    backend=k8s.networking.v1.IngressBackendArgs(
                        service=k8s.networking.v1.IngressServiceBackendArgs(
                            name=fake_ssl_redirect_service.metadata.name,
                            port=k8s.networking.v1.ServiceBackendPortArgs(
                                name="use-annotation"
                            ),
                        )
                    ),
                )
            ],
        ),
    ),
    # ...and one to reverse-proxy the actual service:
    k8s.networking.v1.IngressRuleArgs(
        # ...
    ),
]

ingress_annotations = pulumi.Output.from_input(
    fake_ssl_redirect_service.metadata.name
).apply(
    lambda ssl_redirect: {
        "external-dns.alpha.kubernetes.io/hostname": domains.app_hostname,
        "kubernetes.io/ingress.class": "alb",
        "alb.ingress.kubernetes.io/group.name": alb_group_name,
        "alb.ingress.kubernetes.io/certificate-arn": root_cert.certificate_arn,
        "alb.ingress.kubernetes.io/ssl-policy": SSL_POLICY,
        "alb.ingress.kubernetes.io/healthcheck-path": "/api/health",
        "alb.ingress.kubernetes.io/listen-ports": '[{"HTTP": 80}, {"HTTPS": 443}]',
        "alb.ingress.kubernetes.io/security-groups": ingress_sg.security_group.id,
        f"alb.ingress.kubernetes.io/actions.{ssl_redirect}": json.dumps(
            {
                "Type": "redirect",
                "RedirectConfig": {
                    "Protocol": "HTTPS",
                    "Port": "443",
                    "StatusCode": "HTTP_301",
                },
            }
        ),
        "alb.ingress.kubernetes.io/scheme": "internet-facing",
        "alb.ingress.kubernetes.io/target-type": "ip",
        # ...
    }
)

# Tying it all together:
k8s.networking.v1.Ingress(
    "ingress",
    metadata=k8s.meta.v1.ObjectMetaArgs(
        annotations=ingress_annotations,
        namespace="default",
    ),
    spec=k8s.networking.v1.IngressSpecArgs(
        rules=ingress_rules,  # already a list; don't wrap it in another list
    ),
)
@antifuchs @orcutt989 #1260 has been implemented and will help with this. It's not released yet but see my comment here if you'd like to try using the alpha release.
Adding the annotation pulumi.com/waitFor: condition=SuccessfullyReconciled
to your Ingress resource will cause Pulumi to wait only for that condition to be met, instead of checking other things like backing Services.
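As a sketch, the annotation value is a plain string in the resource's metadata (the dict name below is illustrative; the condition name is the ALB controller's "SuccessfullyReconciled" event mentioned above):

```python
# Sketch: the waitFor annotation as it would appear in the annotations
# passed to a k8s.networking.v1.Ingress via ObjectMetaArgs. The value
# names a status condition for Pulumi to await, replacing its built-in
# Ingress readiness checks.
ingress_annotations = {
    "pulumi.com/waitFor": "condition=SuccessfullyReconciled",
    # ...plus the existing alb.ingress.kubernetes.io/* annotations...
}
```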
Following https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/tasks/ssl_redirect/ produces an error similar to
Reported at: https://pulumi-community.slack.com/archives/C84L4E3N1/p1549491112121100