kubernetes-sigs / aws-load-balancer-controller

A Kubernetes controller for Elastic Load Balancers
https://kubernetes-sigs.github.io/aws-load-balancer-controller/
Apache License 2.0

Cannot reach ClusterIP pods with target-type: ip #1325

Closed agaudreault closed 4 years ago

agaudreault commented 4 years ago

Hi, I am trying to configure the alb-ingress-controller and I was wondering what the recommended/production way is to use aws-alb-ingress-controller to expose pods via Ingress. Should the Services be of type NodePort, LoadBalancer, or ClusterIP?

My goal is to have a single ALB with multiple Route53 A records associated with it. The ALB listener then forwards traffic to the proper Target Group based on host and path. Finally, the Target Group sends the request into the cluster, and the k8s Ingress routing delivers it to the pods.

When I specify the target-type: ip annotation on my Ingress object with a ClusterIP Service, the target group seems to be created properly, but the target does not seem to be healthy. My guess is that the IP 10.104.24.60 only exists inside the cluster.

(screenshot: the target group shows the target as unhealthy)

    NAME                                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
    prometheus-operator-grafana                    ClusterIP   172.20.39.134   <none>        80/TCP                       17d

    NAME                                          READY   STATUS    RESTARTS   AGE   IP             NODE                            NOMINATED NODE   READINESS GATES
    prometheus-operator-grafana-9658649f8-hdsx8   2/2     Running   0          17d   10.104.24.60   ip-10-104-24-175.ec2.internal   <none>           <none>

    apiVersion: v1
    items:
    - apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        annotations:
          alb.ingress.kubernetes.io/scheme: internal
          alb.ingress.kubernetes.io/target-type: ip
          kubernetes.io/ingress.class: alb
        name: prometheus-operator-grafana
        namespace: monitoring
      spec:
        rules:
        - host: grafana.tools.example.com
          http:
            paths:
            - backend:
                serviceName: prometheus-operator-grafana
                servicePort: 80
              path: /
      status:
        loadBalancer:
          ingress:
          - hostname: internal-monitoring-promet-1234567.us-east-1.elb.amazonaws.com

The EKS cluster is pretty much vanilla and I have installed aws-alb-ingress-controller and aws-external-dns. I saw some references to ENIs in the docs and was wondering whether they are necessary. It seems like the number of IPs that can be exposed per node is quite low.
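
For context on the ENI references, a sketch not from the thread itself: with the AWS VPC CNI, every pod IP is a VPC-routable secondary IP on one of the node's ENIs, which is exactly what lets the ALB target pod IPs directly with target-type: ip. It is also why the per-node IP budget feels low; the commonly cited max-pods formula is:

```python
# Max pods per node under the AWS VPC CNI: each ENI carries one primary IP
# for the node plus secondary IPs that are handed out to pods, so:
#   max_pods = num_enis * (ips_per_eni - 1) + 2
# (the +2 covers host-networking pods such as aws-node and kube-proxy).
def max_pods(num_enis: int, ips_per_eni: int) -> int:
    return num_enis * (ips_per_eni - 1) + 2

# Example: an m5.large supports 3 ENIs with 10 IPv4 addresses each.
print(max_pods(3, 10))  # 29
```

This is why instance type choice directly caps how many ip-mode targets a node can host.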

agaudreault commented 4 years ago

So the unhealthy check was caused by the path defaulting to / in the Ingress. The listener rule created on the ALB accepted requests only for grafana.tools.example.com with path / and returned 404 for the redirect to /login.

Changing the path in the Ingress to /* fixed the routing issue, and adding the alb.ingress.kubernetes.io/healthcheck-path: /api/health annotation makes the Grafana health check return healthy.
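
To illustrate why / returned 404 while /* works: ALB listener-rule path patterns support the wildcards * and ?, and fnmatch-style matching is a close approximation (a rough simulation, not the ALB's actual matcher):

```python
from fnmatch import fnmatchcase

# Rough approximation of ALB listener-rule path patterns, which support the
# wildcards '*' (zero or more characters) and '?' (exactly one character).
# Unlike Ingress pathType rules, these are matched against the whole path.
def alb_path_matches(pattern: str, request_path: str) -> bool:
    return fnmatchcase(request_path, pattern)

print(alb_path_matches("/", "/login"))   # False: "/" matches only "/" itself
print(alb_path_matches("/*", "/login"))  # True: the wildcard covers the redirect
```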

So using target-type: ip with ClusterIP seems to work.

And for the single ALB configuration, I will keep track of https://github.com/kubernetes-sigs/aws-alb-ingress-controller/issues/914.

eskp commented 3 years ago

Thanks for reporting back with the solution @agaudreault-jive, helped me out!

PoojaHoney commented 1 year ago

Hi,

I was trying to create an ALB with a k8s Ingress on AWS. I set the Service type to ClusterIP and changed the target-type in the Ingress to ip, but it is still not working for me.

This is my deployment file: (screenshot)

This is my ingress file: (screenshot)

This is the output when I run the kubectl get ingress command: (screenshot)

Can you please help me out with this?

omidraha commented 1 year ago

I have the same issue with Pulumi.

The target-type of ingress is set to ip, and the type of service is set to ClusterIP.

The Load balancer file:

    kubernetes.helm.v3.Chart(
        "lb",
        kubernetes.helm.v3.ChartOpts(
            chart="aws-load-balancer-controller",
            fetch_opts=kubernetes.helm.v3.FetchOpts(
                repo="https://aws.github.io/eks-charts"
            ),
            namespace=namespace.metadata["name"],
            values={
                "logLevel": "debug",
                "region": "us-west-2",
                "replicaCount": "1",
                "serviceAccount": {
                    "name": "aws-lb-controller-serviceaccount",
                    "create": False,
                },
                "vpcId": vpc.vpc_id,
                "clusterName": cluster_name,
                "podLabels": {
                    "app": "aws-lb-controller"
                },
                "autoDiscoverAwsRegion": "true",
                "autoDiscoverAwsVpcID": "true",
                "keepTLSSecret": True,
            },
        ),
        pulumi.ResourceOptions(
            provider=provider,
            parent=namespace
        )
    )

The Ingress file:


    kubernetes.networking.v1.Ingress(
        "ingress",
        metadata=kubernetes.meta.v1.ObjectMetaArgs(
            name='ingress',
            namespace=namespace.metadata["name"],
            annotations={
                "kubernetes.io/ingress.class": "alb",
                "alb.ingress.kubernetes.io/target-type": "ip",
                "alb.ingress.kubernetes.io/scheme": "internet-facing",
                'external-dns.alpha.kubernetes.io/hostname': 'app1.example.com,app2.example.com',
                'alb.ingress.kubernetes.io/certificate-arn': arn,
                'alb.ingress.kubernetes.io/listen-ports': '[{"HTTPS":443}, {"HTTP":80}]',
                'alb.ingress.kubernetes.io/ssl-redirect': '443',
                'alb.ingress.kubernetes.io/load-balancer-attributes': 'idle_timeout.timeout_seconds=600',
                'alb.ingress.kubernetes.io/target-group-attributes':
                    'stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=60'
            },
            labels={
                'app': 'ingress'
            },
        ),
        spec=kubernetes.networking.v1.IngressSpecArgs(
            rules=[
                kubernetes.networking.v1.IngressRuleArgs(
                    host='app1.example.com',
                    http=kubernetes.networking.v1.HTTPIngressRuleValueArgs(
                        paths=[
                            kubernetes.networking.v1.HTTPIngressPathArgs(
                                path="/*",
                                path_type="Prefix",
                                backend=kubernetes.networking.v1.IngressBackendArgs(
                                    service=kubernetes.networking.v1.IngressServiceBackendArgs(
                                        name=service_app_01.metadata.name,
                                        port=kubernetes.networking.v1.ServiceBackendPortArgs(
                                            number=80,
                                        ),
                                    ),
                                ),
                            ),
                        ],
                    ),
                ),
                kubernetes.networking.v1.IngressRuleArgs(
                    host='app2.example.com',
                    http=kubernetes.networking.v1.HTTPIngressRuleValueArgs(
                        paths=[
                            kubernetes.networking.v1.HTTPIngressPathArgs(
                                path="/*",
                                path_type="Prefix",
                                backend=kubernetes.networking.v1.IngressBackendArgs(
                                    service=kubernetes.networking.v1.IngressServiceBackendArgs(
                                        name=service_app_02.metadata.name,
                                        port=kubernetes.networking.v1.ServiceBackendPortArgs(
                                            number=80,
                                        ),
                                    ),
                                ),
                            ),
                        ],
                    ),
                )
            ],
        ),
        opts=pulumi.ResourceOptions(provider=provider)
    )
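
One thing worth checking in the Ingress above (an observation, not a confirmed fix): with networking.k8s.io/v1, pathType: Prefix already gives prefix semantics, and the Kubernetes spec defines Prefix matching element-wise, so a literal path of /* does not behave like a wildcard there. A sketch of the spec's Prefix semantics, using a hypothetical helper prefix_matches:

```python
# Sketch of Kubernetes pathType "Prefix" semantics: the rule path and the
# request path are split on "/" and compared element by element, so "/"
# matches everything while "/*" only matches a literal first element "*".
def prefix_matches(rule_path: str, request_path: str) -> bool:
    rule = [e for e in rule_path.split("/") if e]
    request = [e for e in request_path.split("/") if e]
    return request[: len(rule)] == rule

print(prefix_matches("/", "/login"))    # True
print(prefix_matches("/*", "/login"))   # False: "/*" is not a wildcard here
```

So with the v2 controller, either use path="/" with path_type="Prefix", or keep "/*" and switch to path_type="ImplementationSpecific", which passes the pattern through to the ALB as-is.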

The Service file:


    def create_service_app(provider):
        srv = kubernetes.core.v1.Service(
            "srv",
            metadata=metadata_1,
            spec=kubernetes.core.v1.ServiceSpecArgs(
                type="ClusterIP",
                ports=[kubernetes.core.v1.ServicePortArgs(
                    port=80,
                    target_port="http",
                    protocol="TCP",
                )],
                selector=app_labels_1,
            ),
            opts=pulumi.ResourceOptions(provider=provider)
        )
        return srv

The Deployment file:


    def create_deployment_app(provider):
        dep = kubernetes.apps.v1.Deployment(
            "dep",
            metadata=metadata_1,
            spec=kubernetes.apps.v1.DeploymentSpecArgs(
                replicas=1,
                selector=kubernetes.meta.v1.LabelSelectorArgs(
                    match_labels=app_labels_1,
                ),
                template=kubernetes.core.v1.PodTemplateSpecArgs(
                    metadata=metadata_1,
                    spec=kubernetes.core.v1.PodSpecArgs(
                        containers=[kubernetes.core.v1.ContainerArgs(
                            name=app_name_1,
                            image="nginxdemos/hello",
                            ports=[kubernetes.core.v1.ContainerPortArgs(
                                name="http",
                                container_port=80,
                            )],
                        )],
                    ),
                ),
            ),
            opts=pulumi.ResourceOptions(provider=provider))
        return dep
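
Since the Service above selects pods via app_labels_1 and targets the named port http, a quick pre-deploy sanity sketch can confirm both links (the label values here are hypothetical placeholders, not from the thread):

```python
# Sanity sketch: a Service matches pods whose labels include every key/value
# in its selector, and a named targetPort must match a containerPort name.
def selector_matches(selector: dict, pod_labels: dict) -> bool:
    return all(pod_labels.get(k) == v for k, v in selector.items())

app_labels_1 = {"app": "app-01"}   # hypothetical stand-in for app_labels_1
container_ports = {"http": 80}     # name -> containerPort from the Deployment

assert selector_matches(app_labels_1, app_labels_1)  # Service finds the pods
assert container_ports.get("http") == 80             # targetPort "http" resolves
print("service/deployment wiring is consistent")
```

If either check fails in the real manifests, the target group ends up with zero registered IP targets, which looks exactly like "not working" from the ALB side.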

Any help?