So the failing health check was caused by the path defaulting to / in the Ingress. The listener created on the ALB was accepting requests only for grafana.tools.example.com and /, and would return 404 for the redirect to /login. Changing the path in the Ingress to /* fixed the routing issue, and adding the alb.ingress.kubernetes.io/healthcheck-path: /api/health annotation made the Grafana health check return healthy.
So using target-type: ip with ClusterIP seems to work. As for the single-ALB configuration, I will keep track of https://github.com/kubernetes-sigs/aws-alb-ingress-controller/issues/914.
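A minimal sketch of the resulting Ingress in Pulumi Python (the grafana service name and port 3000 are assumptions here, not taken from the actual manifests):

import pulumi_kubernetes as kubernetes

kubernetes.networking.v1.Ingress(
    "grafana-ingress",
    metadata=kubernetes.meta.v1.ObjectMetaArgs(
        annotations={
            "kubernetes.io/ingress.class": "alb",
            "alb.ingress.kubernetes.io/target-type": "ip",
            # Health-check the Grafana API endpoint instead of /, which
            # redirects to /login and does not return a 2xx code.
            "alb.ingress.kubernetes.io/healthcheck-path": "/api/health",
        },
    ),
    spec=kubernetes.networking.v1.IngressSpecArgs(
        rules=[kubernetes.networking.v1.IngressRuleArgs(
            host="grafana.tools.example.com",
            http=kubernetes.networking.v1.HTTPIngressRuleValueArgs(
                paths=[kubernetes.networking.v1.HTTPIngressPathArgs(
                    # /* instead of / so the ALB listener rule matches every
                    # path, not just the bare root.
                    path="/*",
                    path_type="ImplementationSpecific",
                    backend=kubernetes.networking.v1.IngressBackendArgs(
                        service=kubernetes.networking.v1.IngressServiceBackendArgs(
                            name="grafana",  # assumed ClusterIP service name
                            port=kubernetes.networking.v1.ServiceBackendPortArgs(
                                number=3000,  # assumed Grafana port
                            ),
                        ),
                    ),
                )],
            ),
        )],
    ),
)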
Thanks for reporting back with the solution @agaudreault-jive, helped me out!
Hi,
I was trying to create an ALB with a Kubernetes Ingress on AWS. I used a Service of type ClusterIP and changed the target-type in the Ingress to ip, but it is still not working for me.
This is my deployment file:
This is my ingress file:
This is the output when I run the kubectl get ingress command:
Can you please help me out with this?
I have the same issue with Pulumi. The target-type of the Ingress is set to ip, and the type of the Service is set to ClusterIP.
The load balancer controller file:
import pulumi
import pulumi_kubernetes as kubernetes

# Install the aws-load-balancer-controller Helm chart from the EKS repo.
kubernetes.helm.v3.Chart(
    "lb",
    kubernetes.helm.v3.ChartOpts(
        chart="aws-load-balancer-controller",
        fetch_opts=kubernetes.helm.v3.FetchOpts(
            repo="https://aws.github.io/eks-charts"
        ),
        namespace=namespace.metadata["name"],
        values={
            "logLevel": "debug",
            "region": "us-west-2",
            "replicaCount": "1",
            # The service account is created separately, so the chart
            # only references it.
            "serviceAccount": {
                "name": "aws-lb-controller-serviceaccount",
                "create": False,
            },
            "vpcId": vpc.vpc_id,
            "clusterName": cluster_name,
            "podLabels": {
                "app": "aws-lb-controller"
            },
            "autoDiscoverAwsRegion": "true",
            "autoDiscoverAwsVpcID": "true",
            "keepTLSSecret": True,
        },
    ),
    opts=pulumi.ResourceOptions(
        provider=provider,
        parent=namespace,
    ),
)
The Ingress file:
# Single ALB shared by both hostnames; external-dns creates the DNS records.
kubernetes.networking.v1.Ingress(
    "ingress",
    metadata=kubernetes.meta.v1.ObjectMetaArgs(
        name="ingress",
        namespace=namespace.metadata["name"],
        annotations={
            "kubernetes.io/ingress.class": "alb",
            "alb.ingress.kubernetes.io/target-type": "ip",
            "alb.ingress.kubernetes.io/scheme": "internet-facing",
            "external-dns.alpha.kubernetes.io/hostname": "app1.example.com,app2.example.com",
            "alb.ingress.kubernetes.io/certificate-arn": arn,
            "alb.ingress.kubernetes.io/listen-ports": '[{"HTTPS":443}, {"HTTP":80}]',
            "alb.ingress.kubernetes.io/ssl-redirect": "443",
            "alb.ingress.kubernetes.io/load-balancer-attributes": "idle_timeout.timeout_seconds=600",
            "alb.ingress.kubernetes.io/target-group-attributes":
                "stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=60",
        },
        labels={
            "app": "ingress",
        },
    ),
    spec=kubernetes.networking.v1.IngressSpecArgs(
        rules=[
            # Host-based routing: each hostname forwards to its own service.
            kubernetes.networking.v1.IngressRuleArgs(
                host="app1.example.com",
                http=kubernetes.networking.v1.HTTPIngressRuleValueArgs(
                    paths=[
                        kubernetes.networking.v1.HTTPIngressPathArgs(
                            path="/*",
                            path_type="Prefix",
                            backend=kubernetes.networking.v1.IngressBackendArgs(
                                service=kubernetes.networking.v1.IngressServiceBackendArgs(
                                    name=service_app_01.metadata.name,
                                    port=kubernetes.networking.v1.ServiceBackendPortArgs(
                                        number=80,
                                    ),
                                ),
                            ),
                        ),
                    ],
                ),
            ),
            kubernetes.networking.v1.IngressRuleArgs(
                host="app2.example.com",
                http=kubernetes.networking.v1.HTTPIngressRuleValueArgs(
                    paths=[
                        kubernetes.networking.v1.HTTPIngressPathArgs(
                            path="/*",
                            path_type="Prefix",
                            backend=kubernetes.networking.v1.IngressBackendArgs(
                                service=kubernetes.networking.v1.IngressServiceBackendArgs(
                                    name=service_app_02.metadata.name,
                                    port=kubernetes.networking.v1.ServiceBackendPortArgs(
                                        number=80,
                                    ),
                                ),
                            ),
                        ),
                    ],
                ),
            ),
        ],
    ),
    opts=pulumi.ResourceOptions(provider=provider),
)
The Service file:
def create_service_app(provider):
    # ClusterIP service; target_port "http" refers to the named container
    # port in the deployment below.
    srv = kubernetes.core.v1.Service(
        "srv",
        metadata=metadata_1,
        spec=kubernetes.core.v1.ServiceSpecArgs(
            type="ClusterIP",
            ports=[kubernetes.core.v1.ServicePortArgs(
                port=80,
                target_port="http",
                protocol="TCP",
            )],
            selector=app_labels_1,
        ),
        opts=pulumi.ResourceOptions(provider=provider),
    )
    return srv
The Deployment file:
def create_deployment_app(provider):
    dep = kubernetes.apps.v1.Deployment(
        "dep",
        metadata=metadata_1,
        spec=kubernetes.apps.v1.DeploymentSpecArgs(
            replicas=1,
            selector=kubernetes.meta.v1.LabelSelectorArgs(
                match_labels=app_labels_1,
            ),
            template=kubernetes.core.v1.PodTemplateSpecArgs(
                metadata=metadata_1,
                spec=kubernetes.core.v1.PodSpecArgs(
                    containers=[kubernetes.core.v1.ContainerArgs(
                        name=app_name_1,
                        image="nginxdemos/hello",
                        # Named port referenced by the service's target_port.
                        ports=[kubernetes.core.v1.ContainerPortArgs(
                            name="http",
                            container_port=80,
                        )],
                    )],
                ),
            ),
        ),
        opts=pulumi.ResourceOptions(provider=provider),
    )
    return dep
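A minimal sketch of how these helpers are wired together (the provider construction shown here is an assumption, not part of the code above):

import pulumi_eks as eks
import pulumi_kubernetes as kubernetes

# Assumed: a provider pointing at the EKS cluster's kubeconfig.
cluster = eks.Cluster("cluster")
provider = kubernetes.Provider("eks-provider", kubeconfig=cluster.kubeconfig)

deployment = create_deployment_app(provider)
service = create_service_app(provider)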
Any help?
Hi, I am trying to configure the alb-ingress-controller and I was wondering what the recommended/production way is to use aws-alb-ingress-controller to expose pods via Ingress. Should the services be of type NodePort, LoadBalancer, or ClusterIP?
My goal is to have a single ALB with multiple Route53 A records associated to it. The ALB listener then forwards traffic to the proper target group based on the host and path. Finally, the target group forwards the request to the cluster, and the Kubernetes Ingress routes it to the pods.
When I specify the target-type: ip annotation on my Ingress object with a ClusterIP service, the target group seems to be created properly, but the instance does not seem to be healthy. My guess is that the IP 10.104.24.60 only exists in the cluster.
The EKS cluster is pretty much vanilla, and I have installed aws-alb-ingress-controller and aws-external-dns. I saw some reference to the ENI in the doc, and I was wondering if it is necessary to have it. It seems like the limit of IPs that can be exposed is quite low.
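For reference, the annotation under discussion, as it would appear on the Ingress (a minimal sketch; only the keys in question are shown):

# target-type "ip" registers pod IPs directly in the ALB target group. On
# EKS the VPC CNI assigns pods VPC-routable IPs from ENI secondary IPs,
# which is why the per-node IP limit comes up. target-type "instance"
# registers node ports instead and requires a NodePort service.
annotations = {
    "kubernetes.io/ingress.class": "alb",
    "alb.ingress.kubernetes.io/target-type": "ip",
}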