onzo-operry opened this issue 7 years ago
@onzo-operry please specify the version of mate you are running - nevermind, I see you are running 0.5.1; please try the master branch. First, I would suggest you run the latest master code without `--sync-only` enabled and check if the crash persists. And please format your original question :)
As for why it is not created: do you have a prod.onzo.cloud hosted zone in your AWS account? You should be getting two RRS created. So, to double-confirm: neither of these two is created?
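For reference, the "two RRS" per endpoint are an Alias A record pointing at the load balancer and a TXT ownership record whose value is `"mate:<group-id>"` (visible in the debug output later in this thread). A sketch of the resulting Route53 change batch; the zone ID and ELB name here are placeholders, not real values:

```json
{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "annotated-nginx.prod.onzo.cloud.",
        "Type": "A",
        "AliasTarget": {
          "DNSName": "example-1234567890.eu-west-1.elb.amazonaws.com.",
          "EvaluateTargetHealth": true,
          "HostedZoneId": "Z00000000000000"
        }
      }
    },
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "annotated-nginx.prod.onzo.cloud.",
        "Type": "TXT",
        "TTL": 300,
        "ResourceRecords": [{ "Value": "\"mate:my-cluster\"" }]
      }
    }
  ]
}
```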
Sorry about the formatting, it's been a long day... I will try to compile off master tomorrow, but I can confirm that.
So, running locally from:

```
commit 4a5b4eb8f0ba9162e24d576fb4f46f3f8c8f8392
Merge: 4f86371 7dc09b2
Date: Tue Feb 14 11:12:17 2017 +0100

    Merge pull request #84 from linki/annotations

    make mate play well with other ext dns controllers
```

```
mate --producer=kubernetes --kubernetes-format={{.Namespace}}-{{.Name}}.prodb.onzo.cloud --consumer=aws --aws-record-group-id=my-cluster --debug --kubernetes-server=http://127.0.0.1:8001
```
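As an aside, the `--kubernetes-format` flag above is a Go template. A minimal sketch of how such a template expands, assuming the template context simply carries `Namespace` and `Name` fields (not verified against mate's source):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// svc is a stand-in for the object mate hands to --kubernetes-format;
// the exact field set mate exposes is an assumption here.
type svc struct {
	Namespace, Name string
}

// formatDNS expands a Go template such as
// "{{.Namespace}}-{{.Name}}.prodb.onzo.cloud" against a service.
func formatDNS(format string, s svc) (string, error) {
	tmpl, err := template.New("dns").Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, s); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	name, _ := formatDNS("{{.Namespace}}-{{.Name}}.prodb.onzo.cloud",
		svc{Namespace: "infra", Name: "infra-ingress-ingress"})
	fmt.Println(name) // infra-infra-ingress-ingress.prodb.onzo.cloud
}
```

This matches the names seen in the logs below, e.g. `infra/infra-ingress-ingress` becoming `infra-infra-ingress-ingress.prodb.onzo.cloud.`.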
```
INFO[0000] [AWS] Listening for events...
DEBU[0000] [Synchronize] Sleeping for 1m0s...
INFO[0000] ADDED: kube-system/default-http-backend
WARN[0000] [Service] The load balancer of service 'kube-system/default-http-backend' does not have any ingress.
INFO[0000] ADDED: default/shop-svc
WARN[0000] [Service] The load balancer of service 'default/shop-svc' does not have any ingress.
INFO[0000] ADDED: infra/idb-1-influxdb
WARN[0000] [Service] The load balancer of service 'infra/idb-1-influxdb' does not have any ingress.
INFO[0000] ADDED: kube-system/tiller-deploy
WARN[0000] [Service] The load balancer of service 'kube-system/tiller-deploy' does not have any ingress.
INFO[0000] ADDED: infra/default-http-backend
WARN[0000] [Service] The load balancer of service 'infra/default-http-backend' does not have any ingress.
INFO[0000] ADDED: default/kubernetes
WARN[0000] [Service] The load balancer of service 'default/kubernetes' does not have any ingress.
INFO[0000] ADDED: kube-system/kubernetes-dashboard
WARN[0000] [Service] The load balancer of service 'kube-system/kubernetes-dashboard' does not have any ingress.
INFO[0000] ADDED: infra/elasticsearch-discovery
WARN[0000] [Service] The load balancer of service 'infra/elasticsearch-discovery' does not have any ingress.
INFO[0000] ADDED: infra/logstash-internal
WARN[0000] [Service] The load balancer of service 'infra/logstash-internal' does not have any ingress.
INFO[0000] ADDED: infra/kibana-service
WARN[0000] [Service] The load balancer of service 'infra/kibana-service' does not have any ingress.
INFO[0000] ADDED: kube-system/kube-dns
WARN[0000] [Service] The load balancer of service 'kube-system/kube-dns' does not have any ingress.
INFO[0000] ADDED: default/ingress
INFO[0000] [AWS] Processing (a1.prodb.onzo.cloud., 10.55.53.140, )
INFO[0000] ADDED: infra/elasticsearch-internal
WARN[0000] [Service] The load balancer of service 'infra/elasticsearch-internal' does not have any ingress.
INFO[0000] ADDED: infra/infra-ingress-ingress
DEBU[0001] Getting a page of ALBs of length: 0
DEBU[0001] Getting a page of ELBs of length: 4
ERRO[0001] Canonical Zone ID for endpoint: is not found
INFO[0001] [AWS] Processing (infra-infra-ingress-ingress.prodb.onzo.cloud., , afc504560f92911e69d4xxxxxxxxxxx-11111111111.eu-west-1.elb.amazonaws.com)
ERRO[0001] Failed to process endpoint. Alias could not be constructed for: a1.prodb.onzo.cloud.:.
INFO[0001] ADDED: default/ingress-service
DEBU[0001] Getting a page of ALBs of length: 0
DEBU[0001] Getting a page of ELBs of length: 4
INFO[0001] [AWS] Processing (default-ingress-service.prodb.onzo.cloud., , a672c7c16f93c11e6a46exxxxxxxxx-111111111111.eu-west-1.elb.amazonaws.com)
DEBU[0001] Getting a page of ALBs of length: 0
DEBU[0001] Getting a page of ELBs of length: 4
^CINFO[0018] Shutdown signal received, exiting...
INFO[0018] [Ingress] Exited monitoring loop.
INFO[0018] [Synchronize] Exited synchronization loop.
INFO[0018] [Kubernetes] Exited monitoring loop.
INFO[0018] [AWS] Exited consuming loop.
INFO[0018] [Service] Exited monitoring loop.
INFO[0018] [Noop] Exited monitoring loop.
```
This has created 2 A records and 2 TXT records: `infra-infra-ingress-ingress.prodb.onzo.cloud` and `default-ingress-service.prodb.onzo.cloud`.
@onzo-operry `annotated-nginx.prod.onzo.cloud` - this one will be created if you have a prod.onzo.cloud hosted zone as well (note the missing "b" in "prod").
I will create and deploy a new release which should fix the crashing problem - I believe it is already fixed in the master branch, but it was not released for some reason :(
So, the annotation was commented out for the above run. I put it back in and applied the YAML with the correct "prodb", and have pasted the output below (this is running off master).

`annotated-nginx.prodb.onzo.cloud.` has been created (A & TXT), but `a1.prodb.onzo.cloud` has not.

Question: do the ingress services need to be annotated for this to work?
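For reference, the ingress-based endpoints in these logs appear to be derived from `spec.rules[].host` rather than from an annotation: the `a1.prodb.onzo.cloud` endpoint matches the host rule of the Ingress resource dumped later in this thread. A minimal sketch of that Ingress, reconstructed from that JSON:

```yaml
# Reconstructed sketch of the default/ingress resource from this thread;
# mate picks up spec.rules[].host as the endpoint DNS name.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: default
spec:
  rules:
  - host: a1.prodb.onzo.cloud
    http:
      paths:
      - backend:
          serviceName: shop-svc
          servicePort: 80
```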
```
INFO[0000] [AWS] Listening for events...
DEBU[0000] [Synchronize] Sleeping for 1m0s...
INFO[0000] ADDED: kube-system/tiller-deploy
WARN[0000] [Service] The load balancer of service 'kube-system/tiller-deploy' does not have any ingress.
INFO[0000] ADDED: infra/elasticsearch-discovery
WARN[0000] [Service] The load balancer of service 'infra/elasticsearch-discovery' does not have any ingress.
INFO[0000] ADDED: infra/infra-ingress-ingress
INFO[0000] ADDED: default/ingress-service
INFO[0000] [AWS] Processing (infra-infra-ingress-ingress.prodb.onzo.cloud., , afc504560f92911e69d410a82-1111111111.eu-west-1.elb.amazonaws.com)
INFO[0000] ADDED: default/ingress
DEBU[0001] Getting a page of ALBs of length: 0
DEBU[0001] Getting a page of ELBs of length: 4
WARN[0001] Record [name=infra-infra-ingress-ingress.prodb.onzo.cloud.] could not be created, another record with same name already exists
INFO[0001] [AWS] Processing (annotated-nginx.prodb.onzo.cloud, , a672c7c16f93c11e6a46e-1111111.eu-west-1.elb.amazonaws.com)
INFO[0001] ADDED: default/shop-svc
WARN[0001] [Service] The load balancer of service 'default/shop-svc' does not have any ingress.
INFO[0001] ADDED: kube-system/kube-dns
WARN[0001] [Service] The load balancer of service 'kube-system/kube-dns' does not have any ingress.
INFO[0001] ADDED: kube-system/kubernetes-dashboard
WARN[0001] [Service] The load balancer of service 'kube-system/kubernetes-dashboard' does not have any ingress.
INFO[0001] ADDED: default/kubernetes
WARN[0001] [Service] The load balancer of service 'default/kubernetes' does not have any ingress.
INFO[0001] ADDED: kube-system/default-http-backend
WARN[0001] [Service] The load balancer of service 'kube-system/default-http-backend' does not have any ingress.
INFO[0001] ADDED: infra/idb-1-influxdb
WARN[0001] [Service] The load balancer of service 'infra/idb-1-influxdb' does not have any ingress.
INFO[0001] ADDED: infra/kibana-service
WARN[0001] [Service] The load balancer of service 'infra/kibana-service' does not have any ingress.
INFO[0001] ADDED: infra/default-http-backend
WARN[0001] [Service] The load balancer of service 'infra/default-http-backend' does not have any ingress.
INFO[0001] ADDED: infra/elasticsearch-internal
WARN[0001] [Service] The load balancer of service 'infra/elasticsearch-internal' does not have any ingress.
INFO[0001] ADDED: infra/logstash-internal
WARN[0001] [Service] The load balancer of service 'infra/logstash-internal' does not have any ingress.
DEBU[0002] Getting a page of ALBs of length: 0
DEBU[0002] Getting a page of ELBs of length: 4
INFO[0002] [AWS] Processing (a1.prodb.onzo.cloud., 10.55.53.140, )
DEBU[0002] Getting a page of ALBs of length: 0
DEBU[0002] Getting a page of ELBs of length: 4
ERRO[0002] Canonical Zone ID for endpoint: is not found
ERRO[0002] Failed to process endpoint. Alias could not be constructed for: a1.prodb.onzo.cloud.:.
INFO[0060] [Synchronize] Synchronizing DNS entries...
INFO[0060] ADDED: infra/logstash-internal
WARN[0060] [Service] The load balancer of service 'infra/logstash-internal' does not have any ingress.
INFO[0060] ADDED: infra/infra-ingress-ingress
INFO[0060] [AWS] Processing (infra-infra-ingress-ingress.prodb.onzo.cloud., , afc504560f92911e69d410e-1111111111.eu-west-1.elb.amazonaws.com)
INFO[0060] ADDED: kube-system/kube-dns
WARN[0060] [Service] The load balancer of service 'kube-system/kube-dns' does not have any ingress.
INFO[0060] ADDED: infra/kibana-service
WARN[0060] [Service] The load balancer of service 'infra/kibana-service' does not have any ingress.
INFO[0060] ADDED: default/ingress-service
WARN[0060] [Service] The load balancer of service 'default/kubernetes' does not have any ingress.
WARN[0060] [Service] The load balancer of service 'default/shop-svc' does not have any ingress.
WARN[0060] [Service] The load balancer of service 'infra/default-http-backend' does not have any ingress.
WARN[0060] [Service] The load balancer of service 'infra/elasticsearch-discovery' does not have any ingress.
WARN[0060] [Service] The load balancer of service 'infra/elasticsearch-internal' does not have any ingress.
WARN[0060] [Service] The load balancer of service 'infra/idb-1-influxdb' does not have any ingress.
WARN[0060] [Service] The load balancer of service 'infra/kibana-service' does not have any ingress.
WARN[0060] [Service] The load balancer of service 'infra/logstash-internal' does not have any ingress.
WARN[0060] [Service] The load balancer of service 'kube-system/default-http-backend' does not have any ingress.
WARN[0060] [Service] The load balancer of service 'kube-system/kube-dns' does not have any ingress.
WARN[0060] [Service] The load balancer of service 'kube-system/kubernetes-dashboard' does not have any ingress.
WARN[0060] [Service] The load balancer of service 'kube-system/tiller-deploy' does not have any ingress.
INFO[0060] ADDED: default/ingress
DEBU[0060] Getting a page of ALBs of length: 0
DEBU[0060] Getting a page of ELBs of length: 4
ERRO[0060] Canonical Zone ID for endpoint: is not found
DEBU[0061] Getting a page of ALBs of length: 0
DEBU[0061] Getting a page of ELBs of length: 4
WARN[0061] Record [name=infra-infra-ingress-ingress.prodb.onzo.cloud.] could not be created, another record with same name already exists
INFO[0061] [AWS] Processing (annotated-nginx.prodb.onzo.cloud, , a672c7c16f93c11e6a4-1111111111.eu-west-1.elb.amazonaws.com)
INFO[0061] ADDED: default/kubernetes
WARN[0061] [Service] The load balancer of service 'default/kubernetes' does not have any ingress.
INFO[0061] ADDED: kube-system/tiller-deploy
WARN[0061] [Service] The load balancer of service 'kube-system/tiller-deploy' does not have any ingress.
INFO[0061] ADDED: infra/elasticsearch-internal
WARN[0061] [Service] The load balancer of service 'infra/elasticsearch-internal' does not have any ingress.
INFO[0061] ADDED: infra/default-http-backend
WARN[0061] [Service] The load balancer of service 'infra/default-http-backend' does not have any ingress.
INFO[0061] ADDED: default/shop-svc
WARN[0061] [Service] The load balancer of service 'default/shop-svc' does not have any ingress.
INFO[0061] ADDED: kube-system/kubernetes-dashboard
WARN[0061] [Service] The load balancer of service 'kube-system/kubernetes-dashboard' does not have any ingress.
INFO[0061] ADDED: infra/elasticsearch-discovery
WARN[0061] [Service] The load balancer of service 'infra/elasticsearch-discovery' does not have any ingress.
INFO[0061] ADDED: kube-system/default-http-backend
WARN[0061] [Service] The load balancer of service 'kube-system/default-http-backend' does not have any ingress.
INFO[0061] ADDED: infra/idb-1-influxdb
WARN[0061] [Service] The load balancer of service 'infra/idb-1-influxdb' does not have any ingress.
DEBU[0061] Getting a list of AWS RRS of length: 22
DEBU[0061] Records to be upserted: []
DEBU[0061] Records to be deleted: [{
  AliasTarget: {
    DNSName: "a672c7c16f93c11e6a46e02-11111111111.eu-west-1.elb.amazonaws.com.",
    EvaluateTargetHealth: true,
    HostedZoneId: "Z32O12XQLXXXXX"
  },
  Name: "default-ingress-service.prodb.onzo.cloud.",
  Type: "A"
} {
  Name: "default-ingress-service.prodb.onzo.cloud.",
  ResourceRecords: [{
    Value: "\"mate:my-cluster\""
  }],
  TTL: 300,
  Type: "TXT"
}]
DEBU[0061] Getting a page of ALBs of length: 0
DEBU[0061] Getting a page of ELBs of length: 4
DEBU[0061] [Synchronize] Sleeping for 1m0s...
WARN[0061] Record [name=annotated-nginx.prodb.onzo.cloud] could not be created, another record with same name already exists
INFO[0061] [AWS] Processing (a1.prodb.onzo.cloud., 10.55.53.140, )
DEBU[0061] Getting a page of ALBs of length: 0
DEBU[0061] Getting a page of ELBs of length: 4
ERRO[0061] Canonical Zone ID for endpoint: is not found
ERRO[0061] Failed to process endpoint. Alias could not be constructed for: a1.prodb.onzo.cloud.:.
```
I reckon the reason is that the ELB address is not reported on the ingress resource. Could you please try:

```
kubectl get -o json ingress ingress
```

and paste the output here. I am not really familiar with the nginx-ingress-controller, but I guess it reports back the pod cluster IP, which obviously cannot be used as the RRS target (and obviously there is no associated canonical hosted zone :P).
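The `Alias could not be constructed for: a1.prodb.onzo.cloud.:.` errors in the logs are consistent with this: the endpoint carries only an IP, and an Alias target needs a load-balancer hostname whose canonical hosted zone can be looked up. A rough sketch of that check; `endpoint` mirrors the `pkg.Endpoint` debug output below, but `aliasTarget` is an illustrative function, not mate's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// endpoint mirrors the pkg.Endpoint debug dump seen in this thread.
type endpoint struct {
	DNSName, IP, Hostname string
}

// aliasTarget returns the load-balancer hostname an Alias record could
// point at, or an error when the endpoint only carries an IP (as
// reported by nginx-ingress-controller). Sketch of the failure mode
// only; the real canonical-zone lookup queries AWS.
func aliasTarget(ep endpoint) (string, error) {
	if ep.Hostname == "" || !strings.HasSuffix(ep.Hostname, ".elb.amazonaws.com") {
		return "", fmt.Errorf("alias could not be constructed for: %s:%s", ep.DNSName, ep.Hostname)
	}
	return ep.Hostname, nil
}

func main() {
	// The nginx-ingress-controller case: only a cluster/node IP.
	_, err := aliasTarget(endpoint{DNSName: "a1.prodb.onzo.cloud.", IP: "10.55.53.140"})
	fmt.Println(err)
}
```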
So, adding a little debugging to the code, I get this for the value of the variable `ep`, which I guess mirrors the JSON below:

```
DEBU[0121] &pkg.Endpoint{DNSName:"a1.prodb.onzo.cloud.", IP:"10.55.53.140", Hostname:""}
```
```json
{
    "apiVersion": "extensions/v1beta1",
    "kind": "Ingress",
    "metadata": {
        "annotations": {
            "kubectl.kubernetes.io/last-applied-configuration": "{\"kind\":\"Ingress\",\"apiVersion\":\"extensions/v1beta1\",\"metadata\":{\"name\":\"ingress\",\"creationTimestamp\":null},\"spec\":{\"rules\":[{\"host\":\"a1.prodb.onzo.cloud\",\"http\":{\"paths\":[{\"backend\":{\"serviceName\":\"shop-svc\",\"servicePort\":80}}]}}]},\"status\":{\"loadBalancer\":{}}}"
        },
        "creationTimestamp": "2017-02-22T20:20:55Z",
        "generation": 2,
        "name": "ingress",
        "namespace": "default",
        "resourceVersion": "2251915",
        "selfLink": "/apis/extensions/v1beta1/namespaces/default/ingresses/ingress",
        "uid": "6a5f769d-f93c-11e6-9d41-0a82ab2f03e5"
    },
    "spec": {
        "rules": [
            {
                "host": "a1.prodb.onzo.cloud",
                "http": {
                    "paths": [
                        {
                            "backend": {
                                "serviceName": "shop-svc",
                                "servicePort": 80
                            }
                        }
                    ]
                }
            }
        ]
    },
    "status": {
        "loadBalancer": {
            "ingress": [
                {
                    "ip": "10.55.53.140"
                }
            ]
        }
    }
}
```
Reading #77 much earlier today made me assume that the nginx ingress would work, and I am on the same version.
> "ip": "10.55.53.140"
Yes, it is a compatibility issue with nginx-ingress-controller. Unfortunately this information is not enough to set up an Alias A record on Route53 :(
There is an alternative setup which we are using to enable ingress on our clusters; we were planning to document it here in the next few days. Basically, we use https://github.com/zalando-incubator/kube-ingress-aws-controller to provision SSL-enabled ALBs and populate the ingress resource field with the full ALB DNS address. The ALB points to the internal proxy https://github.com/zalando/skipper (running as a daemon set), which routes traffic within the cluster.
It fits perfectly with Mate and makes the setup super easy. As I said, we will document it properly, but feel free to ask questions if you have any :)
Ok, thanks man, I much appreciate your time. I will take a look at the links above and have a play.
So, thinking about it this morning: one of the things might be the fact that we use AWS with private networking, so the nodes don't have public addresses. The IP address above is a private IP of one of the k8s nodes. I was expecting mate to populate the DNS entry for the ingress with the ELB address, or is that not how it works?
@onzo-operry this is more related to the way the nginx-controller works, because the address is reported by the nginx-controller. Unfortunately, since in Route53 we only create Alias records, public/private IPs cannot be supported. This is the relevant part of the official Amazon documentation:

> An alias resource record set can only point to a CloudFront distribution, an Elastic Beanstalk environment, an ELB load balancer, an Amazon S3 bucket that is configured as a static website, or another resource record set in the same Amazon Route 53 hosted zone in which you're creating the alias resource record set.
@onzo-operry could you please try the latest release, v0.6.0, and let us know if it helped with the problem. It should create an A record pointing to the IP address specified in the ingress resource :)
v0.6.0 makes `mate` compatible with plain A records to IPs on AWS. If `nginx-controller` puts the private IP of the node into the Ingress status, then it will still not be accessible from the outside. I'm afraid this really is where `mate`'s responsibility ends.

I believe `nginx-controller` was designed to be used in circumstances where you don't have access to a cloud load balancer. Therefore, using `nginx-controller` and relying on an ELB to route traffic to your nodes kind of defeats the purpose. However, since no Amazon (A|E)LB supports hostname-based routing, this is a valid setup.
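A minimal sketch of the v0.6.0 behaviour described above: prefer an Alias when the endpoint carries a load-balancer hostname, otherwise fall back to a plain A record pointing at the reported IP. The `endpoint` and `record` types here are illustrative, not mate's actual types:

```go
package main

import "fmt"

// endpoint mirrors the pkg.Endpoint debug dump seen in this thread.
type endpoint struct{ DNSName, IP, Hostname string }

// record is a simplified Route53 change: either an Alias to a load
// balancer or, since v0.6.0, a plain A record with the literal IP.
type record struct {
	Name, Type, Target string
	Alias              bool
}

// buildRecord chooses the record shape for an endpoint: a hostname
// yields an Alias A record, an IP-only endpoint yields a plain A record.
func buildRecord(ep endpoint) record {
	if ep.Hostname != "" {
		return record{Name: ep.DNSName, Type: "A", Target: ep.Hostname, Alias: true}
	}
	return record{Name: ep.DNSName, Type: "A", Target: ep.IP, Alias: false}
}

func main() {
	// The nginx-controller case from this thread: a plain A to a node IP.
	fmt.Printf("%+v\n", buildRecord(endpoint{DNSName: "a1.prodb.onzo.cloud.", IP: "10.55.53.140"}))
}
```

Note that, as stated above, a plain A record to a private node IP resolves but is still not reachable from outside the VPC.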
Hi,
I have been banging my head against this for hours now; I'm really not sure what I am doing wrong. I am trying the simplest setup, where the ingress hosts should create Route53 records in AWS, I assume?
Any hints would be welcome.
If I remove the ingress-nginx, mate doesn't crash, but then again it doesn't really do anything around creating records either.
Here is the log output.
ingress.yaml
service.yaml
nginx-rc.yaml
dummy backend
mate.yaml
logs (running with --sync-only):
```
[operry@peek01 default]$ kubectl logs mate-2617052176-2n5ws
time="2017-02-22T21:33:00Z" level=debug msg="[Synchronize] Sleeping for 1m0s..."
time="2017-02-22T21:34:00Z" level=info msg="[Synchronize] Synchronizing DNS entries..."
time="2017-02-22T21:34:00Z" level=warning msg="[Service] The load balancer of service 'default/kubernetes' does not have any ingress."
time="2017-02-22T21:34:00Z" level=warning msg="[Service] The load balancer of service 'default/shop-svc' does not have any ingress."
time="2017-02-22T21:34:00Z" level=warning msg="[Service] The load balancer of service 'infra/default-http-backend' does not have any ingress."
time="2017-02-22T21:34:00Z" level=warning msg="[Service] The load balancer of service 'infra/elasticsearch-discovery' does not have any ingress."
time="2017-02-22T21:34:00Z" level=warning msg="[Service] The load balancer of service 'infra/elasticsearch-internal' does not have any ingress."
time="2017-02-22T21:34:00Z" level=warning msg="[Service] The load balancer of service 'infra/idb-1-influxdb' does not have any ingress."
time="2017-02-22T21:34:00Z" level=warning msg="[Service] The load balancer of service 'infra/kibana-service' does not have any ingress."
time="2017-02-22T21:34:00Z" level=warning msg="[Service] The load balancer of service 'infra/logstash-internal' does not have any ingress."
time="2017-02-22T21:34:00Z" level=warning msg="[Service] The load balancer of service 'kube-system/default-http-backend' does not have any ingress."
time="2017-02-22T21:34:00Z" level=warning msg="[Service] The load balancer of service 'kube-system/kube-dns' does not have any ingress."
time="2017-02-22T21:34:00Z" level=warning msg="[Service] The load balancer of service 'kube-system/kubernetes-dashboard' does not have any ingress."
time="2017-02-22T21:34:00Z" level=warning msg="[Service] The load balancer of service 'kube-system/tiller-deploy' does not have any ingress."
time="2017-02-22T21:34:01Z" level=debug msg="Getting a page of ALBs of length: 0"
time="2017-02-22T21:34:01Z" level=debug msg="Getting a page of ELBs of length: 4"
time="2017-02-22T21:34:01Z" level=error msg="Canonical Zone ID for endpoint: is not found"
time="2017-02-22T21:34:02Z" level=warning msg="Hosted zone for endpoint: annotated-nginx.prod.onzo.cloud. is not found. Skipping record..."
time="2017-02-22T21:34:02Z" level=debug msg="Getting a list of AWS RRS of length: 16"
time="2017-02-22T21:34:02Z" level=debug msg="Records to be upserted: [{\n AliasTarget: {\n DNSName: \"afc504560f9aa21e69d410a82ab2f03e-1278601154.eu-west-1.elb.amazonaws.com.\",\n EvaluateTargetHealth: true,\n HostedZoneId: \"Z32OAAAAAAAAAA\"\n },\n Name: \"infra-infra-ingress-ingress.prodb.onzo.cloud.\",\n Type: \"A\"\n} {\n Name: \"infra-infra-ingress-ingress.prodb.onzo.cloud.\",\n ResourceRecords: [{\n Value: \"\\"mate:my-cluster\\"\"\n }],\n TTL: 300,\n Type: \"TXT\"\n}]"
time="2017-02-22T21:34:02Z" level=debug msg="Records to be deleted: []"
time="2017-02-22T21:34:02Z" level=debug msg="[Synchronize] Sleeping for 1m0s..."
```