gyselroth / kube-icinga

Monitor kubernetes services / resources using icinga2 (including autodiscovery support)
MIT License

LoadBalancer services using node port instead of target port #21

Closed · davidasnider closed this issue 5 years ago

davidasnider commented 5 years ago

Describe the bug

Using MetalLB on Raspberry Pi and x86 servers, load-balanced services end up with an Icinga service definition that uses the NodePort instead of the targetPort. The NodePort is assigned but not actually used.

To Reproduce

Create a service based on MetalLB such as this (a manifest sketch that would reproduce it follows the output below):

$ kubectl -n icinga describe service icinga-server
Name:                     icinga-server
Namespace:                icinga
Labels:                   <none>
Annotations:              kube-icinga/host: icinga-sec.thesniderpad.com
                          kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"kube-icinga/host":"icinga-sec.thesniderpad.com","metallb.universe.tf/allow...
                          metallb.universe.tf/allow-shared-ip: icinga
Selector:                 app=icinga-server
Type:                     LoadBalancer
IP:                       10.108.186.186
IP:                       10.9.9.206
LoadBalancer Ingress:     10.9.9.206
Port:                     api  5665/TCP
TargetPort:               5665/TCP
NodePort:                 api  31657/TCP
Endpoints:                10.244.6.31:5665
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason        Age                From                Message
  ----    ------        ----               ----                -------
  Normal  IPAllocated   54m                metallb-controller  Assigned IP "10.9.9.206"
  Normal  nodeAssigned  31m (x7 over 53m)  metallb-speaker     announcing from node "ia01"
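
For reference, a manifest along the following lines would produce the Service shown above (a sketch: the annotation values are taken from the describe output, the rest of the spec is assumed):

kubectl -n icinga apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: icinga-server
  namespace: icinga
  annotations:
    kube-icinga/host: icinga-sec.thesniderpad.com
    metallb.universe.tf/allow-shared-ip: icinga
spec:
  type: LoadBalancer
  selector:
    app: icinga-server
  ports:
    - name: api
      port: 5665
      targetPort: 5665
      protocol: TCP
EOF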

In the above example, the container is listening on the TargetPort/Endpoint, not the NodePort. This can be validated by running the following commands from a container inside the cluster:

Failure example using the cluster IP and NodePort

curl -k -s -u user:pass 'https://10.108.186.186:31657/v1'

< nothing was returned >

Success example using the cluster IP and TargetPort

root@icinga-web:/# curl -k -s -u user:pass 'https://10.108.186.186:5665/v1'
<html><head><title>Icinga 2</title></head><h1>Hello from Icinga 2 (Version: r2.10.5-1)!</h1><p>You are authenticated as <b>root</b>. Your user has the following permissions:</p> <ul><li>*</li></ul><p>More information about API requests is available in the <a href="https://docs.icinga.com/icinga2/latest" target="_blank">documentation</a>.</p></html>root@icinga-web:/#

Success example using the endpoint IP

curl -k -s -u user:pass 'https://10.244.6.31:5665/v1'
<html><head><title>Icinga 2</title></head><h1>Hello from Icinga 2 (Version: r2.10.5-1)!</h1><p>You are authenticated as <b>root</b>. Your user has the following permissions:</p> <ul><li>*</li></ul><p>More information about API requests is available in the <a href="https://docs.icinga.com/icinga2/latest" target="_blank">documentation</a>.</p></html>root@icinga-web:/#

Expected behavior

The service should be created in Icinga using the IP and targetPort, or the endpoint.
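
Concretely, the check should target the targetPort rather than the NodePort. A hand-rolled equivalent via the Icinga2 REST API would look roughly like this (a sketch: the host and service names are illustrative, not kube-icinga's exact output):

# Expected: a tcp check against the targetPort (5665); the buggy behavior
# instead produces a check against the NodePort (31657).
curl -k -s -u user:pass -X PUT \
  'https://localhost:5665/v1/objects/services/icinga-sec.thesniderpad.com!api' \
  -H 'Accept: application/json' \
  -d '{"attrs": {"check_command": "tcp", "vars.tcp_port": 5665}}'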

Environment

Additional context

none

raffis commented 5 years ago

@davidasnider Thanks for reporting. I've released a new alpha, v2.1.0-alpha3. Would you mind testing your case with that version? I don't have access to a kube cluster with load balancer services right now.

davidasnider commented 5 years ago

Yes, that looks like it took care of it, but there appears to be a new problem. Before, when using the environment variable KUBERNETES_VOLUMES_HOSTNAME, a host record was automatically created. Now that no longer appears to be the case: the host record is not created and must be created manually. Is this by design?
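
For context, setting that variable on the kube-icinga deployment would look something like this (the deployment name and value here are assumed):

# Hypothetical: adjust the deployment name and value to your setup.
kubectl -n icinga set env deployment/kube-icinga \
  KUBERNETES_VOLUMES_HOSTNAME=icinga-sec.thesniderpad.com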

raffis commented 5 years ago

> Yes, that looks like it took care of it, but there appears to be a new problem. Before, when using the environment variable KUBERNETES_VOLUMES_HOSTNAME, a host record was automatically created. Now that no longer appears to be the case: the host record is not created and must be created manually. Is this by design?

Thanks for testing. No, that's an issue in alpha3 from #22; it will be fixed in the next alpha release (today).

raffis commented 5 years ago

Mhm, I guess I was too quick to respond; I can't reproduce this here. Did you wait at least 30s? Can you see the host object on your Icinga instance (ls -l /var/lib/icinga2/api/packages/_api/*/conf.d/hosts)? If the object is there, does Icinga report any startup errors? If not, can you provide the kube-icinga logs?
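
Spelled out, those checks would be something like this (assuming the default Icinga2 paths):

# List the host objects synced via the API package "_api":
ls -l /var/lib/icinga2/api/packages/_api/*/conf.d/hosts

# Validate the full configuration, including the synced API packages:
icinga2 daemon -C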

davidasnider commented 5 years ago

OK, I think I've narrowed down the issue: it happens when the Icinga server restarts. From Icinga's config validation output:

/var/lib/icinga2/api/packages/_api/e904485a-6609-4254-925b-162bf0c0a6c8/conf.d/services/kubernetes-loadbalancer-services!rundeck-rundeck-tcp%3A80.conf(8):  vars["_kubernetes"] = true
/var/lib/icinga2/api/packages/_api/e904485a-6609-4254-925b-162bf0c0a6c8/conf.d/services/kubernetes-loadbalancer-services!rundeck-rundeck-tcp%3A80.conf(9):  vars["kubernetes"] = {

[2019-08-31 14:47:20 +0000] critical/config: Error: Validation failed for object 'kubernetes-loadbalancer-services!remote-shell-remote-shell-tcp:2222' of type 'Service'; Attribute 'host_name': Object 'kubernetes-loadbalancer-services' of type 'Host' does not exist.
Location: in /var/lib/icinga2/api/packages/_api/e904485a-6609-4254-925b-162bf0c0a6c8/conf.d/services/kubernetes-loadbalancer-services!remote-shell-remote-shell-tcp%3A2222.conf: 7:2-7:47
/var/lib/icinga2/api/packages/_api/e904485a-6609-4254-925b-162bf0c0a6c8/conf.d/services/kubernetes-loadbalancer-services!remote-shell-remote-shell-tcp%3A2222.conf(5):  display_name = "remote-shell-remote-shell-tcp:2222"
/var/lib/icinga2/api/packages/_api/e904485a-6609-4254-925b-162bf0c0a6c8/conf.d/services/kubernetes-loadbalancer-services!remote-shell-remote-shell-tcp%3A2222.conf(6):  groups = [ "remote-shell" ]
/var/lib/icinga2/api/packages/_api/e904485a-6609-4254-925b-162bf0c0a6c8/conf.d/services/kubernetes-loadbalancer-services!remote-shell-remote-shell-tcp%3A2222.conf(7):  host_name = "kubernetes-loadbalancer-services"
                                                                                                                                                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
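
As a manual workaround, the missing Host object could be recreated through the Icinga2 REST API, roughly like this (a sketch: kube-icinga normally creates this object itself, and the dummy check_command is only a placeholder):

# Hypothetical workaround: create the Host the service definitions reference.
curl -k -s -u user:pass -X PUT \
  'https://localhost:5665/v1/objects/hosts/kubernetes-loadbalancer-services' \
  -H 'Accept: application/json' \
  -d '{"attrs": {"check_command": "dummy"}}'
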
davidasnider commented 5 years ago

OK, what's interesting is that in the hosts directory, 'kubernetes-loadbalancer-services' doesn't exist:

root@shiraz:/mnt/SEAGATE_4TB_SLOT5/k8s/icinga/icinga-server-pvc/_api/d1feef3b-1096-4faf-8c3a-f9013d7c061f/conf.d/hosts # ls
ia02.conf   r302.conf   r304.conf   r306.conf
r301.conf   r303.conf   r305.conf

davidasnider commented 5 years ago

So, this is a completely different issue from what was previously logged; I'll close this and open a new one.

davidasnider commented 5 years ago

Just to follow up: after upgrading Icinga, the files were correctly created. I think it was a bug where the API calls weren't properly generating the files. Thanks for all the help!