kontena / akrobateo

Akrobateo is a simple Kubernetes operator to expose in-cluster LoadBalancer services as node hostPorts using DaemonSets.
Apache License 2.0

Named ports not supported #21

Open cbeneke opened 5 years ago

cbeneke commented 5 years ago

Hi,

I tried to create a LoadBalancer with a named port in it

apiVersion: v1
kind: Service
metadata:
  name: nginx-http
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: http
  selector:
    app: nginx

But the DaemonSet then goes into a CrashLoop, because the named port does not translate correctly to iptables. Logs:

Setting up forwarding from port 80 to 10.106.7.233:http/TCP
iptables v1.6.2: Port `http' not valid

Try `iptables -h' or 'iptables --help' for more information.

As Kubernetes supports named targetPorts, this should IMHO be supported by the LB provider :)
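
For reference, a named targetPort is resolved against a containerPort name in the matching pods, e.g. (hypothetical pod spec, assuming nginx listens on 80):

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - name: http         # the Service's targetPort refers to this name
      containerPort: 80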

cbeneke commented 5 years ago

This should be handled in https://github.com/kontena/akrobateo/blob/master/pkg/controller/service/service_controller.go#L232 ff., but to me it seems that it's not an easy fix. The current implementation takes one port and deploys it on all machines with the akrobateo-lb Docker image via iptables, but when using a named port, the port in the backends may not be equal across pods. Compare https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service

Perhaps more interesting is that targetPort can be a string, referring to the name of a port in the backend Pods. The actual port number assigned to that name can be different in each backend Pod.
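
A minimal sketch of that situation, with two hypothetical pods behind the same selector:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-a
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - name: http          # 'http' resolves to 8080 in this pod
      containerPort: 8080
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-b
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - name: http          # ...and to 9090 in this one
      containerPort: 9090

The Endpoints controller resolves the name per pod, so the endpoints can end up with different port numbers, and there is no single number a static iptables rule on the node could use.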

jnummelin commented 5 years ago

Yes, it's really not an easy fix at all.

To fully handle this, Akrobateo would also have to "watch" the owning resource(s) of the pods matched by the selector and map the name to a port number from there. This gets tricky fast, as in today's Kubernetes setups there might be lots of different operators and controllers creating pods. IMO it's not sufficient to just check those name-port mappings from the pods alone, as at any given time there might be e.g. a deployment rollout in progress and thus pods with different mappings.

So, as you probably found out, the workaround is to use direct port numbers in the service.
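
E.g. (assuming the nginx container actually listens on port 80):

apiVersion: v1
kind: Service
metadata:
  name: nginx-http
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80   # numeric port instead of the name 'http'
  selector:
    app: nginx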