AlexanderLieret opened this pull request 2 years ago
Hello,
thanks for your pull request. I'm a little confused about how your change relates to the previously merged PR 'Add dualstack service support #187'. I'm sorry, but I don't have a k3s cluster to test on. Which combinations of LoadBalancers are needed, and what does the result look like? Maybe you could post your service description from Kubernetes so I understand the difference better? I just want to make sure that I understand it, so that I can add some form of documentation on how to configure LoadBalancers for this chart.

> I just want to make sure that I do understand it so I might be able to add some form of documentation on how to configure LoadBalancers for this chart.
I hope the following explanation will help with this.
My setup consists of a single Ubuntu VM running a default k3s installation in dual-stack mode. The IP addresses are `192.168.122.17` and `fd17::2`.
This minimal configuration shows the problem in action. For k3s, `mixedService` must be off.
The dual-stack feature implemented by #187 creates separate services for each IP version. In this scenario it creates six individual services of type LoadBalancer. Depending on the load balancer implementation, this is necessary.
`pihole.yaml`:

```yaml
dualStack:
  enabled: true

serviceDns:
  mixedService: false
  type: LoadBalancer

serviceDhcp:
  type: LoadBalancer
```
```shell
$ helm install my-pihole -f pihole.yaml mojo2600/pihole
[...]
$ kubectl get pods,services
NAME                                     READY   STATUS    RESTARTS   AGE
pod/svclb-my-pihole-dns-udp-sbmxb        0/1     Pending   0          4m1s
pod/svclb-my-pihole-dns-tcp-ipv6-p9xkw   0/1     Pending   0          4m
pod/svclb-my-pihole-dhcp-8cfdq           0/1     Pending   0          4m
pod/svclb-my-pihole-dhcp-ivp6-9tt5t      1/1     Running   0          4m1s
pod/svclb-my-pihole-dns-udp-ipv6-j6x5f   1/1     Running   0          4m1s
pod/svclb-my-pihole-dns-tcp-bcthd        1/1     Running   0          4m1s
pod/my-pihole-9475685d-t76lt             1/1     Running   0          4m

NAME                             TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
service/kubernetes               ClusterIP      10.43.0.1       <none>           443/TCP          21d
service/my-pihole-web            ClusterIP      10.43.77.36     <none>           80/TCP,443/TCP   4m1s
service/my-pihole-dns-udp        LoadBalancer   10.43.37.52     <pending>        53:30873/UDP     4m1s
service/my-pihole-dns-tcp-ipv6   LoadBalancer   fd17:43::5cc6   <pending>        53:31312/TCP     4m1s
service/my-pihole-dhcp           LoadBalancer   10.43.124.203   <pending>        67:30036/UDP     4m1s
service/my-pihole-dhcp-ivp6      LoadBalancer   fd17:43::1a5    fd17::2          67:32142/UDP     4m1s
service/my-pihole-dns-udp-ipv6   LoadBalancer   fd17:43::e8b    fd17::2          53:32414/UDP     4m1s
service/my-pihole-dns-tcp        LoadBalancer   10.43.218.101   192.168.122.17   53:32168/TCP     4m1s
```
With only a single node, each port (such as 53/udp) can only be bound by one load balancer. The other load balancers wait indefinitely for a free port to bind to.
```shell
$ kubectl describe pod/svclb-my-pihole-dns-udp-sbmxb
[...]
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  5m     default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Warning  FailedScheduling  3m45s  default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
```
DNS resolution is not working, as indicated by the reported status.
Using the dual-stack load balancer feature from this PR, we create a single LB per port. For backwards compatibility, the config value is disabled by default.
`pihole.yaml`:

```yaml
dualStack:
  enabled: true
  loadBalancer: true

serviceDns:
  mixedService: false
  type: LoadBalancer

serviceDhcp:
  type: LoadBalancer
```
```shell
$ helm install my-pihole -f pihole.yaml ./pihole-kubernetes/charts/pihole
[...]
$ kubectl get pods,services
NAME                                READY   STATUS    RESTARTS   AGE
pod/svclb-my-pihole-dns-udp-llwp6   1/1     Running   0          78s
pod/svclb-my-pihole-dns-tcp-xc9hs   1/1     Running   0          77s
pod/svclb-my-pihole-dhcp-74nbw      1/1     Running   0          77s
pod/my-pihole-79f567bb6b-tcxg2      1/1     Running   0          77s

NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP              PORT(S)          AGE
service/kubernetes          ClusterIP      10.43.0.1       <none>                   443/TCP          21d
service/my-pihole-web       ClusterIP      10.43.192.250   <none>                   80/TCP,443/TCP   78s
service/my-pihole-dns-udp   LoadBalancer   10.43.47.51     192.168.122.17,fd17::2   53:30749/UDP     78s
service/my-pihole-dns-tcp   LoadBalancer   10.43.198.1     192.168.122.17,fd17::2   53:31005/TCP     78s
service/my-pihole-dhcp      LoadBalancer   10.43.170.9     192.168.122.17,fd17::2   67:30152/UDP     78s
```
All load balancers are running and are bound to the correct IP addresses. DNS resolution works as expected.
What load balancer implementation are you using? Is it the default k3s one? For MetalLB, my earlier PR works, because MetalLB allows mounting multiple LBs on one port. Since the last MetalLB release, it also supports true dual-stack LBs.
Apart from this, I think that your implementation isn't actually working, because the `spec.loadBalancerIP` field can't be used for dual-stack services and will be deprecated in the future.
Additional information for MetalLB:

> MetalLB supports `spec.loadBalancerIP` and a custom `metallb.universe.tf/loadBalancerIPs` annotation. The annotation also supports a comma-separated list of IPs to be used in case of dual-stack services. Please note that `spec.loadBalancerIP` is planned to be deprecated in the Kubernetes APIs.
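For illustration, a Service using the MetalLB annotation described above might look like the following sketch. The service name, selector, and IP addresses are placeholders taken from this thread's test setup, not from the actual chart templates:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-pihole-dns-udp  # placeholder name
  annotations:
    # comma-separated IPv4 and IPv6 addresses for a dual-stack LB (MetalLB-specific)
    metallb.universe.tf/loadBalancerIPs: 192.168.122.17,fd17::2
spec:
  type: LoadBalancer
  selector:
    app: pihole  # placeholder selector
  ports:
    - name: dns-udp
      port: 53
      protocol: UDP
```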
I use the default integrated load balancer of k3s.
> Apart from this I think that your implementation isn't actually working because the `spec.loadBalancerIP` field can't be used for dualstack services
It does work for me.
The soon-to-be-deprecated `spec.loadBalancerIP` field is used in the current templates. Changing to the new field would address your concern and clean up my code.
FYI, I've opened #214 to support dual-stack LB services and remove `spec.loadBalancerIP`.
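A dual-stack LoadBalancer Service without `spec.loadBalancerIP` could be sketched roughly as follows, using the standard Kubernetes dual-stack fields `ipFamilyPolicy` and `ipFamilies`. This is an assumption about the general approach, not the actual template from #214; name and selector are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-pihole-dns-udp  # placeholder name
spec:
  type: LoadBalancer
  # standard Kubernetes dual-stack fields, replacing spec.loadBalancerIP
  ipFamilyPolicy: RequireDualStack
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: pihole  # placeholder selector
  ports:
    - name: dns-udp
      port: 53
      protocol: UDP
```

With `RequireDualStack`, the API server rejects the Service on clusters that are not dual-stack enabled; `PreferDualStack` would fall back to a single IP family instead.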
Hey @DerRockWolf @AlexanderLieret, should we combine this PR and #202 and do one breaking change that adds this feature and documents the change for everyone who wants to upgrade? How could we achieve this?
@MoJo2600 Sure! Although I don't think there is anything to combine if we do it as a breaking change (i.e. remove the separate-LBs feature). I've implemented this in #214.
While trying to deploy this Helm chart on k3s in a dual-stack setup, I noticed an error. The integrated load balancer requires a single load balancer service per port and does not work with the split services. The external IP stays `<pending>` because no free port is available. This PR fixes the issue by adding a config option to create dual-stack load balancer services.

A minimal working example on k3s is:
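Presumably this refers to the values file shown earlier in the thread; repeated here as a sketch of the minimal configuration:

```yaml
dualStack:
  enabled: true
  loadBalancer: true

serviceDns:
  mixedService: false
  type: LoadBalancer

serviceDhcp:
  type: LoadBalancer
```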