k3s-io / k3s

Lightweight Kubernetes
https://k3s.io
Apache License 2.0

Traefik ingress controller doesn't listen on ports 80, 443, and 8080 on the host, but on a random NodePort #1414

Closed thetruechar closed 2 years ago

thetruechar commented 4 years ago

Version: k3s version v1.0.0 (18bd921c)

Describe the bug The Traefik ingress controller doesn't listen on ports 80, 443, and 8080 on the host, but on a random NodePort.

To Reproduce Install v1.0.0.

Expected behavior The Traefik ingress controller will use ports 80, 443, and 8080 on the host

Actual behavior The Traefik ingress controller listens on a NodePort (like 30579):

kubectl get svc --namespace=kube-system
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                                     AGE
kube-dns         ClusterIP      10.43.0.10      <none>          53/UDP,53/TCP,9153/TCP                      97m
metrics-server   ClusterIP      10.43.162.222   <none>          443/TCP                                     97m
traefik          LoadBalancer   10.43.20.185    172.16.24.138   80:30579/TCP,443:30051/TCP,8080:32535/TCP   96m

dabio commented 4 years ago

traefik LoadBalancer 10.43.20.185 172.16.24.138 80:30579/TCP,443:30051/TCP,8080:32535/TCP 96m

The traefik service is listening on ports 80, 443 and 8080. You should be able to access 172.16.24.138:80, 172.16.24.138:443 and 172.16.24.138:8080.

thetruechar commented 4 years ago

@dabio No, I tried... only 30579 works for HTTP... 172.16.24.138 is the host IP.

brandond commented 4 years ago

Are your iptables rules broken or something? What OS is this on?

thetruechar commented 4 years ago

Ubuntu 18.04, a brand-new ECS instance in Alibaba Cloud (similar to AWS EC2).

davidnuzik commented 4 years ago

Is this issue reproducible? What about on another platform? This should be working.

thetruechar commented 4 years ago

@davidnuzik Can you tell me why this should be working? I found that k3s is not listening on 80 and 443, so how can this work?

davidnuzik commented 4 years ago

Based on our suite of tests against Ubuntu 18.04 and CentOS 7 this should work. I would review firewall rules, etc. You mentioned cloud instances (Alibaba ECS, like AWS EC2) -- has the security group been set up correctly?

thetruechar commented 4 years ago

I installed k3s again with Traefik disabled, and installed the nginx-ingress Helm chart with NodePorts 30080 and 30443.

nginx-ingress-controller        LoadBalancer   10.43.101.91    172.16.55.78   80:30080/TCP,443:30443/TCP   9h

I found that k3s-serve is listening on port 30080 but not 80.

root@testing-k3s-master:~# lsof -i :30080
COMMAND    PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
k3s-serve 2975 root  255u  IPv6  35407      0t0  TCP *:30080 (LISTEN)

But I can still reach the ingress on port 80, so how does k3s achieve this? Via iptables?
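For anyone wanting to see the mechanism, the kube-proxy NAT rules for the NodePort can be inspected directly. This is only an illustrative sketch; chain names and rule comments vary per cluster, and 30080 just matches the nginx-ingress example above.

# Illustrative only: list the kube-proxy NodePort rules and anything
# referencing the nginx-ingress service (chain names vary per cluster).
sudo iptables -t nat -S KUBE-NODEPORTS | grep 30080
sudo iptables -t nat -S | grep nginx-ingress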

iam-TJ commented 4 years ago

Confirming this too: the external interface is listening on the random port number, not the service port number.

$ kubectl get svc --namespace=kube-system
NAME                 TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
kube-dns             ClusterIP      10.43.0.10     <none>        53/UDP,53/TCP,9153/TCP       19h
metrics-server       ClusterIP      10.43.56.99    <none>        443/TCP                      19h
traefik-prometheus   ClusterIP      10.43.218.74   <none>        9100/TCP                     19h
traefik              LoadBalancer   10.43.2.147    10.127.0.1    80:30046/TCP,443:30259/TCP   19h

On the master:

# ss -tnlp sport = 443
State           Recv-Q           Send-Q                     Local Address:Port                      Peer Address:Port           Process           
root@elloe01:~# ss -tnlp sport = 30259
State            Recv-Q           Send-Q                     Local Address:Port                      Peer Address:Port          Process           
LISTEN           0                4096                                   *:30259                                *:*              users:(("k3s-server",pid=16866,fd=284))

On one of the workers:

# ss -tnlp sport = 443
State           Recv-Q           Send-Q                     Local Address:Port                      Peer Address:Port           Process           
root@innovation00:~# ss -tnlp sport = 30259
State            Recv-Q           Send-Q                     Local Address:Port                      Peer Address:Port          Process           
LISTEN           0                4096                                   *:30259                                *:*              users:(("k3s-agent",pid=27726,fd=179))
brandond commented 4 years ago

Yes, this is how kubernetes (specifically kube-proxy) works. The container listens on a random node port, and the control plane uses iptables rules to masquerade traffic from the loadbalancer address and port to the appropriate node port.

brandond@seago:~$ kubectl get svc --namespace=traefik
NAME      TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                                     AGE
traefik   LoadBalancer   10.43.25.120   10.0.3.80     80:31417/TCP,443:31119/TCP,9000:31462/TCP   57d
brandond@seago:~$ sudo iptables -vnL -t nat | grep traefik/traefik:websecure
    0     0 KUBE-XLB-LODJXQNF3DWSNB7B  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* traefik/traefik:websecure loadbalancer IP */
    0     0 KUBE-XLB-LODJXQNF3DWSNB7B  all  --  *      *       10.0.3.80            0.0.0.0/0            /* traefik/traefik:websecure loadbalancer IP */
    0     0 KUBE-MARK-DROP  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* traefik/traefik:websecure loadbalancer IP */
    0     0 KUBE-MARK-MASQ  tcp  --  *      *       127.0.0.0/8          0.0.0.0/0            /* traefik/traefik:websecure */ tcp dpt:31119
    0     0 KUBE-XLB-LODJXQNF3DWSNB7B  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* traefik/traefik:websecure */ tcp dpt:31119
    0     0 KUBE-MARK-MASQ  tcp  --  *      *      !10.42.0.0/16         10.43.25.120         /* traefik/traefik:websecure cluster IP */ tcp dpt:443
    0     0 KUBE-SVC-LODJXQNF3DWSNB7B  tcp  --  *      *       0.0.0.0/0            10.43.25.120         /* traefik/traefik:websecure cluster IP */ tcp dpt:443
    0     0 KUBE-FW-LODJXQNF3DWSNB7B  tcp  --  *      *       0.0.0.0/0            10.0.3.80            /* traefik/traefik:websecure loadbalancer IP */ tcp dpt:443
    0     0 KUBE-MARK-MASQ  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* masquerade LOCAL traffic for traefik/traefik:websecure LB IP */ ADDRTYPE match src-type LOCAL
    0     0 KUBE-SVC-LODJXQNF3DWSNB7B  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* route LOCAL traffic for traefik/traefik:websecure LB IP to service chain */ ADDRTYPE match src-type LOCAL
    0     0 KUBE-MARK-DROP  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* traefik/traefik:websecure has no local endpoints */

If this isn't working for you, then you've probably got something wrong with your iptables configuration - such as running on a distro that uses iptables-nft and not installing the iptables-legacy tools.
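For what it's worth, a quick way to check which iptables variant a host is using (illustrative commands, not from the original report): the version string includes "(nf_tables)" on nft-based distros, and Debian/Ubuntu expose the active backend via the alternatives system.

# Which backend is the iptables binary using? "(nf_tables)" means iptables-nft.
iptables --version
# On Debian/Ubuntu, show which alternative is currently selected:
update-alternatives --display iptables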

iam-TJ commented 4 years ago

In our case we're on Ubuntu 20.04 and using the 'legacy' iptables, and the various KUBE-* chains and rules are in place. Despite all that, it's unclear why exposed services cannot be reached from the public IP addresses of the workers.

I suspect the original reporter has the same problem we're seeing, and reached the same conclusion we did when the expected behaviour wasn't observed.

Like the original reporter, we can only connect to the exposed services from outside the cluster via the random port numbers, not the well-known service ports.

iam-TJ commented 4 years ago

I think I've figured out our issue.

Our master(s) are deployed on the office LAN. Workers are in remote data centres. Because the cluster needs to be on its own subnet to avoid PNAT/routing issues, we've created a WireGuard VPN that the cluster uses on 10.127.0.0/16, with the master on 10.127.0.1.

On the master we can connect to traefik using HTTP (tested using telnet), but from the workers that fails (strange, since the workers can reach the master via the 10.127.0.0/16 subnet).

root@innovation00:~# nmap 10.127.0.1
Starting Nmap 7.80 ( https://nmap.org ) at 2020-03-27 07:44 UTC
Nmap scan report for elloe01.k3s (10.127.0.1)
Host is up (0.027s latency).
Not shown: 997 closed ports
PORT    STATE    SERVICE
22/tcp  open     ssh
80/tcp  filtered http
443/tcp filtered https

However, it is clear that our issue has to do with our 'IoT' edge network requirements rather than a problem with traefik.

imba-tjd commented 4 years ago

I'm a new learner and I don't quite understand. I previously had nginx listening on 80 and 443, and now I'm trying to install k3s. Does "The Traefik ingress controller will use ports 80, 443, and 8080 on the host" mean they are not compatible? It's really puzzling. Neither of them gave an error. curl 127.0.0.1 simply hung. ss -tlnp showed nginx listening on 0.0.0.0:80 normally. curl <myip> still returned a response even after systemctl stop k3s. All of this persisted until a reboot; killing and restarting nginx or k3s didn't help.

JasLin commented 4 years ago

If you run traefik as root, it can bind 80 and 443. If you want to run the traefik container as a non-root user and bind to 80, it's not easy to do that -- see this issue.
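For reference, the usual approach is to grant the container the NET_BIND_SERVICE capability. The fragment below is only a hedged sketch of a container securityContext (values are illustrative, not the traefik chart's actual defaults); depending on the runtime, a non-root process may additionally need ambient or file capabilities (setcap on the binary) for this to take effect, which is what makes it tricky.

# Illustrative container securityContext for binding ports < 1024 as non-root.
securityContext:
  runAsNonRoot: true
  runAsUser: 65532          # example UID
  capabilities:
    drop: ["ALL"]
    add: ["NET_BIND_SERVICE"]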

magixus commented 3 years ago

I have the same issue on a fresh install (kubeadm way) on Ubuntu 20.04.1 LTS.

Traefik is using the normal ports 80, 443, and 8080, forwarded to random NodePorts (screenshot attached).

The hosts are configured correctly, no mistake (screenshot attached).

Accessing the normal ports from inside the cluster doesn't work (screenshot attached).

Those normal ports are closed from the outside (screenshot attached).

The iptables modules and configuration are loaded correctly, no mistake (screenshot attached).

Even though everything is set correctly, I can't access the traefik ports on my cluster. I hope the community can give us a clue. I don't think it has anything to do with k8s; I think it has something to do with iptables.

fox-md commented 3 years ago

Try to setup hostPort for web and websecure ports. That will create dnat rules in iptables.
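A hedged sketch of what that might look like on the traefik container (port names and container ports are assumptions based on common traefik defaults, not taken from this cluster):

# Illustrative container ports fragment with hostPort set for web/websecure.
ports:
- name: web
  containerPort: 8000
  hostPort: 80
  protocol: TCP
- name: websecure
  containerPort: 8443
  hostPort: 443
  protocol: TCP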

magixus commented 3 years ago

Try to setup hostPort for web and websecure ports. That will create dnat rules in iptables.

@fox-md You can't simply set up hostPort... I am using a service of type NodePort; hostPort has to be set on the pods, not on the service... and it isn't working in either case (I've tried).

Thank you.

fox-md commented 3 years ago

@fox-md You can't simply set up hostPort... I am using a service of type NodePort; hostPort has to be set on the pods, not on the service... and it isn't working in either case (I've tried).

Thank you.

Hi @magixus, my understanding is that putting hostPort into the picture creates DNAT rules that help requests against ports 80 and 443 reach the ingress pod:

[root@kubeworker01 ~]# iptables-save | grep "CNI-DN" | grep "to-destination"
-A CNI-DN-051e5bdafb630d2c22b59 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.36.0.4:8000
-A CNI-DN-051e5bdafb630d2c22b59 -p tcp -m tcp --dport 443 -j DNAT --to-destination 10.36.0.4:8443

Without hostPort, I do not quite understand what would make iptables create NAT rules to route requests to the ingress-controller service.

magixus commented 3 years ago

Hi @magixus, my understanding is that putting hostPort into the picture creates DNAT rules that help requests against ports 80 and 443 reach the ingress pod:

[root@kubeworker01 ~]# iptables-save | grep "CNI-DN" | grep "to-destination"
-A CNI-DN-051e5bdafb630d2c22b59 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.36.0.4:8000
-A CNI-DN-051e5bdafb630d2c22b59 -p tcp -m tcp --dport 443 -j DNAT --to-destination 10.36.0.4:8443

Without hostPort, I do not quite understand what would make iptables create NAT rules to route requests to the ingress-controller service.

To be honest I didn't understand your reply, but I can tell you that I've tried setting hostPort as well, along with my configurations, and it didn't work. No DNAT routing was created, unfortunately. Anyway, I worked around it with a startup script like the following:

#!/bin/bash

#sleep 2m
TRAEFIK_IP=$(kubectl get pods -n kube-system -o wide | grep traefik | awk '{print $6}')

# check IP PREROUTING 
PREROUTING_IP=$(iptables -t nat -vnL PREROUTING --line-numbers | sed '/^num\|^$\|^Chain/d' | wc -l)
if [ "$PREROUTING_IP" == 4 ] ; then
    # update IP DNAT prerouting rules
    iptables -R PREROUTING 3 -t nat -i ens160 -p tcp --dport 80 -j DNAT --to $TRAEFIK_IP:80
    iptables -R PREROUTING 4 -t nat -i ens160 -p tcp --dport 443 -j DNAT --to $TRAEFIK_IP:443
elif [ "$PREROUTING_IP" == 2 ]; then 
    # create DNAT prerouting rules if they don't exist
    iptables -A PREROUTING -t nat -i ens160 -p tcp --dport 80 -j DNAT --to $TRAEFIK_IP:80
    iptables -A PREROUTING -t nat -i ens160 -p tcp --dport 443 -j DNAT --to $TRAEFIK_IP:443
fi

# check IP FORWARD 
FORWARD_IP=$(iptables -vnL FORWARD --line-numbers | sed '/^num\|^$\|^Chain/d' | wc -l)
if [ "$FORWARD_IP" == 12 ] ; then
    # update FORWARD accept rules
    iptables -R FORWARD 11 -p tcp -d $TRAEFIK_IP --dport 80 -j ACCEPT
    iptables -R FORWARD 12 -p tcp -d $TRAEFIK_IP --dport 443 -j ACCEPT
elif [ "$FORWARD_IP" == 10 ]; then 
    # create FORWARD accept rules if they don't exist
    iptables -A FORWARD -p tcp -d $TRAEFIK_IP --dport 80 -j ACCEPT
    iptables -A FORWARD -p tcp -d $TRAEFIK_IP --dport 443 -j ACCEPT
fi

The script basically checks the existing PREROUTING and FORWARD rules and updates or creates them accordingly (note that the interface name ens160 is specific to my hosts).

magixus commented 3 years ago

Guys, I found a very interesting workaround: set the traefik service spec type to LoadBalancer:

apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: kube-system
spec:
  type: LoadBalancer

Then apply the config

If it shows <pending> for a long time, just patch the service with the following command and you'll be OK:

kubectl patch svc traefik -n kube-system -p '{"spec":{"externalIPs":["<VPS-or-VM-local-IP>"]}}'

You should then see the SVC getting the external IP.

Looking back at your system, you'll find the services exposed on the master node.

idealtech-i3dlabs commented 3 years ago

OMG --- This fixed it for me!!!

Guys, I found a very interesting workaround: set the traefik service spec type to LoadBalancer:

apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: kube-system
spec:
  type: LoadBalancer

Then apply the config

If it shows <pending> for a long time, just patch the service with the following command and you'll be OK:

kubectl patch svc traefik -n kube-system -p '{"spec":{"externalIPs":["<VPS-or-VM-local-IP>"]}}'

You should then see the SVC getting the external IP.

Looking back at your system, you'll find the services exposed on the master node.

pojntfx commented 3 years ago

This workaround works for me as well (clean install of Debian 10; also tested on a clean install of Fedora Server 33):

root@jakob-lenovog710:~# kubectl patch svc traefik -n kube-system -p '{"spec":{"externalIPs":["192.168.178.54"]}}'
service/traefik patched
root@jakob-lenovog710:~# 
root@jakob-lenovog710:~# ss -tlnp
State   Recv-Q   Send-Q      Local Address:Port      Peer Address:Port                                          
LISTEN  0        128               0.0.0.0:32004          0.0.0.0:*      users:(("k3s-server",pid=572,fd=247))  
LISTEN  0        128             127.0.0.1:10248          0.0.0.0:*      users:(("k3s-server",pid=572,fd=237))  
LISTEN  0        128             127.0.0.1:10249          0.0.0.0:*      users:(("k3s-server",pid=572,fd=197))  
LISTEN  0        128             127.0.0.1:10251          0.0.0.0:*      users:(("k3s-server",pid=572,fd=196))  
LISTEN  0        128             127.0.0.1:10252          0.0.0.0:*      users:(("k3s-server",pid=572,fd=203))  
LISTEN  0        128             127.0.0.1:6444           0.0.0.0:*      users:(("k3s-server",pid=572,fd=16))   
LISTEN  0        128               0.0.0.0:30575          0.0.0.0:*      users:(("k3s-server",pid=572,fd=250))  
LISTEN  0        128        192.168.178.54:80             0.0.0.0:*      users:(("k3s-server",pid=572,fd=208))  
LISTEN  0        128             127.0.0.1:10256          0.0.0.0:*      users:(("k3s-server",pid=572,fd=199))  
LISTEN  0        128               0.0.0.0:22             0.0.0.0:*      users:(("sshd",pid=617,fd=3))          
LISTEN  0        128             127.0.0.1:10010          0.0.0.0:*      users:(("containerd",pid=632,fd=16))   
LISTEN  0        128        192.168.178.54:443            0.0.0.0:*      users:(("k3s-server",pid=572,fd=316))  
LISTEN  0        128                     *:10250                *:*      users:(("k3s-server",pid=572,fd=239))  
LISTEN  0        128                     *:6443                 *:*      users:(("k3s-server",pid=572,fd=7))    
LISTEN  0        128                  [::]:22                [::]:*      users:(("sshd",pid=617,fd=4))          
root@jakob-lenovog710:~# kubectl get svc -n kube-system
NAME                 TYPE           CLUSTER-IP     EXTERNAL-IP                     PORT(S)                      AGE
kube-dns             ClusterIP      10.43.0.10     <none>                          53/UDP,53/TCP,9153/TCP       9m24s
metrics-server       ClusterIP      10.43.93.188   <none>                          443/TCP                      9m23s
traefik-prometheus   ClusterIP      10.43.208.88   <none>                          9100/TCP                     8m13s
traefik              LoadBalancer   10.43.57.101   192.168.178.54,192.168.178.54   80:32004/TCP,443:30575/TCP   8m13s
brandond commented 3 years ago

You shouldn't need to do that - creating the LB pods and then setting the externalIP field based on the addresses of the nodes running the pods is exactly what the servicelb controller is supposed to do. See how you've got the same IP listed twice now?

Have you checked on the svclb pods to ensure that they're running properly?
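Illustrative commands for that check (pod names have random suffixes and will differ per cluster):

# List the servicelb pods and see which nodes they run on.
kubectl get pods -n kube-system -o wide | grep svclb
# Inspect events and logs of one of them (name is an example placeholder).
kubectl describe pod -n kube-system svclb-traefik-xxxxx
kubectl logs -n kube-system svclb-traefik-xxxxx --all-containers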

pojntfx commented 3 years ago

Exactly; the pods all run properly, but it only works once the IP has been added twice. The svclb pods run fine, both before and after adding the IP a second time. It just doesn't seem to bind() to 80/443, but rather to two random (though consistent between restarts) node ports ...

brandond commented 3 years ago

Yes, that is how it works. It doesn't bind to the node port directly. It binds to a random port in the pod, and the ServiceLB pod programs iptables rules to forward the traffic from the host port to the pod port. It is not expected that you will see it actually listening on the host ports.
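A hedged way to see those forwarding rules rather than looking for listening sockets (chain names depend on the CNI portmap plugin in use, so this is only a sketch):

# hostPort forwarding shows up as DNAT rules in the nat table, not as sockets.
sudo iptables-save -t nat | grep -i hostport
# With the standard CNI portmap plugin the rules live in CNI-HOSTPORT-DNAT:
sudo iptables -t nat -L CNI-HOSTPORT-DNAT -n 2>/dev/null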

pojntfx commented 3 years ago

@brandond Thanks for the insight! I naively assumed that they would just bind so I evaluated with iproute2's ss and didn't even test reachability from the outside. Works as expected now:

# Test VM@DO
root@debian-s-1vcpu-1gb-ams3-01:~# curl -sfL https://get.k3s.io | sh -
root@debian-s-1vcpu-1gb-ams3-01:~# k3s kubectl get pod -A
NAMESPACE     NAME                                      READY   STATUS              RESTARTS   AGE
kube-system   helm-install-traefik-9d7bl                0/1     ContainerCreating   0          20s
kube-system   local-path-provisioner-5ff76fc89d-slrbk   0/1     ContainerCreating   0          19s
kube-system   metrics-server-86cbb8457f-4vx7s           1/1     Running             0          19s
kube-system   coredns-854c77959c-92p5b                  0/1     Running             0          19s
# Local Linux system
[pojntfx@felixs-xps13 ~]$ curl 188.166.84.48
curl: (7) Failed to connect to 188.166.84.48 port 80: Connection refused
# Test VM@DO
root@debian-s-1vcpu-1gb-ams3-01:~# k3s kubectl get pod -A
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   metrics-server-86cbb8457f-4vx7s           1/1     Running     0          40s
kube-system   local-path-provisioner-5ff76fc89d-slrbk   1/1     Running     0          40s
kube-system   coredns-854c77959c-92p5b                  1/1     Running     0          40s
kube-system   helm-install-traefik-9d7bl                0/1     Completed   0          41s
kube-system   traefik-6f9cbd9bd4-dmtss                  0/1     Running     0          17s
kube-system   svclb-traefik-9n4rv                       2/2     Running     0          17s
# Local Linux system
[pojntfx@felixs-xps13 ~]$ curl 188.166.84.48
404 page not found
stale[bot] commented 3 years ago

This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 180 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.

kerberjg commented 3 years ago

I think I was able to figure it out, at least if Calico is involved. It essentially comes down to this: https://github.com/projectcalico/calico/issues/4842

Basically, Calico disables IP forwarding, preventing svclb from functioning the way @brandond describes. Since the svclb pods fail to set up forwarding, they get stuck in a CrashLoopBackOff state.
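One quick check in this situation (my own suggestion, not from the Calico issue) is whether IP forwarding is actually enabled on the node and inside the svclb containers; the pod name below is a placeholder:

# On the node:
sysctl net.ipv4.ip_forward
# Inside an svclb pod (replace the placeholder with a real pod name):
kubectl exec -n kube-system svclb-traefik-xxxxx -- cat /proc/sys/net/ipv4/ip_forward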

My solution:

  1. Download https://docs.projectcalico.org/master/manifests/custom-resources.yaml
  2. Add the following lines:
    spec:
      calicoNetwork:
        containerIPForwarding: Enabled
  3. Apply your custom-resources.yaml to the tigera-operator namespace
  4. Use @magixus 's external IP patch for good measure (probably not necessary) -> https://github.com/k3s-io/k3s/issues/1414#issuecomment-770038893
  5. In the kube-system namespace, remove the pods starting with svclb so they get recreated (example commands below)
  6. Profit!

You will not see the port listening in ss -tlnp because, as @brandond said, it just forwards from 80/443 to the random NodePorts using iptables. If everything else is configured correctly and you connect to your cluster's external IP on ports 80/443, you should see your service :)
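For concreteness, steps 3 and 5 above might look like the following; this is a sketch assuming the edited file is custom-resources.yaml and that the svclb pods live in kube-system:

# Step 3: apply the edited Installation resource for the tigera operator.
kubectl apply -f custom-resources.yaml
# Step 5: delete the svclb pods so they are recreated with forwarding enabled.
kubectl -n kube-system get pods -o name | grep svclb | xargs kubectl -n kube-system delete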

stale[bot] commented 2 years ago

This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 180 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.

buzanits commented 2 years ago

The trick with kubectl patch svc traefik -n kube-system -p '{"spec":{"externalIPs":["192.168.178.54"]}}' also did it for me (with a different IP, of course). I think the main problem is that I do not have an svclb-traefik-* pod. I installed traefik with Helm and also tried reinstalling it, but this pod never appeared.
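If it helps anyone hitting the same thing, a hedged way to check whether the k3s servicelb controller created anything for the service, and whether it was disabled at install time (commands are illustrative):

# Did servicelb create an svclb DaemonSet/pod for traefik?
kubectl get daemonset,pods -n kube-system | grep svclb
# Was k3s started with servicelb disabled (--disable servicelb / --no-deploy servicelb)?
ps aux | grep 'k3s server'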

WowMuchName commented 1 year ago

It seems this issue still exists. After performing an update on Ubuntu 22 today, networking was broken for me. I had an FTP server using a NodePort and several services behind the built-in traefik ingress. None of it worked anymore, showing the symptoms outlined in this issue.

The solutions outlined in this thread did not resolve the problem. I needed to uninstall k3s via the script, which returned the iptables rules back to defaults, then reinstall k3s. Now it works again. Not the biggest of deals, since I had everything encapsulated as Helm charts and only run k3s on a workstation, but it makes me hesitant to use k3s for production workloads.

Would it be possible to introduce a self-healing mechanism that examines and repairs network-related configuration like iptables upon server start? Or maybe a script / command-line switch that does this on demand, without having to lose all deployed workloads?

sdekna commented 1 year ago

Ubuntu 22

I have the same issue... tried uninstalling using the provided script and reinstalling, but that doesn't seem to work. Traefik still listens on a random port.

VladoPortos commented 1 year ago

Same issue I guess:

kube-system traefik LoadBalancer 10.43.15.188 10.201.60.25 80:30307/TCP,443:31074/TCP 29d

Where the IP of the server is 10.201.60.25

The service traefik is set like this:

spec:                                              
  allocateLoadBalancerNodePorts: true              
  clusterIP: 10.43.15.188                          
  clusterIPs:                              
  - 10.43.15.188                           
  externalTrafficPolicy: Cluster           
  internalTrafficPolicy: Cluster           
  ipFamilies:                              
  - IPv4                                   
  ipFamilyPolicy: PreferDualStack          
  ports:                             
  - name: web                        
    nodePort: 30307              
    port: 80                     
    protocol: TCP                
    targetPort: web              
  - name: websecure              
    nodePort: 31074              
    port: 443                    
    protocol: TCP                
    targetPort: websecure
  selector:              
    app.kubernetes.io/instance: traefik-kube-system
    app.kubernetes.io/name: traefik                
  sessionAffinity: None                            
  type: LoadBalancer                               
status:                                            
  loadBalancer:                                    
    ingress:                                       
    - ip: 10.201.60.25 

There are no IngressRoute setup except the default one:

[root@euc1-awx1 ingress]# kubectl get IngressRoute --all-namespaces
NAMESPACE     NAME                AGE
kube-system   traefik-dashboard   29d

[root@euc1-awx1 ingress]# curl https://10.201.60.25:31074
curl: (56) Received HTTP code 403 from proxy after CONNECT

curl https://10.201.60.25:3307 returns HTML with "Access Denied".

curl http://10.201.60.25:80 (or https on 443) just times out.

brandond commented 1 year ago

I am going to lock this, as there seems to be ongoing confusion in this thread about how ports work in Kubernetes.

  1. The traefik process listens on the configured ports within the pod, not on the host itself. There are exceptions to this rule if the pod uses host network, but traefik does not.
  2. The traefik service is assigned random node ports that pass traffic from the service to the pods.
  3. The load-balancer controller pods (svclb) use hostPorts to occupy the requested service ports (80/443), and add iptables rules to forward traffic from the service ports on the host to the service, and then on to the pods; a quick way to see this in practice is sketched below.
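A hedged illustration of point 3 (resource names are examples; they will differ per cluster and k3s version):

# The svclb pods request the service ports as hostPorts:
kubectl -n kube-system get pods -o wide | grep svclb-traefik
kubectl -n kube-system get pod <svclb-traefik-pod> \
  -o jsonpath='{.spec.containers[*].ports[*].hostPort}'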

If anyone has questions about configuring traefik ingress resources, please check the Traefik Community Forums, or open a new discussion thread.