ovn-kubernetes / ovn-kubernetes

A robust Kubernetes networking platform
https://ovn-kubernetes.io/
Apache License 2.0

NodePort service is not accessible #636

Closed ylhyh closed 4 years ago

ylhyh commented 5 years ago

I have created a NodePort service "nginx-test" listening on port 80; see the service list:

# kubectl get svc --all-namespaces
NAMESPACE     NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP         4d
default       nginx-test   NodePort    10.105.116.76   <none>        80:80/TCP       2d1h
kube-system   kube-dns     ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   4d
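
For reference, a manifest that yields this 80:80/TCP mapping would look roughly like the sketch below (the selector label is illustrative, not taken from the cluster). Note that nodePort 80 lies outside the default 30000-32767 range, so the apiserver's --service-node-port-range was presumably widened:

```shell
# Illustrative Service manifest reproducing the 80:80/TCP mapping above.
# Written to a file so it could be applied with `kubectl apply -f`.
cat > nginx-test-svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx-test
  namespace: default
spec:
  type: NodePort
  selector:
    app: nginx-test        # illustrative label, not from the cluster
  ports:
  - port: 80               # ClusterIP port
    targetPort: 80         # container port
    nodePort: 80           # outside the default 30000-32767 range
EOF
cat nginx-test-svc.yaml
```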

Rough Kubernetes cluster info:

NODE       PHYSICAL IP
master1    172.16.126.202
master2    172.16.126.203
master3    172.16.126.204
node001    172.16.126.208
node002    172.16.126.209
...        ...
node008    172.16.126.215

I can access it through the cluster IP -> http://10.105.116.76:80:

# curl -i http://10.105.116.76:80
HTTP/1.1 200 OK
Server: nginx/1.14.2
Date: Sat, 02 Mar 2019 18:53:44 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Thu, 31 Jan 2019 23:37:45 GMT
Connection: keep-alive
ETag: "5c5386c9-264"
Accept-Ranges: bytes
...

But I cannot access it through the physical IP of any minion node (from outside the respective machine):

# curl -i http://172.16.126.209:80
curl: (7) Failed connect to 172.16.126.209:80; Connection timed out

The pod of service "nginx-test" is scheduled on node002, whose IP address is 172.16.126.209.

Ports listening on every node:

# netstat -tpln
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      4223/sshd           
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      3679/kubelet        
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      5543/kube-proxy     
tcp        0      0 127.0.0.1:43177         0.0.0.0:*               LISTEN      3679/kubelet        
tcp6       0      0 :::22                   :::*                    LISTEN      4223/sshd           
tcp6       0      0 :::10250                :::*                    LISTEN      3679/kubelet        
tcp6       0      0 :::80                   :::*                    LISTEN      5543/kube-proxy     
tcp6       0      0 :::10256                :::*                    LISTEN      5543/kube-proxy
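
It is worth noting that kube-proxy itself holds the NodePort listener on :::80 here. In shared-gateway mode ovnkube is supposed to handle NodePort steering in OVS, so kube-proxy also claiming the port may indicate two competing NodePort implementations. Extracting the owner of port 80 from a saved netstat capture (the sample line is copied verbatim from the output above):

```shell
# Find which process owns the :80 listener in a saved `netstat -tpln` capture.
cat > netstat.txt <<'EOF'
tcp6       0      0 :::80                   :::*                    LISTEN      5543/kube-proxy
EOF
awk '$4 == ":::80" {print $NF}' netstat.txt
```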

The parameters of ovnkube on every minion node (node001 as an example):

/usr/bin/ovnkube --init-node=node001 --init-gateways --nodeport --cluster-subnet=10.112.0.0/12 --service-cluster-ip-range=10.96.0.0/12 --config-file=/etc/openvswitch/ovn_k8s.conf

Load Balancers:

# ovn-nbctl lb-list
UUID                                    LB                  PROTO      VIP                 IPs
f5315290-f4e2-40e2-866d-015ca1f99cbd                        udp        10.96.0.10:53       10.112.0.3:53,10.112.1.3:53,10.112.2.3:53
5eb463e8-cc95-445c-a5e0-d78fad2a9be3                        tcp        10.105.116.76:80    10.112.4.4:80
                                                            tcp        10.96.0.10:53       10.112.0.3:53,10.112.1.3:53,10.112.2.3:53
                                                            tcp        10.96.0.1:443       172.16.126.202:6443,172.16.126.203:6443,172.16.126.204:6443

I have enabled IPVS mode for kube-proxy; see the service/real-server mapping of IPVS below. The pod IP is 10.112.4.4, scheduled on the minion node whose physical IP is 172.16.126.209:

# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.126.209:80 rr
  -> 10.112.4.4:80                Masq    1      0          0         
TCP  10.96.0.1:443 rr
  -> 172.16.126.202:6443          Masq    1      0          0         
  -> 172.16.126.203:6443          Masq    1      0          0         
  -> 172.16.126.204:6443          Masq    1      0          0         
TCP  10.96.0.10:53 rr
  -> 10.112.0.3:53                Masq    1      0          0         
  -> 10.112.1.3:53                Masq    1      0          0         
  -> 10.112.2.3:53                Masq    1      0          0         
TCP  10.105.116.76:80 rr
  -> 10.112.4.4:80                Masq    1      0          0         
TCP  10.112.4.2:80 rr
  -> 10.112.4.4:80                Masq    1      0          0         
TCP  127.0.0.1:80 rr
  -> 10.112.4.4:80                Masq    1      0          0         
UDP  10.96.0.10:53 rr
  -> 10.112.0.3:53                Masq    1      0          0         
  -> 10.112.1.3:53                Masq    1      0          0         
  -> 10.112.2.3:53                Masq    1      0          0

IPVS mapping on the other minion nodes (172.16.126.208 as an example):

# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.126.208:80 rr
TCP  10.96.0.1:443 rr
  -> 172.16.126.202:6443          Masq    1      0          0         
  -> 172.16.126.203:6443          Masq    1      0          0         
  -> 172.16.126.204:6443          Masq    1      0          0         
TCP  10.96.0.10:53 rr
  -> 10.112.0.3:53                Masq    1      0          0         
  -> 10.112.1.3:53                Masq    1      0          0         
  -> 10.112.2.3:53                Masq    1      0          0         
TCP  10.105.116.76:80 rr
  -> 10.112.4.4:80                Masq    1      0          0         
TCP  10.112.3.2:80 rr
TCP  127.0.0.1:80 rr
UDP  10.96.0.10:53 rr
  -> 10.112.0.3:53                Masq    1      0          0         
  -> 10.112.1.3:53                Masq    1      0          0         
  -> 10.112.2.3:53                Masq    1      0          0
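
One asymmetry between the two dumps is worth flagging: on node002 the node-IP virtual server 172.16.126.209:80 has the pod as a real server, while on node001 the virtual server 172.16.126.208:80 (like 10.112.3.2:80 and 127.0.0.1:80 there) has no real server at all, so IPVS on node001 would have nowhere to forward a NodePort connection even if the packet reached it. A check over the pasted output (pure text processing; no cluster access needed):

```shell
# Count real servers behind each node-IP VIP, using lines copied verbatim
# from the two `ipvsadm -Ln` dumps above.
count_real() {
  # $1 = label, $2 = real-server lines for that VIP (may be empty)
  if [ -z "$2" ]; then n=0; else n=$(printf '%s\n' "$2" | grep -c '^  ->'); fi
  printf '%s: %s real server(s)\n' "$1" "$n"
}
count_real 'node002  TCP 172.16.126.209:80' '  -> 10.112.4.4:80'
count_real 'node001  TCP 172.16.126.208:80' ''
```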

The dump-flows output seems correct:

# ovs-ofctl dump-flows breth0
 cookie=0x0, duration=7616.030s, table=0, n_packets=0, n_bytes=0, priority=100,ip,in_port="k8s-patch-breth" actions=ct(commit,zone=64000),output:eth0
 cookie=0x0, duration=7616.010s, table=0, n_packets=197515, n_bytes=59340278, priority=50,ip,in_port=eth0 actions=ct(table=1,zone=64000)
 cookie=0x0, duration=7615.871s, table=0, n_packets=5036, n_bytes=372664, priority=100,tcp,in_port=eth0,tp_dst=80 actions=output:"k8s-patch-breth"
 cookie=0x0, duration=7621.075s, table=0, n_packets=321302, n_bytes=26439356, priority=0 actions=NORMAL
 cookie=0x0, duration=7615.994s, table=1, n_packets=0, n_bytes=0, priority=100,ct_state=+est+trk actions=output:"k8s-patch-breth"
 cookie=0x0, duration=7615.968s, table=1, n_packets=0, n_bytes=0, priority=100,ct_state=+rel+trk actions=output:"k8s-patch-breth"
 cookie=0x0, duration=7615.954s, table=1, n_packets=197508, n_bytes=59246320, priority=0 actions=LOCAL
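
Two things can be read off this table: the priority=100,tcp,tp_dst=80 rule is the per-NodePort steering rule that hands eth0 traffic to the OVN patch port, and its n_packets=5036 shows NodePort packets are in fact arriving at the bridge. A quick way to pull such rules out of a saved capture (text processing only; the sample lines are copied from the dump above):

```shell
# Extract NodePort steering details from a saved `ovs-ofctl dump-flows breth0`
# capture. The sample lines are copied verbatim from the dump above.
cat > flows.txt <<'EOF'
 cookie=0x0, duration=7615.871s, table=0, n_packets=5036, n_bytes=372664, priority=100,tcp,in_port=eth0,tp_dst=80 actions=output:"k8s-patch-breth"
 cookie=0x0, duration=7621.075s, table=0, n_packets=321302, n_bytes=26439356, priority=0 actions=NORMAL
EOF
grep 'tp_dst=' flows.txt | grep -oE 'tp_dst=[0-9]+|n_packets=[0-9]+'
```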

Module openvswitch is loaded:

# lsmod | grep openvswitch
openvswitch           131072  7 vport_geneve
nsh                    16384  1 openvswitch
nf_nat_ipv6            20480  1 openvswitch
nf_nat_ipv4            16384  3 ipt_MASQUERADE,openvswitch,iptable_nat
nf_conncount           24576  1 openvswitch
nf_nat                 36864  3 nf_nat_ipv6,nf_nat_ipv4,openvswitch
nf_conntrack          143360  9 xt_conntrack,nf_nat,nf_nat_ipv6,ipt_MASQUERADE,nf_nat_ipv4,openvswitch,nf_conntrack_netlink,nf_conncount,ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,openvswitch
libcrc32c              16384  5 nf_conntrack,nf_nat,openvswitch,xfs,ip_vs
# modinfo openvswitch
filename:       /lib/modules/4.20.5-1.el7.elrepo.x86_64/kernel/net/openvswitch/openvswitch.ko
alias:          net-pf-16-proto-16-family-ovs_ct_limit
alias:          net-pf-16-proto-16-family-ovs_meter
alias:          net-pf-16-proto-16-family-ovs_packet
alias:          net-pf-16-proto-16-family-ovs_flow
alias:          net-pf-16-proto-16-family-ovs_vport
alias:          net-pf-16-proto-16-family-ovs_datapath
license:        GPL
description:    Open vSwitch switching datapath
srcversion:     23AB11CC772ECA25E9A68AE
depends:        nf_conntrack,nf_nat,nf_conncount,libcrc32c,nf_nat_ipv6,nf_nat_ipv4,nf_defrag_ipv6,nsh
retpoline:      Y
intree:         Y
name:           openvswitch
vermagic:       4.20.5-1.el7.elrepo.x86_64 SMP mod_unload modversions 
# uname -a
Linux node001 4.20.5-1.el7.elrepo.x86_64 #1 SMP Sat Jan 26 10:55:51 EST 2019 x86_64 x86_64 x86_64 GNU/Linux

Version of Open vSwitch:

# ovs-appctl version
ovs-vswitchd (Open vSwitch) 2.10.1

Northbound database:

# ovn-nbctl show
switch 1bd13c40-e7ec-4844-aaf7-53ec0be482fa (ext_master2)
    port etor-GR_master2
        type: router
        addresses: ["32:56:c1:1e:fd:44"]
        router-port: rtoe-GR_master2
    port br-localnet_master2
        addresses: ["unknown"]
switch 99f97b71-5545-4411-be33-ff772ea7c251 (node003)
    port stor-node003
        type: router
        addresses: ["00:00:00:A6:08:27"]
        router-port: rtos-node003
    port k8s-node003
        addresses: ["76:07:89:4d:93:49 10.112.5.2"]
    port default_busybox-g8l9n
        addresses: ["0a:00:00:00:00:19 10.112.5.3"]
switch 4d19b592-7716-4d0a-8515-8af393b3ab6a (node005)
    port stor-node005
        type: router
        addresses: ["00:00:00:09:7A:04"]
        router-port: rtos-node005
    port default_busybox-kp7nt
        addresses: ["0a:00:00:00:00:1b 10.112.7.3"]
    port k8s-node005
        addresses: ["92:f9:d4:b0:da:4b 10.112.7.2"]
switch 635f0dc2-1412-4558-a6e7-57ab8d5b3703 (ext_node004)
    port etor-GR_node004
        type: router
        addresses: ["00:1d:d8:b7:1c:0e"]
        router-port: rtoe-GR_node004
    port breth0_node004
        addresses: ["unknown"]
switch d87fc4c2-41ef-4195-ac16-998962ebfc93 (master1)
    port stor-master1
        type: router
        addresses: ["00:00:00:BF:24:EE"]
        router-port: rtos-master1
    port kube-system_coredns-s5fxx
        addresses: ["0a:00:00:00:00:10 10.112.0.3"]
    port k8s-master1
        addresses: ["f6:94:d6:5c:65:01 10.112.0.2"]
    port default_busybox-fqvpj
        addresses: ["0a:00:00:00:00:1f 10.112.0.4"]
switch e1fb1473-f3aa-4220-ae45-14757c99bc36 (node002)
    port stor-node002
        type: router
        addresses: ["00:00:00:0D:A7:8C"]
        router-port: rtos-node002
    port default_busybox-fp82w
        addresses: ["0a:00:00:00:00:1e 10.112.4.3"]
    port k8s-node002
        addresses: ["22:9b:bf:19:94:fc 10.112.4.2"]
    port default_nginx-test-6489dfd864-sqcvl
        addresses: ["0a:00:00:00:00:0e 10.112.4.4"]
switch ef1b3093-199e-4350-a701-024a4e46df6c (ext_node006)
    port etor-GR_node006
        type: router
        addresses: ["00:1d:d8:b7:1c:10"]
        router-port: rtoe-GR_node006
    port breth0_node006
        addresses: ["unknown"]
switch dfac86c3-661f-4a14-89e4-c64e0d5d263a (node001)
    port k8s-node001
        addresses: ["3e:ee:84:10:ed:ab 10.112.3.2"]
    port stor-node001
        type: router
        addresses: ["00:00:00:AC:D7:4C"]
        router-port: rtos-node001
    port default_busybox-fjbjj
        addresses: ["0a:00:00:00:00:20 10.112.3.3"]
switch 4555adc4-6990-4806-876f-b9c24567e527 (ext_node007)
    port breth0_node007
        addresses: ["unknown"]
    port etor-GR_node007
        type: router
        addresses: ["00:1d:d8:b7:1c:11"]
        router-port: rtoe-GR_node007
switch 6d35cfb4-5152-404b-804a-03c00788e6e1 (node006)
    port k8s-node006
        addresses: ["82:98:25:84:12:25 10.112.8.2"]
    port default_busybox-hbxp9
        addresses: ["0a:00:00:00:00:1c 10.112.8.3"]
    port stor-node006
        type: router
        addresses: ["00:00:00:AD:FE:E7"]
        router-port: rtos-node006
switch d9f5f7f5-a982-4a82-a8bd-da6ec5a01c18 (ext_master3)
    port br-localnet_master3
        addresses: ["unknown"]
    port etor-GR_master3
        type: router
        addresses: ["3a:d6:a2:d6:b9:42"]
        router-port: rtoe-GR_master3
switch c590c876-767c-41cf-a35e-ca5aa183bfe0 (ext_node003)
    port etor-GR_node003
        type: router
        addresses: ["00:1d:d8:b7:1c:0d"]
        router-port: rtoe-GR_node003
    port breth0_node003
        addresses: ["unknown"]
switch a53504e6-0704-4221-a2fb-85ffccd34537 (join)
    port jtor-GR_node006
        type: router
        addresses: ["00:00:00:1C:7D:27"]
        router-port: rtoj-GR_node006
    port jtor-GR_node005
        type: router
        addresses: ["00:00:00:76:EB:3F"]
        router-port: rtoj-GR_node005
    port jtor-GR_node007
        type: router
        addresses: ["00:00:00:B8:72:11"]
        router-port: rtoj-GR_node007
    port jtor-GR_master1
        type: router
        addresses: ["00:00:00:FE:FB:9F"]
        router-port: rtoj-GR_master1
    port jtor-GR_master3
        type: router
        addresses: ["00:00:00:BB:2E:13"]
        router-port: rtoj-GR_master3
    port jtor-ovn_cluster_router
        type: router
        addresses: ["00:00:00:3D:3B:6B"]
        router-port: rtoj-ovn_cluster_router
    port jtor-GR_node004
        type: router
        addresses: ["00:00:00:BB:AE:54"]
        router-port: rtoj-GR_node004
    port jtor-GR_node001
        type: router
        addresses: ["00:00:00:AC:F2:C7"]
        router-port: rtoj-GR_node001
    port jtor-GR_node002
        type: router
        addresses: ["00:00:00:8F:28:28"]
        router-port: rtoj-GR_node002
    port jtor-GR_node003
        type: router
        addresses: ["00:00:00:BB:50:0F"]
        router-port: rtoj-GR_node003
    port jtor-GR_master2
        type: router
        addresses: ["00:00:00:16:67:B9"]
        router-port: rtoj-GR_master2
    port jtor-GR_node008
        type: router
        addresses: ["00:00:00:88:56:AD"]
        router-port: rtoj-GR_node008
switch 6904ae77-492a-4ba9-967c-743e94fc4c51 (ext_node002)
    port etor-GR_node002
        type: router
        addresses: ["00:1d:d8:b7:1c:0c"]
        router-port: rtoe-GR_node002
    port breth0_node002
        addresses: ["unknown"]
switch 8a7ec017-1e5c-497c-84a5-87fbe280b5cf (master3)
    port kube-system_coredns-52zkg
        addresses: ["0a:00:00:00:00:15 10.112.2.3"]
    port default_busybox-9tcsr
        addresses: ["0a:00:00:00:00:16 10.112.2.4"]
    port stor-master3
        type: router
        addresses: ["00:00:00:70:63:94"]
        router-port: rtos-master3
    port k8s-master3
        addresses: ["62:bc:ad:44:2a:ab 10.112.2.2"]
switch 506aad59-f535-49ed-9315-d3ab74086ff2 (node007)
    port default_busybox-hcfb7
        addresses: ["0a:00:00:00:00:1d 10.112.9.3"]
    port k8s-node007
        addresses: ["82:79:77:24:85:de 10.112.9.2"]
    port stor-node007
        type: router
        addresses: ["00:00:00:69:92:FA"]
        router-port: rtos-node007
switch 0e4d5919-fffa-4f63-ab5b-04c8cad37526 (ext_master1)
    port br-localnet_master1
        addresses: ["unknown"]
    port etor-GR_master1
        type: router
        addresses: ["e2:54:54:a3:36:41"]
        router-port: rtoe-GR_master1
switch d243c3a9-219d-4a49-9b5e-d9e6ae3f00d7 (node004)
    port stor-node004
        type: router
        addresses: ["00:00:00:16:F2:4B"]
        router-port: rtos-node004
    port k8s-node004
        addresses: ["7e:20:db:01:10:4d 10.112.6.2"]
    port default_busybox-ts5p8
        addresses: ["0a:00:00:00:00:18 10.112.6.3"]
switch d76f2b7c-366c-4066-80ab-aa7e112a2652 (node008)
    port default_busybox-wlpf2
        addresses: ["0a:00:00:00:00:1a 10.112.10.3"]
    port stor-node008
        type: router
        addresses: ["00:00:00:1B:25:DC"]
        router-port: rtos-node008
    port k8s-node008
        addresses: ["02:65:bb:66:c4:3b 10.112.10.2"]
switch 1bd18f1c-4285-4104-bd2b-b864870cc48b (ext_node001)
    port etor-GR_node001
        type: router
        addresses: ["00:1d:d8:b7:1c:0b"]
        router-port: rtoe-GR_node001
    port breth0_node001
        addresses: ["unknown"]
switch 8573c13e-4366-4712-8b74-4c36b65ab6a9 (ext_node005)
    port etor-GR_node005
        type: router
        addresses: ["00:1d:d8:b7:1c:0f"]
        router-port: rtoe-GR_node005
    port breth0_node005
        addresses: ["unknown"]
switch 48d3525e-c3d7-4932-bd31-6bd494b98ed3 (master2)
    port default_busybox-6s2p8
        addresses: ["0a:00:00:00:00:02 10.112.1.4"]
    port k8s-master2
        addresses: ["26:22:6b:1d:30:68 10.112.1.2"]
    port stor-master2
        type: router
        addresses: ["00:00:00:59:7C:A4"]
        router-port: rtos-master2
    port kube-system_coredns-shl6x
        addresses: ["0a:00:00:00:00:12 10.112.1.3"]
switch 7d1f8ccf-4014-4b2a-92e9-c29ea9e2f36c (ext_node008)
    port breth0_node008
        addresses: ["unknown"]
    port etor-GR_node008
        type: router
        addresses: ["00:1d:d8:b7:1c:12"]
        router-port: rtoe-GR_node008
router 22e18eb0-a4b2-4d53-aea7-0729fefe5287 (GR_node008)
    port rtoj-GR_node008
        mac: "00:00:00:88:56:AD"
        networks: ["100.64.1.6/24"]
    port rtoe-GR_node008
        mac: "00:1d:d8:b7:1c:12"
        networks: ["172.16.126.215/24"]
    nat 97b3c687-7fd6-4a98-87a6-f1aaae52936d
        external ip: "172.16.126.215"
        logical ip: "10.112.0.0/12"
        type: "snat"
router a06f75e1-9615-43f8-8a84-64f227d31d4a (GR_node007)
    port rtoj-GR_node007
        mac: "00:00:00:B8:72:11"
        networks: ["100.64.1.11/24"]
    port rtoe-GR_node007
        mac: "00:1d:d8:b7:1c:11"
        networks: ["172.16.126.214/24"]
    nat 98230eb7-f772-4cd9-8197-fd478028f688
        external ip: "172.16.126.214"
        logical ip: "10.112.0.0/12"
        type: "snat"
router 03721222-6562-4dc3-98fb-ba83c8cdc433 (GR_node004)
    port rtoj-GR_node004
        mac: "00:00:00:BB:AE:54"
        networks: ["100.64.1.8/24"]
    port rtoe-GR_node004
        mac: "00:1d:d8:b7:1c:0e"
        networks: ["172.16.126.211/24"]
    nat 06da05e0-056f-4e01-8299-19545a35d7b6
        external ip: "172.16.126.211"
        logical ip: "10.112.0.0/12"
        type: "snat"
router 435a4d1a-bc36-483e-935e-9d24488ffaf8 (GR_node005)
    port rtoe-GR_node005
        mac: "00:1d:d8:b7:1c:0f"
        networks: ["172.16.126.212/24"]
    port rtoj-GR_node005
        mac: "00:00:00:76:EB:3F"
        networks: ["100.64.1.9/24"]
    nat e447866d-d296-4987-ab4d-a35139963034
        external ip: "172.16.126.212"
        logical ip: "10.112.0.0/12"
        type: "snat"
router 5dc98cfc-d348-4ae7-98b5-b91733d51f0e (ovn_cluster_router)
    port rtos-master1
        mac: "00:00:00:BF:24:EE"
        networks: ["10.112.0.1/24"]
    port rtos-node002
        mac: "00:00:00:0D:A7:8C"
        networks: ["10.112.4.1/24"]
    port rtos-master2
        mac: "00:00:00:59:7C:A4"
        networks: ["10.112.1.1/24"]
    port rtos-node008
        mac: "00:00:00:1B:25:DC"
        networks: ["10.112.10.1/24"]
    port rtos-master3
        mac: "00:00:00:70:63:94"
        networks: ["10.112.2.1/24"]
    port rtos-node005
        mac: "00:00:00:09:7A:04"
        networks: ["10.112.7.1/24"]
    port rtos-node001
        mac: "00:00:00:AC:D7:4C"
        networks: ["10.112.3.1/24"]
    port rtoj-ovn_cluster_router
        mac: "00:00:00:3D:3B:6B"
        networks: ["100.64.1.1/24"]
    port rtos-node007
        mac: "00:00:00:69:92:FA"
        networks: ["10.112.9.1/24"]
    port rtos-node006
        mac: "00:00:00:AD:FE:E7"
        networks: ["10.112.8.1/24"]
    port rtos-node004
        mac: "00:00:00:16:F2:4B"
        networks: ["10.112.6.1/24"]
    port rtos-node003
        mac: "00:00:00:A6:08:27"
        networks: ["10.112.5.1/24"]
router dfc3c545-42a3-45c2-8190-bb3e115536ce (GR_node002)
    port rtoe-GR_node002
        mac: "00:1d:d8:b7:1c:0c"
        networks: ["172.16.126.209/24"]
    port rtoj-GR_node002
        mac: "00:00:00:8F:28:28"
        networks: ["100.64.1.5/24"]
    nat b68869c3-5bac-4fd9-a352-0199f0f4709f
        external ip: "172.16.126.209"
        logical ip: "10.112.0.0/12"
        type: "snat"
router aa04bf0e-9462-414f-98aa-2a0049246778 (GR_master1)
    port rtoe-GR_master1
        mac: "e2:54:54:a3:36:41"
        networks: ["169.254.33.2/24"]
    port rtoj-GR_master1
        mac: "00:00:00:FE:FB:9F"
        networks: ["100.64.1.2/24"]
    nat 67fe3714-11ff-4cd0-8b6a-dd70853a0fc7
        external ip: "169.254.33.2"
        logical ip: "10.112.0.0/12"
        type: "snat"
router 936ea4ce-08fd-4c1c-89ab-00406698f9db (GR_node001)
    port rtoe-GR_node001
        mac: "00:1d:d8:b7:1c:0b"
        networks: ["172.16.126.208/24"]
    port rtoj-GR_node001
        mac: "00:00:00:AC:F2:C7"
        networks: ["100.64.1.4/24"]
    nat f83d01d5-bb8f-4571-ba5b-5325f4f4ccde
        external ip: "172.16.126.208"
        logical ip: "10.112.0.0/12"
        type: "snat"
router 665ac94e-d920-422f-b5e9-97a7910caa4e (GR_master3)
    port rtoe-GR_master3
        mac: "3a:d6:a2:d6:b9:42"
        networks: ["169.254.33.2/24"]
    port rtoj-GR_master3
        mac: "00:00:00:BB:2E:13"
        networks: ["100.64.1.12/24"]
    nat 6a437615-0e4a-464c-a672-772dcf34699a
        external ip: "169.254.33.2"
        logical ip: "10.112.0.0/12"
        type: "snat"
router c4b0216f-6b1a-4e5f-b3dd-ecdc45cd049c (GR_node006)
    port rtoj-GR_node006
        mac: "00:00:00:1C:7D:27"
        networks: ["100.64.1.10/24"]
    port rtoe-GR_node006
        mac: "00:1d:d8:b7:1c:10"
        networks: ["172.16.126.213/24"]
    nat f12911f3-1baa-4c91-8eda-0d1e6615a113
        external ip: "172.16.126.213"
        logical ip: "10.112.0.0/12"
        type: "snat"
router 635f9bdf-7aa7-497e-bb17-2df0f6dbc683 (GR_master2)
    port rtoj-GR_master2
        mac: "00:00:00:16:67:B9"
        networks: ["100.64.1.3/24"]
    port rtoe-GR_master2
        mac: "32:56:c1:1e:fd:44"
        networks: ["169.254.33.2/24"]
    nat 67804c78-1c93-4441-ad40-997fdb953dd6
        external ip: "169.254.33.2"
        logical ip: "10.112.0.0/12"
        type: "snat"
router 4e18b9d7-f6fb-4deb-890b-f49621053203 (GR_node003)
    port rtoj-GR_node003
        mac: "00:00:00:BB:50:0F"
        networks: ["100.64.1.7/24"]
    port rtoe-GR_node003
        mac: "00:1d:d8:b7:1c:0d"
        networks: ["172.16.126.210/24"]
    nat 42de066b-a272-48b2-a780-07fe5469ca4f
        external ip: "172.16.126.210"
        logical ip: "10.112.0.0/12"
        type: "snat"

I have followed @shettyg's instruction mentioned in thread #405 to delete the additional ports from the logical switches "ext_XXXX". It still does not work, and the additional ports with unknown addresses are created back after an ovnkube restart:

ovn-nbctl lsp-del breth0_node001 && \
ovn-nbctl lsp-del breth0_node002 && \
ovn-nbctl lsp-del breth0_node003 && \
ovn-nbctl lsp-del breth0_node004 && \
ovn-nbctl lsp-del breth0_node005 && \
ovn-nbctl lsp-del breth0_node006 && \
ovn-nbctl lsp-del breth0_node007 && \
ovn-nbctl lsp-del breth0_node008

ylhyh commented 5 years ago

# ovn-nbctl list load-balancer
_uuid               : fcdb4870-409b-4ce0-86f4-80fc323e5343
external_ids        : {TCP_lb_gateway_router="GR_node003"}
name                : ""
protocol            : []
vips                : {}

_uuid               : f5315290-f4e2-40e2-866d-015ca1f99cbd
external_ids        : {"k8s-cluster-lb-udp"=yes}
name                : ""
protocol            : udp
vips                : {"10.96.0.10:53"="10.112.0.3:53,10.112.1.3:53,10.112.2.3:53"}

_uuid               : 034f8bb2-d681-451f-873f-36a922ca7703
external_ids        : {UDP_lb_gateway_router="GR_node004"}
name                : ""
protocol            : udp
vips                : {}

_uuid               : 6b19d272-3ae9-443a-ae5d-7a3f8925207e
external_ids        : {TCP_lb_gateway_router="GR_node008"}
name                : ""
protocol            : []
vips                : {}

_uuid               : 77476823-b4f1-437b-b81e-179a5e45dd58
external_ids        : {UDP_lb_gateway_router="GR_node006"}
name                : ""
protocol            : udp
vips                : {}

_uuid               : 6bdde154-72b7-4fd4-8d99-1a6af4e46d15
external_ids        : {TCP_lb_gateway_router="GR_node001"}
name                : ""
protocol            : []
vips                : {}

_uuid               : 023a0684-2b07-4189-89a1-99fba74c76e7
external_ids        : {UDP_lb_gateway_router="GR_node001"}
name                : ""
protocol            : udp
vips                : {}

_uuid               : 996ff874-4385-4054-afaf-1822802701db
external_ids        : {TCP_lb_gateway_router="GR_node002"}
name                : ""
protocol            : []
vips                : {}

_uuid               : 378d7525-ee75-4c32-b088-c729d5fbaaf1
external_ids        : {UDP_lb_gateway_router="GR_node007"}
name                : ""
protocol            : udp
vips                : {}

_uuid               : bab01c46-0a1f-41a2-be6b-418f07ca1d1a
external_ids        : {UDP_lb_gateway_router="GR_node008"}
name                : ""
protocol            : udp
vips                : {}

_uuid               : c665cdc6-0055-4b48-9c09-eb22a623685e
external_ids        : {TCP_lb_gateway_router="GR_node007"}
name                : ""
protocol            : []
vips                : {}

_uuid               : 80c7e858-e165-4f05-80e9-df6f6420409c
external_ids        : {UDP_lb_gateway_router="GR_node002"}
name                : ""
protocol            : udp
vips                : {}

_uuid               : 5eb463e8-cc95-445c-a5e0-d78fad2a9be3
external_ids        : {"k8s-cluster-lb-tcp"=yes}
name                : ""
protocol            : tcp
vips                : {"10.105.116.76:80"="10.112.4.4:80", "10.96.0.10:53"="10.112.0.3:53,10.112.1.3:53,10.112.2.3:53", "10.96.0.1:443"="172.16.126.202:6443,172.16.126.203:6443,172.16.126.204:6443"}

_uuid               : 2d47a839-c91d-42b0-a2df-300fcf8998ac
external_ids        : {TCP_lb_gateway_router="GR_node004"}
name                : ""
protocol            : []
vips                : {}

_uuid               : e46b4afa-08ef-4151-9ae9-c816b5fcb4cf
external_ids        : {UDP_lb_gateway_router="GR_node003"}
name                : ""
protocol            : udp
vips                : {}

_uuid               : 817822be-b78a-456e-b5b6-98c746458ed3
external_ids        : {TCP_lb_gateway_router="GR_node006"}
name                : ""
protocol            : []
vips                : {}
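
One detail in this listing may matter for the symptom: every per-gateway-router load balancer (the TCP_lb_gateway_router/UDP_lb_gateway_router entries) has an empty vips map. With --nodeport one would expect ovnkube to program a <node-IP>:80 VIP onto each gateway router's TCP load balancer, so the empty maps are consistent with NodePort traffic getting no OVN-side translation at the gateways. A quick count over a saved dump (pure text processing; the fragment is abridged from the output above):

```shell
# Count gateway-router load balancers whose VIP map is empty, using an
# abridged fragment of the `ovn-nbctl list load-balancer` output above.
cat > lbs.txt <<'EOF'
external_ids        : {TCP_lb_gateway_router="GR_node001"}
vips                : {}
external_ids        : {TCP_lb_gateway_router="GR_node002"}
vips                : {}
external_ids        : {"k8s-cluster-lb-tcp"=yes}
vips                : {"10.105.116.76:80"="10.112.4.4:80"}
EOF
empty=$(grep -c '^vips *: {}$' lbs.txt)
echo "load balancers with empty vips: $empty"
```
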
girishmg commented 5 years ago

@ylhyh From the output you have shown, I see several issues.

It appears that you have configured both types of gateways in your cluster. You must choose only one:

  1. --init-gateways --init-localnet
  2. --init-gateways (with no extra option, we fall back to the shared gateway interface; in your case it is eth0)

If the intent is (2), then delete br-localnet on all of the nodes. In this mode we don't use kube-proxy at all, so you can safely remove the kube-proxy deployment.

Furthermore, in mode (2) you will need the breth0_node00X ports, so don't delete them.

Let's start by deleting br-localnet.

ylhyh commented 5 years ago

Initially I configured --init-gateways on every node (including the masters), but pods scheduled on a master node could not access the k8s API server on the same machine, so I added the localnet flag to initialize the gateway on the master nodes.

The default gateway on ovn_cluster_router is ext_node001, not the master's local gateway, so why does it block the NodePort service? Should I remove the --init-gateways flag from the master nodes?

ylhyh commented 5 years ago

@girishmg, I have deleted the br-localnet ports from the switches ext_master1/ext_master2/ext_master3:

ovn-nbctl lsp-del br-localnet_master1
ovn-nbctl lsp-del br-localnet_master2
ovn-nbctl lsp-del br-localnet_master3

See the Northbound DB content after deleting the localnet ports:

# ovn-nbctl show
switch 1bd13c40-e7ec-4844-aaf7-53ec0be482fa (ext_master2)
    port etor-GR_master2
        type: router
        addresses: ["32:56:c1:1e:fd:44"]
        router-port: rtoe-GR_master2
switch 99f97b71-5545-4411-be33-ff772ea7c251 (node003)
    port stor-node003
        type: router
        addresses: ["00:00:00:A6:08:27"]
        router-port: rtos-node003
    port k8s-node003
        addresses: ["76:07:89:4d:93:49 10.112.5.2"]
    port default_busybox-g8l9n
        addresses: ["0a:00:00:00:00:19 10.112.5.3"]
switch 4d19b592-7716-4d0a-8515-8af393b3ab6a (node005)
    port stor-node005
        type: router
        addresses: ["00:00:00:09:7A:04"]
        router-port: rtos-node005
    port default_busybox-kp7nt
        addresses: ["0a:00:00:00:00:1b 10.112.7.3"]
    port k8s-node005
        addresses: ["92:f9:d4:b0:da:4b 10.112.7.2"]
    port kong_kong-7d59b44689-xnbss
        addresses: ["dynamic"]
switch 635f0dc2-1412-4558-a6e7-57ab8d5b3703 (ext_node004)
    port etor-GR_node004
        type: router
        addresses: ["00:1d:d8:b7:1c:0e"]
        router-port: rtoe-GR_node004
switch d87fc4c2-41ef-4195-ac16-998962ebfc93 (master1)
    port stor-master1
        type: router
        addresses: ["00:00:00:BF:24:EE"]
        router-port: rtos-master1
    port kube-system_coredns-s5fxx
        addresses: ["0a:00:00:00:00:10 10.112.0.3"]
    port k8s-master1
        addresses: ["f6:94:d6:5c:65:01 10.112.0.2"]
    port default_busybox-fqvpj
        addresses: ["0a:00:00:00:00:1f 10.112.0.4"]
switch e1fb1473-f3aa-4220-ae45-14757c99bc36 (node002)
    port stor-node002
        type: router
        addresses: ["00:00:00:0D:A7:8C"]
        router-port: rtos-node002
    port default_busybox-fp82w
        addresses: ["0a:00:00:00:00:1e 10.112.4.3"]
    port k8s-node002
        addresses: ["22:9b:bf:19:94:fc 10.112.4.2"]
    port default_nginx-test-6489dfd864-sqcvl
        addresses: ["0a:00:00:00:00:0e 10.112.4.4"]
switch ef1b3093-199e-4350-a701-024a4e46df6c (ext_node006)
    port etor-GR_node006
        type: router
        addresses: ["00:1d:d8:b7:1c:10"]
        router-port: rtoe-GR_node006
switch dfac86c3-661f-4a14-89e4-c64e0d5d263a (node001)
    port kong_kong-ingress-controller-b657565d4-tmvmr
        addresses: ["dynamic"]
    port k8s-node001
        addresses: ["3e:ee:84:10:ed:ab 10.112.3.2"]
    port stor-node001
        type: router
        addresses: ["00:00:00:AC:D7:4C"]
        router-port: rtos-node001
    port default_busybox-fjbjj
        addresses: ["0a:00:00:00:00:20 10.112.3.3"]
switch 4555adc4-6990-4806-876f-b9c24567e527 (ext_node007)
    port etor-GR_node007
        type: router
        addresses: ["00:1d:d8:b7:1c:11"]
        router-port: rtoe-GR_node007
switch 6d35cfb4-5152-404b-804a-03c00788e6e1 (node006)
    port k8s-node006
        addresses: ["82:98:25:84:12:25 10.112.8.2"]
    port default_busybox-hbxp9
        addresses: ["0a:00:00:00:00:1c 10.112.8.3"]
    port stor-node006
        type: router
        addresses: ["00:00:00:AD:FE:E7"]
        router-port: rtos-node006
switch d9f5f7f5-a982-4a82-a8bd-da6ec5a01c18 (ext_master3)
    port etor-GR_master3
        type: router
        addresses: ["3a:d6:a2:d6:b9:42"]
        router-port: rtoe-GR_master3
switch c590c876-767c-41cf-a35e-ca5aa183bfe0 (ext_node003)
    port etor-GR_node003
        type: router
        addresses: ["00:1d:d8:b7:1c:0d"]
        router-port: rtoe-GR_node003
switch a53504e6-0704-4221-a2fb-85ffccd34537 (join)
    port jtor-GR_node006
        type: router
        addresses: ["00:00:00:1C:7D:27"]
        router-port: rtoj-GR_node006
    port jtor-GR_node005
        type: router
        addresses: ["00:00:00:76:EB:3F"]
        router-port: rtoj-GR_node005
    port jtor-GR_node007
        type: router
        addresses: ["00:00:00:B8:72:11"]
        router-port: rtoj-GR_node007
    port jtor-GR_master1
        type: router
        addresses: ["00:00:00:FE:FB:9F"]
        router-port: rtoj-GR_master1
    port jtor-GR_master3
        type: router
        addresses: ["00:00:00:BB:2E:13"]
        router-port: rtoj-GR_master3
    port jtor-ovn_cluster_router
        type: router
        addresses: ["00:00:00:3D:3B:6B"]
        router-port: rtoj-ovn_cluster_router
    port jtor-GR_node004
        type: router
        addresses: ["00:00:00:BB:AE:54"]
        router-port: rtoj-GR_node004
    port jtor-GR_node001
        type: router
        addresses: ["00:00:00:AC:F2:C7"]
        router-port: rtoj-GR_node001
    port jtor-GR_node002
        type: router
        addresses: ["00:00:00:8F:28:28"]
        router-port: rtoj-GR_node002
    port jtor-GR_node003
        type: router
        addresses: ["00:00:00:BB:50:0F"]
        router-port: rtoj-GR_node003
    port jtor-GR_master2
        type: router
        addresses: ["00:00:00:16:67:B9"]
        router-port: rtoj-GR_master2
    port jtor-GR_node008
        type: router
        addresses: ["00:00:00:88:56:AD"]
        router-port: rtoj-GR_node008
switch 6904ae77-492a-4ba9-967c-743e94fc4c51 (ext_node002)
    port etor-GR_node002
        type: router
        addresses: ["00:1d:d8:b7:1c:0c"]
        router-port: rtoe-GR_node002
switch 8a7ec017-1e5c-497c-84a5-87fbe280b5cf (master3)
    port kube-system_coredns-52zkg
        addresses: ["0a:00:00:00:00:15 10.112.2.3"]
    port default_busybox-9tcsr
        addresses: ["0a:00:00:00:00:16 10.112.2.4"]
    port stor-master3
        type: router
        addresses: ["00:00:00:70:63:94"]
        router-port: rtos-master3
    port k8s-master3
        addresses: ["62:bc:ad:44:2a:ab 10.112.2.2"]
switch 506aad59-f535-49ed-9315-d3ab74086ff2 (node007)
    port default_busybox-hcfb7
        addresses: ["0a:00:00:00:00:1d 10.112.9.3"]
    port k8s-node007
        addresses: ["82:79:77:24:85:de 10.112.9.2"]
    port stor-node007
        type: router
        addresses: ["00:00:00:69:92:FA"]
        router-port: rtos-node007
switch 0e4d5919-fffa-4f63-ab5b-04c8cad37526 (ext_master1)
    port etor-GR_master1
        type: router
        addresses: ["e2:54:54:a3:36:41"]
        router-port: rtoe-GR_master1
switch d243c3a9-219d-4a49-9b5e-d9e6ae3f00d7 (node004)
    port stor-node004
        type: router
        addresses: ["00:00:00:16:F2:4B"]
        router-port: rtos-node004
    port kong_kong-migrations-2qsp5
        addresses: ["dynamic"]
    port k8s-node004
        addresses: ["7e:20:db:01:10:4d 10.112.6.2"]
    port default_busybox-ts5p8
        addresses: ["0a:00:00:00:00:18 10.112.6.3"]
switch d76f2b7c-366c-4066-80ab-aa7e112a2652 (node008)
    port default_busybox-wlpf2
        addresses: ["0a:00:00:00:00:1a 10.112.10.3"]
    port stor-node008
        type: router
        addresses: ["00:00:00:1B:25:DC"]
        router-port: rtos-node008
    port k8s-node008
        addresses: ["02:65:bb:66:c4:3b 10.112.10.2"]
switch 1bd18f1c-4285-4104-bd2b-b864870cc48b (ext_node001)
    port etor-GR_node001
        type: router
        addresses: ["00:1d:d8:b7:1c:0b"]
        router-port: rtoe-GR_node001
    port breth0_node001
        addresses: ["unknown"]
switch 8573c13e-4366-4712-8b74-4c36b65ab6a9 (ext_node005)
    port etor-GR_node005
        type: router
        addresses: ["00:1d:d8:b7:1c:0f"]
        router-port: rtoe-GR_node005
switch 48d3525e-c3d7-4932-bd31-6bd494b98ed3 (master2)
    port default_busybox-6s2p8
        addresses: ["0a:00:00:00:00:02 10.112.1.4"]
    port k8s-master2
        addresses: ["26:22:6b:1d:30:68 10.112.1.2"]
    port stor-master2
        type: router
        addresses: ["00:00:00:59:7C:A4"]
        router-port: rtos-master2
    port kube-system_coredns-shl6x
        addresses: ["0a:00:00:00:00:12 10.112.1.3"]
switch 7d1f8ccf-4014-4b2a-92e9-c29ea9e2f36c (ext_node008)
    port etor-GR_node008
        type: router
        addresses: ["00:1d:d8:b7:1c:12"]
        router-port: rtoe-GR_node008
router 22e18eb0-a4b2-4d53-aea7-0729fefe5287 (GR_node008)
    port rtoj-GR_node008
        mac: "00:00:00:88:56:AD"
        networks: ["100.64.1.6/24"]
    port rtoe-GR_node008
        mac: "00:1d:d8:b7:1c:12"
        networks: ["172.16.126.215/24"]
    nat 97b3c687-7fd6-4a98-87a6-f1aaae52936d
        external ip: "172.16.126.215"
        logical ip: "10.112.0.0/12"
        type: "snat"
router a06f75e1-9615-43f8-8a84-64f227d31d4a (GR_node007)
    port rtoj-GR_node007
        mac: "00:00:00:B8:72:11"
        networks: ["100.64.1.11/24"]
    port rtoe-GR_node007
        mac: "00:1d:d8:b7:1c:11"
        networks: ["172.16.126.214/24"]
    nat 98230eb7-f772-4cd9-8197-fd478028f688
        external ip: "172.16.126.214"
        logical ip: "10.112.0.0/12"
        type: "snat"
router 03721222-6562-4dc3-98fb-ba83c8cdc433 (GR_node004)
    port rtoj-GR_node004
        mac: "00:00:00:BB:AE:54"
        networks: ["100.64.1.8/24"]
    port rtoe-GR_node004
        mac: "00:1d:d8:b7:1c:0e"
        networks: ["172.16.126.211/24"]
    nat 06da05e0-056f-4e01-8299-19545a35d7b6
        external ip: "172.16.126.211"
        logical ip: "10.112.0.0/12"
        type: "snat"
router 435a4d1a-bc36-483e-935e-9d24488ffaf8 (GR_node005)
    port rtoe-GR_node005
        mac: "00:1d:d8:b7:1c:0f"
        networks: ["172.16.126.212/24"]
    port rtoj-GR_node005
        mac: "00:00:00:76:EB:3F"
        networks: ["100.64.1.9/24"]
    nat e447866d-d296-4987-ab4d-a35139963034
        external ip: "172.16.126.212"
        logical ip: "10.112.0.0/12"
        type: "snat"
router 5dc98cfc-d348-4ae7-98b5-b91733d51f0e (ovn_cluster_router)
    port rtos-master1
        mac: "00:00:00:BF:24:EE"
        networks: ["10.112.0.1/24"]
    port rtos-node002
        mac: "00:00:00:0D:A7:8C"
        networks: ["10.112.4.1/24"]
    port rtos-master2
        mac: "00:00:00:59:7C:A4"
        networks: ["10.112.1.1/24"]
    port rtos-node008
        mac: "00:00:00:1B:25:DC"
        networks: ["10.112.10.1/24"]
    port rtos-master3
        mac: "00:00:00:70:63:94"
        networks: ["10.112.2.1/24"]
    port rtos-node005
        mac: "00:00:00:09:7A:04"
        networks: ["10.112.7.1/24"]
    port rtos-node001
        mac: "00:00:00:AC:D7:4C"
        networks: ["10.112.3.1/24"]
    port rtoj-ovn_cluster_router
        mac: "00:00:00:3D:3B:6B"
        networks: ["100.64.1.1/24"]
    port rtos-node007
        mac: "00:00:00:69:92:FA"
        networks: ["10.112.9.1/24"]
    port rtos-node006
        mac: "00:00:00:AD:FE:E7"
        networks: ["10.112.8.1/24"]
    port rtos-node004
        mac: "00:00:00:16:F2:4B"
        networks: ["10.112.6.1/24"]
    port rtos-node003
        mac: "00:00:00:A6:08:27"
        networks: ["10.112.5.1/24"]
router dfc3c545-42a3-45c2-8190-bb3e115536ce (GR_node002)
    port rtoe-GR_node002
        mac: "00:1d:d8:b7:1c:0c"
        networks: ["172.16.126.209/24"]
    port rtoj-GR_node002
        mac: "00:00:00:8F:28:28"
        networks: ["100.64.1.5/24"]
    nat b68869c3-5bac-4fd9-a352-0199f0f4709f
        external ip: "172.16.126.209"
        logical ip: "10.112.0.0/12"
        type: "snat"
router aa04bf0e-9462-414f-98aa-2a0049246778 (GR_master1)
    port rtoe-GR_master1
        mac: "e2:54:54:a3:36:41"
        networks: ["169.254.33.2/24"]
    port rtoj-GR_master1
        mac: "00:00:00:FE:FB:9F"
        networks: ["100.64.1.2/24"]
    nat 67fe3714-11ff-4cd0-8b6a-dd70853a0fc7
        external ip: "169.254.33.2"
        logical ip: "10.112.0.0/12"
        type: "snat"
router 936ea4ce-08fd-4c1c-89ab-00406698f9db (GR_node001)
    port rtoe-GR_node001
        mac: "00:1d:d8:b7:1c:0b"
        networks: ["172.16.126.208/24"]
    port rtoj-GR_node001
        mac: "00:00:00:AC:F2:C7"
        networks: ["100.64.1.4/24"]
    nat f83d01d5-bb8f-4571-ba5b-5325f4f4ccde
        external ip: "172.16.126.208"
        logical ip: "10.112.0.0/12"
        type: "snat"
router 665ac94e-d920-422f-b5e9-97a7910caa4e (GR_master3)
    port rtoe-GR_master3
        mac: "3a:d6:a2:d6:b9:42"
        networks: ["169.254.33.2/24"]
    port rtoj-GR_master3
        mac: "00:00:00:BB:2E:13"
        networks: ["100.64.1.12/24"]
    nat 6a437615-0e4a-464c-a672-772dcf34699a
        external ip: "169.254.33.2"
        logical ip: "10.112.0.0/12"
        type: "snat"
router c4b0216f-6b1a-4e5f-b3dd-ecdc45cd049c (GR_node006)
    port rtoj-GR_node006
        mac: "00:00:00:1C:7D:27"
        networks: ["100.64.1.10/24"]
    port rtoe-GR_node006
        mac: "00:1d:d8:b7:1c:10"
        networks: ["172.16.126.213/24"]
    nat f12911f3-1baa-4c91-8eda-0d1e6615a113
        external ip: "172.16.126.213"
        logical ip: "10.112.0.0/12"
        type: "snat"
router 635f9bdf-7aa7-497e-bb17-2df0f6dbc683 (GR_master2)
    port rtoj-GR_master2
        mac: "00:00:00:16:67:B9"
        networks: ["100.64.1.3/24"]
    port rtoe-GR_master2
        mac: "32:56:c1:1e:fd:44"
        networks: ["169.254.33.2/24"]
    nat 67804c78-1c93-4441-ad40-997fdb953dd6
        external ip: "169.254.33.2"
        logical ip: "10.112.0.0/12"
        type: "snat"
router 4e18b9d7-f6fb-4deb-890b-f49621053203 (GR_node003)
    port rtoj-GR_node003
        mac: "00:00:00:BB:50:0F"
        networks: ["100.64.1.7/24"]
    port rtoe-GR_node003
        mac: "00:1d:d8:b7:1c:0d"
        networks: ["172.16.126.210/24"]
    nat 42de066b-a272-48b2-a780-07fe5469ca4f
        external ip: "172.16.126.210"
        logical ip: "10.112.0.0/12"
        type: "snat"

The default route points to 100.64.1.2 (rtoj-GR_master1 on the GR_master1 gateway router):

# ovn-nbctl lr-route-list ovn_cluster_router
IPv4 Routes
            10.112.0.0/24                100.64.1.2 src-ip
            10.112.1.0/24                100.64.1.3 src-ip
            10.112.2.0/24               100.64.1.12 src-ip
            10.112.3.0/24                100.64.1.4 src-ip
            10.112.4.0/24                100.64.1.5 src-ip
            10.112.5.0/24                100.64.1.7 src-ip
            10.112.6.0/24                100.64.1.8 src-ip
            10.112.7.0/24                100.64.1.9 src-ip
            10.112.8.0/24               100.64.1.10 src-ip
            10.112.9.0/24               100.64.1.11 src-ip
           10.112.10.0/24                100.64.1.6 src-ip
                0.0.0.0/0                100.64.1.2 dst-ip

Because GR_master1 was initialized with the localnet gateway, I changed the default route to 100.64.1.4 (rtoj-GR_node001) with the following commands:

ovn-nbctl lr-route-del ovn_cluster_router 0.0.0.0/0
ovn-nbctl lr-route-add ovn_cluster_router 0.0.0.0/0 100.64.1.4

See the updated route table below:

# ovn-nbctl lr-route-list ovn_cluster_router
IPv4 Routes
            10.112.0.0/24                100.64.1.2 src-ip
            10.112.1.0/24                100.64.1.3 src-ip
            10.112.2.0/24               100.64.1.12 src-ip
            10.112.3.0/24                100.64.1.4 src-ip
            10.112.4.0/24                100.64.1.5 src-ip
            10.112.5.0/24                100.64.1.7 src-ip
            10.112.6.0/24                100.64.1.8 src-ip
            10.112.7.0/24                100.64.1.9 src-ip
            10.112.8.0/24               100.64.1.10 src-ip
            10.112.9.0/24               100.64.1.11 src-ip
           10.112.10.0/24                100.64.1.6 src-ip
                0.0.0.0/0                100.64.1.4 dst-ip

The NodePort is still not accessible :(

girishmg commented 5 years ago

default nginx-test NodePort 10.105.116.76 80:80/TCP 2d1h

In your kubectl get svc output above, I am surprised that your NodePort is 80. It should generally be in the range 30000 to 32767.

Also, your ovn-nbctl lb-list doesn't show any LB rules for the node port. I was expecting one LB rule for every K8s node forwarding port 80 to the nginx pod.

ylhyh commented 5 years ago

@girishmg I have changed the NodePort range from 30000-32767 to 80-32767 in the apiserver configuration:

# cat /etc/kubernetes/manifests/kube-apiserver.yaml 
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=172.16.126.203
    - --authorization-mode=Node,RBAC
    - --service-node-port-range=80-32767
...

So the port 80 shown in the output above is the node port.

ylhyh commented 5 years ago

@girishmg, the issue has been resolved by manually adding a load balancer record to the gateway router. It seems it is not a matter of the localnet gateway on the master node.

After double-checking @lanoxx's post at https://github.com/openvswitch/ovn-kubernetes/issues/611#issuecomment-464005576, I became aware that no VIP gets created for the physical IP of the minion node. See details below.

I have created a NodePort service with the node port 80:

[root@master1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
nginx-test   NodePort    10.103.193.224   <none>        80:80/TCP   133m

Checking lb-list, OVN only created a VIP for the cluster IP address; see "10.103.193.224:80" below:

[root@master1 ~]# ovn-nbctl lb-list
UUID                                    LB                  PROTO      VIP                  IPs
6cae3426-dab5-4310-b4ac-166d0d2abcee                        tcp        10.103.193.224:80    10.112.5.3:80
                                                            tcp        10.96.0.10:53        10.112.0.3:53,10.112.1.3:53,10.112.2.3:53
                                                            tcp        10.96.0.1:443        172.16.126.202:6443,172.16.126.203:6443,172.16.126.204:6443
6175ecc1-2ddc-42cd-b0fd-43b9ec22210b                        udp        10.96.0.10:53        10.112.0.3:53,10.112.1.3:53,10.112.2.3:53

It didn't create a VIP for the physical IP of the minion node, so I looked up the load balancer ID for the minion node's gateway router in the output of "ovn-nbctl list load-balancer" and created a VIP for the node IP manually:

# ovn-nbctl lb-add 6f4e2ec2-135d-41de-9960-b92f2072e4f7 172.16.126.210:80 10.112.5.3:80
# ovn-nbctl lb-list
UUID                                    LB                  PROTO      VIP                  IPs
6cae3426-dab5-4310-b4ac-166d0d2abcee                        tcp        10.103.193.224:80    10.112.5.3:80
                                                            tcp        10.96.0.10:53        10.112.0.3:53,10.112.1.3:53,10.112.2.3:53
                                                            tcp        10.96.0.1:443        172.16.126.202:6443,172.16.126.203:6443,172.16.126.204:6443
6175ecc1-2ddc-42cd-b0fd-43b9ec22210b                        udp        10.96.0.10:53        10.112.0.3:53,10.112.1.3:53,10.112.2.3:53
6f4e2ec2-135d-41de-9960-b92f2072e4f7                        (null)     172.16.126.210:80    10.112.5.3:80

Then I can access the NodePort service from anywhere, including on the minion node itself:

[root@repo conf]# curl -i http://172.16.126.210
HTTP/1.1 200 OK
Server: nginx/1.14.2
Date: Tue, 05 Mar 2019 11:44:25 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Thu, 31 Jan 2019 23:37:45 GMT
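
The manual workaround above covers a single node. A dry-run sketch that prints the same ovn-nbctl lb-add command for every minion (only the UUID-to-IP pair from this thread is filled in; the full list would come from "ovn-nbctl list load-balancer" and the node table, so treat PAIRS as a placeholder):

```shell
# Dry-run generator: print one "ovn-nbctl lb-add" per gateway-router load
# balancer. Review the output, then pipe it to sh to apply. The UUID/IP
# pair below is the illustrative example from this thread.
NODEPORT=80
BACKEND="10.112.5.3:80"
PAIRS="6f4e2ec2-135d-41de-9960-b92f2072e4f7:172.16.126.210"
for pair in $PAIRS; do
  lb_uuid=${pair%%:*}   # text before the first colon: the LB UUID
  node_ip=${pair#*:}    # text after the first colon: the node physical IP
  echo "ovn-nbctl lb-add $lb_uuid $node_ip:$NODEPORT $BACKEND"
done
```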

Now the only question is: how do I make OVN create the VIP for the minion IP of a NodePort service automatically? Am I missing any configuration steps?

girishmg commented 5 years ago

Also, your ovn-nbctl lb-list doesn't show any LB rules for the node port. I was expecting one LB rule for every K8s node forwarding port 80 to the nginx pod.

@ylhyh the comment I made earlier captures what you saw.

Can you check whether the ovnkube daemons are running on each of the nodes? These daemons are the ones that add OpenFlow rules to the OVS bridge on each K8s node. Can you check the ovnkube logs on the K8s nodes for any errors?

The addService() callback in gateway_shared_intf.go should have added the required OpenFlow rules.

ylhyh commented 5 years ago

@girishmg, the ovnkube daemons on every node are running without errors.

I have tried to create / edit / delete NodePort services:

Create: After a NodePort service is created on k8s, the OpenFlow rule gets created immediately with the correct node port: cookie=0x0, duration=3.108s, table=0, n_packets=0, n_bytes=0, priority=100,tcp,in_port=eth0,tp_dst=30111 actions=output:"k8s-patch-breth"

Delete: After a NodePort service is deleted on k8s, the OpenFlow rule gets deleted immediately if ovnkube can find a rule with the matching port.

Edit: If I change the port number of a NodePort service, the OpenFlow rule does not get updated; the old rule with the now-incorrect port number stays in the flow table forever, and no new rule with the new port number gets created. Is that a bug, or a configuration issue?
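
Until service edits are handled, a leftover rule has to be cleared by hand. A dry-run sketch that prints the cleanup command (the bridge name breth0 and port 30111 are the values from this thread; verify against "ovs-ofctl dump-flows" before piping to sh):

```shell
# Print (don't execute) the ovs-ofctl command that would drop the
# leftover NodePort flow. Values match the example in this thread.
BRIDGE=breth0
STALE_PORT=30111
echo "ovs-ofctl del-flows $BRIDGE 'tcp,in_port=eth0,tp_dst=$STALE_PORT'"
```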

But even when ovnkube creates a correct OpenFlow rule, the NodePort service is still not accessible unless I manually create a load balancer VIP for the node IP address. When does ovnkube call "ovn-nbctl lb-add ..." to create the load balancer rule?

girishmg commented 5 years ago

I have tried to Create / Edit / Delete a NodePort services:

Create: After a NodePort service is created on k8s, the OpenFlow rule gets created immediately with the correct node port: cookie=0x0, duration=3.108s, table=0, n_packets=0, n_bytes=0, priority=100,tcp,in_port=eth0,tp_dst=30111 actions=output:"k8s-patch-breth"

Right. However, the load balancer rules aren't added until you create at least one Pod that matches the service you created above.

Edit: If I change the port number of a NodePort service, the OpenFlow rule does not get updated; the old rule with the now-incorrect port number stays in the flow table forever, and no new rule with the new port number gets created. Is that a bug, or a configuration issue?

We haven't implemented this yet. So, you will need to delete the NodePort service and create a new one if you modify the NodePort service.

But even when ovnkube creates a correct OpenFlow rule, the NodePort service is still not accessible unless I manually create a load balancer VIP for the node IP address. When does ovnkube call "ovn-nbctl lb-add ..." to create the load balancer rule?

I think something very basic is incorrect in your setup. It works fine for me and others I know.

Can you delete all the services, create a new NodePort service for nginx-test, and create the necessary Pods (Endpoints) for that service? Also, please share ovnkube-master.log with us, if possible. Do you see any obvious errors in that log?

ylhyh commented 5 years ago

@girishmg

I have a yaml file named "deploy-nginx-test.yaml" which declared the nginx-test deployment and service:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-test
  labels:
    app: nginx-test
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      nodeSelector:
        kubernetes.io/role: node
        beta.kubernetes.io/os: linux
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        ports:
        - containerPort: 80

---

apiVersion: v1
kind: Service
metadata:
  name: nginx-test
  namespace: default
spec:
  type: NodePort
  externalTrafficPolicy: Local # expose real client IP
  ports:
  - name: nginx-http
    nodePort: 30118
    port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: nginx-test

Following your suggestion, I ran these steps:

  1. Stop ovn-kubernetes on every node (k8s masters and minions; the same applies below whenever I mention "every node") with: systemctl stop ovn-kubernetes (ovn-kubernetes is the service name I created with systemd)
  2. Clear the ovnkube log on every node with: rm -f /var/log/openvswitch/ovn-kubernetes.log
  3. Start ovn-kubernetes on every node with: systemctl start ovn-kubernetes
  4. Delete the existing service and pod by running the following on the master1 node:
    # kubectl delete -f deploy-nginx-test.yaml 
    deployment.apps "nginx-test" deleted
    service "nginx-test" deleted

    Check the remaining services:

    # kubectl get svc --all-namespaces
    NAMESPACE     NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
    default       kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP         8d
    kube-system   kube-dns     ClusterIP   10.96.0.10     <none>        53/UDP,53/TCP   8d
  5. Create the nginx-test service and pod again by running the following on the master1 node:
    # kubectl create -f deploy-nginx-test.yaml 
    deployment.apps/nginx-test created
    service/nginx-test created

    Check the services again:

    # kubectl get svc --all-namespaces
    NAMESPACE     NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
    default       kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP         8d
    default       nginx-test   NodePort    10.101.145.3   <none>        80:30118/TCP    11m
    kube-system   kube-dns     ClusterIP   10.96.0.10     <none>        53/UDP,53/TCP   8d
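
The five steps above can be sketched as one script (the node list and manifest path are the ones used in this thread; ssh access from master1 is an assumption):

```shell
# Restart ovn-kubernetes everywhere, then recreate the nginx-test service.
# Commands are printed rather than executed, so they can be reviewed first.
NODES="master1 master2 master3 node001 node002 node003 node004 node005 node006 node007 node008"
for n in $NODES; do
  echo "ssh $n 'systemctl stop ovn-kubernetes; rm -f /var/log/openvswitch/ovn-kubernetes.log; systemctl start ovn-kubernetes'"
done
echo "kubectl delete -f deploy-nginx-test.yaml"
echo "kubectl create -f deploy-nginx-test.yaml"
```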

Now, let's check the outputs:

ovn-kubernetes.log on master1:

# tail -f /var/log/openvswitch/ovn-kubernetes.log 
time="2019-03-07T09:23:23+08:00" level=info msg="Node master1 ready for ovn initialization with subnet 10.112.0.0/24"
time="2019-03-07T09:24:55+08:00" level=info msg="Deleting pod: nginx-test-6489dfd864-952pn"
time="2019-03-07T09:26:14+08:00" level=info msg="Setting annotations ovn={\\\"ip_address\\\":\\\"10.112.5.4/24\\\", \\\"mac_address\\\":\\\"0a:00:00:00:00:25\\\", \\\"gateway_ip\\\": \\\"10.112.5.1\\\"} on pod nginx-test-6489dfd864-7fr6k"

ovn-kubernetes.log on node001 (the node the old "nginx-test" pod was scheduled on):

# tail -f /var/log/openvswitch/ovn-kubernetes.log 
time="2019-03-07T09:23:04+08:00" level=info msg="Node node001 ready for ovn initialization with subnet 10.112.3.0/24"
time="2019-03-07T09:24:53+08:00" level=info msg="Waiting for DEL result for pod default/nginx-test-6489dfd864-952pn"
time="2019-03-07T09:24:53+08:00" level=info msg="Dispatching pod network request &{DEL default nginx-test-6489dfd864-952pn bba03d3aaf5ae67f35e995fcba1464da5f5e6b95af8192a35c173f4e30103025 /proc/7642/ns/net eth0 0xc00075adc0 0xc000638060}"
time="2019-03-07T09:24:53+08:00" level=info msg="Returning pod network request &{DEL default nginx-test-6489dfd864-952pn bba03d3aaf5ae67f35e995fcba1464da5f5e6b95af8192a35c173f4e30103025 /proc/7642/ns/net eth0 0xc00075adc0 0xc000638060}, result  err <nil>"

ovn-kubernetes.log on node003 (the node the new "nginx-test" pod was scheduled on):

# tail -f /var/log/openvswitch/ovn-kubernetes.log 
time="2019-03-07T09:23:04+08:00" level=info msg="Node node003 ready for ovn initialization with subnet 10.112.5.0/24"
time="2019-03-07T09:26:15+08:00" level=info msg="Waiting for ADD result for pod default/nginx-test-6489dfd864-7fr6k"
time="2019-03-07T09:26:15+08:00" level=info msg="Dispatching pod network request &{ADD default nginx-test-6489dfd864-7fr6k bfa13fb65b38b2c86be116d578680cd4582c0a5a8af92a70185a4fa2ab76a91a /proc/21250/ns/net eth0 0xc00089cf00 0xc0002de6c0}"
time="2019-03-07T09:26:15+08:00" level=info msg="Returning pod network request &{ADD default nginx-test-6489dfd864-7fr6k bfa13fb65b38b2c86be116d578680cd4582c0a5a8af92a70185a4fa2ab76a91a /proc/21250/ns/net eth0 0xc00089cf00 0xc0002de6c0}, result {\"interfaces\":[{\"name\":\"bfa13fb65b38b2c\",\"mac\":\"62:9c:3f:a8:a4:3c\"},{\"name\":\"eth0\",\"mac\":\"0a:00:00:00:00:24\",\"sandbox\":\"/proc/21250/ns/net\"}],\"ips\":[{\"version\":\"4\",\"interface\":1,\"address\":\"10.112.5.4/24\",\"gateway\":\"10.112.5.1\"}],\"dns\":{}} err <nil>"

Check lb-list on master1:

# ovn-nbctl lb-list
UUID                                    LB                  PROTO      VIP                IPs
047fa46a-cf9f-4a36-af08-fbd342d422e5                        udp        10.96.0.10:53      10.112.0.3:53,10.112.1.4:53,10.112.2.4:53
34b6ea3e-d9b8-4c91-b0a4-958168d8262e                        tcp        10.101.145.3:80    10.112.5.4:80
                                                            tcp        10.96.0.10:53      10.112.0.3:53,10.112.1.4:53,10.112.2.4:53
                                                            tcp        10.96.0.1:443      172.16.126.202:6443,172.16.126.203:6443,172.16.126.204:6443

Check content of table "load-balancer" on master1:

# ovn-nbctl list load-balancer
_uuid               : cf5d1b80-8361-4236-a10d-ca815e4264f0
external_ids        : {TCP_lb_gateway_router="GR_node007"}
name                : ""
protocol            : []
vips                : {}

_uuid               : e05eead0-7259-4ac7-8d2c-572097f211e4
external_ids        : {TCP_lb_gateway_router="GR_node002"}
name                : ""
protocol            : []
vips                : {}

_uuid               : f566bcd1-718e-4a6a-ae60-bca209b96bb3
external_ids        : {TCP_lb_gateway_router="GR_node008"}
name                : ""
protocol            : []
vips                : {}

_uuid               : 46366b73-3d9e-4b3e-9035-a4e24a0b5412
external_ids        : {TCP_lb_gateway_router="GR_node006"}
name                : ""
protocol            : []
vips                : {}

_uuid               : 550ae8f3-55eb-45dc-b3cf-b5abfd9157da
external_ids        : {UDP_lb_gateway_router="GR_node003"}
name                : ""
protocol            : udp
vips                : {}

_uuid               : 26b43c04-c050-4265-90dc-000282280ad6
external_ids        : {TCP_lb_gateway_router="GR_node003"}
name                : ""
protocol            : []
vips                : {}

_uuid               : 80b78496-03b7-4176-9b87-e47430ce0efb
external_ids        : {UDP_lb_gateway_router="GR_node006"}
name                : ""
protocol            : udp
vips                : {}

_uuid               : d279d9f8-0171-451e-bd99-c4644a27d20d
external_ids        : {UDP_lb_gateway_router="GR_node004"}
name                : ""
protocol            : udp
vips                : {}

_uuid               : 8d5aad08-af4b-48c6-9625-ae74ec7e46c2
external_ids        : {UDP_lb_gateway_router="GR_node008"}
name                : ""
protocol            : udp
vips                : {}

_uuid               : 3c43aef8-47db-4a81-86db-5cb33c5d2bfb
external_ids        : {UDP_lb_gateway_router="GR_node002"}
name                : ""
protocol            : udp
vips                : {}

_uuid               : 38b047c9-033f-4c36-a398-65dec072b020
external_ids        : {TCP_lb_gateway_router="GR_node001"}
name                : ""
protocol            : []
vips                : {}

_uuid               : 047fa46a-cf9f-4a36-af08-fbd342d422e5
external_ids        : {"k8s-cluster-lb-udp"=yes}
name                : ""
protocol            : udp
vips                : {"10.96.0.10:53"="10.112.0.3:53,10.112.1.4:53,10.112.2.4:53"}

_uuid               : 68ebf6fb-8464-4c46-9a6e-b1ca74e111ad
external_ids        : {UDP_lb_gateway_router="GR_node007"}
name                : ""
protocol            : udp
vips                : {}

_uuid               : 075d88ce-6195-4f81-b343-7a43607f8084
external_ids        : {TCP_lb_gateway_router="GR_node004"}
name                : ""
protocol            : []
vips                : {}

_uuid               : 34b6ea3e-d9b8-4c91-b0a4-958168d8262e
external_ids        : {"k8s-cluster-lb-tcp"=yes}
name                : ""
protocol            : tcp
vips                : {"10.101.145.3:80"="10.112.5.4:80", "10.96.0.10:53"="10.112.0.3:53,10.112.1.4:53,10.112.2.4:53", "10.96.0.1:443"="172.16.126.202:6443,172.16.126.203:6443,172.16.126.204:6443"}

_uuid               : 3edae75e-36d2-4dc4-afb0-0b26600bdd14
external_ids        : {UDP_lb_gateway_router="GR_node001"}
name                : ""
protocol            : udp
vips                : {}

Check OpenFlow rules on node003, which the new "nginx-test" pod was scheduled on:

# ovs-ofctl dump-flows breth0
 cookie=0x0, duration=246.194s, table=0, n_packets=0, n_bytes=0, priority=100,ip,in_port="k8s-patch-breth" actions=ct(commit,zone=64000),output:eth0
 cookie=0x0, duration=246.176s, table=0, n_packets=7715, n_bytes=2596236, priority=50,ip,in_port=eth0 actions=ct(table=1,zone=64000)
 cookie=0x0, duration=57.978s, table=0, n_packets=0, n_bytes=0, priority=100,tcp,in_port=eth0,tp_dst=30118 actions=output:"k8s-patch-breth"
 cookie=0x0, duration=76920.498s, table=0, n_packets=2914216, n_bytes=243746114, priority=0 actions=NORMAL
 cookie=0x0, duration=246.159s, table=1, n_packets=0, n_bytes=0, priority=100,ct_state=+est+trk actions=output:"k8s-patch-breth"
 cookie=0x0, duration=246.143s, table=1, n_packets=0, n_bytes=0, priority=100,ct_state=+rel+trk actions=output:"k8s-patch-breth"
 cookie=0x0, duration=246.127s, table=1, n_packets=7710, n_bytes=2592084, priority=0 actions=LOCAL

Looking forward to your input...

ylhyh commented 5 years ago

@girishmg Can you also share HOW/WHEN ovnkube creates the load balancer rules?

ylhyh commented 5 years ago

@girishmg, when I add/delete other kong pods, I see a transaction error in the ovn-kubernetes log on master1:

[root@master1 ~]# tail -f /var/log/openvswitch/ovn-kubernetes.log 
time="2019-03-07T09:23:23+08:00" level=info msg="Node master1 ready for ovn initialization with subnet 10.112.0.0/24"
time="2019-03-07T09:24:55+08:00" level=info msg="Deleting pod: nginx-test-6489dfd864-952pn"
time="2019-03-07T09:26:14+08:00" level=info msg="Setting annotations ovn={\\\"ip_address\\\":\\\"10.112.5.4/24\\\", \\\"mac_address\\\":\\\"0a:00:00:00:00:25\\\", \\\"gateway_ip\\\": \\\"10.112.5.1\\\"} on pod nginx-test-6489dfd864-7fr6k"
time="2019-03-07T11:57:27+08:00" level=info msg="Setting annotations ovn={\\\"ip_address\\\":\\\"10.112.5.5/24\\\", \\\"mac_address\\\":\\\"0a:00:00:00:00:26\\\", \\\"gateway_ip\\\": \\\"10.112.5.1\\\"} on pod kong-ingress-controller-58ddbcdcdf-lwzwv"
time="2019-03-07T11:57:29+08:00" level=info msg="Setting annotations ovn={\\\"ip_address\\\":\\\"10.112.3.3/24\\\", \\\"mac_address\\\":\\\"0a:00:00:00:00:3a\\\", \\\"gateway_ip\\\": \\\"10.112.3.1\\\"} on pod kong-migrations-tpnvg"
time="2019-03-07T11:57:45+08:00" level=info msg="Deleting pod: kong-ingress-controller-58ddbcdcdf-lwzwv"
time="2019-03-07T11:57:48+08:00" level=info msg="Deleting pod: kong-migrations-tpnvg"
time="2019-03-07T12:00:56+08:00" level=error msg="failed to create address_set kong, stderr: \"2019-03-07T04:00:56Z|00002|ovsdb_idl|WARN|transaction error: {\\\"details\\\":\\\"Transaction causes multiple rows in \\\\\\\"Address_Set\\\\\\\" table to have identical values (\\\\\\\"a117315516301719020\\\\\\\") for index on column \\\\\\\"name\\\\\\\".  First row, with UUID 8fb2ba50-8e2d-433c-9eb6-cb472998320f, was inserted by this transaction.  Second row, with UUID e2f0d9df-4d0c-4b79-aeef-83586ab7ae58, existed in the database before this transaction and was not modified by the transaction.\\\",\\\"error\\\":\\\"constraint violation\\\"}\\novn-nbctl: transaction error: {\\\"details\\\":\\\"Transaction causes multiple rows in \\\\\\\"Address_Set\\\\\\\" table to have identical values (\\\\\\\"a117315516301719020\\\\\\\") for index on column \\\\\\\"name\\\\\\\".  First row, with UUID 8fb2ba50-8e2d-433c-9eb6-cb472998320f, was inserted by this transaction.  Second row, with UUID e2f0d9df-4d0c-4b79-aeef-83586ab7ae58, existed in the database before this transaction and was not modified by the transaction.\\\",\\\"error\\\":\\\"constraint violation\\\"}\\n\" (OVN command '/usr/bin/ovn-nbctl --db=tcp:172.16.126.202:6641,tcp:172.16.126.203:6641,tcp:172.16.126.204:6641 --timeout=15 create address_set name=a117315516301719020 external-ids:name=kong' failed: exit status 1)"
time="2019-03-07T12:00:58+08:00" level=info msg="Setting annotations ovn={\\\"ip_address\\\":\\\"10.112.6.4/24\\\", \\\"mac_address\\\":\\\"0a:00:00:00:00:2e\\\", \\\"gateway_ip\\\": \\\"10.112.6.1\\\"} on pod kong-ingress-controller-58ddbcdcdf-crnb7"
time="2019-03-07T12:01:00+08:00" level=info msg="Setting annotations ovn={\\\"ip_address\\\":\\\"10.112.7.4/24\\\", \\\"mac_address\\\":\\\"0a:00:00:00:00:3c\\\", \\\"gateway_ip\\\": \\\"10.112.7.1\\\"} on pod kong-migrations-7zsxj"
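
The constraint violation above means an Address_Set row named a117315516301719020 already existed when ovnkube tried to create it again (likely leftover state for the kong namespace). A dry-run sketch of how one might inspect and remove the stale row, using the name and UUID from the log above; verify with the find command before destroying anything:

```shell
# Print commands to inspect and delete the pre-existing Address_Set row.
# SET_NAME and STALE_UUID come from the transaction error in this thread.
SET_NAME=a117315516301719020
STALE_UUID=e2f0d9df-4d0c-4b79-aeef-83586ab7ae58
echo "ovn-nbctl --columns=_uuid,name,external_ids find address_set name=$SET_NAME"
echo "ovn-nbctl destroy address_set $STALE_UUID"
```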
zfy3000163 commented 5 years ago

@bsteciuk @danwinship I created a service; in addition to the flow rules, why are there iptables rules? yaml:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-05-15T11:27:46Z"
  name: my-nginx
  namespace: default
  resourceVersion: "4506484"
  selfLink: /api/v1/namespaces/default/services/my-nginx
  uid: 7663ee3f-7704-11e9-ae79-32e210bf1ca6
spec:
  clusterIP: 10.99.90.61
  ports:

[screenshot: iptables rules]

shettyg commented 5 years ago

Is your kube-proxy running? If so, that may be adding them. ovn-kubernetes will only add openflow flows if you use br-localnet for N/S traffic.

girishmg commented 5 years ago

Like @shettyg said, we don't need kube-proxy for any of the gateway modes we support. So, I would remove kube-proxy like below

kubectl delete ds -n kube-system kube-proxy
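
After removing the DaemonSet, it is worth confirming that no kube-proxy pods remain and checking for iptables rules it left behind. A sketch of the follow-up checks, printed for review (k8s-app=kube-proxy is the standard kubeadm label; adjust if your install differs):

```shell
# Follow-up checks after deleting the kube-proxy DaemonSet.
echo "kubectl get pods -n kube-system -l k8s-app=kube-proxy"
echo "iptables-save | grep KUBE-"
```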
danwinship commented 4 years ago

Historically there were problems with NodePort services; however, they are known to work in (properly installed) clusters now, and our CI verifies this.

It's not clear how much of this bug report is hitting actual old (now fixed) ovn-kubernetes NodePort bugs, and how much of it was just a broken cluster (e.g., running kube-proxy alongside ovn-kubernetes).