cirocosta / monero-operator

A Kubernetes-native way of deploying Monero nodes and even whole networks: express your intention and let Kubernetes run it for you.
https://www.getmonero.org/
Apache License 2.0

moneronodeset p2p: no incoming connections when behind a service #10

Closed cirocosta closed 3 years ago

cirocosta commented 3 years ago

being behind a service, we must advertise the service's IP as the one for other peers to connect to.

it'll be interesting to verify which IP is actually being advertised - I'm not really familiar with that part of the codebase

related: https://github.com/monero-project/monero/pull/7707
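
for reference, a minimal sketch of the monerod flags that are in play here, assuming the pod binds p2p on 18080 and that a (hypothetical) NodePort 30080 exposes it to the outside - note these only cover the port side of the story, not the IP we'd like to advertise:

# bind p2p on all interfaces and advertise the externally reachable port
# (30080 is a made-up NodePort; adjust to whatever the Service exposes)
monerod \
  --non-interactive \
  --p2p-bind-ip=0.0.0.0 \
  --p2p-bind-port=18080 \
  --p2p-external-port=30080 \
  --no-igd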

cirocosta commented 3 years ago

some logs:

2021-05-24 12:40:28.117 [P2P8]  INFO    net.p2p src/p2p/net_node.inl:2549       [10.244.0.1:50919 INC] CONNECTION FROM 10.244.0.1 REFUSED, too many connections from the same address
2021-05-24 12:40:28.117 I [10.244.0.1:50919 INC] CONNECTION FROM 10.244.0.1 REFUSED, too many connections from the same address
2021-05-24 12:40:28.120 [P2P0]  INFO    net.p2p src/p2p/net_node.inl:2549       [10.244.0.1:4076 INC] CONNECTION FROM 10.244.0.1 REFUSED, too many connections from the same address
2021-05-24 12:40:28.120 I [10.244.0.1:4076 INC] CONNECTION FROM 10.244.0.1 REFUSED, too many connections from the same address
2021-05-24 12:40:28.122 [P2P1]  INFO    net.p2p src/p2p/net_node.inl:2549       [10.244.0.1:47982 INC] CONNECTION FROM 10.244.0.1 REFUSED, too many connections from the same address
2021-05-24 12:40:28.122 I [10.244.0.1:47982 INC] CONNECTION FROM 10.244.0.1 REFUSED, too many connections from the same address
2021-05-24 12:40:32.178 [P2P8]  INFO    net.p2p src/p2p/net_node.inl:2549       [10.244.0.1:19818 INC] CONNECTION FROM 10.244.0.1 REFUSED, too many connections from the same address
2021-05-24 12:40:32.178 I [10.244.0.1:19818 INC] CONNECTION FROM 10.244.0.1 REFUSED, too many connections from the same address

that makes sense - 10.244.0.1 is the address of the bridge device inside the kind-control-plane network namespace that connects the pods

e.g., we can see that to reach, say, 10.244.0.2 (a pod):

ip route get 10.244.0.2
10.244.0.2 dev veth987162b8 src 10.244.0.1 uid 0

we go through 10.244.0.1, the bridge device that has the forwarding database:

root@kind-control-plane:/# bridge fdb show
33:33:00:00:00:01 dev veth5f6a829e self permanent
01:00:5e:00:00:01 dev veth5f6a829e self permanent
33:33:00:00:00:01 dev veth987162b8 self permanent
01:00:5e:00:00:01 dev veth987162b8 self permanent
33:33:00:00:00:01 dev vethaa54635a self permanent
01:00:5e:00:00:01 dev vethaa54635a self permanent
33:33:00:00:00:01 dev veth6322497c self permanent
01:00:5e:00:00:01 dev veth6322497c self permanent
33:33:00:00:00:01 dev veth1a213ee7 self permanent
01:00:5e:00:00:01 dev veth1a213ee7 self permanent
33:33:00:00:00:01 dev vethacc19372 self permanent
01:00:5e:00:00:01 dev vethacc19372 self permanent
33:33:00:00:00:01 dev eth0 self permanent
33:33:ff:00:00:02 dev eth0 self permanent
01:00:5e:00:00:01 dev eth0 self permanent
33:33:ff:14:00:02 dev eth0 self permanent

we can confirm that by taking a look at the plain ip output:

root@kind-control-plane:/# ip a | grep 244.0.1 -B2
2: veth5f6a829e@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:68:02:23:ef:c9 brd ff:ff:ff:ff:ff:ff link-netns cni-b4f587a4-a521-f4f1-60af-8be02d667fb1
    inet 10.244.0.1/32 scope global veth5f6a829e
--
3: veth987162b8@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether ce:e4:4c:cf:2f:b0 brd ff:ff:ff:ff:ff:ff link-netns cni-c4e1116b-bc53-084c-215b-f5395381a8fe
    inet 10.244.0.1/32 scope global veth987162b8
--
4: vethaa54635a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether aa:b6:50:48:80:27 brd ff:ff:ff:ff:ff:ff link-netns cni-5abb45a6-711f-365b-7fcb-c8ae447e1f00
    inet 10.244.0.1/32 scope global vethaa54635a
--
7: veth6322497c@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 0e:f3:a1:ea:21:f2 brd ff:ff:ff:ff:ff:ff link-netns cni-5d6da78e-5461-1897-8673-0e0d42def4e4
    inet 10.244.0.1/32 scope global veth6322497c
--
8: veth1a213ee7@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 7a:fa:a8:4f:65:8f brd ff:ff:ff:ff:ff:ff link-netns cni-eeece0ff-d6e4-bb5d-536c-4cc830f07cca
    inet 10.244.0.1/32 scope global veth1a213ee7
--
9: vethacc19372@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether e6:2f:4f:60:a1:03 brd ff:ff:ff:ff:ff:ff link-netns cni-b69762d1-c348-6283-1316-65f3f5e16e87
    inet 10.244.0.1/32 scope global vethacc19372

thus, from the perspective of monerod, all of the inbound traffic will be coming from that single IP - which is why it ends up refusing connections with "too many connections from the same address".
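
we can double-check that from inside the pod as well - a quick sketch, assuming the p2p port is 18080 and that ss is available in the container image:

# list established connections on the p2p port; with the SNAT in place,
# every inbound peer shows up with 10.244.0.1 as its address
ss -tn state established '( sport = :18080 )'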

cirocosta commented 3 years ago

well, it turns out that we can disable the SNAT behavior that we're seeing for NodePort services:

https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-nodeport

To avoid this, Kubernetes has a feature to preserve the client source IP. If you set service.spec.externalTrafficPolicy to the value Local, kube-proxy only proxies requests to local endpoints, and does not forward traffic to other nodes. This approach preserves the original source IP address. If there are no local endpoints, packets sent to the node are dropped, so you can rely on the correct source-ip in any packet processing rules you might apply to a packet that makes it through to the endpoint.

TIL
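
in our case that boils down to setting the field on the Service that exposes the node's p2p port - a sketch, assuming a NodePort Service named moneronodeset-a (hypothetical name) in the default namespace:

# stop kube-proxy from SNAT'ing inbound traffic so the original client IP
# reaches monerod
kubectl patch service moneronodeset-a \
  --type merge \
  --patch '{"spec": {"externalTrafficPolicy": "Local"}}'

# confirm the field took effect
kubectl get service moneronodeset-a \
  -o jsonpath='{.spec.externalTrafficPolicy}'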

right after patching the Service, we can see inbound connections being established:

Height: 2367941/2367941 (100.0%) on mainnet, not mining, net hash 2.57 GH/s, v14, 127(out)+4(in) connections, uptime 0d 2h 6m 30s

so ... we might actually not need the monerod patch I mentioned before

cirocosta commented 3 years ago

fixed!