Bit-Warrior-X opened this issue 5 days ago
It seems like your ipip interface is not working, so the IPIP packet is not processed on the backend server. Did you follow the steps from the guidance here: https://github.com/facebookincubator/katran/blob/main/EXAMPLE.md#configuration-of-forwarding-plane ? Could you share the network configuration of your backend server? And could you please capture tcpdump on any interface?
tcpdump -ni any -vvv 'proto 4 or host 31.3.7.2'
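In case it helps, this is roughly what that section of EXAMPLE.md boils down to on the real server (paraphrased from memory, so treat it as a sketch and follow the linked document for the exact steps; the VIP 31.3.7.2 is taken from your setup):

# decapsulation devices Katran expects on every real server
sudo ip link add name ipip0 type ipip external
sudo ip link add name ipip60 type ip6tnl external
sudo ip link set up dev ipip0
sudo ip link set up dev ipip60
# disable reverse-path filtering so the kernel does not drop decapsulated packets
for sc in $(sysctl -a | awk '/\.rp_filter/ {print $1}'); do echo $sc ; sudo sysctl ${sc}=0; done
# put the VIP on loopback so the inner packet is accepted locally
sudo ip addr add 31.3.7.2/32 dev lo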
Hi @swettoth0812, thank you for your kind response.
This is the IP interface configuration on my backend server:
# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet 31.3.7.2/32 scope global lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
4: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default qlen 1000
link/tunnel6 :: brd :: permaddr be92:fafc:2800::
9: ipip0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
inet 127.0.0.42/32 scope host ipip0
valid_lft forever preferred_lft forever
10: ipip60@NONE: <NOARP,UP,LOWER_UP> mtu 1452 qdisc noqueue state UNKNOWN group default qlen 1000
link/tunnel6 :: brd :: permaddr 6e20:e1a4:6f12::
inet6 fe80::6c20:e1ff:fea4:6f12/64 scope link
valid_lft forever preferred_lft forever
280: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:1f:03:07:96 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 31.3.7.150/24 brd 31.3.7.255 scope global eth0
valid_lft forever preferred_lft forever
It was my fault: I hadn't run this command on the backend server:

for sc in $(sysctl -a | awk '/\.rp_filter/ {print $1}'); do echo $sc ; sudo sysctl ${sc}=0; done

After running it on the backend server, everything works fine now. Thanks!
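For anyone who hits the same problem, a quick sanity check (a sketch using the interface names and VIP from my output above):

sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter   # both should be 0
ip -d link show ipip0    # decapsulation devices should exist and be UP
ip -d link show ipip60
ip addr show dev lo      # the VIP (31.3.7.2/32 here) should be on loopback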
Anyway, I have two more questions. I will wait for your kind response.
Best regards.
Why do we use an IPIP tunnel for packet forwarding? Is there another solution that doesn't use an IPIP tunnel, since the encapsulation seems to reduce performance?
The IPIP tunnel allows the reals to be on different subnets. You could go through this blog post (https://fedepaol.github.io/blog/2023/09/06/ebpf-journey-by-examples-l4-load-balancing-with-xdp-and-katran/), which explains how Katran works.
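One way to see this: the outer IP header is addressed to the real server's own routable address, so the real only needs ordinary IP reachability from the load balancer rather than sitting on the same L2 segment as the VIP. You can watch both headers arrive on the backend with a capture along these lines (interface name taken from your ip addr output above; IPv4-in-IPv4 is IP protocol 4):

# -vvv makes tcpdump decode the outer (LB -> real) and inner (client -> VIP) headers
sudo tcpdump -ni eth0 -vvv ip proto 4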
And I am not sure why the VIP must be configured on the backend server.
Having the VIP on the backend server prevents the kernel from dropping the packet. The inner IP packet carries the client's address as the source IP and the VIP as the destination IP. If you don't add the VIP to the backend server, it doesn't know that it is the one that needs to process the packet.
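That is also why your ip addr output already shows 31.3.7.2/32 on lo. For anyone setting this up from scratch, the idea is simply (addresses taken from this thread):

# after decapsulation the inner packet has dst 31.3.7.2 (the VIP);
# the kernel only delivers it locally if it owns that address, so add it to loopback
sudo ip addr add 31.3.7.2/32 dev lo
ip addr show dev lo   # should list 31.3.7.2/32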
Hello everyone,
I have installed the Katran load balancer and used example_grpc for testing. This is the topology I have used.
I have configured Katran like below:
I am working on the user side (31.3.7.140) and trying to connect to the backend real server using the ssh command:

ssh 31.3.7.2

I have noticed that the SYN packet arrives correctly on the backend server side.
And this is the packet captured on the user side (31.3.7.140):
As you can see, I expected the backend server to send a response to the SSH request, but I can't capture any packets such as a SYN,ACK reply to the SYN. Why?
I would like to get more ideas from those of you who have more experience with the Katran load balancer. Thanks!