Dear Guilherme, we need more details about your configuration to understand what is going on. However, we can give you some clarifications and suggestions on how the network and the hosts should be configured:
1) if you do not have a specific reason to use the "inline" mode, please consider using the "encap" mode. This way you will avoid trouble when you have to decap the packets;
2) Link-local addresses (fe80::) are valid only for communication within the network segment (or broadcast domain) to which the host is connected. Such addresses are not guaranteed to be unique beyond their network segment and, for this reason, routers will not forward packets with link-local addresses. Please consider using another addressing scheme, e.g. fc00::/64 looks good for your needs;
3) you chose a SID list made of 1 SID, which means that you cannot apply any SRv6 End function on node2. By definition, the SRv6 End function can be applied only if the Segments Left (SL) is greater than or equal to 1. If the SRv6 End function is applied to a packet with SL=0, the packet is silently dropped;
4) IF you really want to perform the SRv6 End function on node2 AND you want to decap the packet on node3, THEN you have to use (at least) two SIDs. In order to perform the decap operation on node3, we suggest you use one of the SRv6 decap behaviors available in the kernel. For example, you can use SRv6 End.DT6 and specify the table used to carry out the route lookup once the packet has been decapsulated (see the sketch after this list);
5) consider keeping the addresses used for the SIDs separate from the addresses assigned to the host interfaces. By default (and without any complicated tricks), local delivery will take precedence over the SRv6 behavior processing. As a result, the defined behaviors will not be executed.
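To make points 1), 4) and 5) concrete, here is a minimal sketch of the three routes involved. Addresses, interface names and the fcff: SID prefix are purely illustrative and are not taken from your configuration:

# node1 (ingress): encapsulate traffic for node3 with a 2-SID list (encap mode)
ip -6 route add fc00::3/128 encap seg6 mode encap segs fcff:2::100,fcff:3::100 dev eth0

# node2 (transit): End behavior bound to a SID that is NOT one of its interface addresses
ip -6 route add fcff:2::100/128 encap seg6local action End dev eth0

# node3 (egress): End.DT6 decapsulates and looks up the inner packet in the given table;
# 254 (main) is only an example, pick a table that can resolve the inner destination
ip -6 route add fcff:3::100/128 encap seg6local action End.DT6 table 254 dev eth0

Of course, node1 and node2 also need plain IPv6 routes (or neighbor entries) that make the fcff: SIDs reachable via node2 and node3 respectively.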
A description of your topology would help us better understand your scenario and your needs.
Ciao, Andrea
Hi,
I want to create an SRv6 packet on host1 and send it to host3, passing through host2. If I understand correctly, I have to create that route on host1 with the line below:
ip -6 route add fc00::3/128 encap seg6 mode encap segs fc00::2 dev enp0s8
and then host2 will receive and forward this packet (decrementing SL), so host3 will look at the packet and, as SL = 0, host3 will remove that header and process the packet (reply to the ping, for example).
all my settings:
host 1:
ip -6 addr add fc00::1/64 dev enp0s8
ip link set dev enp0s8 up
ip -6 neigh add fc00::2 lladdr 00:00:00:00:00:02 nud permanent dev enp0s8
ip -6 neigh add fc00::3 lladdr 00:00:00:00:00:03 nud permanent dev enp0s8
ip -6 neigh add fc00::4 lladdr 00:00:00:00:00:04 nud permanent dev enp0s8
sudo sysctl -w net.ipv6.conf.all.accept_source_route=1
sudo sysctl -w net.ipv6.conf.enp0s8.seg6_require_hmac=-1
sudo sysctl -w net.ipv6.conf.all.forwarding=1
sudo sysctl -w net.ipv6.conf.enp0s8.seg6_enabled=1
sudo sysctl -p
ip -6 route add fc00::3/128 encap seg6 mode encap segs fc00::2 dev enp0s8

host 2:
ip -6 addr add fc00::2/64 dev enp0s8
ip link set dev enp0s8 up
ip -6 neigh add fc00::1 lladdr 00:00:00:00:00:01 nud permanent dev enp0s8
ip -6 neigh add fc00::3 lladdr 00:00:00:00:00:03 nud permanent dev enp0s8
ip -6 neigh add fc00::4 lladdr 00:00:00:00:00:04 nud permanent dev enp0s8
sudo sysctl -w net.ipv6.conf.all.accept_source_route=1
sudo sysctl -w net.ipv6.conf.enp0s8.seg6_require_hmac=-1
sudo sysctl -w net.ipv6.conf.all.forwarding=1
sudo sysctl -w net.ipv6.conf.enp0s8.seg6_enabled=1
sudo sysctl -p

host 3:
ip -6 addr add fc00::3/64 dev enp0s8
ip link set dev enp0s8 up
ip -6 neigh add fc00::1 lladdr 00:00:00:00:00:01 nud permanent dev enp0s8
ip -6 neigh add fc00::2 lladdr 00:00:00:00:00:02 nud permanent dev enp0s8
ip -6 neigh add fc00::4 lladdr 00:00:00:00:00:04 nud permanent dev enp0s8
sudo sysctl -w net.ipv6.conf.all.accept_source_route=1
sudo sysctl -w net.ipv6.conf.enp0s8.seg6_require_hmac=-1
sudo sysctl -w net.ipv6.conf.all.forwarding=1
sudo sysctl -w net.ipv6.conf.enp0s8.seg6_enabled=1
sudo sysctl -p
After that, I can ping between host1 and host2, and between host2 and host3, but when I try to ping from host1 to host3 (packet with the SRv6 header), that packet stops at host2; see the image at the link below.
I already tried with a localsid table and the End function on host2, like in my first post.
Am I missing something?
Hi, It would have been better to have a network diagram to figure out what you want to achieve.
BTW, if you want to steer packets through host2 and then to host3, you have to use a SID list made of 2 SIDs, e.g. <fc00::2,fc00::3>. Indeed, fc00::2 is used for applying the End function on node2, while fc00::3 is used for the decap part.
Note that the "End" function is applied only if SL>0; if SL=0 the End function will discard the packet.
Also remember to set the net.ipv6.conf.all.seg6_enabled=1 on your hosts.
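For example, keeping your current addressing for the moment (but see point 5 of my first answer about keeping SIDs separate from interface addresses), the steering route on host1 would look roughly like this:

# host1: steer packets for host3 through host2 first (SID list of 2 SIDs)
ip -6 route add fc00::3/128 encap seg6 mode encap segs fc00::2,fc00::3 dev enp0s8

# on every host: enable SRv6 processing globally and on the interface
sysctl -w net.ipv6.conf.all.seg6_enabled=1
sysctl -w net.ipv6.conf.enp0s8.seg6_enabled=1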
Ciao, Andrea
Hi again,
I tried almost everything I found. I have already read everything at this link: https://segment-routing.org/, and I still don't understand. With this image, it's easy to understand what I want. My doubts are: according to this scenario, what functions do I need on hosts 2 and 3?
Do I need to create a localsid table in this case? Do I need SRext? Or can I forward the packet and decrement SL with just the kernel implementation?
Hi Guilherme, I've got what you want to achieve but there are some missing details in that image.
1) How are these nodes (physically) connected to each other?
To make things easier, is it case (a) or case (b)?
2) I also need the IPv6 addresses that you assigned to the interfaces on each node.
Ciao, Andrea
Do I need to create a localsid table in this case?
For the question about the localsid table, I'm waiting for the details I asked for. However, in many cases you can do the same things without needing the localsid table.
Do I need SRext? Or can I forward the packet and decrement SL with just the kernel implementation?
For your use case, the Linux kernel provides everything you need, so you don't need anything else such as SRext.
Andrea
All of these hosts are VMs on the same internal network in VirtualBox. So we can say that I'm using a star topology for now.
host1 enp0s8: fc00::1/64
host2 enp0s8: fc00::2/64
host3 enp0s8: fc00::3/64
host4 enp0s8: fc00::4/64
And I only have these IPv6 addresses, and I'm not using namespaces. Is that correct?
In my second post you can see all the settings for hosts 1, 2 and 3. Remember that my real scenario has four hosts, not three; I just tried with 3 hosts to simplify things.
In your first post, in item 5, you say:
"consider keeping the addresses used for the SIDs separate from the addresses assigned to the host interfaces. By default (and without any complicated tricks), local delivery will take precedence over the SRv6 behavior processing. As a result, the defined behaviors will not be executed."
I didn't understand this. Could you explain, please?
how can I keep separate addresses for interfaces and SIDs?
In your first post, in item 5, you say:
"consider keeping the addresses used for the SIDs separate from the addresses assigned to the host interfaces. By default (and without any complicated tricks), local delivery will take precedence over the SRv6 behavior processing. As a result, the defined behaviors will not be executed."
I didn't understand this. Could you explain, please?
By default, the routing policy gives the local table the highest priority for route lookup operations (look at 'ip -4 rule show' and 'ip -6 rule show'). The local table is a special routing table, identified by the ID 255 and managed by the kernel; it contains local and broadcast addresses.
When you receive a packet, the kernel first looks into the local table. If there is a route whose prefix matches the DA of the packet, that route is selected and used to deliver the packet.
If there is no candidate route in the local table, the routing process continues with the next routing tables (depending on the policy routing rules). For example, the main table is considered only after the kernel has verified that the incoming packet cannot be delivered locally.
With this in mind, when you use a SID for one of your interfaces, you are telling the kernel "this address is local, manage it accordingly". If you use the same SID to set up an SRv6 behavior instance, the corresponding route will go into a table with a lower priority than the local table. As a consequence, the SRv6 behavior instance will never be triggered.
You may be tempted to insert SRv6 routes into the local table, or even to alter the table priorities by playing with policy routing. In general, these are not good ideas, for different reasons.
Rather, use two different addressing schemes: i) one for internal node reachability, ii) the other for the Segment Routing functions.
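You can see this priority yourself with a couple of read-only commands; the first one shows that the rule pointing to the local table has priority 0, i.e. it is evaluated before everything else:

ip -6 rule show               # policy routing rules, the 'lookup local' rule comes first
ip -6 route show table local  # what the kernel placed in the local table (ID 255)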
how can I keep separate addresses for interfaces and SIDs?
In your example, you can continue to use fc00::/64 for your internal network. You can use, for instance, a different network for the SRv6 functions, such as fcf0:xy::/64 (or whatever you prefer).
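As a sketch (the fcf0:2:: prefix, the SID value and the table name are only illustrative), on host2 this could look like:

# host2: SRv6 behaviors live in a dedicated table and use a dedicated prefix
echo "100 localsid" >> /etc/iproute2/rt_tables
ip -6 rule add to fcf0:2::/64 lookup localsid
ip -6 route add blackhole default table localsid
ip -6 route add fcf0:2::100/128 encap seg6local action End dev enp0s8 table localsid

# the other hosts need a route towards host2 for that prefix, e.g.:
ip -6 route add fcf0:2::/64 via fc00::2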
BTW, why did you choose a star topology for testing SRv6? Wouldn't it be more interesting if the hosts were connected using different networks?
You say: "Rather, use two different addressing schemes: i) one for internal node reachability, ii) the other for the Segment Routing functions."
OK, but how do I do that in practice? (Sorry for my ignorance! I'm still learning =/)
I chose a star topology because I'll have to test my P4 switch (master's project). My switch will classify IPv6/GTP flows to SRv6 SIDs, building an SFC. It's almost ready, but right now my biggest problem is the one you're helping me with.
Hi, I took your configuration and set up a simple script simulating your testbed. Rather than using VirtualBox and VMs, I used network namespaces, but this is only for my convenience. Each network namespace corresponds to one of your nodes, except for the P4 switch, which I replaced with a node that acts as a simple switch. You can refer to the configuration of each network namespace and apply that configuration to the corresponding VM.
Script: srv6_topo_4host1switch.sh.txt
To run this script, you need to have tmux installed in your system.
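If you cannot open the attachment, the general pattern the script follows is roughly the one below (names are illustrative; the attached script remains the reference): one namespace per node, plus a namespace with a Linux bridge standing in for the P4 switch, connected with veth pairs.

# one namespace per node, plus one acting as the switch
ip netns add host1
ip netns add sw
ip netns exec sw ip link add br0 type bridge
ip netns exec sw ip link set br0 up

# a veth pair connects host1 to the bridge (repeat for the other hosts)
ip link add veth-h1 type veth peer name veth-h1-sw
ip link set veth-h1 netns host1
ip link set veth-h1-sw netns sw
ip netns exec sw ip link set veth-h1-sw master br0
ip netns exec sw ip link set veth-h1-sw up
ip netns exec host1 ip link set veth-h1 up
ip netns exec host1 ip -6 addr add fc00::1/64 dev veth-h1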
Ciao, Andrea
PS: We have a web page where you can learn more about SRv6 and the related projects. Here is the link: https://netgroup.github.io/rose/ In addition, you can also download a VM which comes with everything needed for 'playing' with the SRv6 ecosystem: http://rose-repo.netgroup.uniroma2.it/vm/rose-srv6.ova.
It works now!! hahaha You led me to a better understanding of SRv6 and its functions. I learned a lot here with you. You are the best! Thank you very much! And congratulations on the ROSE project, it is very interesting. When I finish the master's project, I promise I will show it to you!
Hi, I am glad to help. Good luck for your project and I would be happy to see the final result!
Ciao, Andrea
@skorpion17 Hi! Sorry to interrupt, your script on the star topo also helped me a lot. I am just wondering if you might do another script for a linear topo? I am currently facing similar issues on a linear topo where SRv6 doesn't work. Really appreciated. My scenario is quite simple: two hosts with public IPv6 addresses, and I want the overlay IPv4 endpoints to reach each other wrapped in an SRv6 encap. So on the left VM (with SID fc00:1::) I have these commands:
ip link add sdn0 type dummy
ip addr add 3.3.3.3/24 dev sdn0
ip link set up dev sdn0
ip -6 rule add to fc00:3::/64 lookup localsid
ip -6 route add blackhole default table localsid
ip -6 route add fc00:3::/64 via 2a05:d018:e49:5e00:f3c:eeb7:4dfe:eb41 table localsid
ip -6 route add fc00:1::/64 encap seg6local action End.DX4 nh4 3.3.3.3 dev sdn0 table localsid
ip route add 3.3.3.0/24 encap seg6 mode encap segs fc00:3:: dev eth0
and on the right (with SID fc00:3::) I have these:
ip link add sdn0 type dummy
ip addr add 3.3.3.4/24 dev sdn0
ip link set up dev sdn0
ip -6 rule add to fc00:1::/64 lookup localsid
ip -6 route add blackhole default table localsid
ip -6 route add fc00:1::/64 via 2a05:d018:e49:5e00:9d23:688e:b209:c6a2 table localsid
ip -6 route add fc00:3::/64 encap seg6local action End.DX4 nh4 3.3.3.4 dev sdn0 table localsid
ip route add 3.3.3.0/24 encap seg6 mode encap segs fc00:1:: dev eth0
But when I ping from left to right and run tcpdump on the right, I can't even capture the SRv6 packet, and when I run tcpdump on the left, the SRv6 destination is still fc00:3:: and not the correct IP. I am just wondering where I went wrong?
Hi, if I understood correctly, your topology looks like this:
     VM left                          VM right
  +------------+                   +------------+
  |  d (sdn0)  |  eth0       eth0  |  d (sdn0)  |
  |  3.3.3.3   +-------------------+  3.3.3.4   |
  +------------+                   +------------+
   2a05:..:c6a2                     2a05:..:eb41
right?
First question: why do you use the dummy interface inside the VMs? Second question: where in your topology is the Wireshark capture taken?
Hi Paolo Lungaroni, thanks for the reply! The topology is correct. For the first question, I just want to create an overlay IPv4 network and let SRv6 do the routing in the underlay real-world network. I created the dummy interface with the IPv4 address to do this; I am new to Linux networking, so I really don't know if I am doing this correctly. For the second question, I capture on the interface with the IPv6 address, so it is at the exit of the VM network.
Hi wzrf, sorry for the delay. I replicated your topology on our testbed and I noticed two issues with the ip rule / route configuration in each VM.
For the first issue: the seg6local route is never matched, because it sits in the localsid table and there is no rule that looks up that table for the node's own SID. For the left VM you can add
ip -6 rule add to fc00:1::/64 lookup localsid
and for the right VM you can add
ip -6 rule add to fc00:3::/64 lookup localsid
For the second issue: in the encap route
ip route add 3.3.3.0/24 encap seg6 mode encap segs fc00:3:: dev eth0
you used the /24 subnet, but this conflicts with the pre-existing connected route (RTNETLINK answers: File exists). Use a /32 on each encap route.
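Putting the two fixes together, the extra commands would roughly be the following (interface names as in your posts, keeping the rest of your configuration in place):

# left VM: add the missing rule for its own SID space and steer with a /32,
# so it does not clash with the connected 3.3.3.0/24 route of sdn0
ip -6 rule add to fc00:1::/64 lookup localsid
ip route add 3.3.3.4/32 encap seg6 mode encap segs fc00:3:: dev eth0

# right VM: the same, mirrored
ip -6 rule add to fc00:3::/64 lookup localsid
ip route add 3.3.3.3/32 encap seg6 mode encap segs fc00:1:: dev eth0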
Hi @StefanoSalsano ,
I have a little problem. I have 3 Linux hosts running Ubuntu 20 (I already tried Debian 10 and Fedora), and I'm trying to make SRv6 work on these hosts. The environment looks something like this:
host 1:
ip -6 route add fe80::3 encap seg6 mode inline segs fe80::2 dev enp0s8
host 2: receives the packet, but after that nothing happens. I want host 2 to forward the packet to host 3. Why isn't this happening?
echo 100 localsid >> /etc/iproute2/rt_tables
ip -6 rule add to fe80::/64 lookup localsid
ip -6 route add blackhole default table localsid
ip -6 route add fe80::/64 encap seg6local action End dev enp0s8 table localsid