larsks / blog.oddbit.com


post/2019-12-19-ovn-and-dhcp/ #8

Open utterances-bot opened 4 years ago

utterances-bot commented 4 years ago

OVN and DHCP: A minimal example · The Odd Bit

Introduction A long time ago, I wrote an article all about OpenStack Neutron (which at that time was called Quantum). That served as an excellent reference for a number of years, but if you've deployed a recent version of OpenStack you may have noticed that the network architecture looks completely different. The network namespaces previously used to implement routers and dhcp servers are gone (along with iptables rules and other features), and have been replaced by OVN (“Open Virtual Network”).

https://blog.oddbit.com/post/2019-12-19-ovn-and-dhcp/

flavio-fernandes commented 4 years ago

Nice post! On the subject of docs/info, it got removed from the Open vSwitch site when OVN split into its own repo. You can get to the OVN docs on GitHub, but we will eventually have a dedicated page for them under ovn.org.

larsks commented 4 years ago

It would be great to have that hosted somewhere more discoverable :).

flavio-fernandes commented 4 years ago

I would like to offer a small contribution for folks interested in trying the steps described by this great page: a Vagrantfile

If that interests you, see: https://gist.github.com/flavio-fernandes/862747708512c1967c7b412f500fb56d

And a couple of comments to make this page even more awesome:

I'm not sure about the difference between the ovs version you used and the one I used, but I think that 'ovn-remote' should include the protocol and port. In other words:

central='192.168.122.100' && \
ovs-vsctl set open . external-ids:ovn-remote=tcp:${central}:6642

I think the way you used the command 'ovn-nbctl dhcp-options-create' is not doing what we need. For this particular command, all the parameters provided after the cidr are stored as external_ids and these are not used in the dhcp offer. They should be part of the options column. In summary, I think this is what you may need to do:

ovn-nbctl dhcp-options-create 10.0.0.0/24

CIDR_UUID=$(ovn-nbctl --bare --columns=_uuid find dhcp_options cidr="10.0.0.0/24")

ovn-nbctl dhcp-options-set-options ${CIDR_UUID} \
  lease_time=3600 \
  router=10.0.0.1 \
  server_id=10.0.0.1 \
  server_mac=c0:ff:ee:00:00:01

The 'ovn-nbctl list dhcp_options' command should then list the options under 'options' and not 'external_ids'.

As you do below, you can instead set it straight into the NB db with the command 'ovn-nbctl create dhcp_options'.
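For reference, the single-command 'create' form (with the nested ovsdb map quoting that makes it unwieldy) would look something like this, assuming the same option values as above:

```shell
# 'create' prints the new row's UUID, so it can be captured directly;
# note the nested quoting required for the options map.
CIDR_UUID=$(ovn-nbctl create dhcp_options cidr=10.0.0.0/24 \
  options='"lease_time"="3600" "router"="10.0.0.1" "server_id"="10.0.0.1" "server_mac"="c0:ff:ee:00:00:01"')
```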

I am a bit of a lazy typer, so I used that command to grab the uuid of the dhcp_options row. I wonder if you would find it interesting to tweak that part of this page to make it easier to follow. Just a suggestion:

ovn-nbctl lsp-set-dhcpv4-options port1 ${CIDR_UUID}

I see a tiny little discrepancy on the mac you gave to port1 in your page. I think you meant to use c0:ff:ee:00:00:11 instead of c0:ff:ee:00:00:10, right?

ovn-nbctl lsp-set-addresses port1 "c0:ff:ee:00:00:10 dynamic"   ; # maybe c0:ff:ee:00:00:11 ?

Another nit: There is no 's' in the logical_switch_ports table. So the command should be

ovn-nbctl list logical_switch_port
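To confirm the dynamically assigned address for a single port (port name taken from the post), something like this should work:

```shell
# Show only the dynamic_addresses column for port1.
ovn-nbctl --bare --columns=dynamic_addresses list logical_switch_port port1
```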

Lastly, since vm has eth0 reserved for mgmt access, the command to configure the ovn-encap-ip needed a small tweak on my Vagrant based vm cluster:

ETH_DEV=$(ip route get 192.168.122.0 | grep -oP "(?<= dev )[^ ]+")

ovs-vsctl set open-vswitch .  \
   external_ids:ovn-encap-ip=$(ip addr show $ETH_DEV | awk '$1 == "inet" {print $2}' | cut -f1 -d/)

Best!

-- flaviof

larsks commented 4 years ago

Flavio,

Thanks for the comments!

> I'm not sure about the difference between the ovs version you used and the one I used, but I think that 'ovn-remote' should include the protocol and port.

Yeah, good catch; that was just a typo. My running environment actually has the protocol and port.

> I think the way you used the command 'ovn-nbctl dhcp-options-create' is not doing what we need.

Ugh. A dhcp-options-create command that doesn't actually let you set options seems sadistic.

I actually used the ovn-nbctl create ... command when I set up my environment, but I was looking for a simpler mechanism, since the quoting requirements for the database commands can sometimes verge on the ridiculous.

I've updated the post to just use what you suggested.

> I think you meant to use c0:ff:ee:00:00:11 instead of c0:ff:ee:00:00:10, right?

> Another nit: There is no 's' in the logical_switch_ports table.

Yeah, I've fixed those, too.


Thanks for taking a look and correcting things!

flavio-fernandes commented 4 years ago

Sorry, me again. :^) I came across a good read on how OVN implements the DHCP functionality and thought of sharing it here: https://blogs.rdoproject.org/2016/08/native-dhcp-support-in-ovn/ Enjoy!

stephen144 commented 4 years ago

Thanks for the post. FYI, Fedora has since split the ovn package into several. In addition to the ovn package there is ovn-host, which has the ovn-controller service, and ovn-central, with the ovn-northd service. I had to install all three to go through this.
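Roughly like this (exact package and unit names may vary by Fedora release, so treat this as a sketch):

```shell
# Install the split packages: ovn (common), ovn-host (ovn-controller),
# and ovn-central (ovn-northd plus the NB/SB databases).
sudo dnf install -y ovn ovn-host ovn-central

# Enable the services on the appropriate nodes:
sudo systemctl enable --now ovn-northd       # central node
sudo systemctl enable --now ovn-controller   # each hypervisor/chassis
```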

gstanden commented 3 years ago

Awesome post @larsks and also big ups to @flavio-fernandes for the great comments.
Question: How do you hook this up to bind9 DNS for dynamic DNS zone updates (if that's possible)?
TIA

larsks commented 3 years ago

@gstanden I think the idea is that whatever configures the static DHCP leases in OVN would also be responsible for setting up the necessary DNS entries (that is, you wouldn't try setting up some kind of script to respond to new leases like you would with ISC DHCPd or something).
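For example, the same tooling that reserves the lease in OVN could push a matching record to bind9 via a TSIG-signed dynamic update. A minimal sketch (server address, zone, hostname, and key path here are all placeholders):

```shell
# Hypothetical: after reserving 10.0.0.11 for vm1 in OVN, add the
# matching forward record to a bind9 server configured to allow
# dynamic updates signed with the given key.
nsupdate -k /etc/bind/ddns.key <<EOF
server 192.168.1.10
zone example.com
update add vm1.example.com 300 A 10.0.0.11
send
EOF
```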

gstanden commented 3 years ago

Here's another question: If I use OVS (and not OVN) and create a geneve tunnel like this:

sudo ovs-vsctl add-port sw1 geneve201 -- set interface geneve201 type=geneve options:remote_ip=192.168.1.144 options:key=flow

then I am able to ping each physical host at the endpoints from the other host, if you will, "in the usual way." Those OVS ports look like this:

    Port geneve201
        Interface geneve201
            type: geneve
            options: {key=flow, remote_ip="192.168.1.143"}

    Port geneve201
        Interface geneve201
            type: geneve
            options: {key=flow, remote_ip="192.168.1.144"}

and so ping works in the usual way (not in a separate namespace), e.g.

    ubuntu@u20sv1:~$ ifconfig genev_sys_6081
    genev_sys_6081: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 65000
            inet6 fe80::1cbd:6dff:fe40:591  prefixlen 64  scopeid 0x20<link>
            ether 1e:bd:6d:40:05:91  txqueuelen 1000  (Ethernet)
            RX packets 1157  bytes 85675 (85.6 KB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 1783  bytes 231412 (231.4 KB)
            TX errors 0  dropped 17  overruns 0  carrier 0  collisions 0

    ubuntu@u20sv1:~$ ifconfig sw1
    sw1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 10.209.53.1  netmask 255.255.255.0  broadcast 0.0.0.0
            inet6 fe80::809d:afff:fe63:dd4b  prefixlen 64  scopeid 0x20<link>
            ether 82:9d:af:63:dd:4b  txqueuelen 1000  (Ethernet)
            RX packets 1585  bytes 132508 (132.5 KB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 963  bytes 147579 (147.5 KB)
            TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

    ubuntu@u20sv1:~$ ping -c 3 -I sw1 10.209.53.201
    PING 10.209.53.201 (10.209.53.201) from 10.209.53.1 sw1: 56(84) bytes of data.
    64 bytes from 10.209.53.201: icmp_seq=1 ttl=64 time=2.40 ms
    64 bytes from 10.209.53.201: icmp_seq=2 ttl=64 time=1.26 ms
    64 bytes from 10.209.53.201: icmp_seq=3 ttl=64 time=1.20 ms

    --- 10.209.53.201 ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 2004ms
    rtt min/avg/max/mdev = 1.196/1.617/2.400/0.554 ms
    ubuntu@u20sv1:~$

However, if I switch over to OVN and create the geneve tunnel ports using your provided code (thanks!!) the tunnels seem to "look" the same, but I cannot ping one host from the other in the same way shown above when using OVS.

    Port ovn-40f7df-0
        Interface ovn-40f7df-0
            type: geneve
            options: {csum="true", key=flow, remote_ip="192.168.1.143"}

    ubuntu@u20sv3:~$ ifconfig genev_sys_6081
    genev_sys_6081: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 65000
            inet6 fe80::6024:a1ff:feab:d26  prefixlen 64  scopeid 0x20<link>
            ether 62:24:a1:ab:0d:26  txqueuelen 1000  (Ethernet)
            RX packets 4960  bytes 142956 (142.9 KB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 3810  bytes 108176 (108.1 KB)
            TX errors 0  dropped 24  overruns 0  carrier 0  collisions 0

I'm using these networks for containers for the GitHub Orabuntu-LXC project. I'm looking at switching over to OVN because, for one thing, Orabuntu-LXC currently uses a hub-and-spoke tunnel arrangement for multi-host, and switching to OVN would make it easy to implement, if you will, a "mesh" of tunnels where all hosts are linked to all other hosts, which would be kludgy to do the way Orabuntu-LXC currently implements the tunnels (without the benefit of the northbound and southbound databases that OVN offers).

But I think I'm missing a few puzzle pieces here in that creating what appears to be the "same" tunnels in OVS vs OVN turns out to be not the same in terms of connectivity functionality.

Just any general thoughts folks might have on that could be very helpful.
TIA

pyite commented 2 years ago

Is there a way to use tcpdump to see the DHCP traffic?

I typically do something like "tcpdump -ni any port 68" to make sure that DHCP packets are flowing, but so far OVN is much more painful to troubleshoot because I haven't yet found a way to see any activity.

larsks commented 2 years ago

You can create a mirror of internal ports and apply tcpdump to the mirror, as described in https://wiki.openstack.org/wiki/OpsGuide/Network_Troubleshooting. Does that help? As I mentioned in the post, I found ovn-trace to be useful as a diagnostic tool.
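As a sketch of the mirror approach (bridge and dummy-interface names here are assumptions; adjust for your setup):

```shell
# Create a dummy interface to receive mirrored traffic, attach it to
# the integration bridge, and mirror all bridge traffic to it.
ip link add mirror0 type dummy
ip link set mirror0 up
ovs-vsctl add-port br-int mirror0
ovs-vsctl \
  -- --id=@p get port mirror0 \
  -- --id=@m create mirror name=m0 select-all=true output-port=@p \
  -- set bridge br-int mirrors=@m

# DHCP traffic on the bridge should now be visible:
tcpdump -ni mirror0 port 67 or port 68

# Remove the mirror when done:
ovs-vsctl clear bridge br-int mirrors
```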