kapilt closed this issue 9 years ago.
Alternatively, openflow.
Weave operates at L2, as described at https://github.com/zettio/weave#how-does-it-work. It acquires knowledge of which MAC addresses are located at what peer hosts by looking at the Ethernet headers of captured packets, and at the weave encapsulation of packets it receives from other peers. And it routes packets based on the same sources of information.
Are you suggesting that the openvswitch and openflow APIs would give us a way to do all that w/o having to lift the entire packets into user space?
And what about crypto? That certainly requires the entire packet, so would have to be completely delegated.
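As a concrete illustration of the L2 learning described above (not weave itself, just a way to see the raw material it works with), you can watch Ethernet headers on a weave host with tcpdump; the interface name "weave" is an assumption and the command needs root:

```shell
# Print Ethernet (link-level) headers for a few frames on the weave
# bridge, showing the source/destination MAC addresses that an L2
# learning switch records. Interface name "weave" is assumed.
tcpdump -e -n -c 10 -i weave
```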
Fair enough; my primary concern was the performance of this approach and the number of context switches involved in per-packet handling. Some latency and bandwidth benchmarks in the docs would be useful to quantify this for users. The exposed weave UI and workflow for the combined functionality is pretty compelling, though. For non-app isolation, gretap devices over IPsec would give you the crypto and tunnelling mostly in kernel space. Re openvswitch/openflow: yes, minus the encryption functionality the bulk of the packets wouldn't need to come through userspace, just new flows/routes, but they're also a more complicated setup.
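A minimal sketch of the gretap idea mentioned above, with placeholder addresses (192.0.2.x); IPsec would then be configured separately (e.g. a transport-mode policy between the two endpoints) to encrypt the GRE traffic in kernel space:

```shell
# Create an L2-in-L3 (gretap) tunnel towards a peer host; addresses
# are placeholders. Run the mirror-image command on the peer.
ip link add gretap1 type gretap local 192.0.2.1 remote 192.0.2.2
ip link set gretap1 up
# A bridge (or container veth) can then be attached to gretap1,
# giving an Ethernet link that rides over IP, mostly in the kernel.
```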
primary concern was performance of this approach and the number of context switches for per packet handling.
Right. There is certainly a cost. I shall keep this issue open for now, so we can look more closely at openvswitch and openflow at some point and ascertain how much of weave's functionality we could implement going that route, while retaining the same, simple surface CLI as at present.
Some benchmarks for latency and bandwidth in the docs would be useful
Yes. We've run some benchmarks, obviously, but not particularly rigorously. I have filed issue #37 for this.
@rade
openvswitch and openflow can be used in this context, but they need an external controller or control plane to handle the packet-ins. Routes can be injected into openvswitch via the ovsdb protocol or openflow, and typically tunnel/encapsulation mechanisms are attached to each network/subnet to provide encryption, isolation and overlapping IP spaces between networks. That enables multi-tenancy within this model by, for instance, allowing multiple 10.0.0.0 networks to exist. In my opinion this could also lighten the load on the weave router, and even handle multicast/broadcast/unknown-unicast traffic.
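A hedged sketch of the setup described above, assuming ovs-vsctl is available; the bridge name, controller address and peer IP are placeholders:

```shell
# Create an openvswitch bridge and point it at an external
# controller, which will receive packet-ins for unknown flows.
ovs-vsctl add-br br0
ovs-vsctl set-controller br0 tcp:192.0.2.10:6633
# Add a GRE tunnel port towards a peer host; per-network isolation
# would typically be carried in a tunnel key.
ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre \
    options:remote_ip=192.0.2.2
```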
Figured I'd throw an example in here. Using openvswitch and an external control plane for routing, container-to-container traffic performance is:
root@edc8daaa4c44:/# iperf -c 192.168.1.24
------------------------------------------------------------
Client connecting to 192.168.1.24, TCP port 5001
TCP window size: 22.9 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.1.40 port 56457 connected with 192.168.1.24 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 18.5 GBytes 15.9 Gbits/sec
And Container <--> External node (via gateway node, which is a software router on a physical host or in a VM)
root@a1625d5f6c62:/# iperf -c 10.x.x.x -P 1 -i 1 -p 5001 -f
iperf: option requires an argument -- f
------------------------------------------------------------
Client connecting to 10.x.x.x, TCP port 5001
TCP window size: 22.9 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.1.36 port 47589 connected with 10.x.x.x port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 114 MBytes 955 Mbits/sec
[ 3] 1.0- 2.0 sec 112 MBytes 944 Mbits/sec
[ 3] 2.0- 3.0 sec 111 MBytes 933 Mbits/sec
[ 3] 3.0- 4.0 sec 112 MBytes 944 Mbits/sec
[ 3] 4.0- 5.0 sec 112 MBytes 936 Mbits/sec
[ 3] 5.0- 6.0 sec 112 MBytes 943 Mbits/sec
And a note that the ~120 MB/s seen to the external node is this infrastructure's limit for the tunnels (STT/GRE) using openvswitch in our environment.
It really depends on what you're trying to do, but because this works at L2 you should definitely take a look and see if it fits the direction of the project.
@wallnerryan
What resources would you recommend to learn about openvswitch from the perspective of a control plane developer? Would you recommend the codebase of any of the virtualization systems that support openvswitch (or anything else that might form a model for its use in weave) as particularly approachable?
I've skimmed the openvswitch site, but the information there seems sketchy and not directly relevant to weave. But I could be overlooking something.
@dpw - read the OpenFlow spec. It'll give you an idea of the packet flow and what the control plane is responsible for. Also, this was a helpful tutorial to start off with for developing a control plane. There's also a good tutorial/lab exercise from Stanford.
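For a feel of what the control plane ends up doing, here is a sketch using ovs-ofctl to install a flow by hand, which is essentially what a controller does programmatically in response to a packet-in (the bridge name and port numbers are assumptions):

```shell
# Install a flow that forwards traffic arriving on port 1 out of
# port 2; a controller would send the equivalent flow-mod over its
# openflow connection instead of using the CLI.
ovs-ofctl add-flow br0 "in_port=1,actions=output:2"
# Inspect the installed flows and their packet counters.
ovs-ofctl dump-flows br0
```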
Hello weave team! Just to say I'm interested in this topic too, and I would like to know the status of this request...
Kind regards ;)
Using openvswitch would involve rewriting a large portion of weave, and preserving all of the functionality that weave currently provides would not be trivial. It would also add a significant dependency to weave.
With that said, we are looking at ways to improve weave for applications that require high network throughput or low latency. When we have something to announce, we will link to it in this issue.
@dpw : thank you for this clear answer :)
This raises my curiosity, though. When you have some time, could you give us more input on the difficulties you see in integrating openvswitch with weave/docker?
Thank you again.
Cheers
This raises my curiosity, though. When you have some time, could you give us more input on the difficulties you see in integrating openvswitch with weave/docker?
Openvswitch integration with docker is quite a different matter from openvswitch integration with weave. There are no particular difficulties in integrating openvswitch with docker. Indeed, the openvswitch repository contains a shell script to provide a simple form of integration with docker.
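The script mentioned is presumably utilities/ovs-docker in the openvswitch repository; a sketch of its use, where the bridge, interface and container names are placeholders:

```shell
# Attach a new interface eth1, on openvswitch bridge br-int, to a
# running container and assign it an address. Requires the
# ovs-docker script from the openvswitch repository and a running
# container named "mycontainer".
ovs-docker add-port br-int eth1 mycontainer --ipaddress=10.0.0.2/24
```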
But when we consider how to evolve weave, we'd like to retain the existing functionality and ease of use, and several of weave's features are particularly relevant here.
So in general, weave aims to work well even when the underlying network presents difficulties. This is not the traditional domain of openvswitch, which tends to assume that the users are network administrators who have a high degree of control of the underlying network environment. So there's a gap there that we would have to bridge.
And there is the less fundamental but significant issue that integrating weave and openvswitch would involve a lot of development effort.
we are looking at ways to improve weave for applications that require high network throughput or low latency. When we have something to announce, we will link to it in this issue.
Fixed by #1438.
i.e. you just want to route the flows, not process every packet in userspace.