olopez32 / ganeti

Automatically exported from code.google.com/p/ganeti

Support vhost-net and macvtap #167

Open GoogleCodeExporter opened 9 years ago

GoogleCodeExporter commented 9 years ago
Please add support in Ganeti for setting up vhost-net and macvtap 
networking, to reduce the overhead of bridging on the host.

http://fedoraproject.org/wiki/Features/VHostNet
http://virt.kernelnewbies.org/MacVTap
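For context, vhost-net is normally switched on per tap NIC by appending vhost=on to the -netdev tap options on the QEMU/KVM command line. A minimal sketch of how a hypervisor might assemble such arguments (the helper, id, and interface names are illustrative, not Ganeti code):

```python
def kvm_net_args(tap_name, mac, enable_vhost=True):
    """Build QEMU -netdev/-device arguments for a tap NIC.

    Hypothetical helper: vhost-net moves virtio packet processing into
    the kernel, and is enabled simply by adding vhost=on to the tap
    netdev options; the tap device setup itself is unchanged.
    """
    netdev = "tap,id=net0,ifname=%s,script=no,downscript=no" % tap_name
    if enable_vhost:
        netdev += ",vhost=on"
    return ["-netdev", netdev,
            "-device", "virtio-net-pci,netdev=net0,mac=%s" % mac]

print(kvm_net_args("tap0", "52:54:00:12:34:56"))
```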

Original issue reported on code.google.com by an...@hpc2n.umu.se on 16 Jun 2011 at 8:59

GoogleCodeExporter commented 9 years ago
vhost-net is implemented, at least.

Original comment by ultrot...@gmail.com on 16 Jun 2011 at 11:11

GoogleCodeExporter commented 9 years ago
Ah. I found vhost-net in the News file and in the code, but not in the 
manpages. 

Original comment by an...@hpc2n.umu.se on 16 Jun 2011 at 11:17

GoogleCodeExporter commented 9 years ago
Ah, then those need to be updated. Will try to push a patch soon.

Original comment by ultrot...@gmail.com on 16 Jun 2011 at 2:55

GoogleCodeExporter commented 9 years ago

Original comment by ius...@google.com on 19 Jul 2012 at 1:40

GoogleCodeExporter commented 9 years ago

Original comment by ultrot...@google.com on 20 Dec 2012 at 12:28

GoogleCodeExporter commented 9 years ago

Original comment by ultrot...@google.com on 10 Apr 2013 at 6:10

GoogleCodeExporter commented 9 years ago
what's the current status of this?

Original comment by neal.oa...@googlemail.com on 21 Jul 2014 at 11:24

GoogleCodeExporter commented 9 years ago
As far as I know, no one is actively working on
this (lack of time). But Guido should know better.

Original comment by aeh...@google.com on 22 Jul 2014 at 7:35

GoogleCodeExporter commented 9 years ago
Indeed. This could perhaps be a good "smalltask", as a starter project or for 
an external contributor who's interested.

Original comment by ultrot...@google.com on 22 Jul 2014 at 2:19

GoogleCodeExporter commented 9 years ago
Hi,

Currently I'm evaluating LXC network performance (10G ethernet). I use a 
classical network stack on the node: 

bonding (with LACP) -> VLAN (802.1Q) -> bridge -> VETH (LXC). 

I measured with iperf that this stack has performance problems: I can only get 
about 3 Gb/s out of it. The same is true for me with TAP (KVM). I've heard 
that this is known, and that one solution is to use an Open vSwitch bridge to 
get better performance. Because I can't use Open vSwitch (too old an 
"enterprise" kernel), I focused on macvlan. I modified the network stack to look this way:

bonding (with LACP) -> VLAN (802.1Q) -> macvlan in bridge mode (LXC)
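Such a stack can be brought up by hand with iproute2: one `ip link add ... type macvlan mode bridge` on top of the VLAN interface. A small sketch that assembles those commands (the device names bond0.100 and mv0 are examples, not from the report):

```python
def macvlan_cmds(link="bond0.100", name="mv0", mode="bridge"):
    """Return the iproute2 commands that stack a macvlan device on an
    existing (e.g. VLAN-on-bond) interface; names are illustrative."""
    return [
        "ip link add link %s name %s type macvlan mode %s" % (link, name, mode),
        "ip link set %s up" % name,
    ]

for cmd in macvlan_cmds():
    print(cmd)
```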

Therefore I hacked hv_lxc.py:

  def _CreateConfigFile
  ...
-     if mode == constants.NIC_MODE_BRIDGE:
+     if mode == constants.NIC_MODE_OVS:
-       out.append("lxc.network.type = veth")
+       out.append("lxc.network.type = macvlan")
+       out.append("lxc.network.macvlan.mode = bridge")
        out.append("lxc.network.link = %s" % link)
      else:
        raise errors.HypervisorError("LXC hypervisor only supports"
-                                    " bridge mode (NIC %d has mode %s)" %
+                                    " openvswitch mode (NIC %d has mode %s)" %
                                     (idx, mode))

This was enough to get near wire speed (9.8 Gb/s) from inside a container to a 
physical system (not the node). The OVS hack was necessary to get rid of the 
check for whether a bridge exists.

By adding a new network mode of type macvlan with one additional parameter (the 
macvlan mode: vepa, bridge, or private), the LXC and KVM (macvtap) integration 
should be easy, but ATM I'm not a developer, just an admin hacking...

Thanks, Sascha.

Original comment by sascha.l...@gisa.de on 9 Mar 2015 at 10:41