GNS3 / gns3-server


Move all connections to ubridge #267

Closed julien-duponchelle closed 8 years ago

julien-duponchelle commented 9 years ago

It's a suggestion for 1.5 or 1.6.

We could move all network connections to ubridge in order to have a unified way to manipulate network connections and the same set of features for all emulators.

superwolfboy commented 9 years ago

It is a great idea.

grossmj commented 9 years ago

I am thinking we can have ubridge for VPCS and Qemu to allow for dynamic linking and packet captures (but we should allow deactivating that, just in case).

Then we leverage OpenvSwitch or similar to manipulate network connections (packet loss, bandwidth limitation, etc.). We need to research what can be done. We could also implement that in ubridge if it isn't too complicated.
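
For reference, this is the kind of per-link manipulation Linux traffic control already provides. A minimal sketch, assuming a Linux host with the iproute2 tc tool; the interface name and values are examples only:

    import subprocess

    # Apply packet loss and a bandwidth limit to a Linux interface with
    # "tc netem" (interface name and values are illustrative).
    def shape_link(interface: str, loss_percent: float, rate: str) -> None:
        subprocess.run(
            ["tc", "qdisc", "replace", "dev", interface, "root", "netem",
             "loss", f"{loss_percent}%", "rate", rate],
            check=True,
        )

    shape_link("veth0", loss_percent=1.0, rate="10mbit")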

julien-duponchelle commented 9 years ago

When everything is connected to ubridge, it's easy to build a proof of concept with various networking backends :D


grossmj commented 9 years ago

I think it is worth adding this as the default option for 1.4, with the possibility to deactivate it if needed.

grossmj commented 9 years ago

Well, I am considering postponing this change to a later version. The major problem I face is creating the initial source connection in uBridge.

For instance, VPCS only supports UDP tunnelling across all platforms, which would create the following situation:

VPCS instance <- Local UDP tunnel -> uBridge <- UDP tunnel -> Other node

I think this is not very clean and may cause problems, since we have to allocate two additional local UDP ports to create the local UDP tunnel connecting to uBridge. TAP interfaces could be used instead, but they would only work on Linux.

For Qemu, we could use UNIX domain sockets as the source, but this would only work on Linux and OSX.
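
To make the overhead concrete, here is a minimal sketch of what driving uBridge for such a link could look like, assuming uBridge's hypervisor mode and its bridge create / bridge add_nio_udp commands; the TCP port, bridge name and UDP ports below are invented for illustration:

    import socket

    # Minimal sketch: connect to a uBridge hypervisor (assumed on TCP 11111)
    # and bridge the local VPCS tunnel to the remote node's tunnel.
    def send(sock: socket.socket, command: str) -> str:
        sock.sendall(command.encode() + b"\n")
        return sock.recv(1024).decode()  # uBridge answers with a status line

    with socket.create_connection(("127.0.0.1", 11111)) as sock:
        send(sock, "bridge create vpcs-br0")
        # Local leg: two extra UDP ports (20000/20001) just to reach uBridge.
        send(sock, "bridge add_nio_udp vpcs-br0 20000 127.0.0.1 20001")
        # Remote leg: the classic UDP tunnel toward the other node.
        send(sock, "bridge add_nio_udp vpcs-br0 30000 192.168.12.2 30001")
        send(sock, "bridge start vpcs-br0")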

julien-duponchelle commented 9 years ago

I think we need to create a link object in the API. When we create a node, instead of connecting it to a UDP port we just connect it to the link; the server will choose the best method to interconnect the two nodes, and the client doesn't need to worry about the internal communication with ubridge.

In practice, we can consider a link to be a ubridge instance.

In the future the link object will have methods for rate limiting, stats, etc.

Problem: a link could span two servers, and for the moment two API servers cannot coordinate without the client. This requires us to create the link on both servers and sync the information. We could adopt a rule that the server hosting the VM with the lower UUID is where operations like rate limiting are performed.
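
A sketch of that rule; the mapping and names are illustrative:

    # Tie-breaking rule: the server hosting the VM with the lower UUID owns
    # link-level operations such as rate limiting.
    def link_owner(vm_1_uuid: str, vm_2_uuid: str, server_of: dict) -> str:
        return server_of[min(vm_1_uuid, vm_2_uuid)]

    servers = {"1212121": "server-1", "5456464": "server-2"}  # example mapping
    assert link_owner("5456464", "1212121", servers) == "server-1"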

julien-duponchelle commented 9 years ago

Solution 1

Nodes on two different servers

We can imagine a session like this (all URLs are under /v1/projects/{project_id}):

On server 1:

POST /v1/projects/{project_id}/links
{
    "vm_1": "5456464",
    "vm_2": "1212121"
}

204
{
    "link_uuid": "464565646546",
    "vm_1": "5456464",
    "vm_2": "1212121",
    "local": false,
    "udp_in": 2001,
    "udp_out": 2003
}

On server 2:

POST /v1/projects/{project_id}/links
{
    "link_uuid": "464565646546",
    "vm_1": "5456464",
    "vm_2": "1212121",
    "local": false,
    "remote_host": "192.168.12.1",
    "remote_udp_in": 2001,
    "remote_udp_out": 2003
}

204
{
    "link_uuid": "464565646546",
    "vm_1": "5456464",
    "vm_2": "1212121",
    "local": false,
    "remote_host": "192.168.12.1",
    "remote_udp_in": 2001,
    "remote_udp_out": 2003,
    "udp_in": 3001,
    "udp_out": 3002
}

On server 1:

PUT /v1/projects/{project_id}/links
{
    "link_uuid": "464565646546",
    "local": false,
    "remote_host": "192.168.12.2",
    "remote_udp_in": 3001,
    "remote_udp_out": 3002
}

204
{
    "link_uuid": "464565646546",
    "local": false,
    "vm_1": "5456464",
    "vm_2": "1212121",
    "remote_host": "192.168.12.2",
    "remote_udp_in": 3001,
    "remote_udp_out": 3002,
    "udp_in": 2001,
    "udp_out": 2003
}

Nodes on a single server

Link creation is simpler if we want to create a link between two nodes on the same server.

POST /v1/projects/{project_id}/links
{
    "local": true,
    "vm_1": "5456464",
    "vm_2": "1212121"
}

204
{
    "link_uuid": "464565646546",
    "local": true,
    "vm_1": "5456464",
    "vm_2": "1212121",
    "udp_in": 2001,
    "udp_out": 2003
}

julien-duponchelle commented 9 years ago

Solution 2

We keep the link system but drop the requirement of two UDP ports per link.

We can imagine giving ubridge only two ports for external communication between ubridge servers, and adding the destination VM UUID and the source VM UUID to each UDP packet.

This allows the following flow. On server 1:

POST /v1/projects/{project_id}/links
{
    "link_uuid": "464565646546",
    "vm_1": "5456464",
    "vm_2": "1212121",
    "local": false,
    "remote_host": "192.168.12.2",
    "remote_udp_in": 3001,
    "remote_udp_out": 3002
}

204
{
    "link_uuid": "464565646546",
    "vm_1": "5456464",
    "vm_2": "1212121",
    "local": false,
    "udp_in": 2001,
    "udp_out": 2003,
    "remote_host": "192.168.12.2",
    "remote_udp_in": 3001,
    "remote_udp_out": 3002
}

On server 2:

POST /v1/projects/{project_id}/links
{
    "link_uuid": "464565646546",
    "vm_1": "5456464",
    "vm_2": "1212121",
    "local": false,
    "remote_host": "192.168.12.1",
    "remote_udp_in": 2001,
    "remote_udp_out": 2003
}

204
{
    "link_uuid": "464565646546",
    "vm_1": "5456464",
    "vm_2": "1212121",
    "local": false,
    "remote_host": "192.168.12.1",
    "remote_udp_in": 2001,
    "remote_udp_out": 2003,
    "udp_in": 3001,
    "udp_out": 3002
}

This time it's easier: we don't need an additional PUT, because we already know ubridge's UDP ports.
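
A minimal sketch of the framing this implies; this is not an actual ubridge format, just an illustration with fixed 16-byte UUID fields:

    import struct

    # Prepend source and destination VM UUIDs to every frame exchanged between
    # ubridge servers so one UDP port pair can multiplex all links.
    HEADER = struct.Struct("!16s16s")

    def encapsulate(src_vm: bytes, dst_vm: bytes, frame: bytes) -> bytes:
        return HEADER.pack(src_vm, dst_vm) + frame

    def decapsulate(packet: bytes) -> tuple:
        src_vm, dst_vm = HEADER.unpack_from(packet)
        return src_vm, dst_vm, packet[HEADER.size:]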

julien-duponchelle commented 9 years ago

Solution 3

Based on solution 2, we give all servers full knowledge of the topology.

We push the following information to all servers (it can be pushed with individual HTTP calls):

{
    "servers": {
        "78789798": {"udp_in": 2000, "udp_out": 2001},
        "45454554": {"udp_in": 2000, "udp_out": 2001}
    },
    "vms": {
        "12121212": {"server_id": "78789798"},
        "13131313": {"server_id": "78789798"},
        "14141414": {"server_id": "45454554"}
    },
    "links": {
        "56565656": ["12121212", "13131313"],
        "78787878": ["12121212", "14141414"]
    }
}

This means that when I ask to create a link between VM 14141414 and VM 12121212, both servers will know where each VM is located and how to contact each other. Most of the current complexity of the networking code in the GUI is moved to the servers, which is probably easier to maintain.
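
A small sketch of how a server could use this shared state; the structures mirror the JSON above and the function name is hypothetical:

    # Resolve whether a link is local to this server using the pushed topology.
    topology = {
        "servers": {
            "78789798": {"udp_in": 2000, "udp_out": 2001},
            "45454554": {"udp_in": 2000, "udp_out": 2001},
        },
        "vms": {
            "12121212": {"server_id": "78789798"},
            "13131313": {"server_id": "78789798"},
            "14141414": {"server_id": "45454554"},
        },
        "links": {
            "56565656": ["12121212", "13131313"],
            "78787878": ["12121212", "14141414"],
        },
    }

    def is_local_link(link_id: str, my_server_id: str) -> bool:
        vm_a, vm_b = topology["links"][link_id]
        servers = {topology["vms"][vm]["server_id"] for vm in (vm_a, vm_b)}
        return servers == {my_server_id}

    assert is_local_link("56565656", "78789798")      # both VMs on one server
    assert not is_local_link("78787878", "78789798")  # link spans two servers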

vjorge commented 8 years ago

Hi, any updates on this?
I think that Qemu + KVM is an open solution that performs as fast as VMware Workstation; in my lab I even get better results running Qemu with KVM acceleration than with VMware Workstation, but the lack of ubridge support is a negative point. The VMware integration is good, but Qemu should not be left behind.

grossmj commented 8 years ago

We are working on this for 2.0. We will most likely support two network back-ends, available in the GNS3 VM or on Linux only.

Both solutions will allow packet captures and dynamic connect/disconnect.

As for Qemu/KVM, I agree: it will become the workhorse of GNS3, and we have started putting more focus on it.

grossmj commented 8 years ago

There are multiple ways to implement this.

The first, one uBridge per link, has the least overhead and is the most logical, but it raises a big question I hadn't considered before: where do we run uBridge if we have only one process per link? The controller would make sense on paper, but we don't want all traffic to flow through it; that would not be very efficient.

Another approach, easier to implement, is to have one uBridge per node.

This latter method is the one I had in mind until now.

julien-duponchelle commented 8 years ago

I think the controller needs to decide where ubridge will run, like what we do for the capture on a link: https://github.com/GNS3/gns3-server/blob/2.0/gns3server/controller/udp_link.py#L114
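
A hypothetical sketch of that decision, in the spirit of the capture-side choice in udp_link.py; the node types and attribute names are assumptions, not the actual controller code:

    # Pick which endpoint of a link should host the uBridge process.
    UBRIDGE_CAPABLE = {"vpcs", "qemu", "docker", "vmware"}  # illustrative set

    def choose_ubridge_node(node_a, node_b):
        for node in (node_a, node_b):
            if node.node_type in UBRIDGE_CAPABLE:
                return node
        return node_a  # fall back to the first endpoint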

grossmj commented 8 years ago

I was thinking the same. My only concern is that the API stays simple and logical (especially for external tools).

grossmj commented 8 years ago

Maybe we don't have to care too much since the tools are supposed to talk to the controller only, right?

julien-duponchelle commented 8 years ago

Yeah, from the outside it will be transparent.

You ask the controller to create a link, and if you want to do something like rate limiting you ask the controller to do it on this link ID.

grossmj commented 8 years ago

So here is how I think it should basically work:

  • When a link is being created, the controller picks the node that will run uBridge. It will check whether the node already has a uBridge. This means keeping a list in the controller of which nodes have allocated a uBridge (a sketch of this bookkeeping follows the list).
  • The controller calls POST /network/ubridge with a node UUID in the data. The request replies with a uBridge UUID that can be used for other calls (for traffic control, for instance).
  • When a node is deleted, so is its ubridge instance.
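
A minimal sketch of that bookkeeping, assuming the POST /network/ubridge endpoint described above; everything else (names, in-memory dict) is invented for illustration:

    import uuid

    # Controller-side map of which nodes already have a uBridge allocated.
    node_to_ubridge = {}  # node UUID -> uBridge UUID

    def ensure_ubridge(node_uuid: str) -> str:
        if node_uuid not in node_to_ubridge:
            # Stand-in for: POST /network/ubridge {"node_uuid": node_uuid}
            node_to_ubridge[node_uuid] = str(uuid.uuid4())
        return node_to_ubridge[node_uuid]

    def delete_node(node_uuid: str) -> None:
        # When a node is deleted, so is its uBridge instance.
        node_to_ubridge.pop(node_uuid, None)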

julien-duponchelle commented 8 years ago

Seems perfect to me.


grossmj commented 8 years ago

Let's sum it up:

Finally, just in case we face unexpected issues (most likely on Windows), we keep the possibility of falling back to the classic network connections by deactivating uBridge globally for nodes that don't absolutely need it.

julien-duponchelle commented 8 years ago

VMware also uses ubridge.

julien-duponchelle commented 8 years ago

I have mixed feelings about the current implementation and the fact that ubridge is inside the emulator code. It works like in previous versions, works fine and is robust, but I have the feeling it's not future proof.

The problem I see is that it makes it harder to replace ubridge with something else or to add features, because it's deeply embedded in each emulator's code. It is also hard to optimize, because we don't have a global view of the network.

For example:

  • currently we need to use two ubridge processes per connection
  • adding a feature like rate limiting requires modifying the code of each emulator, even though behind the scenes they all use ubridge
  • it's not easy to mix implementations, like openvswitch for internal communication and UDP for compute-to-compute communication
  • we can't optimize to a single ubridge process per project/compute on a compute node, even once we are more confident in ubridge's ability not to crash

grossmj commented 8 years ago

> I have mixed feelings about the current implementation and the fact that ubridge is inside the emulator code. It works like in previous versions, works fine and is robust, but I have the feeling it's not future proof.

Indeed this is not perfect, but the problem is that every emulator is different. Some, like Docker and VMware, use existing interfaces (veth or vmnet) that we have to bridge to UDP tunnels. For others, like Qemu and VPCS, we must pre-allocate local UDP tunnels for all their adapters; then we can use classic tunnels between the uBridge instances. How do we make something generic with so many differences?

> The problem I see is that it makes it harder to replace ubridge with something else or to add features, because it's deeply embedded in each emulator's code. It is also hard to optimize, because we don't have a global view of the network.

To be honest, I don't think we will ever replace uBridge. Yes, we could leverage Openvswitch, Linux bridges or something like that, but this looks overly complicated: these technologies are oriented more toward data centers and they have their own issues (see the patches done by unetlab to stop Linux bridges from dropping LLDP frames), not to mention that you must then connect your virtual switches together using VXLAN or GRE tunnels. I find it overkill for our use. On the other hand, uBridge is simpler, we control it, and it works everywhere.

> currently we need to use two ubridge processes per connection

I've tried to optimize this, but it creates issues. Currently, we have one uBridge instance per node, which is hard to share because one node can be connected to many other nodes. We could say one uBridge = one link, but this is not efficient.

> adding a feature like rate limiting requires modifying the code of each emulator, even though behind the scenes they all use ubridge

Yes, this is painful, but how often do we add new emulators? Ideally we should have a common handler for packet capture, rate limiting, etc.

> it's not easy to mix implementations, like openvswitch for internal communication and UDP for compute-to-compute communication

We should just stick to one strategy; UDP tunnels have served us well and will continue to do so.

> we can't optimize to a single ubridge process per project/compute on a compute node, even once we are more confident in ubridge's ability not to crash

uBridge uses threads to connect NIOs together. I don't think it would be wise to have all connections for an entire project or compute handled by a single uBridge.

julien-duponchelle commented 8 years ago

> Indeed this is not perfect, but the problem is that every emulator is different. Some, like Docker and VMware, use existing interfaces (veth or vmnet) that we have to bridge to UDP tunnels. For others, like Qemu and VPCS, we must pre-allocate local UDP tunnels for all their adapters; then we can use classic tunnels between the uBridge instances. How do we make something generic with so many differences?

You are right: in the case of Docker and VMware we have no choice. And because we have these special cases, we can't make it fully generic.

> To be honest, I don't think we will ever replace uBridge. Yes, we could leverage Openvswitch, Linux bridges or something like that, but this looks overly complicated: these technologies are oriented more toward data centers and they have their own issues (see the patches done by unetlab to stop Linux bridges from dropping LLDP frames), not to mention that you must then connect your virtual switches together using VXLAN or GRE tunnels. I find it overkill for our use. On the other hand, uBridge is simpler, we control it, and it works everywhere.

> I've tried to optimize this, but it creates issues. Currently, we have one uBridge instance per node, which is hard to share because one node can be connected to many other nodes. We could say one uBridge = one link, but this is not efficient.

Yep

> Yes, this is painful, but how often do we add new emulators? Ideally we should have a common handler for packet capture, rate limiting, etc.

I'm thinking more about adding new features. A possibility could be to return a ubridge ID when we allocate the NIO, and to use this ID to send ubridge commands from the controller. With that we could avoid most of the boilerplate.
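
A sketch of that idea; the endpoint, attributes and helper names are hypothetical, not an existing API:

    # Return a ubridge ID alongside the allocated NIO so the controller can
    # later address that ubridge directly, without per-emulator boilerplate.
    def allocate_nio(node) -> dict:
        nio = {"lport": 20000, "rhost": "127.0.0.1", "rport": 20001}  # example
        nio["ubridge_id"] = node.ubridge.uuid  # handle for later commands
        return nio

    def rate_limit(controller, ubridge_id: str, kbps: int) -> None:
        # e.g. POST /network/ubridge/{ubridge_id}/rate_limit {"kbps": kbps}
        controller.post("/network/ubridge/%s/rate_limit" % ubridge_id,
                        {"kbps": kbps})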

> We should just stick to one strategy; UDP tunnels have served us well and will continue to do so.

Ok ;)

> uBridge uses threads to connect NIOs together. I don't think it would be wise to have all connections for an entire project or compute handled by a single uBridge.

Ok