GNS3 / gns3-gui

GNS3 Graphical Network Simulator
http://www.gns3.com
GNU General Public License v3.0

[153/2.0.0rc1] - Clouds hosted on GNS3 VM and Ubuntu do not work #1921

bpozdena opened this issue 7 years ago

bpozdena commented 7 years ago

I have spent the past few days testing GNS3 clouds extensively. I have tested on multiple PCs, laptops, and servers with different versions of GNS3, from 1.5.3 to 2.0.0rc1.

I have found that clouds hosted on all GNS3 VMs (1.5.3 to 2.0.0rc1) and on Ubuntu 16.04 do not work. More specifically, they do work for small traffic such as pings and loading simple websites. However, when I try to transfer a file using FTP, SMB, HTTP or any other protocol, the transfer speed never goes above 1 kB/s. When I check the packet capture, I see a lot of retransmissions and duplicate packets, none of which actually make it to the other side of the cloud. I also see a lot of jumbo packets, despite the MTU on all interfaces being set to the standard 1500 bytes. There must be some problem with uBridge on Ubuntu systems.
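Jumbo frames in a capture despite a 1500-byte MTU are often an artifact of segmentation offloads (GRO/GSO/TSO), where the kernel or NIC merges packets before the capture point; this can also confuse userspace bridges that read raw packets. A quick way to rule that out on the Linux host running the cloud (a sketch; eth0 is a placeholder for the bridged interface):

ethtool -k eth0 | grep offload                 # inspect the current offload settings
sudo ethtool -K eth0 gro off gso off tso off   # disable merge/segmentation offloads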

The cloud only works properly when it is hosted on Windows with GNS3 1.5.3.

This was tested using a simple topology, as shown below.

When using a cloud hosted on GNS3 VM 1.5.3 or 2.0.0rc1: [screenshot: topology]

Transfer speed = 0 kB/s. [screenshot: gns3 153 vm cloud]

Packet capture: [screenshot: wireshark_remote_cloud]

Here is the same test with the cloud hosted on Ubuntu 16.04.2 and GNS3 2.0.0rc1: transfer speed = 0 kB/s. [screenshot: ubuntu_vm_cloud]

Here is a file transfer made through a cloud hosted on Windows 10 with GNS3 1.5.3: transfer speed = 20 to 70 Mbps, depending on the CPU resources given to the IOU devices and the VMs. The cloud handles it with no problems. [screenshot: 153_local windows cloud]


The GNS3 VMs were even hosted on a server with a 16-core Xeon and 64 GB of RAM. I tested with multiple interfaces, including 10G and 1G Ethernet interfaces, VMware virtual interfaces, and loopbacks. I am positive it is not a hardware issue.

Can anybody else test the actual throughput of clouds hosted on your Linux/GNS3 VMs?
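For anyone reproducing this, a simple way to measure raw throughput through the cloud (a sketch, assuming iperf3 is installed on both ends; 10.0.0.1 stands in for the server address):

iperf3 -s                 # on the machine behind the cloud
iperf3 -c 10.0.0.1 -t 10  # on a node inside the topology; 10-second TCP test

The reported bitrate makes the difference between a working and a broken cloud obvious without relying on file-transfer tools.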

julien-duponchelle commented 7 years ago

What is the result of ubridge -v?

bpozdena commented 7 years ago

From GNS3 VM 2.0.0rc1:

gns3@gns3vm:~$ ubridge -v
ubridge version 0.9.11

bpozdena commented 7 years ago

From Ubuntu 16.04.2 (with GNS3 2.0.0rc1):

GNS3@ubuntu:~$ ubridge -v
ubridge version 0.9.11

From GNS3 VM 1.5.3:

gns3@gns3vm:~$ ubridge -v
ubridge version 0.9.9

They all have the same issue with file transfers on Ubuntu.

The cloud, however, works fine on Windows 10 with the same uBridge versions. Win 10 with GNS3 2.0.0rc1:

C:\Program Files\GNS3>ubridge.exe -v
ubridge version 0.9.11 

Win 10 with GNS3 1.5.3:

C:\Program Files\GNS3 - 153>ubridge.exe -v
ubridge version 0.9.9

julien-duponchelle commented 7 years ago

Are you using VLANs, or is it plain Ethernet?


bpozdena commented 7 years ago

This happens even on plain Ethernet, and even when I connect a Qemu VM directly to a cloud hosted on Ubuntu or the GNS3 VM. I have tested bridging the cloud to various physical interfaces and virtual (VMware) interfaces.

The cloud works fine with plain Ethernet when it is hosted on Windows 10. However, it crashes when a dot1q packet larger than 1518 bytes passes through it (#1867).

bpozdena commented 7 years ago

While the cloud hosted on Windows works OK and is even fairly fast, it is not without issues. As you can see in the screenshot below, all data during a file transfer arrives out of order and does not actually make it through the cloud. However, the dropped packets somehow group into one large jumbo packet, which does make it through. The packet size then varies from 4,000 to 30,000 bytes. The MTU on all interfaces is set to the default 1500 bytes, so I am not sure how this is possible.

This packet capture of an FTP file transfer shows that 50% of the packets have a TCP error. You can also see the combined jumbo packets. I cannot explain it, but there must be a problem with the Windows uBridge too. [screenshot: out-of-order]

grossmj commented 7 years ago

I remember that using an Ethernet adapter directly on Linux can result in low throughput. I would try with a TAP interface to see if you have the same behavior.

There are basically two methods to do it.

Using a TAP interface directly

apt-get install uml-utilities bridge-utils   # provides tunctl and brctl
modprobe tun                                 # load the TUN/TAP kernel module
tunctl -u <user>                             # create tap0 owned by <user>
ifconfig tap0 10.0.0.1 netmask 255.255.255.0 up
route add -net 11.0.0.0/8 dev tap0           # route the lab subnet via tap0
echo 1 > /proc/sys/net/ipv4/ip_forward       # enable IP forwarding

More info on http://jesin.tk/how-to-connect-gns3-to-the-internet/
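On systems without tunctl, the same TAP setup can be done with iproute2 alone (a sketch; gns3 stands in for your username):

sudo ip tuntap add dev tap0 mode tap user gns3   # create tap0 owned by the user
sudo ip addr add 10.0.0.1/24 dev tap0
sudo ip link set dev tap0 up
sudo ip route add 11.0.0.0/8 dev tap0            # route the lab subnet via tap0
sudo sysctl -w net.ipv4.ip_forward=1             # enable IP forwarding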

Using a bridge

apt-get install uml-utilities bridge-utils
modprobe tun
tunctl -u <user>
ifconfig eth0 0.0.0.0 promisc up   # clear the IP and set promiscuous mode
ifconfig tap0 0.0.0.0 promisc up
brctl addbr br0                    # create the bridge
brctl addif br0 tap0               # add both interfaces to it
brctl addif br0 eth0
ifconfig br0 up
dhclient br0                       # the bridge gets the IP instead of eth0
brctl show br0                     # verify the bridge membership

More info on http://myhomelab.blogspot.ca/2011/12/add-loopbacks-in-ubuntu-for-gns3.html
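The bridge method also translates directly to iproute2, for systems where brctl and ifconfig are deprecated (a sketch under the same interface names):

sudo ip link add name br0 type bridge
sudo ip link set dev eth0 master br0   # enslave the physical interface
sudo ip link set dev eth0 up
sudo ip link set dev br0 up
sudo dhclient br0                      # the bridge, not eth0, now holds the IP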

I prefer the latter method; it is cleaner, I think.

bpozdena commented 7 years ago

Impressive! I have added my physical interface to a bridge on the GNS3 VM, and I am now getting 200 Mbps transfer speed instead of 0.5 kbps when the cloud is connected directly to the physical interface. The speed was still limited by the Qemu VM's CPU resources, so I am sure it can get even faster.

Thank you very much Jeremy! I should now finally be able to disable the local GNS3 server and stick to a single remote GNS3 server. I have yet to test whether dot1q frames of 1518 bytes can pass through it; I will try to report back tomorrow.

Do you think you would be able to add a new feature to the GUI that would allow the creation of bridges on the GNS3 VM? Alternatively, what would be the best way to have the bridges configured automatically after the GNS3 VM boots up?

Thanks again - you have moved the GNS3 limits much further!

bpozdena commented 7 years ago

I discovered it is enough to just add the interface to a bridge; there is no need for the TAP. I am now getting very high throughput. Unfortunately, trunk links still do not work, as all frames larger than 1514 bytes get dropped. The good thing is that the Ubuntu uBridge does not seem to crash like the one on Windows: the packets are just dropped, and the topology keeps running after I lower the MTU on all devices.
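For reference, the MTU of a Linux interface can be lowered on the fly like this (a one-off sketch; eth0 and 1400 are placeholders, and the change does not persist across reboots):

sudo ip link set dev eth0 mtu 1400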

Example for the GNS3 VM. Back up and edit the interfaces config file:

sudo cp /etc/network/interfaces /etc/network/interfaces.backup1
sudo nano /etc/network/interfaces

1) Change # MANUAL=0 to # MANUAL=1 to keep the settings after future GNS3 VM upgrades.

2) Comment out all entries for the interfaces you want to add to a bridge (example for eth2):

#allow-hotplug eth2
#iface eth2 inet dhcp

3) Create a bridge br2 and add eth2 to it by adding the lines below:

auto br2
iface br2 inet dhcp
bridge_ports eth2

4) Link the GNS3 cloud to the br2 interface.
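Before linking the cloud, you can check that the bridge came up with eth2 enslaved (a quick sketch, assuming bridge-utils is installed):

sudo ifup br2      # bring the bridge up without rebooting
brctl show br2     # eth2 should be listed under interfaces
ip addr show br2   # confirm the DHCP-assigned address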

I now have three clouds hosted on the GNS3 VM, and they all forward traffic to physical interfaces at high speed. Here is my scenario:

#allow-hotplug eth3
#iface eth3 inet dhcp

#allow-hotplug eth4
#iface eth4 inet dhcp

#allow-hotplug eth2
#iface eth2 inet dhcp

auto br2
iface br2 inet dhcp
bridge_ports eth2

auto br3
iface br3 inet dhcp
bridge_ports eth3

auto br4
iface br4 inet dhcp
bridge_ports eth4

grossmj commented 7 years ago

Do you get anything in the uBridge log? (To find the file, right-click on the cloud in GNS3 and choose "Show in File Manager".)

Thanks,

bpozdena commented 7 years ago

I am still running GNS3 2.0 RC1 on Windows 10, and the cloud is hosted on the GNS3 VM. When I follow your instructions, I am just shown the path to the logs located on the VM. When I access the VM via SCP, I can see the log file there, but it is empty.

I have three clouds in my topology and all their logs are empty.

grossmj commented 6 years ago

@sairuscz do you still have something blocking you?

bpozdena commented 6 years ago

I am currently running GNS3 2.1 RC3, and the problem is still there. However, I have been using bridged interfaces as a workaround, because they allow for more than 50x faster file transfers.

grossmj commented 6 years ago

I am gonna go ahead and close this issue. The recommendation is to use bridged interfaces.

peeyusht commented 3 years ago

For GNS3 v2.2.24, which is based on Ubuntu 20.04.3 LTS, the Netplan config file is different from the older /etc/network/interfaces format. Below is the step-by-step config that worked for me.

Edit the Netplan config file:

sudo nano /etc/netplan/90_gns3vm_static_netcfg.yaml

Below is the config (change it according to your IP scheme):

network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
#      addresses:
#        - 172.16.72.32/24
#      gateway4: 172.16.72.2
#      nameservers:
#          addresses: [172.16.72.9, 8.8.4.4]

  bridges:
    br0:
      interfaces: [eth0]
      addresses: [172.16.72.32/24]
      gateway4: 172.16.72.1
      mtu: 1500
      nameservers:
        addresses: [8.8.8.8]
      parameters:
        stp: true
        forward-delay: 4
      dhcp4: no
      dhcp6: no

Generate the new config:

sudo netplan generate

Apply the generated config:

sudo netplan apply
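Alternatively, netplan try applies the configuration but rolls it back automatically after a timeout unless you confirm it, which is safer when you are connected over the very interface you are editing:

sudo netplan try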

Check the new config:

ip a

The configuration should now show br0 with the configured IP parameters. The above worked for me, and I hope it will help others.

grossmj commented 3 years ago

Sounds like they changed the syntax. Thanks for sharing, we will fix this soon 👍

albert-a commented 11 months ago

Thanks @bpozdena! Bridging helped me as well! I commented out the eth0 lines in /etc/network/interfaces and added:

auto br0
iface br0 inet dhcp
    bridge-ports eth0

Note that you have to choose the bridge (br0 in my example) in the cloud object in GNS3 to achieve high throughput.

itsamemarkus commented 6 months ago

Hi, I stumbled upon this problem too, and my workaround was to use the GNS3 NAT node for internet access and the Cloud node for SSH/HTTPS management access.

Can someone please explain why eth0 behaves that way? I'm curious.