opnsense / src

OPNsense operating system on top of FreeBSD
https://opnsense.org/

VIRTIO 10GbE too slow #163

Closed: pmoser1976 closed this issue 6 months ago

pmoser1976 commented 2 years ago

Describe the bug

All interfaces are virtualized using VIRTIO (I also tested VIRTIO-NET and E1000). In the web UI they are shown as "10Gbase-T", which is correct.

When copying files I only get about 1 Gbit/s; testing the transfer speed with iperf3 shows 850 Mbit/s.

To Reproduce

Steps to reproduce the behavior:

  1. Create a new VM on Unraid
  2. Set the network interfaces to VIRTIO
  3. Install OPNsense using the ISO image
  4. Install iperf3 via System: Firmware
  5. Start an iperf3 server on the host (Unraid)
  6. Run iperf3 -c <host IP> on the VM (see the sketch after this list)
  7. Observe very poor transfer speed (< 1 Gbit/s)
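
A minimal version of steps 5 and 6, assuming a placeholder host address of 192.168.1.10:

    # On the Unraid host: start the iperf3 server
    iperf3 -s

    # On the OPNsense VM: run the client against the host
    iperf3 -c 192.168.1.10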

Expected behavior

Other VMs on the same host (Ubuntu, FreeBSD 13) run at full 10Gbit speed (iperf3 shows 18 Gbit/s between VM and host). OPNsense is much slower; its transfer speed should be as fast as on the other VMs.

Additional context

Settings in OPNsense: CRC, TSO, and LRO offloading are all disabled. Tested on a fresh install without the traffic shaper.
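
For reference, the same offload state can be inspected and set from the shell; a sketch, assuming the VirtIO interface is named vtnet0 (the GUI toggles map to these ifconfig flags):

    # Show the interface's current offload options
    ifconfig vtnet0 | grep -i options

    # Disable checksum offload (CRC), TSO, and LRO, mirroring the GUI settings
    ifconfig vtnet0 -rxcsum -txcsum -tso -lro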

Environment

OPNsense 22.1.7_1 (amd64, OpenSSL) running as a VM on Unraid. Host CPU: Intel Xeon E3-1230 V2 @ 3.30 GHz. Virtual network interfaces using VIRTIO.

ipha commented 2 years ago

I've experienced the same thing running OPNsense 22.7.2 under libvirt/qemu. Turning on CRC offload helps, but only in single-threaded transfers.

Tested four configurations (the measured results were attached as images and are omitted here):

  - CRC, TSO, LRO all off, iperf3 single thread
  - CRC, TSO, LRO all off, iperf3 --bidir mode
  - CRC, TSO, LRO all on, iperf3 single thread
  - CRC, TSO, LRO all on, iperf3 --bidir mode
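
A sketch of how the "all on" and --bidir runs can be reproduced from the shell; vtnet0 and the server address are assumptions, and --bidir requires iperf3 3.7 or newer:

    # Re-enable checksum offload, TSO, and LRO for the "all on" runs
    ifconfig vtnet0 rxcsum txcsum tso lro

    # Single-thread run, then the bidirectional run
    iperf3 -c 192.168.1.10
    iperf3 -c 192.168.1.10 --bidir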

karelkryda commented 10 months ago

Any news? I have installed OPNsense 23.7.9 on Proxmox VE 8.1.3, with a 10Gb Mellanox ConnectX-3 network card attached to the OPNsense VM via VirtIO and a Linux bridge. iperf3 tests between VLANs reach speeds of around 600Mb/s; testing directly between the OPNsense VM and any other VM on the same Linux bridge behaves the same way. All 3 HW offloading options are disabled, as is VLAN Hardware Filtering.

I tried to replicate this behavior on a clean installation of FreeBSD 13. Without adjusting any settings, I was getting speeds of around 10Gb/s; with HW offloading disabled, the speed dropped to around 1-3Gb/s. Setting the hw.ibrs_disable tunable to 1 in OPNsense raised the speed from 600Mb/s to roughly 1.5Gb/s. From this I conclude that the problem is related to HW offloading being disabled.

I then tried turning HW offloading on and suddenly reached around 9Gb/s when testing the connection to OPNsense itself, but the test between the two VLANs dropped to 0Kb/s and I also lost access to the administration servers in the second VLAN. I am currently going to try PCIe passthrough and see if I get any improvement.

The results were the same when changing the number of processors in the VM (cores and sockets); using multiqueue on the network interface, switching the firmware from BIOS to UEFI, and changing the CPU type didn't help either. Tested on an E5-2650 and an E5-2603 v3.

Tuning in OPNsense also did not yield results; I tested, for example, the tweaks from this blog.

EDIT: Additional info about reaching 10Gb/s with VirtIO and HW offloading turned on. To reach 10Gb/s, I need to enable the first and third HW offloading options. I also set the already-mentioned hw.ibrs_disable tunable to 1. I additionally set the MTU to 9000, but this is not necessary. At the moment, the iperf3 test between the OPNsense VM and a Debian 12 VM shows a nice 10Gb/s.
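
A minimal sketch of those two changes from the shell; the vtnet0 name is an assumption, and in OPNsense the tunable is normally persisted under System > Settings > Tunables:

    # Disable the IBRS speculative-execution mitigation at runtime
    sysctl hw.ibrs_disable=1

    # Optional: raise the MTU to 9000 (not required, per the note above)
    ifconfig vtnet0 mtu 9000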

karelkryda commented 10 months ago

I have tested PCI passthrough and would like to share the results with you.

In Proxmox, I used PCI passthrough to pass the card directly to the VM. The resulting speeds were still very low, but the basic setup seemed much faster than with VirtIO. Turning on HW offloading didn't help with PCI passthrough either, but unlike with VirtIO, it didn't cause any problems accessing devices on the network. I also tried the tunables mentioned above, again without success.

I then switched hw.ibrs_disable to 1 and finally achieved speeds of around 4Gb/s with iperf3 on a single thread. With the -P switch set to 12, for example, the speeds were correct, i.e. in the range of 9-10Gb/s. Next, I increased the MTU from 1500 to 9000, and with that setting I got around 9.5Gb/s even on a single thread. Increasing the MTU to 9000 on the other devices on the network as well (such as TrueNAS Core) let me communicate with them at around 10Gb/s. In other words, the combination that allowed me to take full advantage of 10Gb was PCI passthrough, hw.ibrs_disable set to 1, and an MTU of 9000 end to end (see the sketch below).
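
A sketch of that final setup on the passed-through NIC; mlxen0 is an assumed interface name for a ConnectX-3 under FreeBSD's mlx4en driver, and the server address is a placeholder:

    # Jumbo frames on the passed-through interface
    ifconfig mlxen0 mtu 9000

    # Single-stream test, then 12 parallel streams as described above
    iperf3 -c 192.168.1.10
    iperf3 -c 192.168.1.10 -P 12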

My next question was how VirtIO would behave with hw.ibrs_disable set to 1 and the MTU set to 9000. Testing this yielded maximum speeds of around 5Gb/s. From these findings it appears that, at least for me, VirtIO reaches only about half the speed of the same setup with PCI passthrough.

fichtner commented 6 months ago

Closing old support issue. If someone wants to step in feel free.