guesslin opened this issue 3 years ago (status: Open)
> It looks like you're using iperf, and thus the regular network stack, rather than netmap.
It seems to me that in our case we always read/write through netmap stack.
We have three VMs in an ESXi environment: an iperf client, an iperf server, and a host running our netmap program to switch packets between the client and the server.
+---------------+       +------------------+       +--------------+
|               |       |                  |       |              |
| iperf client  | <---> |  netmap program  | <---> | iperf server |
|               |       |                  |       |              |
+---------------+       +------------------+       +--------------+
We were expecting that switching to the vmxnet3-specific netmap driver would give us a performance improvement, but we did not get one.
I understand that you expect netmap to speed things up. However, in a virtual-machine environment, what it can improve is the aggregate bandwidth that multiple virtual machines need or are using. It will not speed up the virtual machines themselves, because they still rely on the default Linux stack and are therefore limited by what the kernel can do. So having "netmap in the way" does not change anything, let's say.
@Fr3DBr Thanks for pointing that out.
So if we really want to speed up that virtual-machine environment with netmap, we need to install netmap on all the VMs as well as the host to get rid of the default Linux stack (maybe ptnetmap on the host is needed?). Once nothing relies on the default Linux stack anymore, we should see the speed improvement.
We tested the network throughput without our netmap program and got around 9 Gbits/s, so the host's default Linux stack does not seem to be the bottleneck (or at least not the biggest one). We also ran the same test with DPDK in an AWS ENA-enabled environment and got 9 Gbits/s there as well (raw speed was around 15 Gbits/s).
        +--------------------- (9 Gbits/s) ----------------------+
        |            (without the netmap program)                |
        v                                                        v
+---------------+       +------------------+       +--------------+
|               |       |                  |       |              |
| iperf client  | <---> |  netmap program  | <---> | iperf server |
|               |       |                  |       |              |
+---------------+       +------------------+       +--------------+
        |                                                        |
        +----------------- (Only 2.3 Gbits/s) -------------------+
                      (through the netmap program)
That's why we feel something is wrong with how we are using the netmap-patched vmxnet3 driver.
As the previous poster said, you need netmap aware applications to achieve a performance improvement.
@guesslin Yes, in order to achieve full performance, all your applications (including iperf) must be netmap-aware. When using the vmxnet3 driver, netmap basically "allows" you, as a developer, to improve virtual-machine communication by coding your "direct NIC paths" yourself for every application you are interested in; it "does not accelerate the driver or anything like this".
@Fr3DBr Understood. In case we don't want to modify the iperf client or server, meaning we keep using the default Linux stack, are there any alternatives for speeding up this test case? After all, iperf client to iperf server tops out at 9 Gbits/s while also using the default Linux stack.
@guesslin If you are not going to modify anything, then you will not benefit from netmap at all. After all, this is basically a packet-acceleration framework meant to be used with specific drivers and specific applications tailored for it.
Hello,
I'm running the latest netmap (7972d8f4ca28689ef544dd3024be4a8d416b8cb5) on ESXi 6.7. The throughput test result with the netmap generic driver was not what we expected, so we tried compiling the netmap-patched vmxnet3 driver to see if we could gain more throughput.
Using netmap generic with original vmxnet3 driver
Using netmap loaded vmxnet3 driver
As you can see there is no big difference between the two throughput tests. (2.34 Gbits/sec vs 2.22 Gbits/sec)
I was thinking maybe I didn't load the patched vmxnet3 correctly, so I simply replaced the original one and checked that modinfo reports the right module. But the throughput test still shows no difference.
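For reference, these are standard commands for checking which vmxnet3 module the kernel is actually using; this is a diagnostic fragment that depends on the live system, and the interface name `ens160` is just a placeholder for whatever your NIC is called:

```shell
# Which vmxnet3 module file will be loaded on next modprobe?
modinfo -n vmxnet3

# Is vmxnet3 currently loaded, and does anything depend on it?
lsmod | grep vmxnet3

# Which driver (and driver version) is the interface actually bound to?
# Replace ens160 with your interface name.
ethtool -i ens160
```

If `modinfo -n` points at the stock module path rather than the freshly built one, the patched driver was never the one being loaded, which would explain identical throughput numbers.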
Is there anything I did wrong in using the vmxnet3 driver?