ppnaik1890 opened this issue 5 years ago
Hi @ppnaik1890,
We have not enabled mTCP to work with netmap's vale interface yet, and I have not looked at the vale interface in a long time. If you are interested in using mTCP with vale, we can work together to get the corresponding patch done. Can you please share the command-line arguments you use to run pkt-gen with a vale interface?
I think we only need to change two files to get netmap vale working (`config.c` and `netmap_module.c`).
Hi @ajamshed,
Thanks for the response. mTCP worked over a single-core VM with a single-queue vale interface. However, mTCP did not work over a multi-queue vale interface, and I suspect this is because the hash that vale uses differs from the hash that mTCP expects. Your README mentions that the hash in the drivers needs to be changed, but for a vale interface it is the hash inside vale that would need updating. Would this require changes in the mTCP code or in the vale code? Also, please let me know if this can be achieved by changing `netmap_module.c` in mTCP itself.
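For reference, the usual symmetric-RSS trick, which I understand mTCP's driver patches also rely on, is to program a hash key built from a repeating 16-bit pattern so that (src, dst) and (dst, src) hash to the same queue. A minimal sketch, assuming an Intel-style 40-byte Toeplitz key; the array name is illustrative:

```c
#include <stdint.h>

/* Sketch of a symmetric RSS key (assumption: 40-byte Toeplitz key as
 * used by Intel ixgbe/i40e-class NICs). Repeating the 16-bit pattern
 * 0x6d5a makes the Toeplitz hash invariant under swapping the source
 * and destination address/port pairs, so both directions of a TCP
 * flow land on the same queue. The array name is illustrative. */
static const uint8_t sym_rss_key[40] = {
    0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a,
    0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a,
    0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a,
    0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a,
    0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a,
};
```

For a vale port, the analogous change would presumably go wherever vale computes its forwarding hash, since no NIC hardware is involved.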
Our setup is as follows. We have two servers, connected peer to peer. One server runs our mTCP + netmap epwget client. On the other server we have vale on the host and a VM with a vale interface; our mTCP + netmap epserver runs inside the VM. The physical interface on this server is added as a port of the vale switch. So the flow is: phy NIC (server 1) -> phy NIC (server 2) -> vale -> VM interface.
The pkt-gen commands are:

On server 1:

```
sudo pkt-gen -i enp1s0f1 -f tx -n 500111222 -l 60 -w 5 -s 192.168.100.16:5000 -d 192.168.100.31:5000
```

Inside the VM on server 2:

```
sudo pkt-gen -i ens8 -f rx -s 192.168.100.16:5000 -d 192.168.100.31:5000
```
@ppnaik1890,
To enable symmetric RSS, you will have to update the driver code. As I mention in the `README.netmap` file, the RSS seed gets updated in one of the `ixgbe` or `i40e` driver source files. I have not used netmap's `vale` interface for some time. Can you first check whether `pkt-gen` is able to read traffic from individual NIC queues within the VM? Something like:
```
sudo pkt-gen -i enp1s0f1 -f tx -n 500111222 -l 60 -w 5
sudo pkt-gen -i ens8-0 -f rx
sudo pkt-gen -i ens8-1 -f rx
sudo pkt-gen -i ens8-2 -f rx
sudo pkt-gen -i ens8-3 -f rx
```
I am assuming that the vale-bound ens8 has 4 NIC queues in the example above.
You may want to ask the netmap authors whether one can enable multi-queue-based vale interfaces first. We can then proceed with setting up multi-threaded mTCP applications within the VM.
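One way to sanity-check what netmap itself reports for the port, independent of pkt-gen, is to query the ring counts with the legacy NIOCGINFO ioctl. A minimal sketch, assuming the classic `struct nmreq` API (the port name `ens8` is taken from the example above):

```c
/* Sketch: ask netmap how many TX/RX rings a port exposes, using the
 * legacy NIOCGINFO ioctl. Adjust the port name for your setup. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <net/netmap.h>

int main(void)
{
    struct nmreq nmr;
    int fd = open("/dev/netmap", O_RDWR);

    if (fd < 0) {
        perror("open /dev/netmap");
        return 1;
    }
    memset(&nmr, 0, sizeof(nmr));
    nmr.nr_version = NETMAP_API;
    strncpy(nmr.nr_name, "ens8", sizeof(nmr.nr_name) - 1); /* VM-side port name */
    if (ioctl(fd, NIOCGINFO, &nmr) < 0) {
        perror("NIOCGINFO");
        close(fd);
        return 1;
    }
    printf("%s: %u TX rings, %u RX rings\n", nmr.nr_name,
           (unsigned)nmr.nr_tx_rings, (unsigned)nmr.nr_rx_rings);
    close(fd);
    return 0;
}
```

If this reports more than one RX ring inside the VM, the queues are at least visible at the netmap level, and the remaining problem is most likely the hashing/demultiplexing step.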
Just FYI: with physical netmap-based interfaces, we normally bind a process to an individual NIC queue by calling nm_open("iface-qno", &base_nmd, 0, NULL). For example, see: https://github.com/mtcp-stack/mtcp/blob/1ad1b1a386ad2e17b671c000d08eb1296a94be95/mtcp/src/netmap_module.c#L74-L90.
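A stripped-down version of that pattern, as a minimal sketch assuming the nm_open()/nm_close() helpers from net/netmap_user.h (the port name and ring number are placeholders):

```c
/* Sketch: bind to a single hardware ring by appending the queue number
 * to the port name ("netmap:ens8-0" attaches to ring 0 only). mTCP
 * opens one such descriptor per core; names here are placeholders. */
#include <stdio.h>
#include <string.h>
#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>

int main(void)
{
    struct nmreq base_nmd;
    struct nm_desc *nmd;

    memset(&base_nmd, 0, sizeof(base_nmd));
    nmd = nm_open("netmap:ens8-0", &base_nmd, 0, NULL);
    if (nmd == NULL) {
        perror("nm_open");
        return 1;
    }
    printf("bound RX rings %u..%u\n",
           (unsigned)nmd->first_rx_ring, (unsigned)nmd->last_rx_ring);
    nm_close(nmd);
    return 0;
}
```

Whether a vale port accepts the same per-ring "-qno" suffix is exactly the point to confirm with the netmap authors first.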
Hi @ajamshed, I got this working on an older version of netmap and mTCP: back then I added the mTCP RSS hash to the vale code. However, the code seems to have changed a lot since then, and I am unable to figure out how to do the same now. This is one setup we are trying to get working; we are stuck on another one as well, and I will open a separate issue for it. Thanks, Priyanka
I think a better way to debug this issue is to try the mTCP netmap version against an older commit of netmap (one that works for mTCP), and then iteratively try later commits until you find the exact commit ID at which mTCP stops working. If you can narrow down the exact netmap commit ID, we can then determine the cause of the underlying issue together. Please let me know once you have narrowed down the problem; I will be available to help out. Thanks.
Hi,
We are running mTCP inside a VM. The VM's interface is provided through a vale switch. We were able to run single-core mTCP. However, when we added multiple queues to the vale interface, netmap's pkt-gen was able to identify the queues but mTCP was not. Do we need to change the RSS seed for this? If so, can you please point us to where we should make the changes so that the multiple queues of a vale interface become visible to mTCP inside the VM? Our netmap API version is 13, and the kernel version on both the host and the guest is 4.15.0-29-generic.
Thanks a lot.