emmericp / MoonGen

MoonGen is a fully scriptable high-speed packet generator built on DPDK and LuaJIT. It can saturate a 10 Gbit/s connection with 64 byte packets on a single CPU core while executing user-provided Lua scripts for each packet. Multi-core support allows for even higher rates. It also features precise and accurate timestamping and rate control.

MoonGen does not send packets #247

Closed: ghost closed this issue 5 years ago

ghost commented 5 years ago

Hi,

I have an Intel X540-AT2 that I am trying to use with MoonGen. Specifically, I am trying to send packets to an interface of another machine that it is connected to. I just need to send packets using MoonGen. I have set up the following flow in flows/examples.lua:

Flow{"udp-simple-trial", Packet.Udp{ ethSrc = txQueue(), ethDst = mac"a0:36:9f:cf:48:0c", ip4Src = ip"192.168.87.10", ip4Dst = ip"192.168.87.11", udpSrc = 5001, udpDst = 5201, pktLength = 100 }, timestamp = false }

I have bound the interface to the DPDK driver, so the output of dpdk-devbind.py is:

Network devices using DPDK-compatible driver
0000:83:00.0 'Ethernet Controller 10-Gigabit X540-AT2 1528' drv=igb_uio unused=

However, every time I try to run moongen-simple, it does not send any packets.

$ sudo ./moongen-simple start udp-simple-trial:0::rate=10
[INFO] Initializing DPDK. This will take a few seconds...
EAL: Detected 48 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:83:00.0 on NUMA socket 1
EAL:   probe driver: 8086:1528 net_ixgbe
EAL: PCI device 0000:83:00.1 on NUMA socket 1
EAL:   probe driver: 8086:1528 net_ixgbe
[INFO] Found 1 usable devices:
  Device 0: A0:36:9F:CF:4D:D4 (Intel Corporation Ethernet Controller 10-Gigabit X540-AT2)
[INFO] Flow udp-simple-trial => 0x1
PMD: ixgbe_dev_link_status_print(): Port 0: Link Down
[INFO] Waiting for devices to come up...
[INFO] Device 0 (A0:36:9F:CF:4D:D4) is up: 10000 MBit/s
[INFO] 1 device is up.
[Device: id=0] TX: 0.00 Mpps, 0 Mbit/s (0 Mbit/s with framing)
[Device: id=0] TX: 0.00 Mpps, 0 Mbit/s (0 Mbit/s with framing)
[Device: id=0] TX: 0.00 Mpps, 0 Mbit/s (0 Mbit/s with framing)
[Device: id=0] TX: 0.00 Mpps, 0 Mbit/s (0 Mbit/s with framing)
[Device: id=0] TX: 0.00 Mpps, 0 Mbit/s (0 Mbit/s with framing)
[Device: id=0] TX: 0.00 Mpps, 0 Mbit/s (0 Mbit/s with framing)
[Device: id=0] TX: 0.00 Mpps, 0 Mbit/s (0 Mbit/s with framing)
[Device: id=0] TX: 0.00 Mpps, 0 Mbit/s (0 Mbit/s with framing)

Any guess what might be happening? The examples in README.md did not work either.

emmericp commented 5 years ago

Does it work with other DPDK applications?

ghost commented 5 years ago

Yes, I have tried some of the example applications from DPDK and they work fine.

ghost commented 5 years ago

I have found the following about my configuration.

$ lspci -nn | grep Eth
83:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 [8086:1528] (rev 01)
83:00.1 Ethernet controller [0200]: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 [8086:1528] (rev 01)

They are assigned to NUMA node 1:

$ cat /sys/bus/pci/devices/0000\:83\:00.0/numa_node
1

$ cat /sys/bus/pci/devices/0000\:83\:00.1/numa_node
1

I have the following NUMA nodes:

$ numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 24 25 26 27 28 29 30 31 32 33 34 35
node 0 size: 80498 MB
node 0 free: 2111 MB
node 1 cpus: 12 13 14 15 16 17 18 19 20 21 22 23 36 37 38 39 40 41 42 43 44 45 46 47
node 1 size: 80636 MB
node 1 free: 128 MB
node distances:
node   0   1
  0:  10  21
  1:  21  10

My GRUB configuration has

...
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on isolcpus=2-11,14-23,26-35,38-47 default_hugepagesz=1G hugepagesz=1G hugepages=4"
...

Should I configure dpdk-conf.lua to run only on the cores of NUMA node 1?
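
(For reference, core selection in MoonGen is configured through the DPDKConfig table in dpdk-conf.lua. A minimal sketch, assuming the cores option from the default config file and using the NUMA node 1 core IDs from the numactl output above as illustrative values:)

-- dpdk-conf.lua: restrict DPDK to cores on NUMA node 1 (illustrative values)
DPDKConfig {
    -- physical cores on NUMA node 1, taken from `numactl --hardware` above
    cores = { 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23 },
}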

ghost commented 5 years ago

As explained at https://doc.dpdk.org/guides/linux_gsg/linux_drivers.html:

If the GRUB option intel_iommu=on is set, you also have to add the option iommu=pt; without pass-through mode, DMA from a device bound to igb_uio does not work, which would explain why no packets were sent. My GRUB configuration is now:

GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on iommu=pt isolcpus=2-11,14-23,26-35,38-47 default_hugepagesz=1G hugepagesz=1G hugepages=4"

That solved the problem