chrhong opened this issue 3 years ago

I am using AF_XDP as integrated into DPDK, but I hit this libbpf error while setting up the af_xdp device: "libbpf: can't get next link: Invalid argument". Which setting could cause such an issue? Could you give some suggestions? Thanks.
@magnus-karlsson may have an idea?
Could you please share the vdev args you are using for the DPDK PMD? If you are using the custom-program 'xdp_prog' arg, you should try patching DPDK with: http://patches.dpdk.org/project/dpdk/patch/20211022104253.31999-1-ciara.loftus@intel.com/ Could you please share your kernel, libbpf and DPDK versions too?
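For reference, here is a minimal sketch of how the 'xdp_prog' devarg is passed when initializing EAL programmatically. The interface name and object path are hypothetical placeholders, and the devarg assumes a DPDK version whose af_xdp PMD supports it; this is my own illustration, not code from the thread:

```c
/* Minimal sketch: passing the 'xdp_prog' devarg when initializing EAL
 * programmatically. Interface name and object path are hypothetical. */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_errno.h>

int main(void)
{
	char *eal_args[] = {
		"testapp",
		"--no-pci",
		/* 'xdp_prog' points the PMD at a custom compiled XDP object;
		 * this is the path the referenced patch fixes on kernels
		 * without bpf link support. */
		"--vdev=net_af_xdp,iface=ens12,xdp_prog=/path/to/xdp_prog.o",
	};
	int eal_argc = sizeof(eal_args) / sizeof(eal_args[0]);

	if (rte_eal_init(eal_argc, eal_args) < 0) {
		fprintf(stderr, "rte_eal_init failed: %s\n",
			rte_strerror(rte_errno));
		return 1;
	}
	/* ... port/queue setup would follow here ... */
	rte_eal_cleanup();
	return 0;
}
```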
Thanks, Ciara
Ciara, thanks for your suggestion; I will try the patch.
My test environment:
DPDK: stable 19.11.6
libbpf: git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git
OS: CentOS 7.9 + kernel 5.4.155-1.el7.elrepo.x86_64
Test log:
[root@gc bin]$ ./testpmd -l 1,2,3 -n 4 --log-level=pmd.net.af_xdp:info --no-pci --vdev net_af_xdp,iface=ens12,start_queue=0,queue_count=3 -- -i --rxq=3 --txq=3
EAL: Detected 16 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
rte_pmd_af_xdp_probe(): Initializing pmd_af_xdp for net_af_xdp
init_internals(): Zero copy between umem and mbuf enabled.
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=163456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.
Configuring Port 0 (socket 0)
eth_rx_queue_setup(): Set up rx queue, rx queue id: 0, xsk queue id: 0
libbpf: can't get next link: Invalid argument
eth_rx_queue_setup(): Set up rx queue, rx queue id: 1, xsk queue id: 1
libbpf: can't get next link: Invalid argument
eth_rx_queue_setup(): Set up rx queue, rx queue id: 2, xsk queue id: 2
libbpf: can't get next link: Invalid argument
Port 0: FA:0C:B1:0F:DF:01
You're welcome. You are not using the 'xdp_prog' argument, so the patch should not be necessary.
I believe I was able to reproduce the error. Is it correct that you can still initialize the PMD and rx/tx traffic, and that it's just the warning log that is the concern?
Since v0.6.0, libbpf probes the kernel for 'bpf link' support. The kernel you are using does not have this support, which is why the warning log is generated. It's nothing to worry about, though, because libbpf falls back to the legacy behavior if the support is not detected. So essentially you can ignore the log, or upgrade the kernel if you want to avail of bpf link support. This commit message describes the benefits: https://github.com/libbpf/libbpf/commit/8628610c322a
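For the curious, the probe behind that log boils down to walking the kernel's bpf link IDs. Below is a rough sketch of the pattern (my own illustration, not libbpf's actual code), using libbpf's bpf_link_get_next_id() plus the legacy bpf_set_link_xdp_fd() attach as the fallback:

```c
/* Rough sketch (not libbpf's actual code) of the probe behind the
 * "can't get next link" log: walk the kernel's bpf link IDs, and if
 * the syscall is rejected, fall back to the legacy XDP attach. */
#include <errno.h>
#include <stdbool.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

static bool kernel_has_bpf_link(void)
{
	__u32 id = 0;

	/* BPF_LINK_GET_NEXT_ID exists only on v5.8+ kernels; older
	 * kernels (e.g. 5.4) reject the command with EINVAL, which is
	 * what triggers the warning log. ENOENT just means "no links". */
	if (bpf_link_get_next_id(id, &id) && errno == EINVAL)
		return false;
	return true;
}

int attach_xdp(int ifindex, int prog_fd)
{
	if (!kernel_has_bpf_link()) {
		/* Legacy netlink-based attach: works on older kernels. */
		return bpf_set_link_xdp_fd(ifindex, prog_fd, 0);
	}
	/* XDP bpf links need v5.9+; returns a link fd on success. */
	return bpf_link_create(prog_fd, ifindex, BPF_XDP, NULL);
}
```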
Yes. When I test it in icmpecho mode, I can see the rx/tx traffic counters increase:
./testpmd -l 1,2 -n 1 --log-level=pmd.net.af_xdp:info --vdev net_af_xdp0,iface=ens12 --vdev net_af_xdp1,iface=ens13 --no-pci -- -i --forward-mode=icmpecho
...
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 424 RX-dropped: 0 RX-total: 424
TX-packets: 407 TX-dropped: 0 TX-total: 407
----------------------------------------------------------------------------
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 379 RX-dropped: 0 RX-total: 379
TX-packets: 379 TX-dropped: 0 TX-total: 379
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 803 RX-dropped: 0 RX-total: 803
TX-packets: 786 TX-dropped: 0 TX-total: 786
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
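As an aside, if you want the same counters from your own DPDK application rather than from testpmd, a minimal sketch using rte_eth_stats_get() follows; the port number and the mapping of imissed to testpmd's RX-dropped column are my assumptions:

```c
/* Sketch: reading rx/tx counters from your own DPDK app via
 * rte_eth_stats_get(). Port 0 is assumed to be a started af_xdp
 * port; imissed only approximates testpmd's RX-dropped column. */
#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

static void print_port_stats(uint16_t port_id)
{
	struct rte_eth_stats stats;

	if (rte_eth_stats_get(port_id, &stats) != 0)
		return;
	printf("port %u: RX-packets: %" PRIu64 " TX-packets: %" PRIu64
	       " RX-dropped: %" PRIu64 "\n",
	       port_id, stats.ipackets, stats.opackets, stats.imissed);
}
```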
I will test it with my DPDK app then. Thanks again! :) BTW, which kernel version is required to support 'bpf link'?
The observability APIs were added in https://github.com/torvalds/linux/commit/1f427a8077996f8aaefbc99e40ff3068ee627d8d, which was kernel v5.8, I think.
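A small standalone sketch (my own, assuming libbpf is installed) that exercises those observability APIs; on a pre-v5.8 kernel such as 5.4, the very first bpf_link_get_next_id() call fails with EINVAL, which is exactly the condition behind the warning above:

```c
/* Sketch: enumerating bpf links with the v5.8 observability APIs. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <bpf/bpf.h>

int main(void)
{
	__u32 id = 0;

	for (;;) {
		if (bpf_link_get_next_id(id, &id)) {
			if (errno != ENOENT) /* EINVAL on old kernels */
				fprintf(stderr, "can't get next link: %s\n",
					strerror(errno));
			break;
		}

		int fd = bpf_link_get_fd_by_id(id);
		if (fd < 0)
			continue;

		struct bpf_link_info info;
		__u32 len = sizeof(info);

		memset(&info, 0, sizeof(info));
		if (!bpf_obj_get_info_by_fd(fd, &info, &len))
			printf("link id %u type %u prog id %u\n",
			       info.id, info.type, info.prog_id);
		close(fd);
	}
	return 0;
}
```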