Closed. ChangSurrey closed this issue 5 years ago.
Hi,
It actually looks like a DPDK error rather than an OFP error. What version of odp-dpdk are you using? Did you try to run any of the DPDK examples using the Mellanox NIC interfaces?
Best regards, Valentin Radulescu
From: ChangSurrey, Tuesday, January 15, 2019. Subject: [OpenFastPath/ofp] Unable to start fpm with Mellanox Cx-4 40Gb QSFP NIC (#220)
Hello,
I have a 2-machine setup which I use for performance benchmarking. I'm running OFP over ODP-DPDK on the DUT, and I'm using the TRex load generator on the other machine. Both machines have a Mellanox ConnectX-4 40Gb QSFP NIC and an Intel X550 10Gb NIC.
The DUT is running CentOS 7.6.1810 with kernel 3.10.0-957.1.3.el7.x86_64.
OFED_LINUX-4.5-1.0.1.0 is installed.
The NIC ports can be bound to DPDK without any problem:
Network devices using DPDK-compatible driver
====================================================
0000:05:00.0 'MT27700 Family [ConnectX-4] 1013' drv=igb_uio unused=mlx5_core
0000:05:00.1 'MT27700 Family [ConnectX-4] 1013' drv=igb_uio unused=mlx5_core
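For context, status output like the above comes from DPDK's devbind tool. A hedged sketch of the binding commands (the script path varies by DPDK version; the PCI addresses are the ones from the report; this needs root and the actual hardware, so it is a fragment only, not something run for this post):

```shell
# Illustrative only: load the UIO module and bind the two ConnectX-4 ports.
modprobe igb_uio
./usertools/dpdk-devbind.py --bind=igb_uio 0000:05:00.0 0000:05:00.1
# Print the binding table quoted above.
./usertools/dpdk-devbind.py --status
```

Note that the mlx5 PMD is unusual among DPDK drivers in that it talks to the NIC through the Mellanox OFED stack, so the binding step alone is not enough if the PMD itself was not compiled in.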
I'm unable to start the fpm (or any other) application if I use the Mellanox NICs.
[root@localhost fpm]# ./fpm -i 0,1 -c 3 -p
odp_init.c:132:odp_init_dpdk():arg[0]: odpdpdk
odp_init.c:132:odp_init_dpdk():arg[1]: -c
odp_init.c:132:odp_init_dpdk():arg[2]: 0x1
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:01:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:1563 net_ixgbe
EAL: PCI device 0000:01:00.1 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:1563 net_ixgbe
odp_init.c:145:odp_init_dpdk():rte_eal_init OK
../linux-generic/odp_system_info.c:102:default_huge_page_size():defaut hp size is 2048 kB
../linux-generic/odp_system_info.c:102:default_huge_page_size():defaut hp size is 2048 kB
odp_pool.c:92:odp_pool_init_global():
Pool init global
odp_pool.c:93:odp_pool_init_global(): odp_buffer_hdr_t size 192
odp_pool.c:94:odp_pool_init_global(): odp_packet_hdr_t size 448
odp_pool.c:95:odp_pool_init_global():
odp_queue_basic.c:132:queue_init_global():Starts...
Queue config:
queue_basic.max_queue_size: 8192
queue_basic.default_queue_size: 4096
../linux-generic/odp_queue_lf.c:315:queue_lf_init_global():
Lock-free queue init
../linux-generic/odp_queue_lf.c:316:queue_lf_init_global(): u128 lock-free: 1
odp_queue_basic.c:174:queue_init_global():... done.
odp_queue_basic.c:175:queue_init_global(): queue_entry_t size 256
odp_queue_basic.c:176:queue_init_global(): max num queues 960
odp_queue_basic.c:177:queue_init_global(): max queue size 8191
odp_queue_basic.c:178:queue_init_global(): max num lockfree 128
odp_queue_basic.c:179:queue_init_global(): max lockfree size 32
Using scheduler 'basic'
../linux-generic/odp_schedule_basic.c:298:schedule_init_global():Schedule init ... Scheduler config:
sched_basic.prio_spread: 4
../linux-generic/odp_schedule_basic.c:358:schedule_init_global():done
PKTIO: initialized loop interface.
PKTIO: initialized null interface.
No crypto devices available
odp_pool.c:384:odp_pool_create():type: buffer name: ipsec_status_pool num: 1024 size: 256 align: 0
odp_pool.c:453:odp_pool_create():Metadata size: 192, mb_size 448
odp_pool.c:469:odp_pool_create():cache_size 512
odp_pool.c:513:odp_pool_create():Header/element/trailer size: 64/448/64, total pool size: 589824
odp_thread.c:165:odp_thread_init_local():There is a thread already running on core 0
ODP system info
ODP API version: 1.19.0
CPU model: Intel(R) Core(TM) i5-6500 CPU
CPU freq (hz): 1565234000
Cache line size: 64
Core count: 4
Running ODP appl: "fpm"
IF-count: 2
Using IFs: 0 1
E 0 0:2465063296 ofp_init.c:203] (null)(0): file I/O error
Num worker threads: 2
first CPU: 2
cpu mask: 0xC
odp_pool.c:434:odp_pool_create():type: tmo name: TimeoutPool num: 10000
odp_pool.c:453:odp_pool_create():Metadata size: 256, mb_size 256
odp_pool.c:469:odp_pool_create():cache_size 500
odp_pool.c:513:odp_pool_create():Header/element/trailer size: 64/256/0, total pool size: 3200000
odp_pool.c:384:odp_pool_create():type: buffer name: TimeoutBufferPool num: 10000 size: 296 align: 0
odp_pool.c:453:odp_pool_create():Metadata size: 192, mb_size 488
odp_pool.c:469:odp_pool_create():cache_size 500
odp_pool.c:513:odp_pool_create():Header/element/trailer size: 64/488/24, total pool size: 5760000
odp_pool.c:427:odp_pool_create():type: packet, name: packet_pool, num: 10240, len: 1856, blk_size: 2176, uarea_size 12, hdr_size 460
odp_pool.c:453:odp_pool_create():Metadata size: 512, mb_size 2688
odp_pool.c:469:odp_pool_create():cache_size 512
odp_pool.c:513:odp_pool_create():Header/element/trailer size: 64/2688/0, total pool size: 28180480
I 588915436 0:2465063296 ofp_uma.c:45] Creating pool 'udp_inpcb', nitems=1024 size=904 total=925696
odp_pool.c:384:odp_pool_create():type: buffer name: udp_inpcb num: 1024 size: 904 align: 0
odp_pool.c:453:odp_pool_create():Metadata size: 192, mb_size 1096
odp_pool.c:469:odp_pool_create():cache_size 512
odp_pool.c:513:odp_pool_create():Header/element/trailer size: 64/1096/56, total pool size: 1245184
I 589972697 0:2465063296 ofp_uma.c:45] Creating pool 'tcp_inpcb', nitems=2048 size=904 total=1851392
odp_pool.c:384:odp_pool_create():type: buffer name: tcp_inpcb num: 2048 size: 904 align: 0
odp_pool.c:453:odp_pool_create():Metadata size: 192, mb_size 1096
odp_pool.c:469:odp_pool_create():cache_size 512
odp_pool.c:513:odp_pool_create():Header/element/trailer size: 64/1096/56, total pool size: 2490368
I 591413119 0:2465063296 ofp_uma.c:45] Creating pool 'tcpcb', nitems=2048 size=784 total=1605632
odp_pool.c:384:odp_pool_create():type: buffer name: tcpcb num: 2048 size: 784 align: 0
odp_pool.c:453:odp_pool_create():Metadata size: 192, mb_size 976
odp_pool.c:469:odp_pool_create():cache_size 512
odp_pool.c:513:odp_pool_create():Header/element/trailer size: 64/976/48, total pool size: 2228224
I 592837725 0:2465063296 ofp_uma.c:45] Creating pool 'tcptw', nitems=409 size=80 total=32720
odp_pool.c:384:odp_pool_create():type: buffer name: tcptw num: 409 size: 80 align: 0
odp_pool.c:453:odp_pool_create():Metadata size: 192, mb_size 272
odp_pool.c:469:odp_pool_create():cache_size 0
odp_pool.c:513:odp_pool_create():Header/element/trailer size: 64/272/112, total pool size: 183232
I 593127528 0:2465063296 ofp_uma.c:45] Creating pool 'syncache', nitems=30720 size=168 total=5160960
odp_pool.c:384:odp_pool_create():type: buffer name: syncache num: 30720 size: 168 align: 0
odp_pool.c:453:odp_pool_create():Metadata size: 192, mb_size 360
odp_pool.c:469:odp_pool_create():cache_size 512
odp_pool.c:513:odp_pool_create():Header/element/trailer size: 64/360/24, total pool size: 13762560
I 600545605 0:2465063296 ofp_uma.c:45] Creating pool 'tcpreass', nitems=320 size=48 total=15360
odp_pool.c:384:odp_pool_create():type: buffer name: tcpreass num: 320 size: 48 align: 0
odp_pool.c:453:odp_pool_create():Metadata size: 192, mb_size 240
odp_pool.c:469:odp_pool_create():cache_size 160
odp_pool.c:513:odp_pool_create():Header/element/trailer size: 64/240/16, total pool size: 102400
I 601158718 0:2465063296 ofp_uma.c:45] Creating pool 'sackhole', nitems=65536 size=40 total=2621440
odp_pool.c:384:odp_pool_create():type: buffer name: sackhole num: 65536 size: 40 align: 0
odp_pool.c:453:odp_pool_create():Metadata size: 192, mb_size 232
odp_pool.c:469:odp_pool_create():cache_size 512
odp_pool.c:513:odp_pool_create():Header/element/trailer size: 64/232/24, total pool size: 20971520
odp_crypto.c:556:odp_crypto_capability():No crypto devices available
E 615305475 0:2465063296 ofp_ipsec.c:180] odp_ipsec_capability failed
E 615324270 0:2465063296 ofp_ipsec.c:181] Setting maximum number of IPsec SAs to zero
I 615339841 0:2465063296 ofp_ipsec.c:187] IPsec not supported with SP. Disabling IPsec.
I 615365375 0:2465063296 ofp_init.c:434] Slow path threads on core 0
odp_packet_dpdk.c:282:dpdk_init_capability():No driver found for interface: 0
odp_packet_dpdk.c:390:setup_pkt_dpdk():Failed to initialize capabilities for interface: 0
../linux-generic/odp_packet_io.c:233:setup_pktio_entry():Unable to init any I/O type.
../linux-generic/odp_packet_io.c:295:odp_pktio_open():interface: 0, driver: bad handle
E 615581924 0:2465063296 ofp_ifnet.c:23] odp_pktio_open failed
Error: OFP global init failed.
However, if I unbind the Mellanox ports, bind the two Intel ports to DPDK, and run the same command to start fpm, it works. I can do IP forwarding at the DUT without any problem.
Any suggestion/idea is appreciated!
Chang
@RadulescuValentin Thanks Valentin. Yes, following your comment I have also realised that this is an ODP-DPDK error.
I have found the solution here https://lists.linaro.org/pipermail/lng-odp-dpdk/2017-November/002053.html
Basically, CONFIG_RTE_LIBRTE_MLX5_PMD=y needs to be set in the DPDK configuration before ODP-DPDK is compiled. Because I used ./devbuild_ofp_odp_dpdk.sh to auto-install everything, this flag was left unset, which meant the MLX5 PMD was never built.
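For reference, in the old make-based DPDK build this flag lives in config/common_base and can be flipped before building. A minimal sketch (the stand-in file below is created locally so the edit itself can be demonstrated; in a real DPDK tree you would edit config/common_base in place):

```shell
# Stand-in for DPDK's config/common_base, which ships with the mlx PMDs off.
printf 'CONFIG_RTE_LIBRTE_MLX5_PMD=n\nCONFIG_RTE_LIBRTE_MLX4_PMD=n\n' > common_base

# Enable only the mlx5 PMD; the anchored pattern leaves other lines untouched.
sed -i 's/^CONFIG_RTE_LIBRTE_MLX5_PMD=n$/CONFIG_RTE_LIBRTE_MLX5_PMD=y/' common_base

# Confirm the change before rebuilding DPDK and then ODP-DPDK against it.
grep '^CONFIG_RTE_LIBRTE_MLX5_PMD=' common_base
```

After rebuilding with the flag enabled, the PMD also needs the Mellanox OFED libraries (libibverbs/libmlx5) present at link and run time, which the report says are already installed.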