rohitjo opened this issue 4 years ago
cat /proc/meminfo | grep Huge
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
FileHugePages: 0 kB
HugePages_Total: 8192
HugePages_Free: 8180
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 16777216 kB
I met the same problem today: running ./client wait 10.0.0.1 1234 100 exits with the same error. It fails in tcp_send_buffer.c:42: sbm->mp = (mem_pool_t)MPCreate(pool_name, chunk_size, (uint64_t)chunk_size * cnum)
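For reference, the third argument of MPCreate is the total pool size in bytes, so the request grows as chunk_size × cnum. A minimal sketch of that arithmetic (the buffer count below is hypothetical, not read from the failing run):

```c
#include <stdint.h>

/* Total bytes MPCreate() asks DPDK for: one chunk per preallocated buffer.
 * Mirrors the expression (uint64_t)chunk_size * cnum in tcp_send_buffer.c. */
static uint64_t required_pool_bytes(uint64_t chunk_size, uint64_t cnum)
{
    return chunk_size * cnum;
}

/* With the logged sndbuf of 4194304 bytes (4 MiB) and a hypothetical 100
 * preallocated buffers, a single pool request is already 400 MiB:
 *   required_pool_bytes(4194304, 100) == 419430400 */
```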
root@ckun-MS-1:~/github/mtcp/apps/perf# ./client wait 10.0.0.1 1234 100
[DEBUG] Wait mode
Configuration updated by mtcp_setconf().
[DEBUG] Initializing mtcp...
Loading mtcp configuration from : client.conf
Loading interface setting
EAL: Detected 16 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Auto-detected process type: PRIMARY
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:01:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:158b net_i40e
PMD: Global register is changed during support QinQ parser
PMD: Global register is changed during configure hash input set
PMD: Global register is changed during configure fdir mask
PMD: Global register is changed during configure hash mask
PMD: Global register is changed during support QinQ cloud filter
PMD: Global register is changed during disable FDIR flexible payload
EAL: PCI device 0000:01:00.1 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:158b net_i40e
EAL: Auto-detected process type: PRIMARY
Configurations:
Number of CPU cores available: 1
Number of CPU cores to use: 1
Maximum number of concurrency per core: 2
Maximum number of preallocated buffers per core: 2
Receive buffer size: 6291456
Send buffer size: 4194304
TCP timeout seconds: 30
TCP timewait seconds: 0
NICs to print statistics:
---------------------------------------------------------------------------------
Interfaces:
Number of NIC queues: 1
---------------------------------------------------------------------------------
Loading routing configurations from : config/route.conf
fopen: No such file or directory
Skip loading static routing table
Routes:
(blank)
---------------------------------------------------------------------------------
Loading ARP table from : config/arp.conf
fopen: No such file or directory
Skip loading static ARP table
ARP Table:
(blank)
---------------------------------------------------------------------------------
Checking link statusdone
Configuration updated by mtcp_setconf().
[DEBUG] Creating thread context...
[ MPCreate: 173] Can't allocate memory for mempool!
root@ckun-MS-1:~/github/mtcp/apps/perf#
root@ckun-MS-1:~/github/mtcp/apps/perf# cat /proc/meminfo | grep Huge
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
HugePages_Total: 16384
HugePages_Free: 16370
HugePages_Rsvd:
HugePages_Surp: 0
I located the failure in rte_mempool.c:1066:
if (rte_mempool_populate_default(mp) < 0) goto fail;
It ends up in memzone_reserve_aligned_thread_unsafe with rte_errno == EINVAL.
So far I haven't found the root cause; maybe some configuration is needed. Can anyone help?
Actually I don't know how to use gdb to step through the DPDK functions from apps/perf/client, so I just copied the functions into mtcp/src/memory_mgt.c (renaming them xx_rte_mempool_populate_default, etc.) and recompiled, so that I could get more information.
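Instead of copying the DPDK internals into memory_mgt.c, the same information can be obtained with a thin logging wrapper around the function under suspicion; a generic sketch (real_populate here is a stand-in for the DPDK internal, not an actual API):

```c
#include <stdio.h>

/* Stand-in for the DPDK internal under investigation. */
static int real_populate(int value)
{
    return value < 0 ? -1 : 0;
}

/* Wrapper that logs arguments and the failure path before returning,
 * the same idea as the renamed xx_rte_mempool_populate_default copies. */
static int traced_populate(int value)
{
    int ret = real_populate(value);
    if (ret < 0)
        fprintf(stderr, "populate(%d) failed, ret=%d\n", value, ret);
    return ret;
}
```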
I solved the problem by changing client.conf like this:
rcvbuf = 16384
sndbuf = 2048
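Shrinking rcvbuf/sndbuf cuts each preallocated pool from hundreds of MiB down to a few MiB. Note that raw hugepage capacity was not the limit here (16370 free 2 MiB pages is about 32 GiB); the EINVAL from memzone_reserve_aligned_thread_unsafe suggests a single memzone request of that size could not be satisfied, for instance because no contiguous hugepage region was large enough, and smaller buffers sidestep that. A quick capacity check against the /proc/meminfo numbers (a sketch, not mtcp code):

```c
#include <stdbool.h>
#include <stdint.h>

/* Does a pool request fit in free hugepage memory? free_pages and page_kb
 * come straight from /proc/meminfo (HugePages_Free, Hugepagesize). */
static bool fits_in_hugepages(uint64_t pool_bytes,
                              uint64_t free_pages, uint64_t page_kb)
{
    return pool_bytes <= free_pages * page_kb * 1024ULL;
}

/* fits_in_hugepages(419430400, 16370, 2048) is true: a 400 MiB pool fits
 * in ~32 GiB of free hugepages, so total capacity alone does not explain
 * the failure. */
```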