mtcp-stack / mtcp

mTCP: A Highly Scalable User-level TCP Stack for Multicore Systems

[ MPCreate: 173] Can't allocate memory for mempool! #309

Open rohitjo opened 4 years ago

rohitjo commented 4 years ago
[DEBUG] Initializing mtcp...
---------------------------------------------------------------------------------
Loading mtcp configuration from : client.conf
Loading interface setting
EAL: Detected 16 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Auto-detected process type: PRIMARY
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Some devices want iova as va but pa will be used because.. EAL: IOMMU does not support IOVA as VA
EAL: No free hugepages reported in hugepages-1048576kB
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:00:1f.6 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:15bb net_e1000_em
EAL: PCI device 0000:02:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:1533 net_e1000_igb
EAL: using IOMMU type 1 (Type 1)
EAL: Ignore mapping IO port bar(2)
EAL: Auto-detected process type: PRIMARY
Configurations:
Number of CPU cores available: 1
Number of CPU cores to use: 1
Maximum number of concurrency per core: 10000
Maximum number of preallocated buffers per core: 10000
Receive buffer size: 6291456
Send buffer size: 4194304
TCP timeout seconds: 30
TCP timewait seconds: 0
NICs to print statistics:
---------------------------------------------------------------------------------
Interfaces:
Number of NIC queues: 1
---------------------------------------------------------------------------------
Loading routing configurations from : config/route.conf
fopen: No such file or directory
Skip loading static routing table
Routes:
(blank)
---------------------------------------------------------------------------------
Loading ARP table from : config/arp.conf
fopen: No such file or directory
Skip loading static ARP table
ARP Table:
(blank)
---------------------------------------------------------------------------------
Checking link statusdone
Configuration updated by mtcp_setconf().

[DEBUG] Creating thread context... [ MPCreate: 173] Can't allocate memory for mempool!

rohitjo commented 4 years ago

cat /proc/meminfo | grep Huge
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
FileHugePages:         0 kB
HugePages_Total:    8192
HugePages_Free:     8180
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:        16777216 kB

MrBean818 commented 3 years ago

I met the same problem today. Running ./client wait 10.0.0.1 1234 100 exits with the same error. It fails in tcp_send_buffer.c:42: sbm->mp = (mem_pool_t)MPCreate(pool_name, chunk_size, (uint64_t)chunk_size * cnum)

root@ckun-MS-1:~/github/mtcp/apps/perf# ./client wait 10.0.0.1 1234 100
[DEBUG] Wait mode
Configuration updated by mtcp_setconf().
[DEBUG] Initializing mtcp...
Loading mtcp configuration from : client.conf
Loading interface setting
EAL: Detected 16 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Auto-detected process type: PRIMARY
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:01:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:158b net_i40e
PMD: Global register is changed during support QinQ parser
PMD: Global register is changed during configure hash input set
PMD: Global register is changed during configure fdir mask
PMD: Global register is changed during configure hash mask
PMD: Global register is changed during support QinQ cloud filter
PMD: Global register is changed during disable FDIR flexible payload
EAL: PCI device 0000:01:00.1 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:158b net_i40e
EAL: Auto-detected process type: PRIMARY
Configurations:
Number of CPU cores available: 1
Number of CPU cores to use: 1
Maximum number of concurrency per core: 2
Maximum number of preallocated buffers per core: 2
Receive buffer size: 6291456
Send buffer size: 4194304
TCP timeout seconds: 30
TCP timewait seconds: 0
NICs to print statistics:
---------------------------------------------------------------------------------
Interfaces:
Number of NIC queues: 1
---------------------------------------------------------------------------------
Loading routing configurations from : config/route.conf
fopen: No such file or directory
Skip loading static routing table
Routes:
(blank)
---------------------------------------------------------------------------------
Loading ARP table from : config/arp.conf
fopen: No such file or directory
Skip loading static ARP table
ARP Table:
(blank)
---------------------------------------------------------------------------------

Checking link statusdone
Configuration updated by mtcp_setconf().
[DEBUG] Creating thread context...
[  MPCreate: 173] Can't allocate memory for mempool!
root@ckun-MS-1:~/github/mtcp/apps/perf# 
root@ckun-MS-1:~/github/mtcp/apps/perf# cat /proc/meminfo | grep Huge
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
HugePages_Total:   16384
HugePages_Free:    16370
HugePages_Rsvd:
HugePages_Surp:        0

MrBean818 commented 3 years ago

Located the failure at rte_mempool.c:1066: if (rte_mempool_populate_default(mp) < 0) goto fail;

MrBean818 commented 3 years ago

Located it in memzone_reserve_aligned_thread_unsafe, which fails with rte_errno == EINVAL.

So far I haven't found the root cause; maybe some extra configuration is needed. Can anyone help?

I don't know how to step through the DPDK functions with gdb from apps/perf/client, so I copied the functions into mtcp/src/memory_mgt.c (renamed as xx_rte_mempool_populate_default etc.) and recompiled, so I could get more information.

lyd19997 commented 3 years ago

"[ MPCreate: 173] Can't allocate memory for mempool!": the failure is at if (rte_mempool_populate_default(mp) < 0) goto fail;

I solved the problem by reducing the buffer sizes in client.conf, like this (old values commented out):

# Receive buffer size of sockets
# rcvbuf = 6291456
rcvbuf = 16384

# Send buffer size of sockets
# sndbuf = 2048
# sndbuf = 4194304
# sndbuf = 41943040
sndbuf = 146000