mtcp-stack / mtcp

mTCP: A Highly Scalable User-level TCP Stack for Multicore Systems

MPCreate: 173 Can't allocate memory for mempool! #281

Closed: vincentmli closed this 4 years ago

vincentmli commented 4 years ago

I am running mTCP in a VMware ESXi VM. I noticed that once I increase the number of cores to 2 for epwget, I get "[ MPCreate: 173] Can't allocate memory for mempool!", while using 1 core works fine:


#  ./apps/example/epwget 10.1.72.68 160000000 -f /etc/mtcp/config/epwget.conf -N 2 -c 160
Configuration updated by mtcp_setconf().
Application configuration:
URL: /
# of total_flows: 160000000
# of cores: 2
Concurrency: 160
---------------------------------------------------------------------------------
Loading mtcp configuration from : /etc/mtcp/config/epwget.conf
Loading interface setting
EAL: Detected 16 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Auto-detected process type: PRIMARY
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Probing VFIO support...
EAL: PCI device 0000:0b:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 15ad:7b0 net_vmxnet3
Total number of attached devices: 1
Interface name: dpdk0
EAL: Auto-detected process type: PRIMARY
Configurations:
Number of CPU cores available: 2
Number of CPU cores to use: 2
Number of TX ring descriptor: 512
Number of RX ring descriptor: 128
Number of source ip to use: 8
Maximum number of concurrency per core: 1000000
Maximum number of preallocated buffers per core: 1000000
Receive buffer size: 1024
Send buffer size: 1024
TCP timeout seconds: 30
TCP timewait seconds: 0
NICs to print statistics: dpdk0
---------------------------------------------------------------------------------
Interfaces:
name: dpdk0, ifindex: 0, hwaddr: 00:50:56:86:10:76, ipaddr: 10.1.72.28, netmask: 255.255.0.0
Number of NIC queues: 2
---------------------------------------------------------------------------------
Loading routing configurations from : config/route.conf
Routes:
Destination: 10.1.0.0/16, Mask: 255.255.0.0, Masked: 10.1.0.0, Route: ifdx-0

---------------------------------------------------------------------------------
Loading ARP table from : config/arp.conf
ARP Table:
IP addr: 10.1.72.68, dst_hwaddr: 00:50:56:86:22:BA

---------------------------------------------------------------------------------
Initializing port 0... Ethdev port_id=0 tx_queue_id=0, new added offloads 0x8011 must be within pre-queue offload capabilities 0x0 in rte_eth_tx_queue_setup()

Ethdev port_id=0 tx_queue_id=1, new added offloads 0x8011 must be within pre-queue offload capabilities 0x0 in rte_eth_tx_queue_setup()

done: 
rte_eth_dev_flow_ctrl_get: Function not supported
[dpdk_load_module: 765] Failed to get flow control info!
rte_eth_dev_flow_ctrl_set: Function not supported
[dpdk_load_module: 772] Failed to set flow control info!: errno: -95

Checking link statusdone
Port 0 Link Up - speed 10000 Mbps - full-duplex
Configuration updated by mtcp_setconf().
[  MPCreate: 173] Can't allocate memory for mempool!

If I give only -N 1, it runs fine:


#  ./apps/example/epwget 10.1.72.68 160000000 -f /etc/mtcp/config/epwget.conf -N 1 -c 160
Configuration updated by mtcp_setconf().
Application configuration:
URL: /
# of total_flows: 160000000
# of cores: 1
Concurrency: 160
---------------------------------------------------------------------------------
Loading mtcp configuration from : /etc/mtcp/config/epwget.conf
Loading interface setting
EAL: Detected 16 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Auto-detected process type: PRIMARY
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Probing VFIO support...
EAL: PCI device 0000:0b:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 15ad:7b0 net_vmxnet3
Total number of attached devices: 1
Interface name: dpdk0
EAL: Auto-detected process type: PRIMARY
Configurations:
Number of CPU cores available: 1
Number of CPU cores to use: 1
Number of TX ring descriptor: 512
Number of RX ring descriptor: 128
Number of source ip to use: 8
Maximum number of concurrency per core: 1000000
Maximum number of preallocated buffers per core: 1000000
Receive buffer size: 1024
Send buffer size: 1024
TCP timeout seconds: 30
TCP timewait seconds: 0
NICs to print statistics: dpdk0
---------------------------------------------------------------------------------
Interfaces:
name: dpdk0, ifindex: 0, hwaddr: 00:50:56:86:10:76, ipaddr: 10.1.72.28, netmask: 255.255.0.0
Number of NIC queues: 1
---------------------------------------------------------------------------------
Loading routing configurations from : config/route.conf
Routes:
Destination: 10.1.0.0/16, Mask: 255.255.0.0, Masked: 10.1.0.0, Route: ifdx-0

---------------------------------------------------------------------------------
Loading ARP table from : config/arp.conf
ARP Table:
IP addr: 10.1.72.68, dst_hwaddr: 00:50:56:86:22:BA

---------------------------------------------------------------------------------
Initializing port 0... Ethdev port_id=0 tx_queue_id=0, new added offloads 0x8011 must be within pre-queue offload capabilities 0x0 in rte_eth_tx_queue_setup()

done: 
rte_eth_dev_flow_ctrl_get: Function not supported
[dpdk_load_module: 765] Failed to get flow control info!
rte_eth_dev_flow_ctrl_set: Function not supported
[dpdk_load_module: 772] Failed to set flow control info!: errno: -95

Checking link statusdone
Port 0 Link Up - speed 10000 Mbps - full-duplex
Configuration updated by mtcp_setconf().
CPU 0: initialization finished.
[mtcp_create_context:1359] CPU 0 is now the master thread.
[CPU 0] dpdk0 flows:      0, RX:      38(pps) (err:     0),  0.00(Gbps), TX:       0(pps),  0.00(Gbps)
[ ALL ] dpdk0 flows:      0, RX:      38(pps) (err:     0),  0.00(Gbps), TX:       0(pps),  0.00(Gbps)
[CPU 0] dpdk0 flows:      0, RX:      17(pps) (err:     0),  0.00(Gbps), TX:       0(pps),  0.00(Gbps)
[ ALL ] dpdk0 flows:      0, RX:      17(pps) (err:     0),  0.00(Gbps), TX:       0(pps),  0.00(Gbps)
Thread 0 handles 160000000 flows. connecting to 10.1.72.68:80
rte_eth_stats_reset: Function not supported
Response size set to 86
[ ALL ] connect:   13932, read:    1 MB, write:    1 MB, completes:   13772 (resp_time avg: 3552, max: 211051 us)
[CPU 0] dpdk0 flows:    231, RX:   72344(pps) (err:     0),  0.07(Gbps), TX:   91437(pps),  0.08(Gbps)
[ ALL ] dpdk0 flows:    231, RX:   72344(pps) (err:     0),  0.07(Gbps), TX:   91437(pps),  0.08(Gbps)
rte_eth_stats_reset: Function not supported
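
A rough sizing note (my own back-of-the-envelope arithmetic from the config printed above, not numbers taken from the mTCP source): mTCP preallocates its memory pools per core, so going from 1 core to 2 roughly doubles the hugepage-backed memory requested up front:

# per core: 1,000,000 preallocated buffers x (1 KB rcvbuf + 1 KB sndbuf) ~= 2 GB
# plus control structures for up to 1,000,000 concurrent flows per core
# 1 core  -> roughly 2+ GB of mempool memory
# 2 cores -> roughly 4+ GB, which can exceed the hugepages reserved in the VM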

ajamshed commented 4 years ago

@vincentmli,

It looks like you are running out of huge memory pages. Can you please assign more huge pages to the application?
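
For reference, one common way to reserve hugepages at runtime (sizes are illustrative; pick them to fit the VM's RAM and the per-core estimate above):

# reserve 2 MB hugepages; 4096 x 2 MB = 8 GB
echo 4096 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

# on a multi-node VM like this one, pages can also be reserved per NUMA node
echo 2048 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo 2048 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages

# make sure hugetlbfs is mounted so DPDK can map the pages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge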

vincentmli commented 4 years ago

Thanks Asim, you are correct: assigning more huge pages resolved the issue.
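
For anyone hitting the same error: you can verify the reservation took effect before rerunning epwget, e.g.

grep -i huge /proc/meminfo

HugePages_Total should reflect the new count, and HugePages_Free should be large enough to cover the per-core pools estimated above.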