hpcn-uam / DPDK2disk

DPDK packet capture into PCAP files. Tested up to 40 Gbps.

[EXTERNAL] Reservando recursos...Malloc - ERROR! Please help #1

Closed · missyoyo closed this issue 7 months ago

missyoyo commented 5 years ago

I have tested dpdk2disk with the default DPDK version included with it, but I get an error message like the one below. Please help with this issue.

```
root@knot-onesys:/home/dhb/DPDK2disk# ./scripts/capture0.sh /home/dhb/
EAL: Detected 8 lcore(s)
EAL: Probing VFIO support...
EAL: PCI device 0000:02:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:01.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:02.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:03.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:03:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:10fb net_ixgbe
[EXTERNAL] Se define un WSlave en el Core 4
[EXTERNAL] Carpeta establecida a /home/dhb/.
[EXTERNAL] Tamaño maximo de fichero establecido a 4294967296 bytes
Creating the mbuf pool for socket 0 ...
Creating ring to connect I/O lcore 1 (socket 0) with worker lcore 3 ...
Creating ring to connect worker lcore 3 with TX port 0 (through I/O lcore 2) (socket 0) ...
Initializing NIC port 0 ...
Initializing NIC port 0 RX queue 0 ...
Initializing NIC port 0 TX queue 0 ...
PMD: ixgbe_dev_link_status_print(): Port 0: Link Down

Checking link status.................................done
Port 0 Link Up - speed 1000 Mbps - full-duplex
[EXTERNAL] Reservando recursos...Malloc - ERROR!
/home/dhb/DPDK2disk
root@knot-onesys:/home/dhb/DPDK2disk#
```

ralequi commented 5 years ago

Hi, first of all, sorry for that Spanish error message: "Reservando recursos...Malloc - ERROR!" just means "Reserving resources... Malloc - ERROR!" (the other [EXTERNAL] lines report that a WSlave is defined on core 4, the output folder, and the maximum file size). The problem is that this app uses LOTS of RAM; see https://github.com/hpcn-uam/DPDK2disk/blob/master/src/external.c at line 162.

It is trying to allocate sizeof(BUFFERKIND)*diskBufferSize, which means 1*1024*1024*1024 bytes (1 GiB), 4 times over (4 GiB in total; sometimes, due to DPDK overheads, it can grow up to 8 GiB). A rough sketch of what that allocation amounts to is below.
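In case it helps, this is a minimal sketch of that allocation pattern. BUFFERKIND and diskBufferSize are the names used in external.c, but the byte-sized typedef, the buffer count as a constant, and the error handling here are my illustration, not the exact code:

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Illustrative stand-ins for the definitions in src/external.c */
typedef uint8_t BUFFERKIND;                      /* assumed 1-byte element */
static const size_t diskBufferSize = 1UL << 30;  /* 1*1024*1024*1024 = 1 GiB */
#define NUM_BUFFERS 4                            /* 4 GiB in total */

int main(void) {
    BUFFERKIND *bufs[NUM_BUFFERS];
    for (int i = 0; i < NUM_BUFFERS; i++) {
        /* Each call asks the OS for another 1 GiB of ordinary memory;
           this is where "Reservando recursos...Malloc - ERROR!" fires
           when the allocation fails. */
        bufs[i] = malloc(sizeof(BUFFERKIND) * diskBufferSize);
        if (bufs[i] == NULL) {
            fprintf(stderr, "Malloc - ERROR! (buffer %d)\n", i);
            return 1;
        }
    }
    puts("All 4 GiB reserved");
    return 0;
}
```

So if your machine (or VM) cannot hand the process roughly 4 GiB on top of the hugepages reserved for DPDK itself, this is most likely the failure you are seeing.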

Show me your hugepage setup. Also, I do not recommend using /home as the output directory; you probably want some kind of RAID 0 or NVMe array, especially if you want to capture >5 Gbit/s or so.
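(As a rough worked number: 5 Gbit/s is 5/8 GB/s, i.e. about 625 MB/s of sustained sequential writes, which is already more than a single spinning disk, and many single SATA SSDs, will hold for long.)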

missyoyo commented 5 years ago

Hi ralequi, that is very kind of you to help me. I am new to testing DPDK; my test lab is VMware ESXi 6.5 running Ubuntu 16.04 with vmxnet3 and two Intel X520s (PCI passthrough). I have another test server with SSDs installed for faster capture, but it is not ready yet, so this is my first test. I always get DPDK errors in the VM, and I may not be able to reproduce this issue. Here are the steps I tried:

1. Check the DPDK version (it seems to be the same as the one in the DPDK2disk git):

```
root@knot-onesys:/home/dhb/DPDK2disk/dpdk# pwd
/home/dhb/DPDK2disk/dpdk
root@knot-onesys:/home/dhb/DPDK2disk/dpdk# git branch
```

Then I reserve hugepages:

```
Removing currently reserved hugepages
Unmounting /mnt/huge and removing directory

Input the number of 2048kB hugepages for each node
Example: to have 128MB of hugepages available per node in a 2MB huge page system,
enter '64' to reserve 64 * 2MB pages on each node
Number of pages for node0: 2048
Reserving hugepages
Creating /mnt/huge and mounting as hugetlbfs

Press enter to continue ...
```

4. Check hugepages:

```
cat /proc/meminfo | grep Huge
AnonHugePages:    8192 kB
HugePages_Total:    2048
HugePages_Free:     2048
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
```
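(If my arithmetic is right, that is 2048 pages × 2048 kB = 4 GiB of hugepages reserved on node0, and all of them are still free.)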

5. Check the interfaces:

```
dpdk_nic_bind --status

Network devices using DPDK-compatible driver
0000:03:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe
0000:0b:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe
```

6. Try a test capture:

```
./scripts/capture0.sh /home/dhb/
```

I get this error message:

```
EAL: Detected 8 lcore(s)
EAL: lcore 8 unavailable
EAL: invalid coremask
```

Here is my CPU layout:

```
./dpdk/usertools/cpu_layout.py
Core and Socket Information (as reported by '/sys/devices/system/cpu')

cores = [0, 1, 2, 3, 4, 5, 6, 7]
sockets = [0]

       Socket 0
       --------
Core 0 [0]
Core 1 [1]
Core 2 [2]
Core 3 [3]
Core 4 [4]
Core 5 [5]
Core 6 [6]
Core 7 [7]
```
I have "vi ./scripts/capture0.sh" change "sudo build/app/hpcn_n2d -c FFF" to "sudo build/app/hpcn_n2d -c 0xff" (I am not sure if this is right?)

7. Try ./scripts/capture0.sh /home/dhb/ again. I get this error message:

```
EAL: Detected 8 lcore(s)
EAL: Probing VFIO support...
EAL: PCI device 0000:02:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:03:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:0b:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:13:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 15ad:7b0 net_vmxnet3
EAL: PCI device 0000:1b:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 15ad:7b0 net_vmxnet3
[EXTERNAL] Se define un WSlave en el Core 4
[EXTERNAL] Carpeta establecida a /home/dhb/.
[EXTERNAL] Tamaño maximo de fichero establecido a 4294967296 bytes
Creating the mbuf pool for socket 0 ...
Creating ring to connect I/O lcore 1 (socket 0) with worker lcore 3 ...
Creating ring to connect worker lcore 3 with TX port 0 (through I/O lcore 2) (socket 0) ...
Initializing NIC port 0 ...
Initializing NIC port 0 RX queue 0 ...
Initializing NIC port 0 TX queue 0 ...
PMD: ixgbe_dev_link_status_print(): Port 0: Link Down

Checking link status..........................done
Port 0 Link Up - speed 1000 Mbps - full-duplex
[EXTERNAL] Reservando recursos...Malloc - ERROR!
/home/dhb/DPDK2disk
```

Thank you for your help.

ralequi commented 5 years ago

Hi, sorry for taking so long to respond. I completely forgot about it :-( (too much work to remember everything, as you probably know...).

And to conclude... if you are going to use this in academic work, I'm open to working with you in depth, on the simple condition of being a co-author ;-)