P4-vSwitch / vagrant

PISCES Simulation Environment

Pktgen error #6

Closed imehrdad2012 closed 7 years ago

imehrdad2012 commented 7 years ago

Hello,

When I run "sudo ./app/app/x86_64-native-linuxapp-gcc/app/pktgen -c 1 -n 4 -- -P -m "1.0" -f /vagrant/examples/l2_switch/generator.pkt", I get the following error:

    EAL: Detected lcore 0 as core 0 on socket 0
    EAL: Support maximum 128 logical core(s) by configuration.
    EAL: Detected 1 lcore(s)
    EAL: No free hugepages reported in hugepages-2048kB
    PANIC in rte_eal_init():
    Cannot get hugepage information
    6: [./app/app/x86_64-native-linuxapp-gcc/app/pktgen() [0x4325a3]]
    5: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7f91e70eef45]]
    4: [./app/app/x86_64-native-linuxapp-gcc/app/pktgen(main+0x151) [0x430d31]]
    3: [./app/app/x86_64-native-linuxapp-gcc/app/pktgen(rte_eal_init+0xbd4) [0x4ecbd4]]
    2: [./app/app/x86_64-native-linuxapp-gcc/app/pktgen(__rte_panic+0xc1) [0x42c18f]]
    1: [./app/app/x86_64-native-linuxapp-gcc/app/pktgen(rte_dump_stack+0x18) [0x4f3428]]

I checked fstab (vim /etc/fstab) and the hugepage counters:

    vagrant@vagrant:~$ sudo grep -i huge /proc/meminfo
    AnonHugePages:         0 kB
    HugePages_Total:       0
    HugePages_Free:        0
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:       2048 kB

I know this is not directly related to the P4 soft switch, but any idea on how I should fix it is appreciated.
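For anyone hitting the same panic: rte_eal_init() fails here because no hugepages are reserved (HugePages_Total is 0 above). A rough sketch of the manual setup on an Ubuntu guest, using the 2 MB page size shown in meminfo; the page count and mount point are illustrative assumptions, not necessarily what the Vagrant provisioning uses:

    # reserve 256 x 2 MB hugepages (512 MB); adjust to the VM's memory
    echo 256 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

    # create and mount a hugetlbfs filesystem for DPDK to use
    sudo mkdir -p /mnt/huge
    sudo mount -t hugetlbfs nodev /mnt/huge

    # optionally make the mount persistent across reboots
    echo "nodev /mnt/huge hugetlbfs defaults 0 0" | sudo tee -a /etc/fstab

    # verify the pages are now visible
    grep -i huge /proc/meminfo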

imehrdad2012 commented 7 years ago

Thanks for the info. For some reason, 'vagrant up' did not install pktgen properly. I was able to fix that issue.

However, my current problem is that no packets are sent out from the generator VM interface after I run "start 0". The following is my pktgen output. I wonder if you see anything weird? Thanks a lot for helping me get your code running.



Lua 5.3.0 Copyright (C) 1994-2015 Lua.org, PUC-Rio

Packet Burst 32, RX Desc 512, TX Desc 512, mbufs/port 4096, mbuf cache 512

    === port to lcore mapping table (# lcores 1) ===
       lcore:   0
    port   0:  D: T =  1: 1
    Total   :          0: 0
      Display and Timer on lcore 0, rx:tx counts per port/lcore

    Configuring 1 ports, MBUF Size 1920, MBUF Cache Size 512
    Lcore  : 1, RX-TX
        RX( 1): ( 0: 0)
        TX( 1): ( 0: 0)

Port : 0, nb_lcores 1, private 0x89a210, lcores: 1

    Dev Info (rte_em_pmd:0)
       max_vfs           :   0   min_rx_bufsize  :   256   max_rx_pktlen  : 16128
       max_rx_queues     :   1   max_tx_queues   :     1   max_mac_addrs  :    15
       max_hash_mac_addrs:   0   max_vmdq_pools  :     0
       rx_offload_capa   :   0   tx_offload_capa :     0   reta_size      :     0
       flow_type_rss_offloads: 0000000000000000
       vmdq_queue_base   :   0   vmdq_queue_num  :     0   vmdq_pool_base :     0
    RX Conf
       pthreash          :   0   hthresh         :     0   wthresh        :     0
       Free Thresh       :   0   Drop Enable     :     0   Deferred Start :     0
    TX Conf
       pthreash          :   0   hthresh         :     0   wthresh        :     0
       Free Thresh       :   0   RS Thresh       :     0   Deferred Start :     0
       TXQ Flags         : 00000000

    Initialize Port 0 -- TxQ 1, RxQ 1,  Src MAC 08:00:27:c2:4c:cf
    Create: Default RX  0:0  - Memory used (MBUFs 4096 x (size 1920 + Hdr 128)) + 1581248 =   9737 KB
    PMD: eth_em_rx_queue_setup(): sw_ring=0x7fb7e4921780 hw_ring=0x7fb7e4922880 dma_addr=0x4ab22880

Create: Default TX  0:0  - Memory used (MBUFs 4096 x (size 1920 + Hdr 128)) + 1581248 =   9737 KB
Create: Range TX    0:0  - Memory used (MBUFs 4096 x (size 1920 + Hdr 128)) + 1581248 =   9737 KB
Create: Sequence TX 0:0  - Memory used (MBUFs 4096 x (size 1920 + Hdr 128)) + 1581248 =   9737 KB
Create: Special TX  0:0  - Memory used (MBUFs   64 x (size 1920 + Hdr 128)) + 1581248 =   1673 KB

PMD: eth_em_tx_queue_setup(): sw_ring=0x7fb7e4048bc0 hw_ring=0x7fb7e404acc0 dma_addr=0x4c84acc0

                                                                   Port memory used =  40617 KB
                                                                  Total memory used =  40617 KB

    PMD: eth_em_rx_init(): forcing scatter mode
    PMD: eth_em_start(): <<
    Port  0: Link Up - speed 1000 Mbps - full-duplex

=== Display processing on lcore 0

Ports 0-0 of 1
    Copyright (c) <2010-2015>, Wind River Systems
    Flags:Port        :   P-------------:0
    Link State        :                      ---TotalRate---
    Pkts/s  Rx        :                 0                  0
            Tx        :                 0                  0
    MBits/s Rx/Tx     :               0/0                0/0
    Broadcast         :                 0
    Multicast         :                 0
      64 Bytes        :                 0
      65-127          :                 0
      128-255         :                 0
      256-511         :                 0
      512-1023        :                 0
      1024-1518       :                 0
    Runts/Jumbos      :               0/0
    Errors Rx/Tx      :               0/0
    Total Rx Pkts     :               348
          Tx Pkts     :                 0
          Rx MBs      :                 0
          Tx MBs      :                 0
    ARP/ICMP Pkts     :               0/0

    Tx Count/% Rate   :      Forever / 100%
    PktSize/Tx Burst  :           64 /  32
    Src/Dest Port     :         1234 / 5678
    Pkt Type:VLAN ID  :     IPv4 / TCP:0001
    Dst IP Address    :         192.168.1.1
    Src IP Address    :      192.168.0.1/24
    Dst MAC Address   :   08:00:27:7e:0b:95
    Src MAC Address   :   08:00:27:c2:4c:cf
    -- Pktgen Ver:2.9.5(DPDK-2.1.0)  Powered by Intel® DPDK -----------------------

    Pktgen> set mac 0 08:00:27:7e:0b:95
    Pktgen> start 0
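One thing worth checking for the "no TX" symptom is how many CPUs the generator VM actually has: the output above puts display and timer processing on lcore 0 and maps port 0 RX/TX to lcore 1 (per -m "1.0"), so a run started with coremask -c 1 on a single-vCPU guest leaves no lcore to drive the port. A rough check and relaunch, assuming the VM exposes at least two vCPUs (coremask 0x3 = lcores 0 and 1):

    # how many CPUs does the guest actually see?
    nproc

    # relaunch with two lcores: lcore 0 for display/timers, lcore 1 for port 0 RX/TX
    sudo ./app/app/x86_64-native-linuxapp-gcc/app/pktgen -c 0x3 -n 4 -- \
         -P -m "1.0" -f /vagrant/examples/l2_switch/generator.pkt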

imehrdad2012 commented 7 years ago

Thanks for the response. I was able to address the issue. For other people who might face the same problem in the future, here is the solution:

(1) VirtualBox 4.3 could not allocate a sufficient number of CPUs to the VMs on my server; it always assigned only one CPU to the generator, switch, and receiver VMs. This is a problem because DPDK requires at least 2 CPUs to perform a meaningful send/receive operation. I installed VirtualBox 5.1 and the latest kernel, which was able to assign the desired number of CPUs to the VMs.

(2) This latest version of VirtualBox is not compatible with the default Vagrant package, so I also installed Vagrant from source.

I would like to mention that this issue was not specific to one server: I tried four other servers and they all had the same problem, even though hyperthreading and VT-x were enabled on them.

One question: is there any way (or script) that I can use to measure the latency of your software P4 switch? I would like to exclude the buffering and DPDK overheads at the generator and receiver.
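For reference, a quick way to confirm from the host that VirtualBox actually assigned more than one CPU to each VM; the VM and machine names below are placeholders for whatever names 'vagrant up' created, not names from this repo:

    # on the host: list the VMs and check the CPU count of one of them
    VBoxManage list vms
    VBoxManage showvminfo "<generator-vm-name>" | grep -i "Number of CPUs"

    # inside the guest, the count should match (machine name "generator" is an assumption)
    vagrant ssh generator -c "nproc"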