Closed: saimoon closed this issue 8 years ago.
How are you setting up packet-bricks? Are you passing multiple interface names as command-line arguments to bricks-load-balance?
I tested bricks both ways. From the command-line script:

```
bricks-load-balance enp4s0f0 2
```
or by driving bricks manually:

```lua
pe = PktEngine.new("e0")
lb = Brick.new("LoadBalancer", 2)
lb:connect_input("enp4s0f0")
-- netmap pipe naming: "name{N" is the master end of pipe N;
-- consumers such as pkt-gen attach to the slave end "name}N"
lb:connect_output("enp4s0f0{0", "enp4s0f0{1")
pe:link(lb)
pe:start()
```
I test one interface at a time (I never run bricks on both interfaces at once). Using eno1 it works; using enp4s0f0 it doesn't.
Hmm. For the enp4s0f0 case, can you please email me (i) the console output of packet-bricks and (ii) the dmesg log?
**DMESG output**
```
[ 243.401206] 803.788211 [1192] generic_netmap_attach Created generic NA ffff880233e70800 (prev (null))
[ 243.478143] 803.865156 [ 395] generic_netmap_register Generic adapter ffff880233e70800 goes on
[ 243.480109] 803.867124 [ 442] generic_netmap_register RX ring 0 of generic adapter ffff880233e70800 goes on
[ 243.482363] 803.869378 [ 442] generic_netmap_register RX ring 1 of generic adapter ffff880233e70800 goes on
[ 243.484791] 803.871807 [ 442] generic_netmap_register RX ring 2 of generic adapter ffff880233e70800 goes on
[ 243.487012] 803.874028 [ 442] generic_netmap_register RX ring 3 of generic adapter ffff880233e70800 goes on
[ 243.489224] 803.876240 [ 442] generic_netmap_register RX ring 4 of generic adapter ffff880233e70800 goes on
[ 243.491446] 803.878462 [ 449] generic_netmap_register TX ring 0 of generic adapter ffff880233e70800 goes on
[ 243.493813] 803.880830 [ 449] generic_netmap_register TX ring 1 of generic adapter ffff880233e70800 goes on
[ 243.496046] 803.883063 [ 449] generic_netmap_register TX ring 2 of generic adapter ffff880233e70800 goes on
[ 243.498266] 803.885284 [ 449] generic_netmap_register TX ring 3 of generic adapter ffff880233e70800 goes on
[ 243.500498] 803.887515 [ 464] generic_qdisc_init Qdisc #0 initialized with max_len = 255
[ 243.502484] 803.889502 [ 464] generic_qdisc_init Qdisc #1 initialized with max_len = 255
[ 243.504628] 803.891646 [ 464] generic_qdisc_init Qdisc #2 initialized with max_len = 255
[ 243.506579] 803.893597 [ 464] generic_qdisc_init Qdisc #3 initialized with max_len = 255
```
**BRICKS output**
```
root@server# ./bricks
[initBricks(): 171] <<< [initBricks(): 191]
[ pmain(): line 466] Executing (null)
[print_version(): line 348] BRICKS Version 0.5-beta
bricks> pe = PktEngine.new("e0")
bricks> lb = Brick.new("LoadBalancer", 2)
bricks> lb:connect_input("enp4s0f0")
bricks> lb:connect_output("enp4s0f0{0", "enp4s0f0{1")
bricks> pe:link(lb)
[createBrick(): 47] <<< [createBrick(): 58]
[ lb_init(): line 66] Adding brick enp4s0f0{0 to the engine
[brick_link(): 68]
[ promisc(): line 96] Interface enp4s0f0 is already set to promiscuous mode
334.483482 nm_open [444] overriding ARG3 0
334.483496 nm_open [457] overriding ifname enp4s0f0 ringid 0x0 flags 0x1
[netmap_link_iface(): line 183] Wait for 2 secs for phy reset
[brick_link(): line 113] Linking e0 with link enp4s0f0 with batch size: 512 and qid: -1
[netmap_create_channel(): line 746] brick: 0x932550, local_desc: 0x932ce0
336.483671 nm_open [444] overriding ARG3 0
[strcpy_with_reverse_pipe(): 146] <<< [strcpy_with_reverse_pipe(): 162]
[netmap_create_channel(): line 781] zerocopy for enp4s0f0 --> enp4s0f0{0 (index: 0) enabled
[netmap_create_channel(): line 786] Created netmap:enp4s0f0 interface
[netmap_create_channel(): line 746] brick: 0x932550, local_desc: 0x932ce0
336.483706 nm_open [444] overriding ARG3 0
[strcpy_with_reverse_pipe(): 146] <<< [strcpy_with_reverse_pipe(): 162]
[netmap_create_channel(): line 781] zerocopy for enp4s0f0 --> enp4s0f0{1 (index: 1) enabled
[netmap_create_channel(): line 786] Created netmap:enp4s0f0 interface
<<< [brick_link(): 140]
bricks> pe:start()
bricks> BRICKS.show_stats()
                 ENGINE STATISTICS
Engine      Packet Cnt      Byte Cnt        Packet Drop
e0          167181          128536285       0
Total       167181          128536285
```
[After a few seconds]
```
bricks> BRICKS.show_stats()
                 ENGINE STATISTICS
Engine      Packet Cnt      Byte Cnt        Packet Drop
e0          206432          159975756       0
Total       206432          159975756
```
As you can see, packets are arriving...
Very strange. Can you please try renaming the interface so that it has a shorter name? I see that the current interface name (netmap:enp4s0f0{1) exceeds IFNAMSIZ (16).
For Linux: try ifrename
For FreeBSD: try ifconfig
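For example (a minimal sketch; the old and new names here are illustrative, and on Linux the interface generally has to be down while it is renamed):

```sh
# Linux: rename with ifrename (from the wireless-tools package)
ip link set enp4s0f0 down
ifrename -i enp4s0f0 -n eth4
ip link set eth4 up

# FreeBSD: rename with ifconfig
ifconfig bce0 name net0
```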
It works. Thanks for your suggestion.
As a note: `ip link set enp4s0f0 name eno3` also does the job.
netmap:enp4s0f0 is 15 characters, but netmap:enp4s0f0}0 is 17 characters, so it falls outside IFNAMSIZ. This can be a real limitation.
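The arithmetic is easy to check from a shell (a small sketch; `${#name}` gives the string length, and IFNAMSIZ (16) includes the terminating NUL, leaving 15 usable characters):

```sh
for name in "netmap:enp4s0f0" "netmap:enp4s0f0}0" "netmap:eno3}0"; do
    echo "$name is ${#name} characters"   # prints 15, 17 and 13
done
```

So the renamed interface leaves room for the }0/}1 pipe suffixes.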
I scanned through the packet-bricks code and it looks like the restriction is coming from the netmap kernel module. I will send Luigi an email about this.
Hello, I have the following network cards on my server:
```
02:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
02:00.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
04:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
04:00.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
```
They are called, respectively, enp4s0f0, enp4s0f1, eno1, and eno2.
If I start bricks-load-balance on eno2, it works well and I can read all packets using netmap's pkt-gen on eno2}0 and so on, e.g. with an invocation like the one below.
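Presumably something like this, by analogy with the failing run further down (the exact working command was not shown in the thread):

```sh
pkt-gen -i netmap:eno2}0 -f rx
```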
Instead, when I start it on one of the other two interfaces (enp4s0f0/1), I get the following result when checking with netmap's pkt-gen:
```
root@server:~# pkt-gen -i netmap:enp4s0f0}0 -f rx
309.504082 main [2234] interface is netmap:enp4s0f0}0
309.504127 main [2354] running on 1 cpus (have 4)
309.504293 extract_ip_range [364] range is 10.0.0.1:0 to 10.0.0.1:0
309.504301 extract_ip_range [364] range is 10.1.0.1:0 to 10.1.0.1:0
309.504373 main [2455] mapped 334980KB at 0x7f869c15c000
Receiving from netmap:enp4s0f0}0: 1 queues, 1 threads and 1 cpus.
309.504396 main [2554] Wait 2 secs for phy reset
311.504508 main [2556] Ready...
311.504587 receiver_body [1376] reading from netmap:enp4s0f0}0 fd 3 main_fd 3
312.504593 main_thread [2019] 0.000 pps (0.000 pkts 0.000 bps in 1000028 usec) 0.00 avg_batch 0 min_space
312.505644 receiver_body [1383] waiting for initial packets, poll returns 0 0
313.505668 main_thread [2019] 0.000 pps (0.000 pkts 0.000 bps in 1001075 usec) 0.00 avg_batch 99999 min_space
313.506697 receiver_body [1383] waiting for initial packets, poll returns 0 0
314.506743 main_thread [2019] 0.000 pps (0.000 pkts 0.000 bps in 1001075 usec) 0.00 avg_batch 99999 min_space
314.507749 receiver_body [1383] waiting for initial packets, poll returns 0 0
^C314.690944 sigint_h [401] received control-C on thread 0x7f86b1281700
314.690960 main_thread [2019] 0.000 pps (0.000 pkts 0.000 bps in 184217 usec) 0.00 avg_batch 99999 min_space
315.508825 receiver_body [1383] waiting for initial packets, poll returns 0 0
315.692035 main_thread [2019] 0.000 pps (0.000 pkts 0.000 bps in 1001075 usec) 0.00 avg_batch 0 min_space
Received nothing.
```
No packets are received. If I run pkt-gen on netmap:enp4s0f0 directly, without bricks, it receives packets fine.
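That direct run, attaching pkt-gen to the NIC itself rather than to a load-balancer pipe, would look like this (reconstructed for illustration):

```sh
pkt-gen -i netmap:enp4s0f0 -f rx
```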
It's pretty strange. The interfaces use the same driver and the same firmware (they are practically identical). Do you have any advice or ideas that could help me with this issue?

Thanks for the support,
Simone