luigirizzo / netmap

Automatically exported from code.google.com/p/netmap
BSD 2-Clause "Simplified" License

netmap_bwrap_intr_notify how strange, interrupt with no packets on #580

Closed xiaojin2630 closed 5 years ago

xiaojin2630 commented 5 years ago
# uname -sa
FreeBSD default 11.0-CURRENT FreeBSD 11.0-CURRENT #0 r261907:261909M: Sun Feb 16 18:13:15 PST 2014     luigi@luigi-bsd9:/home/luigi/FreeBSD/pico9/build_dir-qemu-amd64/PICOBSD-qemu  amd64

# ./vale-ctl -a vale0:igb0
# ./vale-ctl -a vale0:igb1
# ./vale-ctl -a vale0:igb2
# ./vale-ctl -a vale0:igb3

# sysctl dev.netmap.fwd
dev.netmap.fwd: 1
# sysctl dev.netmap.verbose
dev.netmap.verbose: 1

After sending packets:

871.715493 [1796] netmap_bwrap_intr_notify  how strange, interrupt with no packets on igb0
871.716021 [2505] netmap_common_irq         received TX queue 0
871.716305 [1737] netmap_bwrap_intr_notify  igb0 TX0 0x0
871.767149 [2505] netmap_common_irq         received RX queue 0
871.767433 [1737] netmap_bwrap_intr_notify  igb2 RX0 0x0
871.767709 [1777] netmap_bwrap_intr_notify  igb2 head 128 cur 128 tail 128 (kring 128 128 128)
871.768242 [1796] netmap_bwrap_intr_notify  how strange, interrupt with no packets on igb2
871.768770 [1737] netmap_bwrap_intr_notify  igb2 TX0 0x0
871.942146 [1737] netmap_bwrap_intr_notify  igb1 RX0 0x0
871.942425 [1777] netmap_bwrap_intr_notify  igb1 head 143 cur 143 tail 143 (kring 143 143 143)
871.942958 [1796] netmap_bwrap_intr_notify  how strange, interrupt with no packets on igb1
871.943486 [1737] netmap_bwrap_intr_notify  igb1 TX0 0x0
xiaojin2630 commented 5 years ago

Is there a problem with my configuration?

vmaffione commented 5 years ago

It looks like you are testing code that is almost 5 years old. Maybe you want to use the latest FreeBSD 11/stable or 12/stable? Also, why are you setting dev.netmap.fwd to 1?

xiaojin2630 commented 5 years ago

Thank you for your reply. You're right, it's older code.

I'm using the PICOBSD image from the netmap author's site. Originally I compiled netmap.ko, igb.ko, and e1000e.ko under Linux, but after creating a VALE switch, forwarding packets was problematic and always felt unstable.

I also tried Linux kernel 4.10.10 and hit the same problem, and on a few kernel versions I could not use igb and e1000e at all, so I turned to PICOBSD for testing. I've been trying to build a high-performance switch for a few months now, and it is still under test.

I tried the Linux bridge, but its performance was so poor that I wanted to use netmap's VALE instead. I will test FreeBSD 11/stable or 12/stable as soon as possible.

If I run into more problems, I will have to ask you again. Forwarding traffic was unsuccessful, and I thought I needed to turn on a forwarding switch, similar to enabling forwarding with sysctl -w on Linux. That's why I set dev.netmap.fwd=1; I don't know exactly what fwd means.

vmaffione commented 5 years ago

The netmap fwd sysctl (or module parameter on Linux) is not related to VALE forwarding. It's related to netmap transparent mode, which is something completely different, so please reset that sysctl to 0.
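
For example, a minimal sketch (the FreeBSD sysctl name appears earlier in this thread, and the Linux module parameter path appears in the output posted below):

On FreeBSD: sysctl dev.netmap.fwd=0
On Linux (netmap loaded as a module): echo 0 > /sys/module/netmap/parameters/fwd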

If you find problems with attaching VALE to igb or e1000 interfaces (on Linux, using the latest code), please report them and maybe we can help. What do you mean by "it feels unstable"?

xiaojin2630 commented 5 years ago

Sorry, I tried FreeBSD 12.0-RELEASE today, but my network card is an Intel 82583V and 12.0 does not seem to support it, so I went back to Linux. Below I describe the usage on Linux in detail.

xiaojin2630 commented 5 years ago
netmap.git # git pull

LINUX # git describe --always
dd803f78

LINUX # ./configure --kernel-dir=/home/xiaoj/tmp/linux-4.10.10 --drivers=igb,e1000e
...
Kernel directory /home/xiaoj/tmp/linux-4.10.10
Kernel sources /home/xiaoj/tmp/linux-4.10.10
Linux version 40a0a [4.10.10]
Module file netmap.ko

Subsystems null ptnetmap generic monitor pipe vale
Apps vale-ctl nmreplay tlem lb bridge pkt-gen
Native drivers igb e1000e

Contents of the drivers.mak file:

igb@conf := CONFIG_IGB
igb@src := tar xf /home/xiaoj/tmp/netmap-off.git/LINUX/ext-drivers/igb-5.3.5.20.tar.gz && ln -s igb-5.3.5.20/src igb
igb@patch := patches/intel--igb--5.3.5.20
e1000e@conf := CONFIG_E1000E
e1000e@src := tar xf /home/xiaoj/tmp/netmap-off.git/LINUX/ext-drivers/e1000e-3.4.0.2.tar.gz && ln -s e1000e-3.4.0.2/src e1000e
e1000e@patch := patches/intel--e1000e--3.4.0.2
  ~ # uname -sa
Linux nm 4.10.10 #2 SMP Wed Jan 9 13:15:31 CST 2019 i686 GNU/Linux

~ # dmesg|grep -i netmap
netmap: loading out-of-tree module taints kernel.
966.339703 [4167] netmap_init run mknod /dev/netmap c 10 57 # returned 0
966.339714 [4183] netmap_init netmap: loaded module
net eth0: netmap queues/slots: TX 1/256, RX 1/256
net eth1: netmap queues/slots: TX 1/256, RX 1/256
net eth2: netmap queues/slots: TX 1/256, RX 1/256
net eth3: netmap queues/slots: TX 1/256, RX 1/256
net eth4: netmap queues/slots: TX 1/256, RX 1/256
net eth5: netmap queues/slots: TX 1/256, RX 1/256
net eth6: netmap queues/slots: TX 1/256, RX 1/256
net eth7: netmap queues/slots: TX 1/256, RX 1/256
net eth8: netmap queues/slots: TX 1/256, RX 1/256
net eth9: netmap queues/slots: TX 1/256, RX 1/256
data # cat /sys/module/netmap/parameters/verbose
1
data # cat /sys/module/netmap/parameters/fwd
0
The following commands execute without problems:
data # ./vale-ctl -a vale0:eth6
data # ./vale-ctl -a vale0:eth7
data # ./vale-ctl -a vale0:eth8
data # ./vale-ctl -a vale0:eth9
# dmesg
322.473245 [1483] netmap_finalize_obj_allocator Pre-allocated 25 clusters (4/100KB) for 'netmap_if'
322.474806 [1483] netmap_finalize_obj_allocator Pre-allocated 200 clusters (36/7200KB) for 'netmap_ring'
322.517974 [1483] netmap_finalize_obj_allocator Pre-allocated 81920 clusters (4/327680KB) for 'netmap_buf'
322.517982 [ 428] netmap_init_obj_allocator_bitmap netmap_if free 100
322.517983 [ 428] netmap_init_obj_allocator_bitmap netmap_ring free 200
322.518406 [ 428] netmap_init_obj_allocator_bitmap netmap_buf free 163840
322.518407 [1645] netmap_mem_finalize_all interfaces 100 KB, rings 7200 KB, buffers 320 MB
322.518408 [1648] netmap_mem_finalize_all Free buffers: 163838
322.519733 [ 786] netmap_update_config configuration changed for eth6: txring 1 x 256, rxring 1 x 256, rxbufsz 2048
322.519734 [ 786] netmap_update_config configuration changed for vale0:eth6: txring 1 x 256, rxring 1 x 256, rxbufsz 2048
322.519736 [1871] netmap_interp_ringid vale0:eth6: tx [0,1) rx [0,1) id 0
igb 0000:01:00.0 eth6: igb: eth6 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
322.608464 [1145] netmap_bwrap_intr_notify how strange, interrupt with no packets on eth6
324.192491 [ 786] netmap_update_config configuration changed for eth7: txring 1 x 256, rxring 1 x 256, rxbufsz 2048
324.192495 [ 786] netmap_update_config configuration changed for vale0:eth7: txring 1 x 256, rxring 1 x 256, rxbufsz 2048
324.192496 [1871] netmap_interp_ringid vale0:eth7: tx [0,1) rx [0,1) id 0
igb 0000:01:00.1 eth7: igb: eth7 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
324.285121 [1145] netmap_bwrap_intr_notify how strange, interrupt with no packets on eth7
324.765244 [1145] netmap_bwrap_intr_notify how strange, interrupt with no packets on eth6
325.141326 [ 786] netmap_update_config configuration changed for eth8: txring 1 x 256, rxring 1 x 256, rxbufsz 2048
325.141330 [ 786] netmap_update_config configuration changed for vale0:eth8: txring 1 x 256, rxring 1 x 256, rxbufsz 2048
325.141331 [1871] netmap_interp_ringid vale0:eth8: tx [0,1) rx [0,1) id 0
igb 0000:01:00.2 eth8: igb: eth8 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
325.308455 [1145] netmap_bwrap_intr_notify how strange, interrupt with no packets on eth8
325.991885 [ 786] netmap_update_config configuration changed for eth9: txring 1 x 256, rxring 1 x 256, rxbufsz 2048
325.991888 [ 786] netmap_update_config configuration changed for vale0:eth9: txring 1 x 256, rxring 1 x 256, rxbufsz 2048
325.991890 [1871] netmap_interp_ringid vale0:eth9: tx [0,1) rx [0,1) id 0
igb 0000:01:00.3 eth9: igb: eth9 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
326.081798 [1145] netmap_bwrap_intr_notify how strange, interrupt with no packets on eth9
326.685123 [1145] netmap_bwrap_intr_notify how strange, interrupt with no packets on eth6
326.685251 [1145] netmap_bwrap_intr_notify how strange, interrupt with no packets on
vmaffione commented 5 years ago

Ok, but do the "how strange" messages persist forever, or is it just something that pops up while you are configuring your igb interfaces? And is the traffic flowing across vale0, i.e. is it working?

vmaffione commented 5 years ago

Also, in that configuration your igb ports must be set in promiscuous mode, because they are attached to an L2 switch, and therefore are not endpoints of packet flows.

# ip link set ensX promisc on

And, you should disable the offloads on the ports (see LINUX/README).
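
A typical invocation is sketched below (ethX is a placeholder for each attached port; the same feature switches show up in the ethtool output later in this thread):

# ethtool -K ethX tx off rx off gso off tso off gro off lro off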

xiaojin2630 commented 5 years ago

Thanks. The "how strange" messages are always there.

The four interfaces are connected to four ports of an Ixia traffic generator, which I use to generate load. I set promisc on the 4 ports before configuring vale0, as follows:

# ifconfig eth6
eth6      Link encap:Ethernet  HWaddr 4C:CC:6A:9B:A2:A4
          Link detect: FIBRE-Active, Full-duplex, 1000Mb/s
          Ethernet driver: igb (ver 5.3.5.20)
          inet6 addr: fe80::4ecc:6aff:fe9b:a2a4/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:656 (656.0 B)

With this setup, I believe traffic is flowing across vale0.

Only the offloads you mentioned may not have been disabled yet.

xiaojin2630 commented 5 years ago
# ./pkt-gen  -f rx -i vale0:eth6/t
# 965.816486 main [1964]: interface is vale0:eth6/t
# 965.816596 main [2085]: running on 1 cpus (have 2)
# 965.817039 extract_ip_range [366]: range is 10.0.0.1:0 to 10.0.0.1:0
# 965.817057 extract_ip_range [366]: range is 10.1.0.1:0 to 10.1.0.1:0
# 965.817082 main [2164]: g.ifname = vale0:eth6/t
# 965.818433 main [2187]: mapped 8280KB at 0xb6e51000
# 965.818451 main [2189]: nmreq: slot: tx = 256, rx = 256; ring: tx = 1, rx = 1
Receiving from vale0:eth6/t: 1 queues, 1 threads and 1 cpus.
# 965.818479 main [2277]: Wait 2 secs for phy reset
# 967.818554 main [2279]: Ready...
# 967.818631 receiver_body [1448]: reading from vale0:eth6/t fd 3 main_fd 3
# 968.819631 main_thread [1752]: 15136 pps (15.151 Kpkts 10.181 Mbps in 1001007 usec) 1.02 avg_batch
# 969.820679 main_thread [1752]: 14880 pps (14.896 Kpkts 10.010 Mbps in 1001048 usec) 1.00 avg_batch
^C# 970.679789 sigint_h [403]: received control-C on thread 0xb77126a0
# 970.680052 main_thread [1752]: 14876 pps (12.784 Kpkts 8.591 Mbps in 859373 usec) 1.00 avg_batch
# 971.681808 main_thread [1752]: 1 pps (1.000 pkts 672.000 bps in 1001756 usec) 1.00 avg_batch
Received 42832 packets 2569920 bytes 42478 events 60 bytes each in 2.86 seconds.

Using pkt-gen I can see traffic coming in.

xiaojin2630 commented 5 years ago

I checked LINUX/README. After executing the following commands, dmesg still constantly shows "how strange" messages.

for x in eth6 eth7 eth8 eth9
do
   ethtool -K ${x} tx off rx off gso off tso off gro off lro off
done 
# ethtool -k eth6
Features for eth6:
rx-checksumming: off
tx-checksumming: off
    tx-checksum-ipv4: off
    tx-checksum-ip-generic: off [fixed]
    tx-checksum-ipv6: off
    tx-checksum-fcoe-crc: off [fixed]
    tx-checksum-sctp: off [fixed]
scatter-gather: on
    tx-scatter-gather: on
    tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: off
    tx-tcp-segmentation: off
    tx-tcp-ecn-segmentation: off [fixed]
    tx-tcp-mangleid-segmentation: off
    tx-tcp6-segmentation: off
udp-fragmentation-offload: off [fixed]
generic-segmentation-offload: off
generic-receive-offload: off
large-receive-offload: off
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off [fixed]
receive-hashing: on
highdma: on [fixed]
rx-vlan-filter: on [fixed]
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: off [fixed]
tx-gre-csum-segmentation: off [fixed]
tx-ipxip4-segmentation: off [fixed]
tx-ipxip6-segmentation: off [fixed]
tx-udp_tnl-segmentation: off [fixed]
tx-udp_tnl-csum-segmentation: off [fixed]
tx-gso-partial: off [fixed]
tx-sctp-segmentation: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off [fixed]
rx-fcs: off [fixed]
rx-all: off [fixed]
tx-vlan-stag-hw-insert: off [fixed]
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: off [fixed]
l2-fwd-offload: off [fixed]
busy-poll: off [fixed]
hw-tc-offload: off [fixed]
[ data ]# ethtool -k eth6|grep ': on'
scatter-gather: on
    tx-scatter-gather: on
rx-vlan-offload: on
tx-vlan-offload: on
receive-hashing: on
highdma: on [fixed]
rx-vlan-filter: on [fixed]
[ data ]# ethtool -k eth7|grep ': on'
scatter-gather: on
    tx-scatter-gather: on
rx-vlan-offload: on
tx-vlan-offload: on
receive-hashing: on
highdma: on [fixed]
rx-vlan-filter: on [fixed]
[ data ]# ethtool -k eth8|grep ': on'
scatter-gather: on
    tx-scatter-gather: on
rx-vlan-offload: on
tx-vlan-offload: on
receive-hashing: on
highdma: on [fixed]
rx-vlan-filter: on [fixed]
[ data ]# ethtool -k eth9|grep ': on'
scatter-gather: on
    tx-scatter-gather: on
rx-vlan-offload: on
tx-vlan-offload: on
receive-hashing: on
highdma: on [fixed]
rx-vlan-filter: on [fixed]
xiaojin2630 commented 5 years ago
for x in eth6 eth7 eth8 eth9
do
ethtool -K ${x} tx off rx off gso off tso off gro off lro off
done
vmaffione commented 5 years ago

In this scenario I've noticed some problems (with virtio-net NICs) that we will try to fix soon.

In your case, however, the "how strange" message is actually very old and it does not mean there is an issue. I have changed the code to rate limit that message. Do you see other specific issues apart from the message?

xiaojin2630 commented 5 years ago

Thank you for your reply. When I set up vale0, I found that the traffic the Ixia sent was not received at the other end; almost all of it showed as lost. I could not figure out the reason, so I set netmap verbose=1 and saw a lot of "how strange" messages through dmesg; that's when I submitted this issue on GitHub. The problem now is that with vale0 made up of 4 igb interfaces, traffic does not flow.

vmaffione commented 5 years ago

Ok, so the problem is that you attach 4 igb interfaces to vale0, and that does not work. We will have a look asap. I did a test with e1000 interfaces in the VM and things seemed to work. In any case you can forget about the log message, that's not necessarily relevant.

xiaojin2630 commented 5 years ago

Regarding my concern about the log messages, what you said makes sense; thank you for your answers. Why is my problem caused by attaching four igb interfaces to vale0? Does netmap not support attaching 4 igb interfaces to vale0? My final production environment will have two e1000e interfaces and four igb interfaces all attached to vale0, so I am currently testing the four igb interfaces first.

vmaffione commented 5 years ago

In theory your scenario is supported, but this type of setup (with more than one NIC attached to a VALE switch) has never really been tested so far. My guess is that there are race conditions in the VALE-NIC code that may be hitting you.

Have you tried to check that your setup works when the traffic generator is generating a very low rate, e.g. 100 packets per second, or even 1 packet per second? It would be a functional test.

You could generate packets with pkt-gen on a separate machine, connected back-to-back to one of the igb ports:

$ sudo pkt-gen -i ethX -f tx -R 10 # 10 pps

And then receive from another interface of the separate machine, connected back-to-back to another igb port:

$ sudo pkt-gen -i ethY -f rx

and check that you receive everything as you should. Then you can try to increase the rate (-R) and see what happens. If you get stuck, it means there is a race condition. To reset the testbed you can access the test machine, detach all the interfaces from the VALE switch, and reattach them.
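
For example, a minimal reset sketch, assuming vale-ctl's -d flag detaches a port (the counterpart of the -a flag used earlier in this thread):

$ sudo ./vale-ctl -d vale0:eth6   (repeat for eth7, eth8, eth9)
$ sudo ./vale-ctl -a vale0:eth6   (then reattach each port)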

xiaojin2630 commented 5 years ago

Thank you for the test method; I tested it this way:

The netmap.ko version is d8866f1d07f66. The test topology is as follows:

 +----------------------------+
 | eth6   eth7   eth8   eth9  |  A: ip = 16.138
 +----------------------------+  vale0: eth6,eth7,eth8,eth9
    |      |      |      |
    |      |      |      |
    |      |      |      |
    |      |      |      |
    |      |      |      |
 +----------------------------+
 | eth0   eth1   eth2   eth3  |  B: ip = 14.188
 +----------------------------+

eth6 --- eth0
eth7 --- eth1
eth8 --- eth2
eth9 --- eth3 

Machine A

# ./vale-ctl -a vale0:eth6
# ./vale-ctl -a vale0:eth7
# ./vale-ctl -a vale0:eth8
# ./vale-ctl -a vale0:eth9

The test steps are as follows:

  1. Machine B sends 10 pps:

    # ./pkt-gen -f tx -i eth0 -R 10
  2. I see that the packets' destination MAC is ff:ff:ff:ff:ff:ff, so eth1, eth2, and eth3 can all receive them:

      # ./pkt-gen -f rx -i eth1
      # ./pkt-gen -f rx -i eth2
      # ./pkt-gen -f rx -i eth3
  3. Executing dmesg, I notice that the message has changed:

    030.218090 [1145] netmap_bwrap_intr_notify interrupt with no packets on eth8 RX0
    031.218150 [1145] netmap_bwrap_intr_notify interrupt with no packets on eth7 RX0
    032.218099 [1145] netmap_bwrap_intr_notify interrupt with no packets on eth7 RX0
    033.218114 [1145] netmap_bwrap_intr_notify interrupt with no packets on eth6 RX0
    034.218105 [1145] netmap_bwrap_intr_notify interrupt with no packets on eth9 RX0
    035.218775 [1145] netmap_bwrap_intr_notify interrupt with no packets on eth7 RX0
    036.218776 [1145] netmap_bwrap_intr_notify interrupt with no packets on eth6 RX0
    037.218632 [1145] netmap_bwrap_intr_notify interrupt with no packets on eth7 RX0
    038.218110 [1145] netmap_bwrap_intr_notify interrupt with no packets on eth7 RX0
    039.218119 [1145] netmap_bwrap_intr_notify interrupt with no packets on eth7 RX0
  4. When I increase the packet rate to 600,000 pps, eth1, eth2, and eth3 still receive packets normally.

  5. But when I ran rx on machine A, I found a strange phenomenon.

    # ./pkt-gen -f rx -i vale0:eth7/z can receive packets, but pps is only about 30,000
    # 275.900807 main [1964]: interface is vale0:eth7/z
    # 275.900988 main [2085]: running on 1 cpus (have 2)
    # 275.901526 extract_ip_range [366]: range is 10.0.0.1:0 to 10.0.0.1:0
    # 275.901549 extract_ip_range [366]: range is 10.1.0.1:0 to 10.1.0.1:0
    # 275.901614 main [2164]: g.ifname = vale0:eth7/z
    # 275.901746 main [2187]: mapped 334980KB at 0xa2f34000
    # 275.901764 main [2189]: nmreq: slot: tx = 256, rx = 256; ring: tx = 1, rx = 1
    Receiving from vale0:eth7/z: 1 queues, 1 threads and 1 cpus.
    # 275.901851 main [2277]: Wait 2 secs for phy reset
    # 277.901968 main [2279]: Ready...
    # 277.902057 receiver_body [1448]: reading from vale0:eth7/z fd 3 main_fd 3
    # 278.903057 main_thread [1752]: 28981 pps (29.010 Kpkts 19.495 Mbps in 1001005 usec) 3.84 avg_batch
    # 279.904233 main_thread [1752]: 26616 pps (26.647 Kpkts 17.907 Mbps in 1001176 usec) 3.48 avg_batch
    # 280.905344 main_thread [1752]: 32196 pps (32.232 Kpkts 21.660 Mbps in 1001111 usec) 3.73 avg_batch
    # 281.906409 main_thread [1752]: 28744 pps (28.775 Kpkts 19.337 Mbps in 1001065 usec) 3.38 avg_batch
    # ./pkt-gen -f rx -i vale0:eth7/t can receive packets, but pps is only about 30,000
    # 221.154281 main [1964]: interface is vale0:eth7/t
    # 221.154533 main [2085]: running on 1 cpus (have 2)
    # 221.155613 extract_ip_range [366]: range is 10.0.0.1:0 to 10.0.0.1:0
    # 221.155695 extract_ip_range [366]: range is 10.1.0.1:0 to 10.1.0.1:0
    # 221.155716 main [2164]: g.ifname = vale0:eth7/t
    # 221.157074 main [2187]: mapped 8280KB at 0xb6ea8000
    # 221.157109 main [2189]: nmreq: slot: tx = 256, rx = 256; ring: tx = 1, rx = 1
    Receiving from vale0:eth7/t: 1 queues, 1 threads and 1 cpus.
    # 221.157141 main [2277]: Wait 2 secs for phy reset
    # 223.157229 main [2279]: Ready...
    # 223.157364 receiver_body [1448]: reading from vale0:eth7/t fd 3 main_fd 3
    # 224.158351 main_thread [1752]: 27146 pps (27.173 Kpkts 18.260 Mbps in 1001006 usec) 3.26 avg_batch
    # 225.159390 main_thread [1752]: 27941 pps (27.970 Kpkts 18.796 Mbps in 1001040 usec) 3.40 avg_batch
    # 226.160594 main_thread [1752]: 27513 pps (27.546 Kpkts 18.511 Mbps in 1001203 usec) 3.55 avg_batch
    # 227.161415 main_thread [1752]: 25987 pps (26.008 Kpkts 17.477 Mbps in 1000822 usec) 3.32 avg_batch
    # ./pkt-gen -f rx -i vale0:eth7/r can receive packets; pps is about 400,000 or so
    # 178.338375 main [1964]: interface is vale0:eth7/r
    # 178.338728 main [2085]: running on 1 cpus (have 2)
    # 178.339379 extract_ip_range [366]: range is 10.0.0.1:0 to 10.0.0.1:0
    # 178.339400 extract_ip_range [366]: range is 10.1.0.1:0 to 10.1.0.1:0
    # 178.339481 main [2164]: g.ifname = vale0:eth7/r
    # 178.341258 main [2187]: mapped 8280KB at 0xb6e9b000
    # 178.341280 main [2189]: nmreq: slot: tx = 256, rx = 256; ring: tx = 1, rx = 1
    Receiving from vale0:eth7/r: 1 queues, 1 threads and 1 cpus.
    # 178.341300 main [2277]: Wait 2 secs for phy reset
    # 180.341849 main [2279]: Ready...
    # 180.341946 receiver_body [1448]: reading from vale0:eth7/r fd 3 main_fd 3
    # 181.342946 main_thread [1752]: 412745 pps (413.160 Kpkts 277.644 Mbps in 1001005 usec) 27.36 avg_batch
    # 182.344003 main_thread [1752]: 436205 pps (436.667 Kpkts 293.440 Mbps in 1001058 usec) 27.76 avg_batch
    # 183.344755 main_thread [1752]: 401997 pps (402.299 Kpkts 270.345 Mbps in 1000751 usec) 24.60 avg_batch

    It seems reasonable to expect these three pps figures to be about the same.

giuseppelettieri commented 5 years ago

Please do not monitor valeX:ethY: when it works, it is by pure chance. The valeX:ethY names are internal ones and should not be visible outside, but they cannot be hidden for now.

On the A server, you should monitor the ethY's directly, e.g.:

pkt-gen -i eth7/t

or

pkt-gen -i eth7/z
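
For example, mirroring the receiver invocations used earlier in this thread (the -f rx flag is carried over from those runs):

# ./pkt-gen -f rx -i eth7/t
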
xiaojin2630 commented 5 years ago

Thanks for your reminder. @giuseppelettieri

xiaojin2630 commented 5 years ago

Thank you for keeping track of this issue and helping answer my questions. Since netmap_bwrap_intr_notify has been modified in the new code, I will close this issue for now; if other problems come up, I will open a new issue. @vmaffione