GoogleCodeExporter opened 9 years ago
PF_RING server load for comparison. But I'm sure netmap is many times better
than PF_RING, so I must be doing something wrong :)
Original comment by pavel.odintsov
on 29 Oct 2014 at 1:45
Attachments:
I can provide perf top output too:
8.23% [ixgbe] [k] ixgbe_netmap_rxsync
7.97% [ixgbe] [k] ixgbe_poll
7.88% [kernel] [k] irq_entries_start
4.15% [kernel] [k] do_gettimeofday
4.12% [netmap_lin] [k] netmap_poll
4.00% [kernel] [k] add_interrupt_randomness
3.03% [netmap_lin] [k] nm_kr_put
2.32% [netmap_lin] [k] nm_kr_tryget
2.27% [kernel] [k] native_read_tsc
1.88% [kernel] [k] _raw_spin_lock_irqsave
1.25% [kernel] [k] arch_local_irq_restore
1.11% [kernel] [k] __schedule
1.09% [kernel] [k] __netif_receive_skb
1.07% [kernel] [k] timekeeping_get_ns
1.05% [kernel] [k] arch_local_irq_save
1.04% [kernel] [k] _raw_spin_unlock_irqrestore
0.99% [netmap_lin] [k] nm_rxsync_prologue
0.98% [kernel] [k] do_raw_spin_lock
0.87% [kernel] [k] __alloc_skb
0.69% [kernel] [k] arch_local_irq_enable
0.69% [kernel] [k] arch_local_irq_restore
0.63% [kernel] [k] __wake_up
0.62% [kernel] [k] idle_cpu
0.62% [kernel] [k] getnstimeofday
0.59% [kernel] [k] select_estimate_accuracy
0.59% [kernel] [k] __cache_free.isra.41
0.57% [kernel] [k] handle_edge_irq
0.56% [ixgbe] [k] test_and_set_bit
0.56% [kernel] [k] do_sys_poll
0.52% [kernel] [k] __do_softirq
0.52% [kernel] [k] handle_irq_event_percpu
0.51% [kernel] [k] do_IRQ
0.51% [kernel] [k] atomic_inc
0.51% [kernel] [k] rcu_exit_nohz
0.51% [kernel] [k] net_rx_action
0.50% [kernel] [k] timerqueue_add
0.50% [kernel] [k] dev_gro_receive
0.50% [kernel] [k] tick_nohz_stop_sched_tick
After some NIC tuning the kernel load became smaller, but it is still big for 200 kpps:
Original comment by pavel.odintsov
on 29 Oct 2014 at 10:35
Attachments:
Sorry for the wrong information.
In 8-thread mode:
../examples//pkt-gen -i eth3 -f rx -p 8
All my cpus loaded like this:
https://netmap.googlecode.com/issues/attachment?aid=300000000&name=netmap_overload.png&token=ABZ6GAdSCcuQyFqynZp9JRBluLBPzySfUw%3A1414578954266&inline=1
Original comment by pavel.odintsov
on 29 Oct 2014 at 10:39
Hi Pavel,
we tried to investigate this problem. Maybe the cause is the Interrupt Throttle
Rate (ITR).
Can you try setting a large value for the rx side?
sudo ethtool -C eth3 rx-usecs 1000
Cheers,
Stefano
Original comment by stefanog...@gmail.com
on 31 Oct 2014 at 12:49
Original comment by stefanog...@gmail.com
on 31 Oct 2014 at 2:19
Hello, folks!
I tried with FreeBSD 10 and hit the same issue.
My configuration:
VirtualBox VM with 4 CPUs (i7-3635QM, 2.4 GHz)
Intel PRO1000/MT Server, bridge mode
I wrote this code for tests:
https://github.com/FastVPSEestiOu/fastnetmon/blob/master/tests/netmap.cpp
Then I started flooding my server with hping3: sudo hping3 --flood --udp
192.168.0.2
hping3 generated about 20,000 packets per second, but this load completely
overloaded my server with netmap.
Average system load on the server was about 30%. That's really, really slow even
for pcap :(
Original comment by pavel.odintsov
on 12 Feb 2015 at 3:27
Attachments:
Any news? :(
Original comment by pavel.odintsov
on 1 Mar 2015 at 7:42
Looks like this issue is related to my hand-made, broken patch of the
SourceForge Intel ixgbe driver.
Original comment by pavel.odintsov
on 26 Mar 2015 at 8:30
Original issue reported on code.google.com by
pavel.odintsov
on 29 Oct 2014 at 1:35
Attachments: