Closed: pauldurrant closed this issue 8 years ago.
Recent standard drivers have almost the same performance as our deprecated PF_RING-aware drivers, which is why we removed them. Please use the standard drivers.
My network interface driver is tg3, so now I do not need to load any special driver, just use the tg3 driver?
Correct, just use the standard Linux drivers.
Alfredo
On 10 Jan 2017, at 01:58, leveryd notifications@github.com wrote:
I send 500 HTTP requests using a Python script, and I capture with "PF_RING/userland/tcpdump/tcpdump -i em3 -w test.pcap". When I search for my HTTP request string in test.pcap, I can only find about 400 requests, so that means I lost about 100 HTTP requests. By the way, the em3 network interface rate is 200M/s, which I think PF_RING can handle easily. When I run tcpdump and then run "dmesg", I find no debug messages. If I want to check whether PF_RING is working, is there some message printed by the kernel? Or otherwise, how can I debug PF_RING?
You can check that tcpdump is using PF_RING by listing the socket files under /proc/net/pf_ring/ (you should see a file with the tcpdump PID in its name). You should also be able to see packet loss in that file, unless the loss occurs at card level (use ethtool -S ethX to check that).
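To make that /proc/net/pf_ring check concrete, here is a minimal sketch; the exact stats-file naming and the counter labels ("Tot Packets", "Tot Pkt Lost") vary by PF_RING version, so treat the field names as assumptions.

```shell
#!/bin/sh
# Sketch: check whether any process has a PF_RING socket open, and dump
# its packet/loss counters. Each open socket gets a stats file under
# /proc/net/pf_ring/ whose name contains the owning pid; the counter
# labels below are the ones reported in this thread and may differ
# between PF_RING versions.
DIR=/proc/net/pf_ring
if [ -d "$DIR" ]; then
    # -H prefixes each match with its file name, tying counters to sockets
    report=$(grep -H -E 'Tot Packets|Tot Pkt Lost' "$DIR"/* 2>/dev/null)
    [ -n "$report" ] || report="no PF_RING sockets currently open"
else
    report="pf_ring kernel module not loaded ($DIR missing)"
fi
echo "$report"
```

A non-empty "Tot Pkt Lost" value here points at loss inside PF_RING itself rather than at the card.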
Alfredo
On 10 Jan 2017, at 23:49, leveryd notifications@github.com wrote:
Thanks for your reply. I think I got it: I really did find packet loss at card level (using ethtool -S em3, it shows rx_discards), so I increased the em3 ring buffer size (using ethtool -G em3 rx 2000). After that, the rx_discards counter stays the same all the time, so there is no more packet loss at card level. The socket file under /proc/net/pf_ring/ shows "Tot Pkt Lost" = 0. All of this tells me there is no packet loss, but the fact is that I still lose 20% of the packets as above. Does PF_RING not run on multiple processors by default? I see only one CPU working (using the "top" command).
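The check-and-compare step described above (watching whether rx_discards keeps growing) can be sketched as a counter diff across a run; the two readings below are hypothetical sample lines standing in for live `ethtool -S em3` output.

```shell
#!/bin/sh
# Sketch: detect card-level drops by diffing rx_discards (the tg3
# counter named in this thread) before and after a capture run. The two
# lines below are hypothetical samples; on a live box each would come
# from: ethtool -S em3 | grep rx_discards
before_line="     rx_discards: 120"
after_line="     rx_discards: 120"
# strip the label, keep the numeric counter value
before=$(echo "$before_line" | awk -F': *' '{print $2}')
after=$(echo "$after_line" | awk -F': *' '{print $2}')
drops=$((after - before))
echo "card-level drops during the run: $drops"
```

A zero delta, as here, means the NIC itself dropped nothing during the run, so any remaining loss has to be elsewhere in the path.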
If both "ethtool -S" and the /proc/net/pf_ring socket file report no loss, are you sure all the mirrored packets are actually reaching em3?
Alfredo
On 11 Jan 2017, at 07:47, leveryd notifications@github.com wrote:
Oh, I mirror all the packets from an nginx server. I am sure packets are hitting the "nginx server" NIC, because I see those requests in the nginx access.log. Does PF_RING not run on multiple processors by default? I see only one CPU working (using the "top" command). I googled and found that the "bro" IDS can be configured to run on multiple processors, so are there some options I can adjust so that PF_RING runs on multiple processors?
On 11 Jan 2017, at 08:16, leveryd notifications@github.com wrote:
I see, I do not understand where packets get lost then. Did you try running pfcount to check whether you experience the same issue?
PF_RING does support load balancing to multiple cores using several technologies; in your case you should use kernel clustering. Please read: https://www.bro.org/documentation/load-balancing.html. Please ignore the DNA section, which has been replaced by ZC for zero-copy distribution using RSS or zbalance (see https://github.com/ntop/PF_RING/blob/dev/doc/README.bro.md).
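As a rough sketch of kernel clustering, the loop below starts several pfcount workers that join the same cluster id on one interface, so the traffic is split between them (per-flow by default) and each worker can run on its own core. The `-c` flag, worker count, and cluster id here are assumptions; check `pfcount -h` on your build.

```shell
#!/bin/sh
# Sketch of PF_RING kernel clustering: capture processes that join the
# same cluster id on the same interface share the traffic, so each one
# can be scheduled on a different core. pfcount's -c option for the
# cluster id is an assumption; verify against pfcount -h.
IF=em3
CLUSTER_ID=99
WORKERS=4
if command -v pfcount >/dev/null 2>&1; then
    i=0
    while [ "$i" -lt "$WORKERS" ]; do
        pfcount -i "$IF" -c "$CLUSTER_ID" &   # each worker joins the cluster
        i=$((i + 1))
    done
    wait   # workers run until interrupted
else
    echo "pfcount not installed; would start $WORKERS workers on cluster $CLUSTER_ID"
fi
```

The same idea underlies the Bro load-balancing setup linked above: multiple capture endpoints share one cluster id, and PF_RING distributes flows among them.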
Alfredo
Back in October 2014, Broadcom tg3 PF_RING-aware drivers were added to the svn repository (see http://www.gossamer-threads.com/lists/ntop/misc/36337).
These don't seem to have made it into the 6.0.3 github repository, and the svn repository is no longer available.
I'm currently using PF_RING 6.0.2 with those drivers on some Ubuntu 14.04 and older installations. We may want to move to Ubuntu 16.04 and the latest PF_RING on that same hardware at some point. Were the drivers dropped because the recent driver versions included with Ubuntu are already PF_RING aware? Or do I need to dig out the old source and see if I can get it to compile on later Ubuntu?