aiellc2 / flow-tools

Automatically exported from code.google.com/p/flow-tools

Performance issue on flow-capture #25

Open GoogleCodeExporter opened 8 years ago

GoogleCodeExporter commented 8 years ago
What steps will reproduce the problem?
1. Enable flow export on a large production router with more than 10 links of
50 - 100 MBps each, terminating traffic for remote locations.
2. Flow files of around 15 - 20 MB per minute are produced.
3. CPU utilization is high once flow-capture is started.
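For reference, a typical flow-capture invocation for a setup like this might look as follows (the working directory, rotation counts, and UDP port are assumptions, not taken from the report):

```shell
# Hypothetical invocation: accept exports from any source on UDP port 2055,
# write files under /var/flow/router1, rotate 96 times per day (-n 95 gives
# 15-minute files), and nest files in YYYY/YYYY-MM/YYYY-MM-DD directories.
flow-capture -w /var/flow/router1 -n 95 -N 3 0/0/2055
```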

What is the expected output? What do you see instead?
I need to know what flow rate flow-tools can support, and what the
recommended server hardware platform is for capturing flows from nearly 50
routers on a single server.

What version of the product are you using? On what operating system?
I am using flow-tools-0.68.5.1-1.el5 on Red Hat Enterprise Linux 5, 64-bit.

Please provide any additional information below.
At present I am collecting flow data from one highly utilized router with 10
links on it. For this router alone, my server's load average climbs to 3 - 5.
I would like to add more than 50 routers to the same server. Can anyone help
me fine-tune the setup, or suggest hardware to handle the load?

My server configuration:
CPU: 16 x 3.00 GHz
Memory: 16 GB
HDD: 440 GB in a RAID 5 array

Thanks in Advance,
Ram

Original issue reported on code.google.com by ramkuma...@gmail.com on 10 May 2013 at 4:37

GoogleCodeExporter commented 8 years ago
This is not a 'defect,' and should probably have been posted to the mailing
list. However, your question amounts to "How do I know I am getting all of
the data?" The following is taken directly from the SECURITY page on
www.splintered.net:

"Loss of flow exports is usually a result of resource exhaustion on the
router, link to the flow collector, or the flow collector itself.  "show
ip flow export" on the router will list some sources of lost flows.  Check
output drops on the interface directly connected to the flow collector.
On 7500's the interface command "transmit-buffers backing-store"
can reduce output drops.  Use netstat -s on the flow collector to display
UDP packets dropped due to full socket buffers.  This is usually an indication
of an overworked server."
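The netstat check and the buffer tuning it implies might look like this on a Linux collector (the buffer sizes below are illustrative assumptions, not recommended values):

```shell
# Display UDP statistics; a growing "packet receive errors" (RcvbufErrors)
# count means packets were dropped because the socket buffer was full,
# i.e. the collector could not keep up.
netstat -su

# Illustrative tuning (run as root): raise the kernel's receive-buffer
# limits so flow-capture can use a larger UDP socket buffer.
sysctl -w net.core.rmem_max=8388608
sysctl -w net.core.rmem_default=8388608
```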

I successfully measured losses on an enterprise data collector by building a
second collector that was used to measure only one flow export. Then, choosing
a (sequence of) flow(s) that traversed both measurement points, I filtered and
compared the traffic from the single-export collector against the volume on
the aggregate (enterprise) collector, and was able to approximate lost flows.
I backed off feeds to the aggregate collector until the two reports agreed
fairly closely.
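The comparison described above can be sketched as simple arithmetic: treat the single-export collector's count as ground truth and compare it with the count for the same traffic on the aggregate collector. The counts below are illustrative; in practice they would come from something like flow-stat run on each collector's files.

```shell
# Hypothetical counts: flows seen by the dedicated (assumed lossless)
# collector versus matching flows found on the aggregate collector.
reference=10000
aggregate=9600

lost=$((reference - aggregate))
# POSIX shell has integer arithmetic only, so scale by 100 for a percentage.
pct=$((lost * 100 / reference))
echo "lost $lost of $reference flows (~${pct}%)"
```

This prints the approximate loss; if the percentage is significant, back off exports to the aggregate collector and repeat until the two reports converge.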

Original comment by Seajay.T...@gmail.com on 28 Oct 2013 at 8:22