Jackson-Y opened this issue 8 years ago
Hi. I’m not really sure what problem you are encountering. Perhaps your computer doesn’t have enough memory. My suggestion is that you try running it with a maximum of 1024, or even 512, open files and see if that helps.
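For reference, a minimal sketch of capping the process-wide open-file limit before starting a capture; the 512 value mirrors the suggestion above, and doing this programmatically is only an illustration, since the usual route is simply `ulimit -n 512` in the shell before launching tcpflow:

```cpp
// Minimal sketch: cap this process's open-file limit (RLIMIT_NOFILE).
// Equivalent to running `ulimit -n 512` in the shell first.
#include <sys/resource.h>
#include <cstdio>

int main() {
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    std::printf("current fd limit: soft=%llu hard=%llu\n",
                (unsigned long long)rl.rlim_cur,
                (unsigned long long)rl.rlim_max);

    rl.rlim_cur = 512;  // try 1024 or even 512, as suggested above
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    // ... run or exec the capture from here ...
    return 0;
}
```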
On Oct 13, 2016, at 1:29 AM, Jackson-Y notifications@github.com wrote:
Hi, simsong! Recently we used tcpflow and found that sometimes the data is not completely recorded, while tcpdump captures it all. We found "too many open files" in the tcpflow debug output, because the number of fds to be opened exceeds the system default max_fds (1024). However, even after raising it with "ulimit -n 4096", the data is still not fully recorded. tcpflow opened about 2000-3000 files, and this time no "too many open files" message appeared. Is "too many open files" what caused the data loss? How can I fix it?
My computer is a test server with 32 GB of memory, 8 CPUs, and a BCM5709 Gigabit Ethernet card. Its traffic comes from a network gateway at about 400 Mb/s, so I think the memory is sufficient. In the function process_ipv4(), I wrote the captured data into a debug file and found the data was already lost at that point. Does opening too many files affect libpcap?
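One way to separate an fd-limit problem from a capture problem is to ask libpcap itself whether the kernel dropped packets. A minimal sketch using the standard pcap_stats() call; the interface name "eth0" and the packet count are placeholders:

```cpp
// Minimal sketch: open a live capture, pull some packets through it,
// then report libpcap's drop counters. A nonzero ps_drop means packets
// were lost before tcpflow ever saw them, independent of any fd limit.
#include <pcap/pcap.h>
#include <cstdio>

int main() {
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *handle = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
    if (!handle) {
        std::fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }

    // Capture 10000 packets (discarding them) just to exercise the path.
    pcap_loop(handle, 10000,
              [](u_char *, const struct pcap_pkthdr *, const u_char *) {},
              nullptr);

    struct pcap_stat st;
    if (pcap_stats(handle, &st) == 0) {
        std::printf("received=%u dropped_by_kernel=%u dropped_by_iface=%u\n",
                    st.ps_recv, st.ps_drop, st.ps_ifdrop);
    }
    pcap_close(handle);
    return 0;
}
```

If ps_drop climbs at 400 Mb/s, the loss is happening in the kernel capture path, not in tcpflow's file handling.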
Hm. You are doing live capture? My suggestion is to capture to a file. It may be that your hard drive can’t keep up with your packet flow, which is why you are losing data. Capture to a file, then run the program under a debugger and set a breakpoint where the error messages are being printed.
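A minimal sketch of the capture-to-file approach: write raw packets to a pcap file with libpcap's dumper API, then replay it offline through tcpflow (tcpflow -r capture.pcap), which takes disk write speed out of the live path. Interface and file names are placeholders:

```cpp
// Minimal sketch: dump live traffic to a pcap file for offline replay
// with `tcpflow -r capture.pcap`.
#include <pcap/pcap.h>
#include <cstdio>

int main() {
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *handle = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
    if (!handle) {
        std::fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }
    pcap_dumper_t *dumper = pcap_dump_open(handle, "capture.pcap");
    if (!dumper) {
        std::fprintf(stderr, "pcap_dump_open: %s\n", pcap_geterr(handle));
        return 1;
    }
    // pcap_dump is itself a pcap_handler; pass the dumper as the user arg.
    pcap_loop(handle, 100000, pcap_dump, reinterpret_cast<u_char *>(dumper));

    pcap_dump_close(dumper);
    pcap_close(handle);
    return 0;
}
```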