pevma / SEPTun-Mark-II

Suricata Extreme Performance Tuning guide - Mark II
GNU General Public License v2.0

High number of kernel_drops #4

Open sanpichen opened 3 years ago

sanpichen commented 3 years ago

version: "5.0.2-dev (b9515671b 2019-12-13)", running as a system service. My Suricata drops a lot of packets when I increase stream.reassembly.memcap.

```yaml
# …
stream:
  memcap: 8gb
  checksum-validation: yes   # reject incorrect csums
  inline: auto               # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 10gb
    depth: 1mb               # reassemble 1mb into a stream
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560
    randomize-chunk-size: yes
# …
```
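One way to confirm that these values were actually picked up by the running instance (assuming the unix-command socket is enabled in suricata.yaml) is to query them over the socket, as a quick sketch:

```sh
# List all configured memcaps as seen by the running Suricata.
suricatasc -c "memcap-list"
# Show a single value, e.g. the reassembly memcap being tuned here.
suricatasc -c "memcap-show stream-reassembly"
```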

```
Date: 12/29/2020 -- 12:01:38 (uptime: 0d, 00h 55m 59s)
------------------------------------------------------------------------------------
Counter                                       | TM Name | Value
------------------------------------------------------------------------------------
capture.kernel_packets                        | Total   | 112002445
capture.kernel_drops                          | Total   | 37214768
decoder.pkts                                  | Total   | 74498141
decoder.bytes                                 | Total   | 48325625554
decoder.invalid                               | Total   | 26
decoder.ipv4                                  | Total   | 74177718
decoder.ipv6                                  | Total   | 17697
decoder.ethernet                              | Total   | 74498141
decoder.tcp                                   | Total   | 70136275
decoder.udp                                   | Total   | 3226298
decoder.icmpv4                                | Total   | 499878
decoder.icmpv6                                | Total   | 778
decoder.vlan                                  | Total   | 51202537
decoder.avg_pkt_size                          | Total   | 648
decoder.max_pkt_size                          | Total   | 65040
flow.tcp                                      | Total   | 1343887
flow.udp                                      | Total   | 683712
flow.icmpv4                                   | Total   | 21412
flow.icmpv6                                   | Total   | 324
decoder.event.ipv4.iplen_smaller_than_hlen    | Total   | 25
decoder.event.ipv4.opt_pad_required           | Total   | 542
decoder.event.icmpv4.unknown_type             | Total   | 7
decoder.event.icmpv4.unknown_code             | Total   | 44
decoder.event.ipv6.zero_len_padn              | Total   | 299
decoder.event.tcp.invalid_optlen              | Total   | 1
tcp.sessions                                  | Total   | 1083081
tcp.pseudo                                    | Total   | 8
tcp.invalid_checksum                          | Total   | 8
tcp.syn                                       | Total   | 1590415
tcp.synack                                    | Total   | 1276354
tcp.rst                                       | Total   | 862490
tcp.pkt_on_wrong_thread                       | Total   | 1399404
tcp.stream_depth_reached                      | Total   | 3120
tcp.reassembly_gap                            | Total   | 70854
tcp.overlap                                   | Total   | 12163622
detect.alert                                  | Total   | 3461
app_layer.flow.http                           | Total   | 354063
app_layer.tx.http                             | Total   | 541925
app_layer.flow.ftp                            | Total   | 460
app_layer.tx.ftp                              | Total   | 3940
app_layer.flow.smtp                           | Total   | 6
app_layer.tx.smtp                             | Total   | 8
app_layer.flow.tls                            | Total   | 204234
app_layer.flow.ssh                            | Total   | 1112
app_layer.flow.smb                            | Total   | 1
app_layer.tx.smb                              | Total   | 3
app_layer.flow.dcerpc_tcp                     | Total   | 6
app_layer.flow.dns_tcp                        | Total   | 7
app_layer.tx.dns_tcp                          | Total   | 14
app_layer.flow.ntp                            | Total   | 12065
app_layer.tx.ntp                              | Total   | 13751
app_layer.flow.ftp-data                       | Total   | 220
app_layer.flow.dhcp                           | Total   | 183
app_layer.tx.dhcp                             | Total   | 1757
app_layer.flow.snmp                           | Total   | 14087
app_layer.tx.snmp                             | Total   | 111594
app_layer.flow.failed_tcp                     | Total   | 51482
app_layer.flow.dcerpc_udp                     | Total   | 1453
app_layer.flow.dns_udp                        | Total   | 374326
app_layer.tx.dns_udp                          | Total   | 1299524
app_layer.flow.failed_udp                     | Total   | 281598
flow_mgr.closed_pruned                        | Total   | 757349
flow_mgr.new_pruned                           | Total   | 1061741
flow_mgr.est_pruned                           | Total   | 184094
flow.spare                                    | Total   | 10539
flow.tcp_reuse                                | Total   | 920
flow_mgr.flows_checked                        | Total   | 5980
flow_mgr.flows_notimeout                      | Total   | 4748
flow_mgr.flows_timeout                        | Total   | 1232
flow_mgr.flows_timeout_inuse                  | Total   | 216
flow_mgr.flows_removed                        | Total   | 1016
flow_mgr.rows_checked                         | Total   | 65536
flow_mgr.rows_skipped                         | Total   | 61659
flow_mgr.rows_empty                           | Total   | 366
flow_mgr.rows_maxlen                          | Total   | 6
tcp.memuse                                    | Total   | 78000560
tcp.reassembly_memuse                         | Total   | 223701512
http.memuse                                   | Total   | 128032703
ftp.memuse                                    | Total   | 429369
app_layer.expectations                        | Total   | 11
flow.memuse                                   | Total   | 24748096
```
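For reference, these counters put the drop rate at roughly 37214768 / 112002445 ≈ 33%. A small sketch for pulling that figure out of a stats.log snapshot (the default log path is assumed; later snapshots in the file simply overwrite the earlier values):

```sh
# Compute the kernel drop percentage from the last stats.log snapshot.
awk '/capture\.kernel_packets/ {p=$NF} /capture\.kernel_drops/ {d=$NF}
     END {if (p) printf "drops: %d / %d = %.1f%%\n", d, p, 100*d/p}' \
    /var/log/suricata/stats.log
```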

If I lower stream.reassembly.memcap to 256mb, capture.kernel_drops goes down, even to zero, but tcp.reassembly_gap then increases linearly.

```
top - 12:04:11 up 20 days,  1:48,  3 users,  load average: 5.49, 5.30, 5.11
Tasks: 525 total,   1 running, 524 sleeping,   0 stopped,   0 zombie
%Cpu(s):  8.1 us,  0.6 sy,  1.9 ni, 88.8 id,  0.4 wa,  0.0 hi,  0.2 si,  0.0 st
KiB Mem : 13166155+total, 17006872 free, 96304864 used, 18349816 buff/cache
KiB Swap:        0 total,        0 free,        0 used. 33722224 avail Mem

   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 59972 elastic+  20   0  0.810t 0.028t 1.580g S 259.6 22.5  27468:12 java
240739 logstash  20   0 66.586g 0.057t 0.055t S 143.4 46.9   84:01.15 Suricata-Main
105217 logstash  39  19 12.291g 4.592g  29680 S 101.0  3.7   5866:38 java
262217 telegraf  20   0 2574540  29828  10004 S  10.3  0.0  235:20.14 telegraf
246344 root      20   0   47372   4328   3152 R   1.3  0.0    0:00.12 top
  2813 mongodb   20   0 1121464  46796      0 S   0.7  0.0  193:12.16 mongod
  1568 root      20   0       0      0      0 S   0.3  0.0   54:08.02 jbd2/dm-1-8
     1 root      20   0  204808   3044   1244 S   0.0  0.0    0:18.00 systemd
```

NIC settings:

```yaml
af-packet:
  - interface: eno3
    threads: auto
    cluster-id: 99
    cluster-type: cluster_flow
    defrag: yes
    use-mmap: yes
    mmap-locked: yes
    tpacket-v3: yes
    ring-size: 200000
    block-size: 1048576
  - interface: eno4
    threads: 48
    cluster-id: 100
    cluster-type: cluster_flow
    defrag: yes
    use-mmap: yes
    mmap-locked: yes
    tpacket-v3: yes
    ring-size: 200000
    block-size: 1048576
```

```
ifconfig eno4
eno4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::3a68:ddff:fe1c:422b  prefixlen 64  scopeid 0x20<link>
        ether 38:68:dd:1c:42:2b  txqueuelen 4000  (Ethernet)
        RX packets 19760215198  bytes 11373818908711 (10.3 TiB)
        RX errors 0  dropped 17724754  overruns 0  frame 216
        TX packets 286  bytes 20256 (19.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
```
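The ifconfig output above already shows 17724754 RX drops at the interface itself, so the driver-level counters are worth a look as well. A quick sketch (counter names vary by NIC driver, and many cards default to an RX ring well below their maximum):

```sh
# Driver/hardware statistics: look for per-queue drop or miss counters.
ethtool -S eno4 | grep -iE 'drop|miss|err'
# Current vs. maximum RX ring size.
ethtool -g eno4
```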

```
ethtool -l eno4
Channel parameters for eno4:
Pre-set maximums:
RX:             0
TX:             0
Other:          0
Combined:       128
Current hardware settings:
RX:             0
TX:             0
Other:          0
Combined:       1
```
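There is a mismatch worth noting here: the card supports up to 128 combined queues but is running with 1, while the eno4 af-packet section asks for 48 threads. A hedged sketch of the SEPTun-style adjustment; whether it helps depends on the driver and the cluster-type in use, and the 40-byte low-entropy key below is the usual symmetric-hashing example and may need a different length on some NICs:

```sh
# Raise the combined queue count toward the worker thread count.
ethtool -L eno4 combined 48
# Optionally load a symmetric RSS key so both directions of a flow hash
# to the same queue; this targets the large tcp.pkt_on_wrong_thread
# counter seen in the stats above.
ethtool -X eno4 hkey 6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A equal 48
```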

```
memcap-list
Success:
[
    { "name": "stream", "value": "8gb" },
    { "name": "stream-reassembly", "value": "10gb" },
    { "name": "flow", "value": "1gb" },
    { "name": "applayer-proto-http", "value": "3gb" },
    { "name": "defrag", "value": "256mb" },
    { "name": "ippair", "value": "16mb" },
    { "name": "host", "value": "2gb" }
]
```

By the way, tcp.memuse always stays below 80 MB; how can I increase the memcap?
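For what it's worth, the memcaps listed above can usually be changed on a running instance over the unix socket, without a restart. A sketch, assuming the memcap-set command is available (Suricata 4.1+) and the default socket path is in use:

```sh
# Raise the stream memcap at runtime via the unix socket.
suricatasc -c "memcap-set stream 12gb"
# Confirm the new value took effect.
suricatasc -c "memcap-show stream"
```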

pevma commented 3 years ago

Again - can you please post requests for help to the Suricata forum, like the previous one you posted? :)