Closed: llyyrr closed this issue 4 years ago.
From your suricata.log:
11/2/2020 -- 10:01:04 - - [ERRCODE: SC_ERR_INVALID_ARGUMENT(13)] - Invalid rule-files configuration section: expected a list of filenames.
11/2/2020 -- 10:01:04 - - No signatures supplied.
hence no rules in the ACL.
11/2/2020 -- 10:01:04 - - ----- ACL IPV4 DUMP (0) ----
11/2/2020 -- 10:01:04 - - ----- ACL IPV6 DUMP (0) ----
Therefore, in IPS mode it is currently working as a bypass. There is nothing wrong with this behaviour. Hence: invalid, won't fix.
But as soon as I enable the rules, traffic is dropped and packets stop arriving:
sudo ./src/suricata -vvv -c /etc/suricata/suricata.yaml -S /etc/suricata/rules/test.rules --dpdkintel
cat /etc/suricata/rules/test.rules
alert ip $HOME_NET any -> any any (msg:"drop ip rules"; reference:url,antizapret.info; classtype:web-application-attack; sid:19; rev:1;)
11/2/2020 -- 10:24:29 - - --- thread stats for Intf: 0 to 1 ---
11/2/2020 -- 10:24:29 - - +++ ACL +++
11/2/2020 -- 10:24:29 - - - non IP 1
11/2/2020 -- 10:24:29 - - +++ ipv4 5182 +++
11/2/2020 -- 10:24:29 - - - lookup: success 5182, fail 0
11/2/2020 -- 10:24:29 - - - result: hit 0, miss 5182
11/2/2020 -- 10:24:29 - - +++ ipv6 0 +++
11/2/2020 -- 10:24:29 - - - lookup: success 0, fail 0
11/2/2020 -- 10:24:29 - - - result: hit 0, miss 0
11/2/2020 -- 10:24:29 - - +++ ring +++
11/2/2020 -- 10:24:29 - - ERR: full 0, enq 0, tx 0
11/2/2020 -- 10:24:29 - - +++ port 0 +++
11/2/2020 -- 10:24:29 - - - index 0 pkts RX 5183 TX 5183 MISS 0
11/2/2020 -- 10:24:29 - - - Errors RX: 0 TX: 0 Mbuff: 0
11/2/2020 -- 10:24:29 - - - Queue Dropped pkts: 0
11/2/2020 -- 10:24:29 - - ----------------------------------
11/2/2020 -- 10:24:29 - - Stream TCP processed 0 TCP packets
11/2/2020 -- 10:24:29 - - Fast log output wrote 1 alerts
11/2/2020 -- 10:24:29 - - HTTP logger logged 0 requests
11/2/2020 -- 10:24:29 - - (RxDPDKINTEL11) Packets 2662, bytes 356634
11/2/2020 -- 10:24:29 - - --- thread stats for Intf: 1 to 0 ---
11/2/2020 -- 10:24:29 - - +++ ACL +++
11/2/2020 -- 10:24:29 - - - non IP 1
11/2/2020 -- 10:24:29 - - +++ ipv4 28868797 +++
11/2/2020 -- 10:24:29 - - - lookup: success 28868797, fail 0
11/2/2020 -- 10:24:29 - - - result: hit 0, miss 28868797
11/2/2020 -- 10:24:29 - - +++ ipv6 0 +++
11/2/2020 -- 10:24:29 - - - lookup: success 0, fail 0
11/2/2020 -- 10:24:29 - - - result: hit 0, miss 0
11/2/2020 -- 10:24:29 - - +++ ring +++
11/2/2020 -- 10:24:29 - - ERR: full 0, enq 0, tx 0
11/2/2020 -- 10:24:29 - - +++ port 1 +++
11/2/2020 -- 10:24:29 - - - index 1 pkts RX 28868798 TX 10357 MISS 0
11/2/2020 -- 10:24:29 - - - Errors RX: 0 TX: 0 Mbuff: 0
11/2/2020 -- 10:24:29 - - - Queue Dropped pkts: 0
11/2/2020 -- 10:24:29 - - ----------------------------------
What do these logs tell you? Why do you expect that, if there is no DPDK ACL match, the packet should be sent to the Suricata worker?
If your claim were that, on 10G, only port 0 receives packets, runs the ACL, and transmits on port 1, but not port 1, I could understand that (as I have not added port 1 RX on 10G).
But if your claim is that packets matching the ACL were sent, the logs do not say so.
From these logs I can see that packets that did not match the rules were discarded, because ideally TX and RX should be equal on both ports, like in the first example.
I use Suricata in AF_PACKET mode, and there this test succeeds. I am asking for help in solving this problem; the DPDK driver would help increase performance.
From these logs I can see that packets that did not match the rules were discarded
11/2/2020 -- 10:24:29 - - +++ ACL +++
11/2/2020 -- 10:24:29 - - - non IP 1
11/2/2020 -- 10:24:29 - - +++ ipv4 5182 +++
11/2/2020 -- 10:24:29 - - - lookup: success 5182, fail 0
11/2/2020 -- 10:24:29 - - - result: hit 0, miss 5182
The packets sent did not hit the rule; it is a miss. If there is no hit, I do not send them to the Suricata worker for analysis.
"TX and RX should be equal on both ports, like in the first example." [vv] Not true.
"I ask for help in solving this problem." [vv] Which I have been patiently sharing with you. But you are not helping me with the right information.
As I understand it, rules with alert actions should not drop packets.
I am not dropping any packets. If you can show me where I am dropping them, I can help you.
I do not support AF_PACKET Suricata; hence packets on AF_PACKET are not passed through the DPDK pipeline.
I am not dropping any packets. If you can show me where I am dropping them, I can help you.
Unfortunately, I do not know where. I can only conclude from the statistics, which should be the same on all interfaces, as in the first case.
I do not support AF_PACKET Suricata.
For this project with DPDK I use a dedicated physical server. The Suricata with AF_PACKET is another project; it is currently working.
Here are the fundamental gaps in your understanding.
The claim that the first run log was produced by af-packet is incorrect.
In the second log, where you added rules, no packets hit the ACL; hence no packets are forwarded to the Suricata worker.
The differences in packet count come from environment differences.
If there is no hit, I do not send it to the Suricata worker for analysis.
This mode of operation is similar to IDS.
Environmental gaps
I have asked multiple times for you to share SSH and Skype to debug your problem. I hope you will share soon so we can narrow down the problem with your environment.
If there is no hit, I do not send it to the Suricata worker for analysis.
This mode of operation is similar to IDS.
Provide context
You probably misunderstood me. I use AF_PACKET in another case, not with DPDK_suricata3.0. I wanted to say that there, the actions "pass", "drop", "replace" and "alert" are processed by the worker.
Share Skype and SSH if you want me to understand your gaps or issues.
Good, but from your earlier description this was not clear. Here are the fundamental questions.
If not, set these and share the results of the run.
I am not dropping any packets. If you can show me where I am dropping them, I can help you.
Unfortunately, I do not know where. I can only conclude from the statistics, which should be the same on all interfaces, as in the first case.
With a fundamental gap in understanding and incorrect environment settings, you will run into the same problem.
I have been waiting for you; so far you have not shared SSH or Skype for your environment. If you want my help, you have to share them.
@llyyrr are you sharing Skype and SSH?
skype: lyyr@bk.ru
Finally got Skype and screen sharing to work; the following were the observations.
Hence the environment requires changes to correct the problem.
We have spent a total of 4 hours productively.
OS: Ubuntu VERSION="18.04.4 LTS (Bionic Beaver)"
GCC gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)
DPDK Version 18.11.5
DPDK Target: x86_64-native-linuxapp-gcc
PCIe Information:
[ 1.316661] i40e: Intel(R) 40-10 Gigabit Ethernet Connection Network Driver - version 2.10.19.30
[ 1.316914] i40e: Copyright(c) 2013 - 2019 Intel Corporation.
[ 1.330278] i40e 0000:02:00.0: fw 5.0.40043 api 1.5 nvm 5.04 0x800024cb 0.0.0
[ 1.573301] i40e 0000:02:00.0: MAC address: 00:e0:ed:75:9c:f0
[ 1.573700] i40e 0000:02:00.0: FW LLDP is enabled
[ 1.603092] i40e 0000:02:00.0 eth2: NIC Link is Up, 10 Gbps Full Duplex, Flow Control: None
[ 1.653211] i40e 0000:02:00.0: PCI-Express: Speed 8.0GT/s Width x4
[ 1.653212] i40e 0000:02:00.0: PCI-Express bandwidth available for this device may be insufficient for optimal performance.
[ 1.653212] i40e 0000:02:00.0: Please move the device to a different PCI-e link with more lanes and/or higher transfer rate.
[ 1.679041] i40e 0000:02:00.0: Features: PF-id[0] VFs: 64 VSIs: 66 QP: 8 RSS FD_ATR FD_SB NTUPLE CloudF DCB VxLAN Geneve NVGRE PTP VEPA
[ 1.692498] i40e 0000:02:00.1: fw 5.0.40043 api 1.5 nvm 5.04 0x800024cb 0.0.0
[ 1.986150] i40e 0000:02:00.1: MAC address: 00:e0:ed:75:9c:f1
[ 1.986822] i40e 0000:02:00.1: FW LLDP is enabled
[ 2.016186] i40e 0000:02:00.1 eth0: NIC Link is Up, 10 Gbps Full Duplex, Flow Control: None
[ 2.066278] i40e 0000:02:00.1: PCI-Express: Speed 8.0GT/s Width x4
[ 2.066680] i40e 0000:02:00.1: PCI-Express bandwidth available for this device may be insufficient for optimal performance.
[ 2.067091] i40e 0000:02:00.1: Please move the device to a different PCI-e link with more lanes and/or higher transfer rate.
[ 2.093307] i40e 0000:02:00.1: Features: PF-id[1] VFs: 64 VSIs: 66 QP: 8 RSS FD_ATR FD_SB NTUPLE CloudF DCB VxLAN Geneve NVGRE PTP VEPA
ethtool -i enp2s0f0
driver: i40e
version: 2.10.19.30
firmware-version: 5.04 0x800024cb 0.0.0
expansion-rom-version:
bus-info: 0000:02:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
ethtool -i enp2s0f1
driver: i40e
version: 2.10.19.30
firmware-version: 5.04 0x800024cb 0.0.0
expansion-rom-version:
bus-info: 0000:02:00.1
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
./dpdk-devbind.py -b igb_uio 02:00.1 02:00.0
./dpdk-devbind.py -s
Network devices using DPDK-compatible driver
0000:02:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e
0000:02:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e
in suricata.yaml:
dpdkintel support
dpdkintel:
inputs:
interface: 1
copy-interface: 0
opmode: ips
11/2/2020 -- 09:58:51 - - DPDK ipv4AclCtx: 0x2200234400 done!
11/2/2020 -- 09:58:51 - - DPDK ipv6AclCtx: 0x2200236b80 done!
--- DPDK Intel Ports ---
Overall Ports: 2
-- Port: 0 --- MTU: 1500 --- MAX RX MTU: 9728 --- Driver: net_i40e --- Index: 0 --- Queues RX 320 & TX 320 --- SRIOV VF: 0 --- Offload RX: 12e6f TX: 19fbf --- CPU NUMA node: 0 --- Status: Up Led for 5 sec...
-- Port: 1 --- MTU: 1500 --- MAX RX MTU: 9728 --- Driver: net_i40e --- Index: 0 --- Queues RX 320 & TX 320 --- SRIOV VF: 0 --- Offload RX: 12e6f TX: 19fbf --- CPU NUMA node: 0 --- Status: Up Led for 5 sec...
==========================
./src/suricata -vvv -c /etc/suricata/suricata.yaml --dpdkintel
11/2/2020 -- 10:01:03 - - section (EAL) has entries 6
11/2/2020 -- 10:01:03 - - - name: (--file-prefix) value: (suricata_1)
11/2/2020 -- 10:01:03 - - - name: (-c) value: (0xf0)
11/2/2020 -- 10:01:03 - - - name: (--master-lcore) value: (7)
11/2/2020 -- 10:01:03 - - - name: (--log-level) value: (eal,1)
11/2/2020 -- 10:01:03 - - - name: (-w) value: (0000:02:00.0)
11/2/2020 -- 10:01:03 - - - name: (-w) value: (0000:02:00.1)
11/2/2020 -- 10:01:04 - - DPDK ACL setup
11/2/2020 -- 10:01:04 - - DPDK ipv4AclCtx: 0x2200234400 done!
11/2/2020 -- 10:01:04 - - DPDK ipv6AclCtx: 0x2200236b80 done!
Warning: Invalid/No global_log_level assigned by user. Falling back on the default_log_level "Info"
11/2/2020 -- 10:01:04 - - This is Suricata version 3.0 RELEASE
11/2/2020 -- 10:01:04 - - CPUs/cores online: 8
11/2/2020 -- 10:01:04 - - Adding interface 0 from config file
11/2/2020 -- 10:01:04 - - Adding interface 1 from config file
11/2/2020 -- 10:01:04 - - 'default' server has 'request-body-minimal-inspect-size' set to 33882 and 'request-body-inspect-window' set to 4053 after randomization.
11/2/2020 -- 10:01:04 - - 'default' server has 'response-body-minimal-inspect-size' set to 42119 and 'response-body-inspect-window' set to 16872 after randomization.
11/2/2020 -- 10:01:04 - - Protocol detection and parser disabled for smb protocol.
11/2/2020 -- 10:01:04 - - Protocol detection and parser disabled for dcerpc protocol.
11/2/2020 -- 10:01:04 - - Protocol detection and parser disabled for dcerpc protocol.
11/2/2020 -- 10:01:04 - - Parsed disabled for ftp protocol. Protocol detectionstill on.
11/2/2020 -- 10:01:04 - - Protocol detection and parser disabled for smtp protocol.
11/2/2020 -- 10:01:04 - - DNS request flood protection level: 500
11/2/2020 -- 10:01:04 - - DNS per flow memcap (state-memcap): 524288
11/2/2020 -- 10:01:04 - - DNS global memcap: 16777216
11/2/2020 -- 10:01:04 - - Protocol detection and parser disabled for modbus protocol.
11/2/2020 -- 10:01:04 - - allocated 3670016 bytes of memory for the defrag hash... 65536 buckets of size 56
11/2/2020 -- 10:01:04 - - preallocated 65535 defrag trackers of size 168
11/2/2020 -- 10:01:04 - - defrag memory usage: 14679896 bytes, maximum: 33554432
11/2/2020 -- 10:01:04 - - AutoFP mode using "Hash" flow load balancer
11/2/2020 -- 10:01:04 - - allocated 262144 bytes of memory for the host hash... 4096 buckets of size 64
11/2/2020 -- 10:01:04 - - preallocated 1000 hosts of size 136
11/2/2020 -- 10:01:04 - - host memory usage: 398144 bytes, maximum: 16777216
11/2/2020 -- 10:01:04 - - allocated 4194304 bytes of memory for the flow hash... 65536 buckets of size 64
11/2/2020 -- 10:01:04 - - preallocated 10000 flows of size 288
11/2/2020 -- 10:01:04 - - flow memory usage: 7074304 bytes, maximum: 671088640
11/2/2020 -- 10:01:04 - - stream "prealloc-sessions": 2048 (per thread)
11/2/2020 -- 10:01:04 - - stream "memcap": 33554432
11/2/2020 -- 10:01:04 - - stream "midstream" session pickups: disabled
11/2/2020 -- 10:01:04 - - stream "async-oneside": enabled
11/2/2020 -- 10:01:04 - - stream "checksum-validation": disabled
11/2/2020 -- 10:01:04 - - stream."inline": enabled
11/2/2020 -- 10:01:04 - - stream "max-synack-queued": 5
11/2/2020 -- 10:01:04 - - stream.reassembly "memcap": 134217728
11/2/2020 -- 10:01:04 - - stream.reassembly "depth": 1048576
11/2/2020 -- 10:01:04 - - stream.reassembly "toserver-chunk-size": 2599
11/2/2020 -- 10:01:04 - - stream.reassembly "toclient-chunk-size": 2643
11/2/2020 -- 10:01:04 - - stream.reassembly.raw: enabled
11/2/2020 -- 10:01:04 - - segment pool: pktsize 4, prealloc 256
11/2/2020 -- 10:01:04 - - segment pool: pktsize 16, prealloc 512
11/2/2020 -- 10:01:04 - - segment pool: pktsize 112, prealloc 512
11/2/2020 -- 10:01:04 - - segment pool: pktsize 248, prealloc 512
11/2/2020 -- 10:01:04 - - segment pool: pktsize 512, prealloc 512
11/2/2020 -- 10:01:04 - - segment pool: pktsize 768, prealloc 1024
11/2/2020 -- 10:01:04 - - segment pool: pktsize 1448, prealloc 1024
11/2/2020 -- 10:01:04 - - segment pool: pktsize 65535, prealloc 128
11/2/2020 -- 10:01:04 - - stream.reassembly "chunk-prealloc": 250
11/2/2020 -- 10:01:04 - - stream.reassembly "zero-copy-size": 128
11/2/2020 -- 10:01:04 - - allocated 262144 bytes of memory for the ippair hash... 4096 buckets of size 64
11/2/2020 -- 10:01:04 - - preallocated 1000 ippairs of size 136
11/2/2020 -- 10:01:04 - - ippair memory usage: 398144 bytes, maximum: 16777216
11/2/2020 -- 10:01:04 - - using magic-file /usr/share/file/magic
11/2/2020 -- 10:01:04 - - Delayed detect disabled
11/2/2020 -- 10:01:04 - - IP reputation disabled
11/2/2020 -- 10:01:04 - - [ERRCODE: SC_ERR_INVALID_ARGUMENT(13)] - Invalid rule-files configuration section: expected a list of filenames.
11/2/2020 -- 10:01:04 - - No signatures supplied.
11/2/2020 -- 10:01:04 - - Threshold config parsed: 0 rule(s) found
11/2/2020 -- 10:01:04 - - Core dump size set to unlimited.
11/2/2020 -- 10:01:04 - - fast output device (regular) initialized: fast.log
11/2/2020 -- 10:01:04 - - http-log output device (regular) initialized: http.log
11/2/2020 -- 10:01:04 - - stats output device (regular) initialized: stats.log
11/2/2020 -- 10:01:04 - - Device Name: 0
11/2/2020 -- 10:01:04 - - copy-interface 1
11/2/2020 -- 10:01:04 - - PortMap : Inport: 0 OutPort: 1 ringid 0
11/2/2020 -- 10:01:04 - - Device Name: 1
11/2/2020 -- 10:01:04 - - copy-interface 0
11/2/2020 -- 10:01:04 - - PortMap : Inport: 1 OutPort: 0 ringid 1
11/2/2020 -- 10:01:04 - - DPDK Version: DPDK 18.11.5
11/2/2020 -- 10:01:04 - - ----- Global DPDK-INTEL Config -----
11/2/2020 -- 10:01:04 - - Number Of Ports : 2
11/2/2020 -- 10:01:04 - - Operation Mode : IPS
11/2/2020 -- 10:01:04 - - Port:0, Map:1
11/2/2020 -- 10:01:04 - - Port:1, Map:0
11/2/2020 -- 10:01:04 - - ------------------------------------
11/2/2020 -- 10:01:04 - - DPDK OPMODE set to IPS!!!
11/2/2020 -- 10:01:04 - - DPDK OPMODE set to 2!!!
11/2/2020 -- 10:01:04 - - ----- Match Pattern ----
11/2/2020 -- 10:01:04 - - http: 0
11/2/2020 -- 10:01:04 - - ftp: 0
11/2/2020 -- 10:01:04 - - tls: 0
11/2/2020 -- 10:01:04 - - dns: 0
11/2/2020 -- 10:01:04 - - smtp: 0
11/2/2020 -- 10:01:04 - - ssh: 0
11/2/2020 -- 10:01:04 - - smb: 0
11/2/2020 -- 10:01:04 - - smb2: 0
11/2/2020 -- 10:01:04 - - dcerpc:0
11/2/2020 -- 10:01:04 - - tcp: 0
11/2/2020 -- 10:01:04 - - udp: 0
11/2/2020 -- 10:01:04 - - sctp: 0
11/2/2020 -- 10:01:04 - - icmpv4:0
11/2/2020 -- 10:01:04 - - icmpv6:0
11/2/2020 -- 10:01:04 - - gre: 0
11/2/2020 -- 10:01:04 - - raw: 0
11/2/2020 -- 10:01:04 - - ipv4: 0
11/2/2020 -- 10:01:04 - - ipv6: 0
11/2/2020 -- 10:01:04 - - -----------------------
11/2/2020 -- 10:01:04 - - ----- ACL IPV4 DUMP (0) ----
11/2/2020 -- 10:01:04 - - ----- ACL IPV6 DUMP (0) ----
11/2/2020 -- 10:01:04 - - Going to use 1 thread(s)
11/2/2020 -- 10:01:04 - - preallocated 65000 packets. Total memory 229580000
11/2/2020 -- 10:01:04 - - Going to use 1 thread(s)
11/2/2020 -- 10:01:04 - - preallocated 65000 packets. Total memory 229580000
11/2/2020 -- 10:01:04 - - using 1 flow manager threads
11/2/2020 -- 10:01:04 - - preallocated 65000 packets. Total memory 229580000
11/2/2020 -- 10:01:04 - - using 1 flow recycler threads
11/2/2020 -- 10:01:04 - - all 2 packet processing threads, 4 management threads initialized, engine started.
11/2/2020 -- 10:01:04 - - master_lcore 7 lcore_count 4
11/2/2020 -- 10:01:04 - - cpuIndex 10 lcore_id 4
11/2/2020 -- 10:01:04 - - master_lcore 7 lcore_count 4
11/2/2020 -- 10:01:04 - - cpuIndex 30 lcore_id 5
11/2/2020 -- 10:01:04 - - ============ IPS inside ReceiveDpdkPkts_IPS_10000 =============
11/2/2020 -- 10:01:04 - - DPDK Started in IPS Mode!!!
11/2/2020 -- 10:01:04 - - port 0, core 4, enable 1, socket 0 phy 0
11/2/2020 -- 10:01:04 - - ============ IPS inside ReceiveDpdkPkts_IPS_10000 =============
11/2/2020 -- 10:01:04 - - port 1, core 5, enable 1, socket 0 phy 0
^C11/2/2020 -- 10:03:09 - - Signal Received. Stopping engine.
11/2/2020 -- 10:03:09 - - 0 new flows, 0 established flows were timed out, 0 flows in closed state
11/2/2020 -- 10:03:09 - - preallocated 65000 packets. Total memory 229580000
11/2/2020 -- 10:03:09 - - time elapsed 125.134s
11/2/2020 -- 10:03:10 - - 0 flows processed
11/2/2020 -- 10:03:10 - - (RxDPDKINTEL01) Packets 0, bytes 0
11/2/2020 -- 10:03:10 - - --- thread stats for Intf: 0 to 1 ---
11/2/2020 -- 10:03:10 - - +++ ACL +++
11/2/2020 -- 10:03:10 - - - non IP 0
11/2/2020 -- 10:03:10 - - +++ ipv4 0 +++
11/2/2020 -- 10:03:10 - - - lookup: success 0, fail 0
11/2/2020 -- 10:03:10 - - - result: hit 0, miss 0
11/2/2020 -- 10:03:10 - - +++ ipv6 0 +++
11/2/2020 -- 10:03:10 - - - lookup: success 0, fail 0
11/2/2020 -- 10:03:10 - - - result: hit 0, miss 0
11/2/2020 -- 10:03:10 - - +++ ring +++
11/2/2020 -- 10:03:10 - - ERR: full 0, enq 0, tx 0
11/2/2020 -- 10:03:10 - - +++ port 0 +++
11/2/2020 -- 10:03:10 - - - index 0 pkts RX 71209425 TX 71209425 MISS 0
11/2/2020 -- 10:03:10 - - - Errors RX: 0 TX: 0 Mbuff: 0
11/2/2020 -- 10:03:10 - - - Queue Dropped pkts: 0
11/2/2020 -- 10:03:10 - - ----------------------------------
11/2/2020 -- 10:03:10 - - Stream TCP processed 0 TCP packets
11/2/2020 -- 10:03:10 - - Fast log output wrote 0 alerts
11/2/2020 -- 10:03:10 - - HTTP logger logged 0 requests
11/2/2020 -- 10:03:10 - - (RxDPDKINTEL11) Packets 0, bytes 0
11/2/2020 -- 10:03:10 - - --- thread stats for Intf: 1 to 0 ---
11/2/2020 -- 10:03:10 - - +++ ACL +++
11/2/2020 -- 10:03:10 - - - non IP 0
11/2/2020 -- 10:03:10 - - +++ ipv4 0 +++
11/2/2020 -- 10:03:10 - - - lookup: success 0, fail 0
11/2/2020 -- 10:03:10 - - - result: hit 0, miss 0
11/2/2020 -- 10:03:10 - - +++ ipv6 0 +++
11/2/2020 -- 10:03:10 - - - lookup: success 0, fail 0
11/2/2020 -- 10:03:10 - - - result: hit 0, miss 0
11/2/2020 -- 10:03:10 - - +++ ring +++
11/2/2020 -- 10:03:10 - - ERR: full 0, enq 0, tx 0
11/2/2020 -- 10:03:10 - - +++ port 1 +++
11/2/2020 -- 10:03:10 - - - index 1 pkts RX 71633873 TX 71209425 MISS 32821
11/2/2020 -- 10:03:10 - - - Errors RX: 0 TX: 0 Mbuff: 0
11/2/2020 -- 10:03:10 - - - Queue Dropped pkts: 0
11/2/2020 -- 10:03:10 - - ----------------------------------
11/2/2020 -- 10:03:10 - - Stream TCP processed 0 TCP packets
11/2/2020 -- 10:03:10 - - Fast log output wrote 0 alerts
11/2/2020 -- 10:03:10 - - HTTP logger logged 0 requests
11/2/2020 -- 10:03:10 - - ippair memory usage: 398144 bytes, maximum: 16777216
11/2/2020 -- 10:03:10 - - host memory usage: 398144 bytes, maximum: 16777216
11/2/2020 -- 10:03:10 - - cleaning up signature grouping structure... complete
11/2/2020 -- 10:03:10 - - Stats for '0': pkts: 0, drop: 0 (-nan%), invalid chksum: 0
11/2/2020 -- 10:03:10 - - Stats for '1': pkts: 0, drop: 0 (-nan%), invalid chksum: 0
It works, traffic is transferred from port to port!