Closed: Shylockyk closed this issue 4 years ago.
Bro, I want to know how network packets flow into the DPDK NIC and how it captures packets.
Hi dude, I don't know why my port 0's status is always down. Even when I add more than one port, port 0's status is down while the others' are up, and if I change the NIC, port 0's status is still down. Like this:
-- Port: 0 --- MTU: 1500 --- MAX RX MTU: 16128 --- Driver: net_e1000_em --- Index: 0 --- Queues RX 1 & TX 1 --- SRIOV VF: 0 --- Offload RX: f TX: f --- CPU NUMA node: 0 --- Status: Down Led for 5 sec.......
and this : --- DPDK Intel Ports ---
- Overall Ports: 2
-- Port: 0 --- MTU: 1500 --- MAX RX MTU: 16128 --- Driver: net_e1000_em --- Index: 0 --- Queues RX 1 & TX 1 --- SRIOV VF: 0 --- Offload RX: f TX: f --- CPU NUMA node: 0 --- Status: Down Led for 5 sec.......
-- Port: 1 --- MTU: 1500 --- MAX RX MTU: 16128 --- Driver: net_e1000_em --- Index: 0 --- Queues RX 1 & TX 1 --- SRIOV VF: 0 --- Offload RX: f TX: f --- CPU NUMA node: 0 --- Status: Up Led for 5 sec.......
and I can't judge whether this is the reason why I can't receive any packets when I run suricata; issue #13 seems to fix this bug, but I couldn't find the answer. The result:
12/10/2019 -- 20:33:54 - - This is Suricata version 3.0 RELEASE
12/10/2019 -- 20:33:54 - - DPDK Version: DPDK 17.11.3
12/10/2019 -- 20:33:54 - - ----- Global DPDK-INTEL Config -----
12/10/2019 -- 20:33:54 - - Number Of Ports : 2
12/10/2019 -- 20:33:54 - - Operation Mode : IDS
12/10/2019 -- 20:33:54 - - Port:0, Map:0
12/10/2019 -- 20:33:54 - - Port:1, Map:0
12/10/2019 -- 20:33:54 - - ------------------------------------
12/10/2019 -- 20:33:56 - - ----- Match Pattern ----
12/10/2019 -- 20:33:56 - - http: 1
12/10/2019 -- 20:33:56 - - ftp: 0
12/10/2019 -- 20:33:56 - - tls: 0
12/10/2019 -- 20:33:56 - - dns: 0
12/10/2019 -- 20:33:56 - - smtp: 0
12/10/2019 -- 20:33:56 - - ssh: 0
12/10/2019 -- 20:33:56 - - smb: 0
12/10/2019 -- 20:33:56 - - smb2: 0
12/10/2019 -- 20:33:56 - - dcerpc:0
12/10/2019 -- 20:33:56 - - tcp: 1
12/10/2019 -- 20:33:56 - - udp: 0
12/10/2019 -- 20:33:56 - - sctp: 0
12/10/2019 -- 20:33:56 - - icmpv6:0
12/10/2019 -- 20:33:56 - - gre: 0
12/10/2019 -- 20:33:56 - - raw: 0
12/10/2019 -- 20:33:56 - - ipv4: 0
12/10/2019 -- 20:33:56 - - * ipv6: 0
12/10/2019 -- 20:33:56 - - -----------------------
12/10/2019 -- 20:33:56 - - all 2 packet processing threads, 4 management threads initialized, engine started.
12/10/2019 -- 20:33:56 - - master_lcore 0 lcore_count 4
12/10/2019 -- 20:33:56 - - cpuIndex 2 lcore_id 1
12/10/2019 -- 20:33:56 - - Frame Parser for IDS Mode
12/10/2019 -- 20:33:56 - - IDS ports 2, core 1, enble 1, scket 0 phy 0
12/10/2019 -- 20:33:56 - - DPDK Started in IDS Mode!!!
^C12/10/2019 -- 20:34:15 - - Signal Received. Stopping engine.
12/10/2019 -- 20:34:15 - - IDS port 0
12/10/2019 -- 20:34:15 - - - pkts: RX 0 TX 0 MISS 0
12/10/2019 -- 20:34:15 - - - ring: full 0, enq err 0, tx err 0
12/10/2019 -- 20:34:15 - - - SC Pkt: fail 0, Process Fail 0
12/10/2019 -- 20:34:15 - - IDS port 1
12/10/2019 -- 20:34:15 - - - pkts: RX 0 TX 0 MISS 0
12/10/2019 -- 20:34:15 - - - ring: full 0, enq err 0, tx err 0
12/10/2019 -- 20:34:15 - - - SC Pkt: fail 0, Process Fail 0
12/10/2019 -- 20:34:16 - - Intf : 0
12/10/2019 -- 20:34:16 - - + ring full 0, enq err 0, tx err 0
12/10/2019 -- 20:34:16 - - + fail: Packet alloc 0, Fail 0
12/10/2019 -- 20:34:16 - - + Errors RX: 0 TX: 0 Mbuff: 0
12/10/2019 -- 20:34:16 - - + Queue Dropped pkts: 0
12/10/2019 -- 20:34:16 - - ----------------------------------------------------------
12/10/2019 -- 20:34:16 - - Intf : 1
12/10/2019 -- 20:34:16 - - + ring full 0, enq err 0, tx err 0
12/10/2019 -- 20:34:16 - - + fail: Packet alloc 0, Fail 0
12/10/2019 -- 20:34:16 - - + Errors RX: 0 TX: 0 Mbuff: 0
12/10/2019 -- 20:34:16 - - + Queue Dropped pkts: 0
12/10/2019 -- 20:34:16 - - ----------------------------------------------------------
12/10/2019 -- 20:34:16 - - Stats for '0': pkts: 0, drop: 0 (-nan%), invalid chksum: 0
12/10/2019 -- 20:34:16 - - Stats for '1': pkts: 0, drop: 0 (-nan%), invalid chksum: 0
the network environment:
Network devices using DPDK-compatible driver
============================================
0000:00:08.0 '82540EM Gigabit Ethernet Controller 100e' drv=igb_uio unused=e1000
0000:00:09.0 '82540EM Gigabit Ethernet Controller 100e' drv=igb_uio unused=e1000
Network devices using kernel driver
===================================
0000:00:03.0 '82540EM Gigabit Ethernet Controller 100e' if=enp0s3 drv=e1000 unused=igb_uio Active
0000:00:0a.0 '82540EM Gigabit Ethernet Controller 100e' if=enp0s10 drv=e1000 unused=igb_uio Active
the configuration file suricata.yaml I have changed:
# dpdkintel support
dpdkintel:
  inputs:
    - interface: 0
      copy-interface: 0000:00:08.0   # for ids you can ignore copy-interface
    - interface: 1
      copy-interface: 0000:00:09.0   # for ids you can ignore copy-interface
  # Select dpdk intel operation mode ips|ids|bypass
  #opmode: ips
  opmode: ids
.......
# Configure the type of alert (and other) logging you would like.
outputs:
  # a line based alerts log similar to Snort's fast.log
  - fast:
      enabled: yes
      filename: fast.log
      append: yes
      filetype: regular # 'regular', 'unix_stream' or 'unix_dgram'
  # Extensible Event Format (nicknamed EVE) event log in JSON format
  - eve-log:
      enabled: yes
      filetype: regular #regular|syslog|unix_dgram|unix_stream|redis
      filename: eve.json
      ....
      types:
        - alert:
            payload: yes           # enable dumping payload in Base64
            payload-printable: yes # enable dumping payload in printable (lossy) format
            packet: yes            # enable dumping of packet (without stream segments)
            http: yes              # enable dumping of http fields
            tls: yes               # enable dumping of tls fields
            ssh: yes               # enable dumping of ssh fields
            smtp: yes              # enable dumping of smtp fields
.....
the rules file test-baidu.rules I wrote:
alert http any any -> any any (msg:"hit baidu.com..."; content:"baidu"; reference:url, www.baidu.com;)
the command I run:
$./src/suricata -c suricata.yaml -s /etc/suricata/rules/test-baidu.rules --dpdkintel
and I use a browser to access www.baidu.com, or I dump a pcap of the flow to www.baidu.com and replay it with $tcpreplay -i enp0s3 -l 2 -M 10 baidu.pcap, and the pkts count is still 0.
There are multiple things here that look like incorrect configuration:
- 00:08.0 and 00:09.0 are dpdk ports, but on 00:08.0 the link looks like it is not connected, or the other end of the link does not auto-negotiate. To check this, unbind it from igb_uio, bind it with the kernel driver, and check with ethtool.
- Suricata.yaml represents dpdk ports. Hence in ips mode the dpdk ports are 0 and 1, which makes interface 0 with copy-interface 1, and interface 1 with copy-interface 0. I do not understand your logic of putting the PCIe BDF there.
- The dpdk procinfo tool and the dpdk pdump tool can get you stats and a packet dump, which will help in debugging.
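The unbind/rebind check above can be scripted. This is a sketch only: the PCI address and interface name are taken from this thread, the `dpdk-devbind.py` path is an assumption about your install, and `DRY_RUN=1` just prints each command so the sequence can be reviewed before running it for real as root.

```shell
# Dry-run sketch: verify port 0's link with the kernel driver instead of igb_uio.
# PCI address / interface name come from this thread; adjust for your box.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

PCI=0000:00:08.0   # the port that shows Status: Down under DPDK
IF=enp0s8          # name the kernel gives it after rebinding (assumption)

run dpdk-devbind.py --unbind "$PCI"      # release it from igb_uio
run dpdk-devbind.py --bind=e1000 "$PCI"  # hand it back to the kernel e1000 driver
run ip link set "$IF" up
run ethtool "$IF"                        # look for "Link detected: yes"
```

If ethtool also reports no link with the kernel driver, the problem is cabling or the peer's auto-negotiation, not DPDK.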
> Bro, I want to know how network packets flow into the DPDK NIC and how it captures packets.
If you are using a physical dpdk interface, it is through PCIe. If you are using a virtual interface, it is through the memory region of the PMD.
For packet capture and stats, use the dpdk tools procinfo and pdump.
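For reference, both tools attach to the running DPDK primary process (suricata here) as secondary processes. The invocations below follow DPDK's documented syntax, but exact binary names and paths vary per build (in DPDK 17.11 they sit under the app build directory), so treat these as templates that are echoed, not executed:

```shell
# Templates only; echoed rather than executed, since both tools require a
# running DPDK primary process and matching EAL options.
PROCINFO="dpdk-procinfo -- --stats"   # per-port RX/TX/error counters
PDUMP="dpdk-pdump -- --pdump port=0,queue=*,rx-dev=/tmp/rx0.pcap"  # mirror port 0 RX into a pcap
echo "$PROCINFO"
echo "$PDUMP"
```

If procinfo shows the port's RX counters incrementing while suricata reports 0 packets, the problem is inside the application; if RX stays at 0, the traffic never reaches the port at all.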
Sorry bro, in fact I want to run in ids mode, so I set the copy-interface casually. I'm sorry to waste your time; I should spend more time learning.
Hi bro, I'm sorry to bother you again. I just want to know how to test the IDS mode. I have tried every way I know, but there are no packets. I don't know how to communicate with the dpdk NIC, since it doesn't have an IP address, so I can't make network packets flow into the dpdk NIC.
> Hi dude, I don't know why my port 0's status is always down,

As requested, did you try binding with the Linux kernel driver and checking with ethtool?

> even though I add more than 1 port, the port 0's status is down, others' are up, or I change the NIC, port 0's status is down.

I don't understand the logic of adding ports to bring up port 0.
> (port status output quoted above)

> and I can't judge whether this is the reason for I can't receive any pkts

It might depend upon the rules you have configured. If you have set tcp and you send non-TCP packets, the dpdk pre-filter will drop them.

> when I run this suricata, and the question #13 seem fix the bug, but I couldn't find the answer. (log output quoted above)

Are you sending non-TCP packets on port 1?
> Hi bro, I'm sorry to bother you again. I just want to know how to test the IDS mode. I have tried every way I know, but there are no packets. I don't know how to communicate with the dpdk NIC, since it doesn't have an IP address, so I can't make network packets flow into the dpdk NIC.
Sorry to quote a lot....
Check port 0 with the Linux kernel driver and ethtool. If it is down there too, then it is a device issue. The pre-filter in ids mode only passes packets of the configured rule types to suricata. But your stats show 0 packets on both port 0 and port 1; are you sending tcp or http packets as per your rule configuration?
I am sure the rules are fine. I downloaded and tested suricata 3.0 without dpdk and it shows the expected result, because I know the NIC device name: I just listen on it and send http packets matching the rules to that NIC. When I bind the dpdk driver to the NIC, everything changes... I don't know how to send http packets to the dpdk NIC; maybe port 0 is the issue ⊙︿⊙
You have totally not understood how the dpdk pre-stager works, hence my comment about whether the rules are ok.
If port 0 is down, did you try sending traffic on the up port?
Unless you share access for a live debug, there is nothing more I can help with on this.
Sorry, I really don't know what the 'dpdk pre stager' is, or why my judgment that the rules are ok is wrong... I would appreciate it if you could tell me.
Port 0 is strange. I have three NICs: enp0s8, enp0s9, enp0s10, and whichever NIC I bind the dpdk driver to as port 0 is always down. That is: when I bind enp0s8 as port 0, it is down; when I bind enp0s9 as port 0, it is down; when I bind enp0s8 and enp0s9, enp0s8 (port 0) is down and enp0s9 is up; when I bind enp0s9 and enp0s10, enp0s9 is down and enp0s10 is up.
> did you try sending traffic on up port?

Answer: the most important point is that I don't know how to send traffic to the dpdk port. I have tried every way I know and it doesn't work. Can you share your method for testing the IDS function?
How do I debug, with procinfo and pdump? If so, I will try; I didn't know about them before. (┬_┬)
How can I connect to your system for debugging? Can you share a chat handle, Skype or GTalk?
Sorry, it is not convenient for me to connect and talk, and sorry to make you frustrated. Actually, the first thing I want to know is how to send traffic to the dpdk port. pktgen? Must I set an IP address on the dpdk NIC?
> Sorry, it is not convenient for me to connect and talk
As mentioned earlier, Skype or GTalk is for chat, not voice calls. But I humbly respect your decision.
> Actually the first thing I want to know is how to send traffic to the dpdk port, pktgen? whether I must set IP address to the dpdk NIC?
I think you are confusing the DPDK machine's ports with an external machine's. I am assuming you have 2 instances.
On the suricata instance you have ports enp0s8, enp0s9, enp0s10, while on the external traffic generator you have port 1 and port 2. That is: Suricata (port enp0s8) connected to external traffic (port 1), and Suricata (port enp0s9) connected to external traffic (port 2).
If the above is true, then without dpdk, ethtool on enp0s8 on the suricata box should show the link up, and similarly for enp0s9. The ports for IDS can then be one or many, such as enp0s8 and enp0s9.
With this, I stop the responses on this ticket, since my help is limited while I am not able to log in to the box or chat with you to explore the issue.
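To make the "how do I send traffic to a dpdk port" step concrete: you do not talk to a DPDK-bound NIC by IP address at all; you inject frames onto the wire from the peer interface and the PMD polls them in. Below is a minimal sketch, using only the Python standard library, that writes a one-packet pcap carrying an HTTP request containing "baidu"; the MAC addresses, IPs, and TCP ports are made-up values for illustration. You would then replay it from the external box's interface that is cabled to the DPDK-bound port (not from an interface on the suricata box itself, as with the `tcpreplay -i enp0s3` attempt above).

```python
import socket
import struct

def checksum(data: bytes) -> int:
    """RFC 1071 ones-complement checksum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    s = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while s >> 16:
        s = (s & 0xFFFF) + (s >> 16)
    return ~s & 0xFFFF

def tcp_frame(payload: bytes, src="10.0.0.1", dst="10.0.0.2") -> bytes:
    """Ethernet/IPv4/TCP frame with valid checksums. Addresses are made up."""
    sip, dip = socket.inet_aton(src), socket.inet_aton(dst)
    tcp_len = 20 + len(payload)
    # IPv4 header: version/IHL, TOS, total length, id, frag, TTL, proto=6 (TCP)
    ip = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20 + tcp_len, 1, 0, 64, 6, 0, sip, dip)
    ip = ip[:10] + struct.pack("!H", checksum(ip)) + ip[12:]
    # TCP header: sport, dport=80, seq, ack, offset, flags=PSH|ACK, window
    tcp = struct.pack("!HHIIBBHHH", 40000, 80, 1, 0, 5 << 4, 0x18, 8192, 0, 0)
    pseudo = sip + dip + struct.pack("!BBH", 0, 6, tcp_len)
    tcp = tcp[:16] + struct.pack("!H", checksum(pseudo + tcp + payload)) + tcp[18:]
    eth = bytes.fromhex("020000000002") + bytes.fromhex("020000000001") + b"\x08\x00"
    return eth + ip + tcp + payload

def write_pcap(path: str, frames) -> None:
    """Classic little-endian pcap: global header, then one record per frame."""
    with open(path, "wb") as f:
        f.write(struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1))
        for i, frame in enumerate(frames):
            f.write(struct.pack("<IIII", i, 0, len(frame), len(frame)) + frame)

req = b"GET / HTTP/1.1\r\nHost: www.baidu.com\r\n\r\n"
write_pcap("baidu-test.pcap", [tcp_frame(req)])
```

Then, on the peer box, something like `tcpreplay -i <peer-iface> baidu-test.pcap`, where `<peer-iface>` is the interface physically connected to the DPDK-bound port. Note this single packet has no TCP handshake, so it exercises the pre-filter and port counters rather than full HTTP parsing; for the HTTP rule to alert, replay a captured full session instead.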
Bro!! My Skype: q1210326580@outlook.com
Updating for future reference and others. Unable to help further on this; marking it closed.