Did you check the yaml file?
```
dpdkintel:
  inputs:
    - interface: 0000:0a:00.0
      copy-interface: 0000:09:00.0
  opmode: ips
```
9/4/2019 -- 02:19:06 -
I guess the yaml file has problems. Would you send your yaml file to wyz05170517@gmail.com?
I will be very grateful to you ;)
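Rather than mailing the file around, the config can be sanity-checked locally. A minimal sketch, assuming stock Suricata's `-T` config-test flag survived this fork's patches and the config lives at /etc/suricata/suricata.yaml:

```sh
# -T parses the config and exits without starting capture (stock Suricata flag;
# its presence in this DPDK fork is an assumption)
suricata -T -c /etc/suricata/suricata.yaml && echo "yaml loaded cleanly"
```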
It looks like when you rebuilt Suricata, the suricata.yaml.in template replaced the original suricata.yaml. Hence follow these steps and let me know:
1. Open suricata.yaml and check for 'dpdkintel:'. If you do not find it, then suricata.yaml has been replaced. Solution> copy the suricata.yaml from GitHub and put it back in your folder.
2. Run './suricata --list-runmodes'. If the dpdk mode is not listed, Solution> check config.log (configure's output log) to confirm Suricata was built with the dpdk option. (Both checks are sketched in the snippet below.)
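A minimal sketch of both checks from a shell, assuming the source tree is at ~/suricata-3.0 and the active config is /etc/suricata/suricata.yaml:

```sh
# Check 1: is the dpdkintel section present in the active config?
grep -n 'dpdkintel:' /etc/suricata/suricata.yaml \
  || echo "dpdkintel: missing -> restore suricata.yaml from the repo"

# Check 2: does the binary know the DPDK runmode, and did configure see the flag?
cd ~/suricata-3.0/src                                  # assumed build location
./suricata --list-runmodes | grep -i dpdk \
  || echo "no DPDK runmode -> rebuild with ./configure --enable-dpdkintel"
grep -i 'dpdk' ../config.log | head                    # configure's log, one level up
```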
note: I am not sure why you asked 'I guess the yaml file has problems. Would you send a yaml file to wyz05170517@gmail.com?'
Please update ASAP.
```
root@ubuntu:~/suricata-3.0/src# ./suricata --list-runmodes
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:02:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:04.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:05.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:06.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:07.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
------------------------------------- Runmodes ------------------------------------------
| RunMode Type      | Custom Mode | Description
|-----------------------------------------------------------------------------------------
| PCAP_DEV          | single      | Single threaded pcap live mode
|                   | autofp      | Multi threaded pcap live mode. Packets from each flow are assigned to a single detect thread, unlike "pcap_live_auto" where packets from the same flow can be processed by any detect thread
|                   | workers     | Workers pcap live mode, each thread does all tasks from acquisition to logging
| PCAP_FILE         | single      | Single threaded pcap file mode
|                   | autofp      | Multi threaded pcap file mode. Packets from each flow are assigned to a single detect thread, unlike "pcap-file-auto" where packets from the same flow can be processed by any detect thread
| PFRING(DISABLED)  | autofp      | Multi threaded pfring mode. Packets from each flow are assigned to a single detect thread, unlike "pfring_auto" where packets from the same flow can be processed by any detect thread
|                   | single      | Single threaded pfring mode
|                   | workers     | Workers pfring mode, each thread does all tasks from acquisition to logging
| NFQ               | autofp      | Multi threaded NFQ IPS mode with respect to flow
|                   | workers     | Multi queue NFQ IPS mode with one thread per queue
| NFLOG             | autofp      | Multi threaded nflog mode
|                   | single      | Single threaded nflog mode
|                   | workers     | Workers nflog mode
| IPFW              | autofp      | Multi threaded IPFW IPS mode with respect to flow
|                   | workers     | Multi queue IPFW IPS mode with one thread per queue
| ERF_FILE          | single      | Single threaded ERF file mode
|                   | autofp      | Multi threaded ERF file mode. Packets from each flow are assigned to a single detect thread
| ERF_DAG           | autofp      | Multi threaded DAG mode. Packets from each flow are assigned to a single detect thread, unlike "dag_auto" where packets from the same flow can be processed by any detect thread
|                   | single      | Singled threaded DAG mode
|                   | workers     | Workers DAG mode, each thread does all tasks from acquisition to logging
| AF_PACKET_DEV     | single      | Single threaded af-packet mode
|                   | workers     | Workers af-packet mode, each thread does all tasks from acquisition to logging
|                   | autofp      | Multi socket AF_PACKET mode. Packets from each flow are assigned to a single detect thread.
| NETMAP(DISABLED)  | single      | Single threaded netmap mode
|                   | workers     | Workers netmap mode, each thread does all tasks from acquisition to logging
|                   | autofp      | Multi threaded netmap mode. Packets from each flow are assigned to a single detect thread.
| UNIX_SOCKET       | single      | Unix socket mode
------------------------------------------------------------------------------------------
```
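A side note on the `EAL: No free hugepages reported in hugepages-1048576kB` lines above: that message only means no 1 GB pages are reserved, and is harmless as long as 2 MB pages are available. If none are, a minimal sketch for reserving some (page count and mount point are assumptions):

```sh
# Reserve 512 x 2 MB hugepages and mount hugetlbfs for DPDK
echo 512 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
grep Huge /proc/meminfo   # HugePages_Free should now be non-zero
```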
And this is my compile command: `./configure --enable-dpdkintel --prefix=/usr --sysconfdir=/etc --localstatedir=/var`
Updated runmodes.c and runmodes.h. Do a git pull and try.
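For completeness, a pull-and-rebuild sketch reusing the configure command quoted above (the checkout path is an assumption):

```sh
cd ~/DPDK-Suricata_3.0/suricata-3.0   # assumed clone location
git pull                              # picks up the runmodes.c / runmodes.h update
./configure --enable-dpdkintel --prefix=/usr --sysconfdir=/etc --localstatedir=/var
make -j"$(nproc)" && make install
suricata --list-runmodes | grep -i dpdk   # the DPDK runmode should now appear
```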
Hi ~
Thank you for your patience. The last problem has been solved.
This is my running record right now. Why "Unknown speed for 0"? Please see the details below.
```
suricata -c /etc/suricata/suricata.yaml --dpdkintel
10/4/2019 -- 13:23:10 -
```

1. devbind status:

```
0000:02:04.0 '82545EM Gigabit Ethernet Controller (Copper) 100f' drv=igb_uio unused=e1000
0000:02:05.0 '82545EM Gigabit Ethernet Controller (Copper) 100f' drv=igb_uio unused=e1000
```

2. testpmd:
```
testpmd> start
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support enabled, MP over anonymous pages disabled
Logical Core 1 (socket 0) forwards packets on 2 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  port 0: CRC stripping enabled
    RX queues=1 - RX desc=128 - RX free threshold=0
    RX threshold registers: pthresh=0 hthresh=0 wthresh=0
    TX queues=1 - TX desc=512 - TX free threshold=0
    TX threshold registers: pthresh=0 hthresh=0 wthresh=0
    TX RS bit threshold=0 - TXQ flags=0x0
  port 1: CRC stripping enabled
    RX queues=1 - RX desc=128 - RX free threshold=0
    RX threshold registers: pthresh=0 hthresh=0 wthresh=0
    TX queues=1 - TX desc=512 - TX free threshold=0
    TX threshold registers: pthresh=0 hthresh=0 wthresh=0
    TX RS bit threshold=0 - TXQ flags=0x0
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 28325    RX-dropped: 0    RX-total: 28325
TX-packets: 28317    TX-dropped: 0    TX-total: 28317
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.
```

Hmm, does the virtual machine environment not support this program?
My DPDK NIC gets no packets. Could the cause of this problem be that the network card itself receives no packets?
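On the "Unknown speed" and no-packets questions, testpmd itself can report what the PMD sees: `show port info` prints link status, speed and duplex, and `show port stats all` prints per-port RX/TX counters (all zeros would mean no traffic reaches the PMD at all). A sketch using standard testpmd console commands, run inside the same session as above:

```
testpmd> show port info 0
testpmd> show port stats all
```

An emulated VMware e1000 port may simply not report a negotiated link speed to the driver, which could explain the "Unknown speed for 0" log without indicating a real fault; this is an assumption worth verifying against the port info output.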
I have already answered the query with the solution. Did you try updating 'configure' for the VM NIC?
I have tested on a host and in a virtual machine with an Intel NIC.
Sorry, how do I update configure? I don't know how to modify it. :(
have worked on
I am sorry to trouble you. Actually, I need to build a NIDS in a 10 Gb/s environment, and I have little development experience.
Also, the 82545EM should be an Intel NIC. Do I need to add a configuration? If not, how do I add a non-Intel NIC in configure?
Please share SSH details to my mail id; let me try to debug online.
You can use https://github.com/vipinpv85/DPDK-Suricata_3.0/issues/11
@DubheStar Hi, I am facing the same problem you described:
1/8/2019 -- 10:42:40 -
I want to know if you have already fixed the issue. If you have, please tell me. Thanks!
The problem definition is not clear. I have requested multiple times to cross-check the environment. Still waiting to hear back.
Hi~ :
An error occurred while I was running DPDK-Suricata_3.0:

```
root@ubuntu:~# suricata --dpdkintel
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:02:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:04.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:05.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
9/4/2019 -- 00:01:47 - <Notice> - This is Suricata version 3.0 RELEASE
ERROR: No interface found for DPDK Intel
```
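`No interface found for DPDK Intel` typically means the PCI addresses under `dpdkintel:` in suricata.yaml do not match any port bound to igb_uio; note that the yaml quoted at the top of this thread names 0000:0a:00.0, while the ports bound here are 0000:02:04.0 and 0000:02:05.0. A quick cross-check sketch (the devbind path varies by DPDK version):

```sh
# The interface/copy-interface values here...
grep -A4 'dpdkintel:' /etc/suricata/suricata.yaml
# ...must match the addresses listed as bound to igb_uio here
"$RTE_SDK"/usertools/dpdk-devbind.py --status | grep igb_uio
```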
```
root@ubuntu:~# suricata --list-dpdkintel-ports
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:02:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:04.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:05.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em

--- DPDK Intel Ports ---
Overall Ports: 2

-- Port: 0 --
 - MTU: 1500
 - MAX RX MTU: 16128
 - Driver: net_e1000_em
 - Index: 0
 - Queues RX 1 & TX 1
 - SRIOV VF: 0
 - Offload RX: f TX: f
 - CPU NUMA node: 0
 - Status: Up
 Led for 5 sec.......

-- Port: 1 --
 - MTU: 1500
 - MAX RX MTU: 16128
 - Driver: net_e1000_em
 - Index: 0
 - Queues RX 1 & TX 1
 - SRIOV VF: 0
 - Offload RX: f TX: f
 - CPU NUMA node: 0
 - Status: Up
 Led for 5 sec.......
```
Details (please complete the following information):
PCIe Information: VMware, and I set up 3 network interfaces:

```
Network devices using DPDK-compatible driver
0000:02:04.0 '82545EM Gigabit Ethernet Controller (Copper) 100f' drv=igb_uio unused=e1000
0000:02:05.0 '82545EM Gigabit Ethernet Controller (Copper) 100f' drv=igb_uio unused=e1000

Network devices using kernel driver
0000:02:00.0 '82545EM Gigabit Ethernet Controller (Copper) 100f' if=ens32 drv=e1000 unused=igb_uio *Active*

Other Network devices
```
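For reference, a sketch of moving these ports between the kernel driver and igb_uio (the devbind path and a built DPDK tree are assumptions; the addresses are taken from the listing above):

```sh
modprobe uio
insmod "$RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko"                     # DPDK's UIO driver
"$RTE_SDK"/usertools/dpdk-devbind.py --bind=igb_uio 0000:02:04.0 0000:02:05.0
"$RTE_SDK"/usertools/dpdk-devbind.py --status                     # verify the binding
# to hand a port back to the kernel e1000 driver:
# "$RTE_SDK"/usertools/dpdk-devbind.py --bind=e1000 0000:02:04.0
```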