Closed novaksam closed 1 year ago
@novaksam could you describe what are you trying to achieve in terms of configuration? I need the big picture in order to guide you through the correct configuration. E.g. why are you creating a FT configuration file?
I'm looking to generate NetFlow via nProbe, get IDS alerts with Suricata, and start exploring Bro, all from a single host. Right now I have an optical mirror feeding the two boxes I'm running Suricata and nProbe on, but because of some networking changes we're making (moving from a single 10Gb in/egress to 2x 10Gb active/active in/egress) I wanted to consolidate down to a single host.
The flow tables would keep traffic from things like YouTube and Netflix from being handled by Suricata and Bro, but I'd still care about those flows for nProbe.
Your zbalance_ipc configuration looks fine; I see you are combining zbalance_ipc with RSS to distribute the load across multiple zbalance_ipc instances, which is fine. The only mistake I see is that you are using zc:eth1@1 zc:eth1@2 instead of zc:eth1@0 zc:eth1@1 (RSS queue indexes start at 0).
As for the FT configuration to filter out YouTube, Netflix, etc. in Suricata and Bro, please check the PF_RING user's guide for instructions on how to enable filtering in the applications.
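As a rough sketch of what such a rules file might contain (the section and keyword names below are based on my reading of the PF_RING FT documentation; verify them against the user's guide before relying on them):

```ini
# /etc/pf_ring/ft-rules.conf (hypothetical contents)
# Per-protocol actions applied by the FT layer inside the capture application.
[filter]
YouTube = discard
Netflix = discard
```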
Just to clarify: using flow tables won't prevent those flows from being processed by nProbe? I don't have any experience with FT, so I'm not sure exactly how it hooks in.
If you configure L7 filtering using FT in Suricata or Zeek, it does not affect nProbe, as FT runs inside Suricata/Zeek in that case.
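Operationally, that might look like the following sketch. It assumes the FT rules file is selected per-process via the PF_RING_FT_CONF environment variable and that this Suricata build has PF_RING support (check the PF_RING FT documentation for the exact variable name and the Suricata docs for the capture flag):

```shell
# Only processes started with the FT rules loaded will filter:
PF_RING_FT_CONF=/etc/pf_ring/ft-rules.conf suricata --pfring-int=zc:10@0
PF_RING_FT_CONF=/etc/pf_ring/ft-rules.conf zeek -i zc:10@1

# nProbe is started without the variable, so it still sees and exports
# the YouTube/Netflix flows that Suricata/Zeek discard:
nprobe -i zc:10@2
```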
Awesome, thank you so much for the help!
Hi there,
Not a bug, but I'm not sure about the syntax needed to send full packets, with multiple queues (zc@1, zc@2, etc.) and flow tables, to multiple processes.
I'm looking to centralize nProbe, suricata and bro on to a single host, and I think the following syntax/config would get me most of the way there:
/etc/pf_ring/pf_ring.conf
min_num_slots=65536
/etc/pf_ring/zc/i40e/i40e.conf
RSS=2,2
/etc/pf_ring/ft-rules.conf
I'm going to have two 10Gb ingress mirrors and two 10Gb egress mirrors:
ingress
zbalance_ipc -i zc:eth1@1,zc:eth2@1 -n 1,2 -c 10
zbalance_ipc -i zc:eth1@2,zc:eth2@2 -n 1,2 -c 11
egress
zbalance_ipc -i zc:eth3@1,zc:eth4@1 -n 1,2 -c 12
zbalance_ipc -i zc:eth3@2,zc:eth4@2 -n 1,2 -c 13
and then Suricata would connect to zc:10@1, zc:11@1, zc:12@1, zc:13@1, Bro to @2, and nProbe to @3, but how do flow tables play into all of this? Currently Suricata is using AF_PACKET with very few drops, but my historical nProbe hosts have had drops if I don't use RSS, so I want to account for that.
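For reference, attaching the three consumers to one zbalance_ipc cluster might look like the sketch below. The queue numbers assume PF_RING's 0-based indexing, and the exact mapping of consumers to queues depends on how the -n layout is interpreted, so adjust accordingly:

```shell
# Cluster 10 (first ingress instance); repeat for clusters 11, 12, 13:
suricata --pfring-int=zc:10@0
zeek -i zc:10@1
nprobe -i zc:10@2
```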
And I could be completely off track.
Thanks for your time!