Center-Sun / suricata-kafka-output

Provides a Suricata EVE output for Kafka as a Suricata EVE plugin
MIT License
14 stars, 4 forks

How to add more partitions in the Kafka configuration #3

Open cybersecurity99 opened 2 years ago

cybersecurity99 commented 2 years ago

Hi @Center-Sun, how can we configure more partitions in this Kafka plugin? Currently it creates only 1 partition, so it can't handle a high message rate.

Thanks

Center-Sun commented 2 years ago

Hi @cybersecurity99, this plugin is only a producer; it does not manage the Kafka service. If you want more partitions in a topic, I think you can create the partitions manually; the plugin will then send messages to all partitions in round-robin order.
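For reference, adding partitions to an existing topic is done with Kafka's own tooling, not the plugin. A sketch with the standard `kafka-topics.sh` CLI (the topic name `suricata-eve` and the broker address are examples, not values from the plugin):

```shell
# Raise the partition count of an existing topic to 8.
# Topic name and bootstrap address below are illustrative.
kafka-topics.sh --bootstrap-server localhost:9092 \
  --alter --topic suricata-eve --partitions 8

# Verify the new partition count.
kafka-topics.sh --bootstrap-server localhost:9092 \
  --describe --topic suricata-eve
```

Note that Kafka only allows increasing the partition count, never decreasing it.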

cybersecurity99 commented 2 years ago

@Center-Sun I tried adding partitions manually, and they were added. But now I see the error: "Eve record lost due to full buffer". I even doubled the setting to 2048. Can you suggest anything to solve this?

Thanks

Center-Sun commented 2 years ago

@cybersecurity99 Have you tried increasing the buffer-size option in the kafka section?

cybersecurity99 commented 2 years ago

@Center-Sun Yes; its default is 1024, and I made it 2048.

Center-Sun commented 2 years ago

@cybersecurity99 The default value of buffer-size is 65535.
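For context, the plugin's kafka options might look roughly like this in suricata.yaml. This is a sketch built only from the four fields named in this thread (brokers, topic, client-id, buffer-size); the exact key names, their spelling, and where the block lives should be checked against the plugin README:

```yaml
# Sketch only: values and key spellings are illustrative.
kafka:
  brokers: "broker1:9092,broker2:9092"   # example broker addresses
  topic: "suricata-eve"                  # example topic name
  client-id: "suricata"
  buffer-size: 65535                     # default per this thread
```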

cybersecurity99 commented 2 years ago

@Center-Sun The default value defined in the source files is 65535, but the YAML config also has a buffer field; what does that field indicate? I have tried increasing it many times, but I always get the "Eve buffer full" error (defined in lib.rs). Can you tell me what the FULL error indicates and what I should do?

Center-Sun commented 2 years ago

@cybersecurity99 With buffer-size set to 65535, my workflow, 3300 messages/s at 15 MB/s of traffic, used about 22 GB of memory.

cybersecurity99 commented 2 years ago

@Center-Sun Can I increase const DEFAULT_BUFFER_SIZE: &str = "65535"; to a larger value to handle 1 GB/s of traffic? Do you think this plugin can handle 1 GB/s?

Center-Sun commented 2 years ago

@cybersecurity99 1 GB/s of original traffic, or 1 GB/s of Kafka message traffic? Keep in mind, this plugin only cares about how many messages must be sent to the Kafka brokers. In my case, Suricata handles 2 GB/s of original traffic with my own ruleset, producing 3300 events per second; this plugin sends those 3300 messages to Kafka at 15 MB/s, and it performs very well.
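The distinction matters because the plugin's load is the EVE message rate, not the raw wire rate. A back-of-envelope check of the figures quoted above (assuming decimal megabytes; the average message size is derived, not measured):

```rust
fn main() {
    // Figures from the workflow described above.
    let events_per_sec: f64 = 3300.0;
    let kafka_bytes_per_sec: f64 = 15.0 * 1_000_000.0; // 15 MB/s

    // Average EVE message size implied by those two numbers.
    let avg_msg_bytes = kafka_bytes_per_sec / events_per_sec;
    println!("avg EVE message: ~{:.0} bytes", avg_msg_bytes); // ~4545 bytes

    // So 2 GB/s of raw traffic became only ~15 MB/s of Kafka traffic:
    // the ruleset's event rate, not the link speed, drives the plugin's load.
}
```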

cybersecurity99 commented 2 years ago

@Center-Sun I mean 1 GB/s of data into Suricata. I am using a NIC card and run 28 Suricata threads to handle the 1 GB/s of data, but I am still getting the Eve Buffer Full error. The plugin has only 4 variables, i.e. brokers, topic, clientid, buffer-size (I set it to 2048, still no effect).

Can you think of something I am missing? Also, can you share your Kafka configuration for Suricata, if possible?

Thanks

Center-Sun commented 2 years ago

@cybersecurity99 I think you should find out how many EVE messages Suricata actually creates; it depends on the ruleset. My case is only a reference. Set buffer-size to 65535 first; if the buffer still fills up, increase buffer-size according to the actual situation.
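One way to size this: if buffer-size is the capacity, in messages, of the queue between Suricata's EVE threads and the Kafka producer (an assumption; the plugin source is the authority), it should cover the observed event rate multiplied by the longest producer stall you want to absorb. A hedged sizing sketch:

```rust
/// Estimate a buffer-size (in messages) that absorbs a producer stall.
/// Assumes buffer-size is a message-count capacity, per this thread.
fn buffer_for(events_per_sec: u64, stall_secs: u64, headroom: u64) -> u64 {
    events_per_sec * stall_secs * headroom
}

fn main() {
    // 3300 events/s (the rate reported above), tolerate a 5 s stall, 2x headroom.
    let needed = buffer_for(3300, 5, 2);
    println!("suggested buffer-size: {}", needed); // 33000
    // Size from the measured event rate rather than guessing small
    // increments like 1024 -> 2048, which are far below the 65535 default.
}
```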

cybersecurity99 commented 2 years ago

@Center-Sun Hi, I recently discovered this error: "Eve record lost due to broken channel: sending on a closed channel". What exactly does it mean? Is it a Kafka processing issue?

Center-Sun commented 2 years ago

@cybersecurity99 It looks like there is some trouble with the Kafka brokers or the network.
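That reading fits the error text: "sending on a closed channel" is the kind of failure a Rust program reports when the receiving side of a channel has gone away, e.g. if the Kafka sender thread exited after a broker or network failure. A minimal illustration with std::sync::mpsc (the plugin's actual channel type and shutdown path are not shown here, so this is an analogy, not the plugin's code):

```rust
use std::sync::mpsc;

fn main() {
    let (tx, rx) = mpsc::channel::<String>();

    // Simulate the consumer (e.g., the Kafka sender thread) dying.
    drop(rx);

    // Any further send now fails: the channel is closed,
    // and the record being sent is lost.
    match tx.send("eve record".to_string()) {
        Ok(()) => println!("sent"),
        Err(e) => println!("record lost, channel closed: {}", e),
    }
}
```

If this is what is happening, the fix is on the broker/network side (or in restarting the sender), not in buffer-size: the buffer only helps while the consumer is alive but slow.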