Open ssi0202 opened 3 years ago
Just stumbled upon this issue as well. I managed to quick-fix the problem by forcing a different client_id for the two Kafka inputs into Logstash's pipeline configs (in 0002-kafka-input.conf and 0006-kafka-zeek-input.conf).
Hello @ssi0202 and @SRJanel ! Do you still have the same issue?
Hello @Cyb3rWard0g My docker containers have been running without any problem since my quick-fix. Want me to pull and run again?
I'm seeing this same error on a fresh install. I don't see client_id (I see group_id) in these input files.
Is there a simple fix for this?
Hello @DerekKing001, try providing a client_id inside the Kafka input plugin in both files I mentioned above. They should have different values (e.g. client_id => "something1" and client_id => "something2").
Then restart containers and it should work.
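For anyone unsure where the setting goes, the two pipeline files might end up looking something like this. This is only an illustrative sketch: the bootstrap_servers address, topic names, group_id, and client_id values are placeholders, not the actual HELK defaults.

```
# 0002-kafka-input.conf (values are illustrative placeholders)
input {
  kafka {
    bootstrap_servers => "helk-kafka-broker:9092"
    topics => ["winlogbeat"]
    group_id => "helk_logstash"
    client_id => "helk_kafka_input"   # must differ from the other input
  }
}

# 0006-kafka-zeek-input.conf (values are illustrative placeholders)
input {
  kafka {
    bootstrap_servers => "helk-kafka-broker:9092"
    topics => ["zeek"]
    group_id => "helk_logstash"
    client_id => "helk_zeek_input"    # distinct client_id avoids the MBean/registration clash
  }
}
```

The key point is simply that the two kafka input blocks must not share the same client_id.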
Fresh install on Ubuntu: the Logstash log is full of this, and no data is getting ingested. I have just set up a Winlogbeat to ship data from a client machine.
Below is the full beginning of the "error" part of the Logstash log; output is from using the docker follow helk-logstash command.