This demo illustrates how to use Confluent to optimize your Security Information and Event Management (SIEM) solution. Active development is happening at https://github.com/confluentinc/demo-siem-optimization.
Refactor the repo so that unused files are removed and files are moved into subdirectories for clarity. Any file move can potentially break the demo, so test the demo thoroughly after each change; a quick smoke test is sketched below.
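As a baseline check after each move, something like the following could verify the stack still comes up cleanly. The service names (`connect`, `broker`) are assumptions based on a typical Confluent compose file; adjust them to match the services defined in docker-compose.yml.

```sh
# Hypothetical smoke test -- service names are assumptions; adjust to the
# actual docker-compose.yml.
docker-compose up -d
docker-compose ps                       # every service should be Up/healthy
docker-compose logs --tail=50 connect   # look for connector startup errors
docker-compose exec broker kafka-topics --bootstrap-server localhost:9092 --list
```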
- [x] unbaked.yml -- currently the idea is to use this for hands-on labs and docker-compose.yml to run the full demo. However, the lab logic can be separated out so that there is a single docker-compose file; one option is sketched below.
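One way to collapse unbaked.yml into docker-compose.yml while keeping a labs mode is Compose profiles (supported in docker-compose 1.28+). The profile name `demo` below is purely illustrative:

```sh
# Hypothetical usage, assuming the lab-common services are left un-profiled
# and the demo-only services are tagged with a "demo" profile:
docker-compose up -d                  # hands-on-lab subset (un-profiled services)
docker-compose --profile demo up -d   # full demo: also starts "demo"-tagged services
```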
- [x] elastic-connector.json -- this file doesn't appear to be used anywhere. It was probably replaced by one of the scripts that submits connectors.
- [x] scripts/startKafkaConnectDemo.sh -- this was probably used at one point but seems to have been replaced by scripts/startKafkaConnectComponents.sh. Remove it.
- [x] Move Zeek-related files to a dedicated zeek/ folder.
- [x] splunk-add-on-for-cisco-asa_410.tgz appears to be unnecessary since the docker-compose file pulls it from a gist.
- [x] The entire config/ directory appears to be duplicated into scripts/. It probably makes the most sense to rename it to sigma-config/ (or just sigma/) and adjust the volume mapping in the Confluent Sigma container accordingly. Also move the two files in sigma-rules/ into this new sigma/ folder.
- [x] Create a connect/ folder to hold all the connector JSON files and the Dockerfile for a Connect image with the connector plugins installed. Those connector JSON files can be mounted into the connect container in the docker-compose file. Then scripts/startKafkaConnectComponents.sh can be modified to use docker-compose exec curl (or just curl) to submit the connectors; see the example below. For more details about separating build-time vs. run-time concerns for Connect, see:
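Submitting a mounted connector config from inside the container could look like the following. The service name (connect), the /connectors mount path, and the splunk-sink.json file name are all assumptions; 8083 is Kafka Connect's default REST port.

```sh
# Hypothetical example: POST a mounted connector config to the Connect REST API.
# Service name, mount path, and file name are assumptions; adjust as needed.
docker-compose exec connect curl -s -X POST \
  -H "Content-Type: application/json" \
  --data @/connectors/splunk-sink.json \
  http://localhost:8083/connectors
```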
- [x] Refactor all the "submit" scripts into one script that takes a connector JSON file as an argument (a sketch follows this item). This might not even be necessary: someone can run all the connectors with the script, or one at a time with a docker-compose exec curl command from the instructions.
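A minimal sketch of that unified script, assuming Connect's REST API is reachable on localhost:8083; the script name submit-connector.sh is hypothetical:

```sh
#!/bin/bash
# Hypothetical scripts/submit-connector.sh -- submits one or more connector
# JSON configs to the Kafka Connect REST API. Host/port are assumptions.
set -euo pipefail

if [ "$#" -eq 0 ]; then
  echo "Usage: $0 <connector.json> [<connector.json> ...]" >&2
  exit 1
fi

for config in "$@"; do
  echo "Submitting $(basename "$config")..."
  curl -s -X POST \
    -H "Content-Type: application/json" \
    --data @"$config" \
    http://localhost:8083/connectors
  echo
done
```

With this in place, `./scripts/submit-connector.sh connect/*.json` submits every connector, and passing a single file submits just one.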
- [x] Move all Splunk folders underneath one splunk/ folder, and move default.yml into it as well.
- [x] Move duo.yml into a folder called duo/ and rename it to docker-compose.override.yml to make clear what it is; see the usage example below.
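Because the override file lives in a subdirectory rather than next to docker-compose.yml, Compose will not pick it up automatically; it has to be passed explicitly, and later -f files are merged over earlier ones:

```sh
# Layer the Duo override onto the base demo (paths assume the layout above).
docker-compose -f docker-compose.yml -f duo/docker-compose.override.yml up -d
```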