sghaskell / kafka-splunk-consumer

PyKafka consumer to push events to Splunk HTTP Event Collector
MIT License

Kafka Consumer For Splunk

Description

A Kafka consumer that uses a pykafka balanced consumer and Python multiprocessing to send messages to a Splunk HTTP Event Collector tier, designed with scalability, parallelism, and high availability in mind.
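On the Splunk side, the HTTP Event Collector accepts multiple JSON event objects concatenated into a single POST body, which lets a consumer batch messages and cut HTTP overhead. A minimal sketch of that wrapping — the helper name and defaults here are illustrative, not this project's actual code:

```python
import json

def hec_batch(messages, sourcetype="kafka", index="main"):
    """Wrap raw Kafka message values into one batched HEC payload.

    Each message becomes a JSON object in the HEC event format; HEC
    accepts several such objects concatenated in a single request body.
    (Illustrative helper, not part of kafka_splunk_consumer.)
    """
    events = [
        json.dumps({"event": msg, "sourcetype": sourcetype, "index": index})
        for msg in messages
    ]
    return "\n".join(events)

payload = hec_batch(["msg one", "msg two"])
```

A worker process would POST a payload like this to the collector tier (e.g. `/services/collector/event` with a `Authorization: Splunk <token>` header), amortizing connection cost across many Kafka messages.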

Compatibility

Dependencies

Optional

Features

Limitations

Installation

$ sudo python setup.py install

Configuration

See comments in the sample YAML file for all available configuration options.
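For orientation only, a config might be shaped roughly like this — every key below is illustrative except `use_https` (mentioned under Deployment Guidance); the authoritative option names, defaults, and comments are in the sample YAML file shipped with the repo:

```yaml
# Illustrative sketch only -- consult the repo's sample YAML for real option names
kafka:
  brokers: kafka1:9092,kafka2:9092
  topic: my_topic
  consumer_group: splunk_consumers
hec:
  host: hec-vip.example.com
  token: 00000000-0000-0000-0000-000000000000
  use_https: true
workers: 4
```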

Usage

$ kafka_splunk_consumer -c <config.yml>

Docker Image

The repository includes a Dockerfile based on Alpine Linux with librdkafka and kafka-splunk-consumer installed. To use it, create a config file locally in /path/to/local/configdir, mount that directory into the container, and run the kafka_splunk_consumer command pointing at the config file.

Build

$ docker build -t sghaskell/kafka-splunk-consumer .

Run

$ docker run -it -v /path/to/local/configdir:/tmp sghaskell/kafka-splunk-consumer kafka_splunk_consumer -c /tmp/kafka_consumer.yml

Deployment Guidance

This script can be run on as many servers as needed to scale out consumption of your Kafka topics, and it uses Python multiprocessing to take advantage of multiple cores on each host. Configure as many instances on as many servers as necessary to consume large message volumes, but do not configure more workers than there are cores available on a given server. Likewise, the total number of workers across all instances should not exceed the number of partitions in the topic: any workers beyond the partition count will sit idle and never be assigned a partition to consume from.
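The sizing rules above can be expressed as a small calculation: cap workers per host at the core count, and cap the fleet-wide total at the partition count. The function below is a hypothetical helper for planning, not part of kafka_splunk_consumer:

```python
import os

def workers_for_host(partitions, hosts, cores=None):
    """Suggest a per-host worker count for a balanced consumer fleet.

    Bounded by (a) cores on this host, (b) an even share of the topic's
    partitions across hosts, and (c) the partition count itself, since
    workers beyond one-per-partition would sit idle.
    (Hypothetical planning helper, not the project's actual API.)
    """
    cores = cores or os.cpu_count() or 1
    fair_share = max(1, partitions // hosts)
    return min(cores, fair_share, partitions)

# e.g. a 12-partition topic consumed from 3 hosts with 8 cores each
per_host = workers_for_host(12, 3, cores=8)  # -> 4 workers per host
```

With 3 hosts running 4 workers each, all 12 partitions get exactly one worker, which is the configuration the guidance above aims for.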

The Splunk HTTP Event Collector should be deployed as a tier of collectors behind a VIP or load balancer. See the links in the Limitations section above for architecture guidance.

For more information on the specifics of the pykafka balanced consumer and its benefits, see this section of the docs.

If you have a busy topic and you're not getting the throughput you hoped for, consider disabling HTTPS for your HTTP Event Collector tier to see if that speeds up ingest rates (see use_https).

If you're using Splunk 6.4, I suggest you bump up the max content length for the HTTP input in limits.conf. It is set far too low by default (1MB); I'd raise it to the Splunk 6.5 default (800MB):

[http_input]
# The max request content length (800MB, to match HTTP server).
max_content_length = 838860800

Bugs & Feature Requests

Please feel free to file bugs or feature requests if something isn't behaving as expected or a feature is missing.