ChrisBulleri opened this issue 2 years ago
I'll take this one. Will work in https://github.com/grantcurell/iDRAC-Telemetry-Reference-Tools/tree/49-docker-composeyml
Better: a space-separated list of URLs.
SPLUNK_URL: "http://server1 http://server2 http://server3"
Remember, this is set as an environment variable, so we lose the expressiveness of the YAML data types.
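Splitting that back out on the Go side would be trivial. A minimal sketch, assuming splunkpump reads the variable with os.Getenv (the splunkURLs helper is hypothetical, not existing code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// splunkURLs parses the space-separated SPLUNK_URL environment
// variable into a slice of endpoint URLs. (Hypothetical helper,
// not part of the current splunkpump code.)
func splunkURLs() []string {
	// strings.Fields splits on any run of whitespace and drops empties.
	return strings.Fields(os.Getenv("SPLUNK_URL"))
}

func main() {
	for _, u := range splunkURLs() {
		fmt.Println("would send to:", u)
	}
}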
Actually, rethinking the above.
one splunk pump == one splunk server
If you have multiple Splunk servers, run multiple splunk pumps. SPLUNK_URL would be set individually per pump to point at one server.
Define a "splunk server" if you are assuming that everyone has a single Splunk enterprise deployment that would be inconsistent with a distrubuted deployment.
Define a "splunk server" if you are assuming that everyone has a single Splunk enterprise deployment that would be inconsistent with a distrubuted deployment.
He means that for every separately running Splunk process (server) there will be a distinct splunkpump container taking care of that Splunk process. Ex: if you want to send to three separate Splunk servers, you'll have three splunkpump containers.
If I follow, then because I have 9 Splunk servers that ingest data, I would need 9 containers running? Each of those containers would send to one of those servers? I would also have 9 splunkpump containers that would need to get data? An indexer is not a Splunk Enterprise setup; a single enterprise setup consists of many parts.
Correct. This development is one of the reasons I pushed for a queue system. You would have 9 containers, each of which can independently dequeue from ActiveMQ, which tracks which telemetry events have or haven't been dequeued. This allows the 9 splunkpump processes to efficiently pull events and push them to Splunk without inadvertent duplication.
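To make the competing-consumers idea concrete, here is a minimal sketch of what each pump's dequeue loop could look like, assuming a STOMP client such as github.com/go-stomp/stomp; the queue name /queue/splunk and the broker address are placeholders, not the project's actual wiring:

package main

import (
	"log"

	"github.com/go-stomp/stomp"
)

func main() {
	// Each splunkpump replica opens its own connection to ActiveMQ.
	conn, err := stomp.Dial("tcp", "activemq:61613")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Disconnect()

	// Subscribing to the same queue from N processes makes them
	// competing consumers: the broker hands each message to only
	// one of them, so no event is pushed to Splunk twice.
	sub, err := conn.Subscribe("/queue/splunk", stomp.AckAuto)
	if err != nil {
		log.Fatal(err)
	}

	for {
		msg, err := sub.Read()
		if err != nil {
			log.Fatal(err)
		}
		// ... POST msg.Body to this pump's SPLUNK_URL here ...
		log.Printf("dequeued %d bytes", len(msg.Body))
	}
}

Running N copies of this binary, one per container, gives exactly the 9-pump layout described above.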
Seems a lot more complex than just allowing one splunkpump to talk to multiple Splunk indexers. It would just send to one of them and Splunk would handle the rest.
Implementation-wise it is actually more straightforward, and more importantly it allows you to achieve parallelization and follow best practices for container design. The problem with running one splunkpump process is that it does not scale: you have a single thread handling everything. This is acceptable for small deployments, but when planning for hyperscale it would present issues. This way, you have 9 separate processes that can each independently pull from the queue system.
Alternatively, we could implement all this logic as multithreading in splunkpump, with each thread being a consumer pulling from ActiveMQ, but aside from being a more complicated and error-prone software design, this also violates container engineering principles. The whole point of containers is being able to spawn or despawn them on demand. Ideally, if this project continues to grow, there could be a one-click interface that basically says "add more" and just adds more ActiveMQ instances and splunkpump instances. Containers are meant for exactly this use case (ex: this is a huge part of Kubernetes). You could also do it with threading, but that's uglier and more complicated: you'd have to implement logic in Go to listen for commands that dynamically increase or decrease the number of threads. Perfectly doable, but again, containers are meant for exactly this use case, so there's not a lot of reason to do that.
From a user perspective there also isn't much difference; either direction is a straightforward docker-compose update. Moreover, this method sets things up so that in the future the project could move toward further automating the deployment, using something like a Jinja template for the compose file that could pull config data from configui (and listen for dynamic updates).
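As a sketch of that direction, here is a throwaway generator using Go's text/template in place of Jinja; the Pump struct and every value in it are illustrative, and the stanza mirrors the compose snippet further down:

package main

import (
	"os"
	"text/template"
)

// Pump describes one splunkpump instance to render into the
// compose file. The fields are illustrative, not a real schema.
type Pump struct {
	Name, URL, Key string
}

const stanza = `{{range .}}
  {{.Name}}:
    <<: *refdaemon
    image: idrac-telemetry-reference-tools/splunkpump:latest
    profiles:
      - splunk-pump
    environment:
      <<: *messagebus-env
      SPLUNK_URL: "{{.URL}}"
      SPLUNK_KEY: "{{.Key}}"
{{end}}`

func main() {
	// In the imagined workflow this list would come from configui.
	pumps := []Pump{
		{"splunk-pump-standalone", "http://192.168.1.5:8088", "example-key-1"},
		{"splunk-pump-standalone2", "http://other_splunk_server:8088", "example-key-2"},
	}
	tmpl := template.Must(template.New("pumps").Parse(stanza))
	if err := tmpl.Execute(os.Stdout, pumps); err != nil {
		panic(err)
	}
}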
The last point is that this allows for proper container scaling. If we are planning for the long run, that also means planning for containers being able to distribute across multiple hosts. Multiple threads can't scale across hosts, whereas multiple containers using a backend container network with VXLAN (i.e., what Kubernetes does) can. I'm not sure the project will or won't reach this point, but we're planning like it could.
splunkpump != the whole setup; it's just the part that does the sending.
Correct - just splunkpump scales; nothing else would be replicated.
@ChrisBulleri
I'm thinking about what a "fancier" solution might look like, but scaling splunkpump horizontally right now is as simple as copy-pasting the definition of splunkpump, changing the service name, and updating the environment variables. Ex:
splunk-pump-standalone:
  ############################################################################
  # SPLUNK PUMP - data pump to pump telemetry into splunk
  # Manually start this profile if you want to point to an external server
  #
  # Add this to your docker-compose cli args:
  #   --profile splunk-pump
  #
  # If you want to connect to an external splunk database,
  # set the following environment variables:
  #
  # TODO: add
  #
  ############################################################################
  <<: *refdaemon
  image: idrac-telemetry-reference-tools/splunkpump:latest
  profiles:
    - splunk-pump
  environment:
    <<: *messagebus-env
    SPLUNK_URL: "http://192.168.1.5:8088"
    SPLUNK_KEY: "87b52214-1950-4b22-8fd7-f57543431b81"
  build:
    <<: *base-build
    args:
      <<: *base-args
      CMD: splunkpump

splunk-pump-standalone2:
  ############################################################################
  # SPLUNK PUMP - data pump to pump telemetry into splunk
  # Manually start this profile if you want to point to an external server
  #
  # Add this to your docker-compose cli args:
  #   --profile splunk-pump
  #
  # If you want to connect to an external splunk database,
  # set the following environment variables:
  #
  # TODO: add
  #
  ############################################################################
  <<: *refdaemon
  image: idrac-telemetry-reference-tools/splunkpump:latest
  profiles:
    - splunk-pump
  environment:
    <<: *messagebus-env
    SPLUNK_URL: "http://other_splunk_server:8088"
    SPLUNK_KEY: "other_API_key"
  build:
    <<: *base-build
    args:
      <<: *base-args
      CMD: splunkpump
Here are two instances running side by side. You can see that they do not dequeue the same things.
@superchalupa
Just thinking this through: if we wanted to be a bit more sophisticated, what would need to be done?
Assumption: we want an interim step using the current configui, without a rewrite into Vue (or another framework).
Thoughts?
Hey Grant, catching up on the conversation, I'm pleased to see the long and thoughtful discussion here. I think you hit all the correct points and I'm happy with the direction so far.
I'm not opposed to a configui addition for the above; it sounds like it's needed. I am somewhat intrigued by the idea of the configui process (or another) talking to Docker to spin up pumps and configure them. That seems like a great idea to me. (One more thing to move over to a "better" UI later, but that's NBD for now.)
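For what the talking-to-Docker piece could look like, a rough sketch against the Docker Go SDK, github.com/docker/docker/client (v24-era API; newer SDKs renamed types.ContainerStartOptions). The image name and env values are placeholders from this thread, and real code would also need network and volume settings:

package main

import (
	"context"
	"log"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

// spawnPump asks the local Docker daemon to create and start one
// more splunkpump container pointed at the given Splunk server.
func spawnPump(name, url, key string) error {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		return err
	}
	resp, err := cli.ContainerCreate(ctx, &container.Config{
		Image: "idrac-telemetry-reference-tools/splunkpump:latest",
		Env:   []string{"SPLUNK_URL=" + url, "SPLUNK_KEY=" + key},
	}, nil, nil, nil, name)
	if err != nil {
		return err
	}
	return cli.ContainerStart(ctx, resp.ID, types.ContainerStartOptions{})
}

func main() {
	if err := spawnPump("splunk-pump-3", "http://splunk3:8088", "example-key"); err != nil {
		log.Fatal(err)
	}
}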
There is a value for SPLUNK_URL.
This looks to only accept a single value. Can a change be made so that multiple values for SPLUNK_URL can be supplied? In many Splunk deployments there are multiple endpoints, so as not to overwhelm any one of them. All the endpoints communicate with each other, so the data only needs to be sent to one. Sending round-robin, or in some other way, would be fantastic.
Example:
SPLUNK_URL: "http://splunk-index01:8088/","http://splunk-index02:8088/","http://splunk-index03:8088/","http://splunk-index04:8088/","http://splunk-index05:8088/","http://splunk-index06:8088/"