piequi opened this issue 3 years ago (status: Open)
The solution is to run the logstash cloudwatch input on a single logstash instance/pipeline. A few input sources have this problem: when multiple logstash pipelines pull the same data, they all receive it and duplicate it.
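A minimal sketch of that workaround in `pipelines.yml`, assuming a multi-pipeline setup (the pipeline ids and config paths here are illustrative, not from the plugin docs): the cloudwatch input is confined to exactly one pipeline, deployed on exactly one logstash instance, while the other pipelines can be scaled out freely.

```yaml
# pipelines.yml on the single instance that polls CloudWatch
# (pipeline ids and paths are hypothetical examples)
- pipeline.id: cloudwatch-ingest
  pipeline.workers: 1            # a single poller, so no duplicate pulls
  path.config: "/etc/logstash/conf.d/cloudwatch.conf"
- pipeline.id: main
  path.config: "/etc/logstash/conf.d/main.conf"
```

Only the `cloudwatch-ingest` pipeline reads from CloudWatch; everything else consumes downstream, so the sincedb lives on one host and never races.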
Hi!
First of all, this is a nice and practical plugin you've made here!
It appears that with the `sincedb` written to the filesystem of the logstash instance, it is difficult to run multiple instances at the same time and share the `next_token` value for a given log group stream. What's more, considering that we may run logstash in containers, and that a container is meant to be replaceable, every newly spawned container will start with an empty `sincedb`. How would you address this without mounting a shared volume that can handle concurrent writes? Would DynamoDB storage be feasible?
Thanks for sharing your ideas.