Closed: wioxjk closed this issue 2 years ago
I remember having an issue like that. I solved it by increasing max shards AND reducing the number of shards per index. Since I can easily fit a day's logs into an index under 10 MB, I have each index using 1 shard and 1 replica. That has worked very well for me.
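The 1-shard / 1-replica setup described above can be applied to all future daily indices with an index template. A minimal sketch, assuming the indices are named haraka-* , Elasticsearch 7.8+ (for composable templates), and a cluster on localhost:9200:

```shell
# Assumption: daily Haraka indices match haraka-* and the cluster
# listens on localhost:9200. Every index created after this template
# exists gets 1 primary shard and 1 replica.
curl -X PUT "localhost:9200/_index_template/haraka-logs" \
  -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["haraka-*"],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 1
    }
  }
}'
```

The template only affects indices created after it exists; existing indices keep their current shard count until re-indexed.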
Thanks for the tip! I will definitely try to reduce the number of shards per index - the server shuffles between 15,000 and 50,000 mails per day, and I think that one shard per index would be enough.
Also, you can change the settings for your indexes and then re-index the existing indexes to drop your shard count.
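The re-index step mentioned above can be done with the _reindex API. A rough sketch, where the index name haraka-2023.01.15 is a made-up example and the cluster is assumed to be on localhost:9200:

```shell
# 1. Create a replacement index with a single primary shard.
curl -X PUT "localhost:9200/haraka-2023.01.15-new" \
  -H 'Content-Type: application/json' \
  -d'{"settings": {"number_of_shards": 1, "number_of_replicas": 1}}'

# 2. Copy the documents from the old index into the new one.
curl -X POST "localhost:9200/_reindex" \
  -H 'Content-Type: application/json' \
  -d'{"source": {"index": "haraka-2023.01.15"},
      "dest":   {"index": "haraka-2023.01.15-new"}}'

# 3. Once the reindex has completed, drop the old multi-shard index.
curl -X DELETE "localhost:9200/haraka-2023.01.15"
```

If anything queries the old index name directly, an alias pointing at the new index keeps those queries working.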
Hi! Newbie on Elasticsearch here! I have a small environment up and running, with logging to an Elasticsearch server with Kibana. It works great! The amount of detail I can get is amazing!
A couple of days ago, the Haraka server was unable to send logs to Elasticsearch due to "shards being full" - I increased the shard limit on the Elasticsearch server as a temporary fix, but I am looking into purging old data instead, and I am unable to find out how to do it correctly.
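For reference, the temporary fix and the manual purge can both be done over the REST API. A sketch, assuming a cluster on localhost:9200 and the default per-node shard limit of 1000 (the value 2000 here is just an illustrative choice):

```shell
# Raise the cluster-wide shard-per-node limit (the temporary fix).
curl -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d'{"persistent": {"cluster.max_shards_per_node": 2000}}'

# Purge old daily indices directly, e.g. everything from one month:
curl -X DELETE "localhost:9200/haraka-2022.12.*"
```

Note that newer Elasticsearch versions may refuse wildcard deletes unless action.destructive_requires_name is set to false, so deleting explicit index names is the safer habit.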
I have installed Elasticsearch Curator, and created the following files: .curator/curator.yml:
.curator/delete.yml:
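(The contents of the two files were not included in the issue. For readers following along, an illustrative pair of Curator files for deleting old daily indices might look like the sketch below; the host, prefix, timestring, and retention are all assumptions, not the poster's actual settings.)

```yaml
# ~/.curator/curator.yml -- client configuration (illustrative)
client:
  hosts:
    - 127.0.0.1
  port: 9200
logging:
  loglevel: INFO
---
# ~/.curator/delete.yml -- action file (illustrative)
actions:
  1:
    action: delete_indices
    description: Delete haraka-* indices older than 30 days
    options:
      ignore_empty_list: True
      disable_action: False
    filters:
      - filtertype: pattern
        kind: prefix
        value: haraka-        # must match the real index names
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y.%m.%d' # must match the date format in the names
        unit: days
        unit_count: 30
```

If the pattern or timestring does not match the actual index names, every index is filtered out and Curator raises the NoIndices error shown below.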
Running
curator ~/.curator/delete.yml
results in the following:
ERROR Unable to complete action "close". No actionable items in list: <class 'curator.exceptions.NoIndices'>
And changing
disable_action: false
to
disable_action: True
in delete.yml results in the following:
ERROR Unable to complete action "delete_indices". No actionable items in list: <class 'curator.exceptions.NoIndices'>
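NoIndices usually means the filters in the action file matched nothing, not that deletion itself failed. Two quick checks (assuming the config lives in ~/.curator/ and the cluster is on localhost:9200):

```shell
# A dry run logs which indices each filter kept or discarded,
# without deleting anything. Note that curator normally needs the
# client config passed explicitly via --config.
curator --config ~/.curator/curator.yml --dry-run ~/.curator/delete.yml

# List the index names the cluster actually has, to compare against
# the pattern and timestring used in the action file:
curl -s "localhost:9200/_cat/indices?v"
```

Comparing the real index names against the action file's pattern/timestring filters is usually enough to spot the mismatch.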
I guess there is some other way to do it with the Haraka-index? Or am I completely out and walking on the wrong path?
Thankful for any pointers, or information about how you solved this issue with curating the Elasticsearch environment :)