ayashjorden closed this issue 9 years ago
Hi @ayashjorden,
I am trying to recreate this, but I do not run into this issue.
Config:
input {
stdin { }
}
output {
elasticsearch {
protocol => "http"
index => "postfix-%{+YYYY.MM.dd}"
manage_template => true
template => "/tmp/logstash-1.5.0.rc2/estemplate.json"
template_name => "postfix"
}
}
.../_template
...
"postfix" : {
"order" : 0,
"template" : "postfix-*",
"settings" : {
"index.number_of_replicas" : "0",
"index.number_of_shards" : "5"
},
"mappings" : {
"postfix" : {
"_source" : {
"enabled" : true
},
"properties" : {
"host" : {
...
.../_mapping
{"postfix-2015.04.17":{"mappings":{"postfix":{"properties":{"environment":{"type":"string"},"file":{"type":"string"},"host":{"type":"string","index":"not_analyzed"},"logsource":{"type":"string","index":"not_analyzed"},"offset":{"type":"long"},"pid":{"type":"integer"},"postfix_delay":{"type":"float"},"postfix_delay_before_qmgr":{"type":"float"},"postfix_delay_conn_setup":{"type":"float"},"postfix_delay_in_qmgr":{"type":"float"},"postfix_delay_transmission":{"type":"float"},"postfix_dsn":{"type":"string","index":"not_analyzed"},"postfix_message-id":{"type":"string","index":"not_analyzed"},"postfix_queueid":{"type":"string"},"postfix_relay_hostname":{"type":"string","index":"not_analyzed"},"postfix_relay_ip":{"type":"ip"},"postfix_relay_port":{"type":"integer"},"postfix_status":{"type":"string"},"postfix_to":{"type":"string"},"program":{"type":"string","index":"not_analyzed"},"type":{"type":"string"}}},"logs":{"properties":{"@timestamp":{"type":"date","format":"dateOptionalTime"},"@version":{"type":"string"},"host":{"type":"string"},"message":{"type":"string"}}}}}}
I get the correct template in this case. Is there anything else I am missing in your case?
Hi Tal, I noticed that the ES output you configured is a bit different from mine. The index config you posted has a static index prefix per type: index => "postfix-%{+YYYY.MM.dd}", whereas the config I posted is dynamic per type: index => "%{type}-%{+YYYY.MM.dd}".
So, in order to reproduce the case, please try the output configuration as I posted it (see below for convenience).
if [type] == "postfix" {
elasticsearch {
host => [ "127.0.0.1" ]
protocol => "transport"
index => "%{type}-%{+YYYY.MM.dd}"
manage_template => true
template => "/tmp/logstash-1.5.0.rc2/estemplate.json"
template_name => "postfix"
}
}
I know that the '%{type}' notation can be replaced with a hard-coded value (postfix in our case), but dynamic values should work as well because they are evaluated per event passed through the pipeline.
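For illustration, here is a minimal Ruby sketch of that per-event resolution. It is not the plugin's actual code: the function name and the simplified date handling (strftime instead of Logstash's Joda-style patterns) are assumptions just to keep it self-contained and runnable.

def resolve_index(pattern, event)
  pattern.gsub(/%\{([^}]+)\}/) do
    ref = Regexp.last_match(1)
    if ref.start_with?('+')
      # Date-format reference such as +YYYY.MM.dd; strftime stands in
      # for Logstash's Joda-style formatting here.
      event['@timestamp'].strftime('%Y.%m.%d')
    else
      # Plain field reference: look the value up on the event itself.
      event[ref].to_s
    end
  end
end

event = { 'type' => 'postfix', '@timestamp' => Time.utc(2015, 4, 21) }
puts resolve_index('%{type}-%{+YYYY.MM.dd}', event)   # => postfix-2015.04.21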
Suggestion: Maybe in the future 'template' and 'template_name' could also support dynamic evaluation, as this would enable more concise configuration.
Thanks, Yarden
Hi Tal, just checking if I can help with anything on this issue. Any updates?
Thanks, Yarden
Hi Yarden. I looked into this; it seems that the wildcard support is meant to replace all field-refs with asterisks, so that "logstash-%{YYYY}", for example, turns into a wildcard match for "logstash-*". I will start work on fixing this.
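A minimal Ruby sketch of the substitution being described (an assumed simplification for illustration, not the plugin's actual code): every field reference in the configured index string is collapsed to an asterisk to derive the template pattern.

def wildcard_pattern(index_setting)
  # Replace every %{...} reference with '*' so the configured index
  # name becomes a wildcard template pattern.
  index_setting.gsub(/%\{[^}]+\}/, '*')
end

puts wildcard_pattern('logstash-%{+YYYY.MM.dd}')   # => logstash-*
puts wildcard_pattern('%{type}-%{+YYYY.MM.dd}')    # => *-*

Note how the second case collapses to "*-*", which is the surprising pattern reported in this issue.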
Ah, now I remember.
The idea behind this change was to let people re-use the default logstash template for their own index (not necessarily logstash-*). So the change was made to parse the provided index, turn all fieldrefs into asterisks, and overload the template with that template pattern.
Honestly, it is unclear to me what the full rationale was. I do not think the general use case of fieldrefs in indices was taken into full consideration.
here is the original ticket: https://github.com/elastic/logstash/issues/1901
I made this fix for when a user is intentionally overriding and managing the template, so that the pattern in the template file is used as expected.
I am still figuring out which param (:manage_template or :template_overwrite) is best to use as the trigger signal, but I hope you get the gist of what is going on now!
Please let me know what you think!
Hi Tal, I think that I didn't explain myself clearly. I'll try again. When using the ES output without template management, one can use 'index => "%{event_field_name}-%{date_pattern}"' to send each event to its designated index in ES. The change suggested in the gist will have the same effect as the current behaviour (excluding the override flag).
When I supply a template for logstash to use, I already configure the 'template' field, see here.
The issue that caused me to open this ticket is that I would expect the elasticsearch output plugin to substitute '%{field_name}' in all configuration options. Why is it different in the template management scenario?
Output configuration example:
elasticsearch {
host => [ "127.0.0.1" ]
protocol => "transport"
index => "%{type}-%{+YYYY.MM.dd}"
manage_template => true
template => "/etc/logstash/templates/%{type}.json"
template_name => "%{type}"
}
Let's say an event of type 'nginx' comes in; the config would resolve to:
elasticsearch {
host => [ "127.0.0.1" ]
protocol => "transport"
index => "nginx-2015.04.21"
manage_template => true
template => "/etc/logstash/templates/nginx.json"
template_name => "nginx"
}
And so on for all field substitution options. Regarding the date pattern, that one should be replaced with a wildcard; the configuration can identify this kind of reference because there is no field named '+YYYY.MM.dd' in the Logstash event object.
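As a sketch of the distinction being drawn here (assumed logic, not the plugin's implementation): date-format references start with '+' and have no corresponding event field, so only those would collapse to a wildcard, while plain field references would keep their resolved value.

def template_pattern(index_setting, event)
  # Resolve plain field references from the event, but collapse %{+...}
  # date references to '*' since no event field named '+YYYY.MM.dd' exists.
  index_setting.gsub(/%\{([^}]+)\}/) do
    ref = Regexp.last_match(1)
    ref.start_with?('+') ? '*' : event[ref].to_s
  end
end

puts template_pattern('%{type}-%{+YYYY.MM.dd}', { 'type' => 'nginx' })   # => nginx-*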
I hope this is clearer now.
Thanks, Yarden
I am starting to feel like the previous "fix" for this leads to implicit behavior that is difficult to reason about. So, I am fine with going ahead and reverting the change: https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/115. Now the 'template' field in the template mapping, which declares the pattern, will no longer be overridden in that way.
Hi Tal, IMHO this solution is what we need for now, as it gives more control to the user and a consistent outcome.
What do you think about letting ALL plugin configuration parameters resolve dynamic fields ('%{field_name}')?
Not all plugin configurations apply to specific events, so the event field-referencing does not necessarily always apply. When it does, those config options tend to support the dynamic field referencing.
Got it. So the solution you mentioned (that the template's 'template' field won't be overridden) will go into the next version? When?
@ayashjorden @talevy and I have been discussing this. The problem is that even if we fixed the sprintf string formatting issue, you wouldn't be able to take advantage of the feature you're trying to use.
The reason is that templates are handled once, and only once, right when Logstash fires up, parses the output block, and reads the configuration. Because of this, there is no event data to populate any of your sprintf-formatted strings, which would leave an empty field or, worse, an index template that would match %{type}-*, which matches nothing, or could cause Elasticsearch errors. If we patched Logstash to push a template change with each potentially new document type, which is what dynamic templating would require, we'd slow the pipeline dramatically.
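A rough sketch of why this is the case, loosely modeled on how output plugins separate setup from per-event work (the class and method bodies below are assumptions for illustration, not the real plugin code): the template upload happens once at register time, before any events exist, while index names are resolved per event.

class SketchElasticsearchOutput
  def initialize(index:, template:, template_name:)
    @index = index
    @template = template
    @template_name = template_name
  end

  # Called once at startup: no event exists yet, so a %{field} reference
  # in @template or @template_name has nothing to resolve against.
  def register
    upload_template(@template_name, File.read(@template)) if File.exist?(@template)
  end

  # Called once per event: only here can %{type} and friends be resolved.
  def receive(event)
    index_name = event['type']   # stand-in for per-event index resolution
    puts "would index event into #{index_name}"
  end

  private

  def upload_template(name, body)
    # Stand-in for a PUT to /_template/<name>.
    puts "would PUT template '#{name}' (#{body.bytesize} bytes)"
  end
end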
With Logstash 2.0, we're trying to have an API-based configuration. Dynamic mapping and templating will be much easier to attempt when this happens. Until then, templates are going to be very manual creatures.
As an aside, your idea of multiple indices grouped by type seems logical and simple from the outside, but can have far-reaching and painful consequences due to the increase in shard count per node. Each shard that Elasticsearch has to manage comes at a cost to the index cache portion of the allocated heap space. Active shards will ask for 250M of that heap space, and inactive shards (those not indexed to for over 30 minutes) will ask for 4M each. The default 5+1 index configuration will put 10 active shards across your nodes for each index (I did note that you're not using replicas). For a single node with only 1 index and no replicas, that's 1.25G, which would require that your heap was 16G (resulting in 1.6G of index cache, the default being 10% of the allocated heap). What happens when you have 3, 4, or 5 active indices per day?
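A small worked-numbers sketch of the arithmetic above (the 250M/4M per-shard figures and the 10% index-cache default are taken from the comment itself; the index counts are assumptions for illustration):

ACTIVE_SHARD_MB   = 250   # per active shard, per the figures above
INACTIVE_SHARD_MB = 4     # per shard idle for more than 30 minutes
INDEX_CACHE_RATIO = 0.10  # default share of heap used as index cache

def required_heap_gb(active_shards, inactive_shards = 0)
  cache_mb = active_shards * ACTIVE_SHARD_MB + inactive_shards * INACTIVE_SHARD_MB
  (cache_mb / 1024.0) / INDEX_CACHE_RATIO
end

# One 5-shard daily index, no replicas: 5 * 250M = 1.25G of cache,
# i.e. roughly a 12-13G heap at the 10% default (the comment rounds up to 16G).
puts required_heap_gb(5).round(1)    # => 12.2
# Five active 5-shard daily indices:
puts required_heap_gb(25).round(1)   # => 61.0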
Elasticsearch tries to be smart, though. It will begin squeezing the amount of memory each active and inactive shard gets, if it can't get the full 250M or 4M. Once it starts doing so, however, the indexing performance will begin to decline. I've seen this amount compress down to 47M for active shards, and <1M for inactive shards. Once you hit a certain point in this squeezing, though, Elasticsearch will silently stop indexing because it no longer has enough index cache memory left. Those 47M and <1M numbers are dangerously close to the point of "no more indexing."
So, what happens in the end is that you wind up with more indices, but less retention overall because you have to delete them to keep from overloading your nodes. It's either that, or you add a lot more new nodes to spread the load around. Using multiple indices to segment log data by type is sometimes used when different retention levels are desired, or perhaps you are using Shield and are limiting access to some indices. These approaches always come with this caveat. If you do not need these features, it is advisable to not multiply your index and shard counts by separating them by logical types. Elasticsearch really doesn't care if you mix log types within indices, and you can have a single mapping that captures all of those types in one document, rather than multiple.
@ayashjorden I can publish a new version after https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/115 gets merged.
Then you will just have to run bin/plugin update logstash-output-elasticsearch to fetch it.
Thanks @talevy for the patience and effort.
@untergeek, your explanation is crystal clear and I'll share it with my team. thanks!
@ayashjorden you can try the update right now; the new version with this reversion has been published to RubyGems as v0.2.4.
I'll do that first thing in the morning.
@untergeek, regarding ES not really caring about multiple types in the same index: I've been searching for information about shard size and number of documents. If I combine my daily indices into one, this will result in 90 million+ documents per index (5 shards). Is that OK? Any other topology ideas?
Thanks, Yarden
@talevy, I'm getting an undefined method 'load_runtime_jars!' error after upgrading to the latest elasticsearch output.
Update process:
./logstash/bin/plugin update logstash-output-elasticsearch
Any idea?
@ayashjorden are you upgrading from within Logstash 1.5rc2, or the 1.5 release?
@talevy 1.5rc2
There were some breaking changes to the core between RC2 and GA; that explains the incompatibility.
Thanks, will check.
Can anybody here tell me what grok patterns you used for Postfix?
@hacktvist see https://github.com/whyscream/postfix-grok-patterns
@ayashjorden @talevy am I correct in my reading that this issue is resolved? Should we close this one down and move any further discussion to https://discuss.elastic.co/c/logstash ?
@andrewvc yeah, I agree.
Hi, I'm trying to let Logstash manage templates, with a template .json file provided by me. Elasticsearch version is 1.4.4. LS output config:
Template file example:
When I put the template into Elasticsearch manually, the template API returns the 'template' field value as "postfix-*". On the other hand, when I let Logstash push the template to Elasticsearch, the 'template' field value is "*-*" (the asterisks were mangled by comment formatting in the original report).
All newly created indices contain ALL type mappings instead of only the required type mapping. Searching the Logstash groups and IRC channel didn't reveal anything.
Example for 'curl -XGET 127.0.0.1:9200/_template' :
Example for 'curl -XGET 127.0.0.1:9200/postfix-2015.03.30/_mapping' :
I'd appreciate your advice.
Thanks, Yarden