Open duylong opened 9 years ago
Precision is not kept because Elasticsearch can currently only resolve millisecond time. A way to increase this is being developed and will likely be available starting with Elasticsearch 2.0. Logstash will allow greater precision sometime after that.
Logstash does not necessarily use the Elasticsearch output. Couldn't we use µs by default and force ms in the Elasticsearch output?
@untergeek ES 2.0 has been released last week, but I'm having a hard time figuring out if it supports sub millisecond precision. Do you happen to know if it does?
Elasticsearch 2.0 has the same date/time precision as 1.x (millisecond) for date-related queries and the date type. If you need higher precision, you can use a long field in Elasticsearch and store whatever value you wish in that 64-bit value.
Thanks for confirming it.
I ended up converting the ISO8601 timestamp sent by rsyslog into a microseconds-since-epoch long value:
```
ruby {
  code => "
    t = DateTime.parse(event['syslog5424_ts'])
    micros = t.strftime('%s%6N')
    event['micros'] = micros.to_i
  "
}
```
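The conversion inside that filter can be checked on its own. A minimal sketch, assuming a sample timestamp in place of the real `syslog5424_ts` field (`%s` is the epoch-seconds directive, `%6N` the six-digit fractional part):

```ruby
require 'date'

# Parse an ISO8601 timestamp and convert it to microseconds since the epoch,
# exactly as the ruby filter above does with the event field.
ts = '2015-11-05T12:34:56.789012Z'  # sample value standing in for syslog5424_ts
t = DateTime.parse(ts)
micros = t.strftime('%s%6N').to_i
puts micros  # => 1446726896789012
```

Concatenating `%s` and `%6N` works because `%6N` is zero-padded, so the fractional digits always occupy the last six places of the resulting integer.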
Hi,
In the method `LogStash::Timestamp.to_iso8601`, it would be interesting to add `ISO8601_PRECISION` as a parameter. Ruby's built-in `iso8601` method already supports a precision argument.
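For illustration, `Time#iso8601` (from the `time` standard library) takes the number of fractional-second digits to emit, so a precision parameter could simply be forwarded to it:

```ruby
require 'time'

# Time#iso8601 accepts the number of fractional-second digits to include.
puts Time.at(0).utc.iso8601(6)  # => "1970-01-01T00:00:00.000000Z"
puts Time.at(0).utc.iso8601(3)  # => "1970-01-01T00:00:00.000Z"
```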
Best regards,