Open vegardw opened 6 years ago
Hi @vegardw,
After investigating this issue, I think we should implement a more generic time parsing routine that handles the different string representations of time, as well as numerical representations (unix_ms or unix_nanos), converting them to unix_seconds (the format Splunk expects).
We could also optionally let users pass a format hint to the ess command for parsing.
The Python dateparser module comes to mind here, as it handles lots of the string representations out of the box.
What do you think?
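A minimal sketch of the numeric part of that idea, normalizing unix_ms or unix_nanos values to unix seconds by guessing the unit from the magnitude (the function name and thresholds are illustrative assumptions, not the ess command's actual behavior):

```python
def to_unix_seconds(value):
    """Normalize a numeric epoch timestamp to float unix seconds.

    Guesses the unit (s, ms, us, ns) from the order of magnitude,
    which is unambiguous for dates in the last few decades.
    """
    value = float(value)
    if value > 1e17:    # nanoseconds since epoch
        return value / 1e9
    if value > 1e14:    # microseconds since epoch
        return value / 1e6
    if value > 1e11:    # milliseconds since epoch
        return value / 1e3
    return value        # already seconds

to_unix_seconds(1535439323471)  # ms input -> 1535439323.471
```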
Hi @brunotm,
The pull request was what I used for a specific use case I had, but I agree. A more generic version would be better.
I can take a look at how the dateparser module could be used for a more generic solution, and see if I can come up with a proposal.
One of the dependencies of dateparser is the regex module (https://pypi.org/project/regex/). That module isn't pure Python: the regex engine is written in C and compiled to a .so/.dll etc. on each platform. How can this be handled for elasticsplunk, where all non-standard Python modules are included?
@vegardw you're right, we cannot include it. However, there is also https://github.com/dateutil/dateutil, which depends only on the six package.
Thank you for tracking this!
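For reference, a sketch of how dateutil could cover the string-parsing side and feed into the unix_seconds conversion (assuming dateutil is bundled; the helper name is illustrative):

```python
import calendar

from dateutil import parser  # pure-Python, depends only on six


def parse_to_epoch(ts):
    """Parse a timestamp string and return float unix seconds."""
    dt = parser.parse(ts)  # handles ISO 8601 and many other formats
    # timegm() interprets the struct_time as UTC, unlike time.mktime()
    seconds = calendar.timegm(dt.utctimetuple())
    return seconds + dt.microsecond / 1e6

parse_to_epoch('2018-08-28T06:55:23.471Z')  # -> 1535439323.471
```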
When elasticsearch returns timestamp fields like `@timestamp` from filebeats/logstash etc., they come back as strings formatted like '2018-08-28T06:55:23.471Z' (UTC time). These strings are displayed OK by elasticsearch, but can't be used by splunk commands like `timechart`, which need the timestamp in unix epoch time format. This optional parameter converts the timestamp to that format.
Uses `calendar.timegm()`, since it expects the time as UTC, while `time.mktime()` expects local time. The microseconds part is concatenated back on when returning, so precision isn't lost (`calendar.timegm()` returns an integer).
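A rough sketch of the conversion described above, under the assumption that the input always matches the filebeats/logstash format shown (the helper name is illustrative):

```python
import calendar
import time


def iso_to_epoch(ts):
    """Convert e.g. '2018-08-28T06:55:23.471Z' to a unix epoch string."""
    whole, frac = ts.rstrip('Z').split('.')
    # timegm() treats the struct_time as UTC; mktime() would assume local time
    seconds = calendar.timegm(time.strptime(whole, '%Y-%m-%dT%H:%M:%S'))
    # timegm() returns an integer, so concatenate the fractional part
    # back on to avoid losing precision
    return '%d.%s' % (seconds, frac)

iso_to_epoch('2018-08-28T06:55:23.471Z')  # -> '1535439323.471'
```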