garyhodgson opened this issue 7 years ago
I just noticed this bug as well. It is quite pronounced for us because Canberra is UTC+11. I came in to work this morning (1st of Nov) and couldn't find any errors to explain why all of the analytics systems showed no data. It turned out that, so far this morning, all of the metrics are still recorded with a @timestamp of 31st of Oct in UTC, yet the reporter is writing them into the 2018-11 index, because it chooses the index name based on local time instead of UTC.
The 'bug' is that the reporter should align the index names with the timezone of the timestamps it writes, instead of putting October timestamps into the November index.
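A minimal sketch of the mismatch (the monthly index pattern, the concrete instant, and the Australia/Canberra zone are assumptions chosen to mirror the example above):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class CanberraIndexMismatch {
    public static void main(String[] args) throws Exception {
        // 09:00 on 1 Nov in Canberra (UTC+11) is still 22:00 on 31 Oct in UTC.
        SimpleDateFormat utc = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm'Z'");
        utc.setTimeZone(TimeZone.getTimeZone("UTC"));
        Date now = utc.parse("2018-10-31T22:00Z");

        // How the reporter picks the index: SimpleDateFormat in the local zone.
        SimpleDateFormat local = new SimpleDateFormat("yyyy-MM");
        local.setTimeZone(TimeZone.getTimeZone("Australia/Canberra"));
        System.out.println("index chosen:      " + local.format(now)); // 2018-11

        // The month the @timestamp actually belongs to in UTC.
        SimpleDateFormat utcMonth = new SimpleDateFormat("yyyy-MM");
        utcMonth.setTimeZone(TimeZone.getTimeZone("UTC"));
        System.out.println("UTC @timestamp is: " + utcMonth.format(now)); // 2018-10
    }
}
```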
Hi,
Because the reporter assumes that the date used in the index name is the local date, rather than UTC (which Elasticsearch assumes), our metrics end up split across two indices (we store metrics in daily indices and are in the CET (GMT+1) timezone).
From the code I can see that the reporter derives the index name from a Date object, and the SimpleDateFormat it uses defaults to the local timezone.
This means that at midnight on Jan 25 CET the reporter sends metrics to a new Jan 25 index, while the metrics themselves are still timestamped Jan 24 in UTC.
Setting the timezone of the SimpleDateFormat resolves the problem. The following snippet hopefully helps to explain.
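Something along these lines (the daily index pattern, the `metrics-` prefix, and the concrete date are illustrative assumptions, not the reporter's actual configuration):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class IndexDateDemo {
    public static void main(String[] args) throws Exception {
        // Just after midnight on Jan 25 CET; in UTC it is still 23:05 on Jan 24.
        SimpleDateFormat utcParser = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm'Z'");
        utcParser.setTimeZone(TimeZone.getTimeZone("UTC"));
        Date reportTime = utcParser.parse("2018-01-24T23:05Z");

        // Current behaviour: SimpleDateFormat uses the JVM's local timezone (CET here),
        // so the reporter picks the Jan 25 index.
        SimpleDateFormat localFormat = new SimpleDateFormat("yyyy-MM-dd");
        localFormat.setTimeZone(TimeZone.getTimeZone("CET"));
        System.out.println("metrics-" + localFormat.format(reportTime)); // metrics-2018-01-25

        // With the timezone pinned to UTC, the index date matches the UTC @timestamp.
        SimpleDateFormat utcFormat = new SimpleDateFormat("yyyy-MM-dd");
        utcFormat.setTimeZone(TimeZone.getTimeZone("UTC"));
        System.out.println("metrics-" + utcFormat.format(reportTime)); // metrics-2018-01-24
    }
}
```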
To clarify why this is a problem for us (besides the ES indices being UTC-oriented): we run an aggregator over each index and can then simply delete the entire index. As it stands, an hour's worth of values from the next day is missing from each run and gets bundled into the subsequent one. Furthermore, the reporter's logic does not match how Logstash allocates documents to indices, i.e. 00:00 to 23:59 UTC in one index.
Changing the reporter's behaviour now might not be desirable, but adding an option to choose between UTC and local index dates would be very useful; at the very least, mentioning the current behaviour in the documentation might help, in case it trips someone else up.
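A rough sketch of what such an option could look like inside the reporter; the class name and the utcIndexDates option are made up purely to illustrate the proposal, not the reporter's actual API:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

// Hypothetical helper: builds the index name from a configurable pattern,
// with an opt-in flag to use UTC instead of the JVM's local timezone.
public class IndexNameBuilder {
    private final String indexPrefix;
    private final String indexDatePattern;
    private final boolean utcIndexDates; // proposed option; false keeps the current behaviour

    public IndexNameBuilder(String indexPrefix, String indexDatePattern, boolean utcIndexDates) {
        this.indexPrefix = indexPrefix;
        this.indexDatePattern = indexDatePattern;
        this.utcIndexDates = utcIndexDates;
    }

    public String indexNameFor(Date timestamp) {
        SimpleDateFormat format = new SimpleDateFormat(indexDatePattern);
        if (utcIndexDates) {
            format.setTimeZone(TimeZone.getTimeZone("UTC"));
        }
        return indexPrefix + "-" + format.format(timestamp);
    }
}
```

With utcIndexDates enabled, documents timestamped 23:05 UTC on Jan 24 would land in the Jan 24 index, matching the 00:00 to 23:59 UTC convention Logstash uses.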
I'll happily create a Pull Request if you wish.