spingel closed this issue 5 years ago.
Hello @spingel, by default Fluentd adds an attribute called @timestamp
that the Datadog platform uses to set the timestamp on the logs.
If you wish to use a different attribute, there are a couple of things you can do, all of which are explained in this article.
Could you have a look and let me know if that works for you?
Thanks
Hello,
As I did not hear back from you, I'll go ahead and close this issue, but feel free to send any questions you might have to support@datadoghq.com.
Thanks
Hello,
I have the same issue, and it's easy to reproduce. Here is my configuration:
```
<source>
  @type http
  port 8080
  <parse>
    @type json
    time_key timestamps
    time_format "%iso8601"
  </parse>
</source>

<match **>
  @type copy
  <store>
    @type stdout
  </store>
  <store>
    @type datadog
    api_key xxxxx
    dd_tags 'test'
  </store>
</match>
```
When I send events offset by 30 minutes with the following curl command:

```
curl -d "{\"message\":\"Hello World\",\"timestamps\":\"$(date -d '30 minute ago' --iso-8601=second)\"}" http://localhost:8080
```
In the stdout the times are correct (in UTC), but in Datadog the times are wrong:
@spingel's workaround works perfectly.
Hi, same issue here. I independently did the same test to reproduce the issue, and then I found that @Erouan50 did the same.
@NBParis, you're not correct that fluentd adds a `@timestamp` field by default. The line of code mentioned receives the `tag`, the `time` from fluentd, and the `record`, which is a map; the record does not contain `@timestamp`.
With the mentioned workaround we can add the timestamp ourselves, as @spingel suggested, but why require this extra step?
As part of our Fluent Bit configuration we parse the timestamp from log records and then forward them via fluentd with the Datadog fluentd plugin.
We noticed that the timestamps recorded in Datadog reflect the time of aggregation rather than the time parsed from the source record.
We would like to use the source timestamp instead.
Looking at the code, the time is discarded when log records are processed: https://github.com/DataDog/fluent-plugin-datadog/blob/dd665541f1ad1d1572d8cda73a4135c8a9ab9885/lib/fluent/plugin/out_datadog.rb#L89
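To illustrate what the requested default behavior could look like, here is a minimal Ruby sketch (not the plugin's actual code) of merging the fluentd event time into each record as an ISO-8601 `@timestamp` attribute before shipping; the helper name `enrich_record` is hypothetical:

```ruby
require "time"

# Hypothetical helper: copy the fluentd event time (a Unix timestamp)
# into the record under "@timestamp", unless the record already has one.
def enrich_record(time, record)
  return record if record.key?("@timestamp")
  record.merge("@timestamp" => Time.at(time).utc.iso8601)
end

record = enrich_record(1_577_836_800, { "message" => "Hello World" })
puts record["@timestamp"]  # => "2020-01-01T00:00:00Z"
```

An existing `@timestamp` is left untouched so that records already enriched upstream (e.g. by Fluent Bit) keep their original value.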
**Work-around**
We are using the following fluentd configuration to explicitly add the timestamp to the log record so it can be parsed in the Datadog log pipeline, but it would be helpful to have this as the default behavior:
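As a sketch of one way to do this (an assumed configuration, not necessarily the original poster's exact one), fluentd's built-in `record_transformer` filter with `enable_ruby` can copy the event time into a `@timestamp` attribute before the Datadog output plugin runs:

```
<filter **>
  @type record_transformer
  enable_ruby true
  <record>
    # Copy the fluentd event time into the record as ISO-8601 UTC
    @timestamp ${time.utc.strftime('%Y-%m-%dT%H:%M:%SZ')}
  </record>
</filter>
```

The filter must appear before the `<match>` block that contains the datadog store so the attribute is present when the record is shipped.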