Closed: lneva-fastly closed this issue 2 years ago.
Hi,
Sorry no one has gotten back to you on this.
Does the latest version of Telegraf + Splunk still cause this issue? I think we could add this back in, but I would want a reference to some documentation saying it is required, in case we break other existing users.
Thanks
We haven't tested it recently because we've been pinned to the version that works (1.15.3). I understand your concern about breaking things for other users, and your desire for concrete documentation. The thing is, we don't actually have any official documentation for the advice to drop the `"event": "metric"` field in the first place -- just a note in a support ticket. My testing documented above seems to indicate that that advice is not always correct.
We seem to be in a pretty tough place here. Maybe adding an option to include this field is the way to go?
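For illustration only, a sketch of what such an option might look like on the HTTP output; the flag name below is hypothetical, not an existing Telegraf setting:

```toml
[[outputs.http]]
  url = "https://splunk.example.com:8088/services/collector"
  data_format = "splunkmetric"
  ## Hypothetical opt-in flag (illustrative name, not a real Telegraf option):
  ## when true, re-adds the "event": "metric" field to each serialized payload
  splunkmetric_include_event_field = true
```

That way existing users would keep the current behavior unless they opt in.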
We tested this again with the latest Telegraf version (1.22.0) and Splunk version 8.2.3. The issue still persists. To add to what Lex mentioned above, we are in a very difficult situation here. We can no longer pin to 1.15.3, since it is vulnerable to CVE-2020-26892. Upgrading is absolutely essential at this point, but that breaks the forwarders.
@powersj, here's a link to a Splunk document showing the expected format of a Telegraf metric sent to Splunk: https://www.splunk.com/en_us/blog/it/splunk-metrics-via-telegraf.html
This setup will result in metrics that look like:
    {
      "time": 1529708430,
      "event": "metric",
      "host": "patas-mbp",
      "fields": {
        "_value": 0.6,
        "cpu": "cpu0",
        "dc": "mobile",
        "metric_name": "cpu.usage_user",
        "user": "ronnocol"
      }
    }
@fastly-ffej thanks for the link! I think this is worth reverting based on that. Within 20-30 minutes of my posting this message, PR #11237 should have artifacts attached to it by the telegraf-tiger bot that you can try. Would one of you please give those a shot and ensure the revert works?
Thanks!
@powersj, I just tested the new binary on our systems and it worked like a champ!
Thanks for the swift response!
This should go out in v1.23.0 on or around June 13. It will be available in nightlies starting tomorrow.
Thanks!
Relevant telegraf.conf:
System info:
- Telegraf 1.17.0
- Splunk 8.0.2
- Heavy forwarder
- Indexer cluster
Steps to reproduce:
Use the splunkmetric data format + http output.
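A minimal configuration along these lines is enough to reproduce it (the endpoint URL and HEC token below are placeholders, not our real values):

```toml
# Placeholder endpoint and token; any input plugin will do for reproduction
[[inputs.cpu]]
  percpu = true
  totalcpu = true

[[outputs.http]]
  url = "https://hec.example.com:8088/services/collector"
  data_format = "splunkmetric"
  ## Route time/host into HEC-compatible fields when sending to a collector endpoint
  splunkmetric_hec_routing = true
  [outputs.http.headers]
    Authorization = "Splunk 00000000-0000-0000-0000-000000000000"
    Content-Type = "application/json"
```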
Expected behavior:
Splunk should receive the metrics via HEC with no problems. Worked fine on 1.15.3.
Actual behavior:
The Splunk forwarder hates the events produced. Metrics show up in Splunk, but soon the forwarder gives errors like this:
Soon after that, TcpOutputProc locks up and is unable to send anything to the indexers at all. Worse yet, because we have Splunk's `persistentQueueSize` option set on our HEC input, the problematic events stick around through a restart of the forwarder, even if new problematic events are not arriving. We had to wipe the forwarder out entirely and rebuild it to recover.

Additional info:
We carefully pared down variables until we arrived at the problem: the removal of the `"event": "metric"` field in #8039. Starting with a fresh, working forwarder, we can cause the above problems by sending events without the "event" field to the forwarder over HEC using curl (a sketch follows below). Sending the exact same events with `"event": "metric"` does not cause this problem.

I'm honestly not at all clear on why Splunk hates these events. I also don't have a good explanation for why Splunk Support said that the "event" field is unnecessary in #8039. Perhaps there's something else in the OP's configuration that obviates the need for the "event" field?
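A sketch of that curl repro; the forwarder host, HEC token, and metric values below are placeholders:

```sh
# Placeholders throughout: forwarder host, HEC token, and metric values.
# Without the "event" field -- this is the payload that wedges TcpOutputProc:
curl -k "https://forwarder.example.com:8088/services/collector" \
  -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
  -d '{"time": 1610000000, "host": "test-host", "fields": {"metric_name": "cpu.usage_user", "_value": 0.6, "cpu": "cpu0"}}'

# Identical payload plus "event": "metric" -- this one causes no trouble:
curl -k "https://forwarder.example.com:8088/services/collector" \
  -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
  -d '{"time": 1610000000, "event": "metric", "host": "test-host", "fields": {"metric_name": "cpu.usage_user", "_value": 0.6, "cpu": "cpu0"}}'
```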
For now, we've reverted to 1.15.3, pending a fix to Telegraf. Perhaps the "event" field should be optional, defaulting to present?