influxdata / influxdb

Scalable datastore for metrics, events, and real-time analytics
https://influxdata.com
Apache License 2.0

[panic:runtime error: index out of range] Using Telegraf to input data #17409

Open slopee opened 4 years ago

slopee commented 4 years ago

Hello,

I am using docker-compose with InfluxDB 1.7.10 and the latest Telegraf. I have made an application that sends data to a RabbitMQ queue, which Telegraf fetches and forwards to InfluxDB. It appears to work fine, but after a few inserts I get a [panic:runtime error: index out of range] when I run SHOW SERIES on the database.

If I restart InfluxDB or run influx_inspect verify-seriesfile I get no error messages, and after doing either of those and running SHOW SERIES again (without adding or removing any data) the query works.

What is happening? Is there a way I can get more logs, or any idea of why this is happening?

This is the log when inserting and when querying SHOW SERIES. I should add that frameIndex, platform, deviceId, sessionId, and msts are tag keys.

influxdb      | ts=2020-03-24T22:45:30.006711Z lvl=info msg="Write body received by handler" log_id=0LkgI_zG000 service=httpd body="time,deviceId=random-test,frameIndex=125,host=7890980642ec,platform=WIN-2,sessionId=15:44:57\\ -\\ 03/24/2020 frameDurationMS=19.575100000000003,msts=1585033219585,relativeFrameTime=19585.6529,unityFrameTime=3687153246071 1585033219585000000\ntime,deviceId=random-test,frameIndex=124,host=7890980642ec,platform=WIN-2,sessionId=15:44:57\\ -\\ 03/24/2020 msts=1585033219562,unityFrameTime=3687153012967,frameDurationMS=23.3104,relativeFrameTime=19562.3425 1585033219562000000\ntime,deviceId=random-test,frameIndex=123,host=7890980642ec,platform=WIN-2,sessionId=15:44:57\\ -\\ 03/24/2020 msts=1585033219493,unityFrameTime=3687152327308,frameDurationMS=68.5659,relativeFrameTime=19493.7766 1585033219493000000\ntime,deviceId=random-test,frameIndex=122,host=7890980642ec,platform=WIN-2,sessionId=15:44:57\\ -\\ 03/24/2020 frameDurationMS=22.1025,msts=1585033219471,unityFrameTime=3687152106283,relativeFrameTime=19471.6741 1585033219471000000\ntime,deviceId=random-test,frameIndex=121,host=7890980642ec,platform=WIN-2,sessionId=15:44:57\\ -\\ 03/24/2020 frameDurationMS=56.154700000000005,relativeFrameTime=19415.5194,unityFrameTime=3687151544736,msts=1585033219415 1585033219415000000\ntime,deviceId=random-test,frameIndex=120,host=7890980642ec,platform=WIN-2,sessionId=15:44:57\\ -\\ 03/24/2020 unityFrameTime=3687151235015,relativeFrameTime=19384.547300000002,frameDurationMS=30.9721,msts=1585033219384 1585033219384000000\ntime,deviceId=random-test,frameIndex=119,host=7890980642ec,platform=WIN-2,sessionId=15:44:57\\ -\\ 03/24/2020 unityFrameTime=3687150643034,relativeFrameTime=19325.3492,frameDurationMS=59.1981,msts=1585033219325 1585033219325000000\ntime,deviceId=random-test,frameIndex=118,host=7890980642ec,platform=WIN-2,sessionId=15:44:57\\ -\\ 03/24/2020 msts=1585033218970,unityFrameTime=3687147097220,frameDurationMS=354.58140000000003,relativeFrameTime=18970.7678 1585033218970000000\ntime,deviceId=random-test,frameIndex=117,host=7890980642ec,platform=WIN-2,sessionId=15:44:57\\ -\\ 03/24/2020 frameDurationMS=329.2543,msts=1585033218641,unityFrameTime=3687143804677,relativeFrameTime=18641.5135 1585033218641000000\ntime,deviceId=random-test,frameIndex=116,host=7890980642ec,platform=WIN-2,sessionId=15:44:57\\ -\\ 03/24/2020 frameDurationMS=44.1638,unityFrameTime=3687143363039,relativeFrameTime=18597.3497,msts=1585033218597 1585033218597000000\ntime,deviceId=random-test,frameIndex=115,host=7890980642ec,platform=WIN-2,sessionId=15:44:57\\ -\\ 03/24/2020 unityFrameTime=3687140313404,msts=1585033218292,relativeFrameTime=18292.3862,frameDurationMS=304.9635 1585033218292000000\ntime,deviceId=random-test,frameIndex=114,host=7890980642ec,platform=WIN-2,sessionId=15:44:57\\ -\\ 03/24/2020 msts=1585033218218,unityFrameTime=3687139574124,frameDurationMS=73.928,relativeFrameTime=18218.4582 1585033218218000000\ntime,deviceId=random-test,frameIndex=113,host=7890980642ec,platform=WIN-2,sessionId=15:44:57\\ -\\ 03/24/2020 relativeFrameTime=18132.6195,msts=1585033218132,unityFrameTime=3687138715737,frameDurationMS=85.8387 1585033218132000000\ntime,deviceId=random-test,frameIndex=112,host=7890980642ec,platform=WIN-2,sessionId=15:44:57\\ -\\ 03/24/2020 relativeFrameTime=18099.5962,msts=1585033218099,frameDurationMS=33.023300000000006,unityFrameTime=3687138385504 1585033218099000000\ntime,deviceId=random-test,frameIndex=111,host=7890980642ec,platform=WIN-2,sessionId=15:44:57\\ -\\ 03/24/2020 
unityFrameTime=3687138185986,relativeFrameTime=18079.6444,msts=1585033218079,frameDurationMS=19.9518 1585033218079000000\ntime,deviceId=random-test,frameIndex=110,host=7890980642ec,platform=WIN-2,sessionId=15:44:57\\ -\\ 03/24/2020 relativeFrameTime=17748.0416,msts=1585033217748,unityFrameTime=3687134869958,frameDurationMS=331.6028 1585033217748000000\ntime,deviceId=random-test,frameIndex=109,host=7890980642ec,platform=WIN-2,sessionId=15:44:57\\ -\\ 03/24/2020 unityFrameTime=3687131599410,frameDurationMS=327.0548,msts=1585033217420,relativeFrameTime=17420.986800000002 1585033217420000000\ntime,deviceId=random-test,frameIndex=108,host=7890980642ec,platform=WIN-2,sessionId=15:44:57\\ -\\ 03/24/2020 msts=1585033217082,unityFrameTime=3687128212934,relativeFrameTime=17082.3392,frameDurationMS=338.64760000000007 1585033217082000000\ntime,deviceId=random-test,frameIndex=107,host=7890980642ec,platform=WIN-2,sessionId=15:44:57\\ -\\ 03/24/2020 frameDurationMS=331.1688,relativeFrameTime=16751.1704,msts=1585033216751,unityFrameTime=3687124901246 1585033216751000000\ntime,deviceId=random-test,frameIndex=106,host=7890980642ec,platform=WIN-2,sessionId=15:44:57\\ -\\ 03/24/2020 relativeFrameTime=16380.272500000001,unityFrameTime=3687121192267,msts=1585033216380,frameDurationMS=370.89790000000005 1585033216380000000\ntime,deviceId=random-test,frameIndex=105,host=7890980642ec,platform=WIN-2,sessionId=15:44:57\\ -\\ 03/24/2020 frameDurationMS=111.7133,relativeFrameTime=16268.559200000002,unityFrameTime=3687120075134,msts=1585033216268 1585033216268000000\ntime,deviceId=random-test,frameIndex=104,host=7890980642ec,platform=WIN-2,sessionId=15:44:57\\ -\\ 03/24/2020 msts=1585033216165,frameDurationMS=103.31160000000001,relativeFrameTime=16165.2476,unityFrameTime=3687119042018 1585033216165000000\ntime,deviceId=random-test,frameIndex=103,host=7890980642ec,platform=WIN-2,sessionId=15:44:57\\ -\\ 03/24/2020 msts=1585033216055,relativeFrameTime=16055.6896,unityFrameTime=3687117946438,frameDurationMS=109.558 1585033216055000000\ntime,deviceId=random-test,frameIndex=102,host=7890980642ec,platform=WIN-2,sessionId=15:44:57\\ -\\ 03/24/2020 msts=1585033216002,relativeFrameTime=16002.5467,unityFrameTime=3687117415009,frameDurationMS=53.142900000000004 1585033216002000000\n"
influxdb      | [httpd] 172.22.0.6 - telegraf [24/Mar/2020:22:45:30 +0000] "POST /write?db=hs_performance HTTP/1.1" 204 0 "-" "Telegraf/1.13.4" 29a7dbed-6e21-11ea-8010-0242ac160002 4085
telegraf      | 2020-03-24T22:45:30Z D! [outputs.influxdb] Wrote batch of 24 metrics in 4.6148ms
telegraf      | 2020-03-24T22:45:30Z D! [outputs.influxdb] Buffer fullness: 0 / 10000 metrics
chronograf    | time="2020-03-24T22:45:37Z" level=info msg="Response: OK" component=server method=GET remote_addr="172.22.0.1:41130" response_time="45µs" status=200
telegraf      | 2020-03-24T22:45:40Z D! [outputs.influxdb] Buffer fullness: 0 / 10000 metrics
influxdb      | ts=2020-03-24T22:45:45.346152Z lvl=info msg="Executing query" log_id=0LkgI_zG000 service=query query="SELECT \"key\" FROM hs_performance.autogen._series"
influxdb      | ts=2020-03-24T22:45:45.346683Z lvl=error msg="SHOW SERIES [panic:runtime error: index out of range] goroutine 190 [running]:\nruntime/debug.Stack(0xc00072f9c0, 0x1, 0x1)\n\t/usr/local/go/src/runtime/debug/stack.go:24 +0x9d\ngithub.com/influxdata/influxdb/query.(*Executor).recover(0xc0004b3ad0, 0xc0007c4c00, 0xc00008d4a0)\n\t/go/src/github.com/influxdata/influxdb/query/executor.go:394 +0xc2\npanic(0x1329c80, 0x2db4da0)\n\t/usr/local/go/src/runtime/panic.go:522 +0x1b5\nencoding/binary.bigEndian.Uint16(...)\n\t/usr/local/go/src/encoding/binary/binary.go:100\ngithub.com/influxdata/influxdb/tsdb.ReadSeriesKeyMeasurement(...)\n\t/go/src/github.com/influxdata/influxdb/tsdb/series_file.go:344\ngithub.com/influxdata/influxdb/tsdb.CompareSeriesKeys(0x7fd1b28650f0, 0x1, 0x3fef10, 0x7fd1b1864d77, 0x1, 0x3ff289, 0x1)\n\t/go/src/github.com/influxdata/influxdb/tsdb/series_file.go:416 +0x820\ngithub.com/influxdata/influxdb/tsdb.seriesKeys.Less(...)\n\t/go/src/github.com/influxdata/influxdb/tsdb/series_file.go:493\nsort.medianOfThree(0x214fe40, 0xc000e418a0, 0xfc, 0xdd, 0xbe)\n\t/usr/local/go/src/sort/sort.go:76 +0x49\nsort.doPivot(0x214fe40, 0xc000e418a0, 0x0, 0xfd, 0x314a500, 0x7fd1bbb756d0)\n\t/usr/local/go/src/sort/sort.go:103 +0x642\nsort.quickSort(0x214fe40, 0xc000e418a0, 0x0, 0xfd, 0x10)\n\t/usr/local/go/src/sort/sort.go:190 +0x9a\nsort.Sort(0x214fe40, 0xc000e418a0)\n\t/usr/local/go/src/sort/sort.go:218 +0x79\ngithub.com/influxdata/influxdb/tsdb.(*seriesPointIterator).readSeriesKeys(0xc00000d680, 0xc000513d00, 0x4, 0x8, 0x0, 0x0)\n\t/go/src/github.com/influxdata/influxdb/tsdb/index.go:904 +0x397\ngithub.com/influxdata/influxdb/tsdb.(*seriesPointIterator).Next(0xc00000d680, 0xc000048570, 0xc000048500, 0xc001165e00)\n\t/go/src/github.com/influxdata/influxdb/tsdb/index.go:836 +0x43a\ngithub.com/influxdata/influxdb/query.(*floatInterruptIterator).Next(0xc000e41780, 0x42eaa1, 0x1f72150, 0xc000a674d0)\n\t/go/src/github.com/influxdata/influxdb/query/iterator.gen.go:941 +0x48\ngithub.com/influxdata/influxdb/query.(*floatFastDedupeIterator).Next(0xc000e417a0, 0x1340060, 0x13e0a60, 0xc000388e00)\n\t/go/src/github.com/influxdata/influxdb/query/iterator.go:1302 +0x48\ngithub.com/influxdata/influxdb/query.(*bufFloatIterator).Next(...)\n\t/go/src/github.com/influxdata/influxdb/query/iterator.gen.go:90\ngithub.com/influxdata/influxdb/query.(*bufFloatIterator).peek(0xc000e417e0, 0xc000e416c0, 0x1, 0x1)\n\t/go/src/github.com/influxdata/influxdb/query/iterator.gen.go:65 +0xb9\ngithub.com/influxdata/influxdb/query.(*floatIteratorScanner).Peek(0xc0008a9dc0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc001165dd0)\n\t/go/src/github.com/influxdata/influxdb/query/iterator.gen.go:516 +0x3d\ngithub.com/influxdata/influxdb/query.(*scannerCursor).scan(0xc000302d80, 0xc00033c090, 0x203000, 0x0, 0x10101, 0x0, 0xc000187080, 0x2158700)\n\t/go/src/github.com/influxdata/influxdb/query/cursor.go:241 +0x3a\ngithub.com/influxdata/influxdb/query.(*scannerCursorBase).Scan(0xc000302d90, 0xc0006ff630, 0xc000048500)\n\t/go/src/github.com/influxdata/influxdb/query/cursor.go:175 +0x48\ngithub.com/influxdata/influxdb/query.(*Emitter).Emit(0xc000388e70, 0x1f6da78, 0xc000388e70, 0xc000388e70, 0xc0000450ac)\n\t/go/src/github.com/influxdata/influxdb/query/emitter.go:41 +0x68\ngithub.com/influxdata/influxdb/coordinator.(*StatementExecutor).executeSelectStatement(0xc000389570, 0xc0006b8d00, 0xc0001840c0, 0x0, 0x0)\n\t/go/src/github.com/influxdata/influxdb/coordinator/statement_executor.go:561 
+0x18b\ngithub.com/influxdata/influxdb/coordinator.(*StatementExecutor).ExecuteStatement(0xc000389570, 0x2157c40, 0xc0006b8d00, 0xc0001840c0, 0x1, 0x1)\n\t/go/src/github.com/influxdata/influxdb/coordinator/statement_executor.go:64 +0x38d5\ngithub.com/influxdata/influxdb/query.(*Executor).executeQuery(0xc0004b3ad0, 0xc0007c4c00, 0xc0000450ac, 0xe, 0x0, 0x0, 0x2158700, 0x3166590, 0x2710, 0x0, ...)\n\t/go/src/github.com/influxdata/influxdb/query/executor.go:334 +0x34e\ncreated by github.com/influxdata/influxdb/query.(*Executor).ExecuteQuery\n\t/go/src/github.com/influxdata/influxdb/query/executor.go:236 +0xc9\n" log_id=0LkgI_zG000 service=query
russorat commented 4 years ago

@slopee thanks for the issue. Could you add your docker compose file as well as your system information? Since we don't have other reports of a panic like this, it likely points to something in your environment or setup that is causing problems. That info will help us investigate.

dgnorton commented 4 years ago

Cleaned up the stack trace:

panic:runtime error: index out of range

goroutine 190 [running]:
runtime/debug.Stack(0xc00072f9c0, 0x1, 0x1)
        /usr/local/go/src/runtime/debug/stack.go:24 +0x9d
github.com/influxdata/influxdb/query.(*Executor).recover(0xc0004b3ad0, 0xc0007c4c00, 0xc00008d4a0)
        /go/src/github.com/influxdata/influxdb/query/executor.go:394 +0xc2
panic(0x1329c80, 0x2db4da0)
        /usr/local/go/src/runtime/panic.go:522 +0x1b5
encoding/binary.bigEndian.Uint16(...)
        /usr/local/go/src/encoding/binary/binary.go:100
github.com/influxdata/influxdb/tsdb.ReadSeriesKeyMeasurement(...)
        /go/src/github.com/influxdata/influxdb/tsdb/series_file.go:344
github.com/influxdata/influxdb/tsdb.CompareSeriesKeys(0x7fd1b28650f0, 0x1, 0x3fef10, 0x7fd1b1864d77, 0x1, 0x3ff289, 0x1)
        /go/src/github.com/influxdata/influxdb/tsdb/series_file.go:416 +0x820
github.com/influxdata/influxdb/tsdb.seriesKeys.Less(...)
        /go/src/github.com/influxdata/influxdb/tsdb/series_file.go:493
sort.medianOfThree(0x214fe40, 0xc000e418a0, 0xfc, 0xdd, 0xbe)
        /usr/local/go/src/sort/sort.go:76 +0x49
sort.doPivot(0x214fe40, 0xc000e418a0, 0x0, 0xfd, 0x314a500, 0x7fd1bbb756d0)
        /usr/local/go/src/sort/sort.go:103 +0x642
sort.quickSort(0x214fe40, 0xc000e418a0, 0x0, 0xfd, 0x10)
        /usr/local/go/src/sort/sort.go:190 +0x9a
sort.Sort(0x214fe40, 0xc000e418a0)
        /usr/local/go/src/sort/sort.go:218 +0x79
github.com/influxdata/influxdb/tsdb.(*seriesPointIterator).readSeriesKeys(0xc00000d680, 0xc000513d00, 0x4, 0x8, 0x0, 0x0)
        /go/src/github.com/influxdata/influxdb/tsdb/index.go:904 +0x397
github.com/influxdata/influxdb/tsdb.(*seriesPointIterator).Next(0xc00000d680, 0xc000048570, 0xc000048500, 0xc001165e00)
        /go/src/github.com/influxdata/influxdb/tsdb/index.go:836 +0x43a
github.com/influxdata/influxdb/query.(*floatInterruptIterator).Next(0xc000e41780, 0x42eaa1, 0x1f72150, 0xc000a674d0)
        /go/src/github.com/influxdata/influxdb/query/iterator.gen.go:941 +0x48
github.com/influxdata/influxdb/query.(*floatFastDedupeIterator).Next(0xc000e417a0, 0x1340060, 0x13e0a60, 0xc000388e00)
        /go/src/github.com/influxdata/influxdb/query/iterator.go:1302 +0x48
github.com/influxdata/influxdb/query.(*bufFloatIterator).Next(...)
        /go/src/github.com/influxdata/influxdb/query/iterator.gen.go:90
github.com/influxdata/influxdb/query.(*bufFloatIterator).peek(0xc000e417e0, 0xc000e416c0, 0x1, 0x1)
        /go/src/github.com/influxdata/influxdb/query/iterator.gen.go:65 +0xb9
github.com/influxdata/influxdb/query.(*floatIteratorScanner).Peek(0xc0008a9dc0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc001165dd0)
        /go/src/github.com/influxdata/influxdb/query/iterator.gen.go:516 +0x3d
github.com/influxdata/influxdb/query.(*scannerCursor).scan(0xc000302d80, 0xc00033c090, 0x203000, 0x0, 0x10101, 0x0, 0xc000187080, 0x2158700)
        /go/src/github.com/influxdata/influxdb/query/cursor.go:241 +0x3a
github.com/influxdata/influxdb/query.(*scannerCursorBase).Scan(0xc000302d90, 0xc0006ff630, 0xc000048500)
        /go/src/github.com/influxdata/influxdb/query/cursor.go:175 +0x48
github.com/influxdata/influxdb/query.(*Emitter).Emit(0xc000388e70, 0x1f6da78, 0xc000388e70, 0xc000388e70, 0xc0000450ac)
        /go/src/github.com/influxdata/influxdb/query/emitter.go:41 +0x68
github.com/influxdata/influxdb/coordinator.(*StatementExecutor).executeSelectStatement(0xc000389570, 0xc0006b8d00, 0xc0001840c0, 0x0, 0x0)
        /go/src/github.com/influxdata/influxdb/coordinator/statement_executor.go:561 +0x18b
github.com/influxdata/influxdb/coordinator.(*StatementExecutor).ExecuteStatement(0xc000389570, 0x2157c40, 0xc0006b8d00, 0xc0001840c0, 0x1, 0x1)
        /go/src/github.com/influxdata/influxdb/coordinator/statement_executor.go:64 +0x38d5
github.com/influxdata/influxdb/query.(*Executor).executeQuery(0xc0004b3ad0, 0xc0007c4c00, 0xc0000450ac, 0xe, 0x0, 0x0, 0x2158700, 0x3166590, 0x2710, 0x0, ...)
        /go/src/github.com/influxdata/influxdb/query/executor.go:334 +0x34e
created by github.com/influxdata/influxdb/query.(*Executor).ExecuteQuery
        /go/src/github.com/influxdata/influxdb/query/executor.go:236 +0xc9
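
For context on where this blows up: the panic comes from encoding/binary.bigEndian.Uint16, which tsdb.ReadSeriesKeyMeasurement uses (judging from the trace) to read a 2-byte, big-endian length prefix at the front of a series key while CompareSeriesKeys sorts keys for SHOW SERIES. A minimal, simplified sketch of that failure mode (illustrative only, not the actual tsdb code) shows how a truncated key slice trips exactly this panic:

package main

import (
	"encoding/binary"
	"fmt"
)

// readMeasurementLen mimics the first step of decoding a series key:
// a 2-byte, big-endian length prefix is read from the front of the key.
// If the key slice is truncated (fewer than 2 bytes), Uint16 indexes
// past the end of the slice and panics with "index out of range".
func readMeasurementLen(key []byte) uint16 {
	return binary.BigEndian.Uint16(key) // panics if len(key) < 2
}

func main() {
	good := []byte{0x00, 0x03, 'c', 'p', 'u'} // length prefix 3 followed by "cpu"
	fmt.Println(readMeasurementLen(good))     // 3

	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered:", r) // runtime error: index out of range
		}
	}()
	corrupt := []byte{0x00} // truncated key, e.g. from a damaged series file entry
	readMeasurementLen(corrupt)
}

If that is what is happening here, it would point at a series-file entry whose key bytes are shorter than the index expects, which would also fit the symptom that the query recovers after a restart or an influx_inspect verify-seriesfile run.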
slopee commented 4 years ago

Sure thing! It's pretty simple so far. I've created a repo with the whole docker-compose setup, so cloning it should be all you need :) It can be found here: https://github.com/slopee/influxdb_issue

As for the system, Docker is running on Windows 10 Pro and the influxdb container reports:

root@07d5ef33163d:/# uname -srm
Linux 4.9.184-linuxkit x86_64
slopee commented 4 years ago

Also adding the Docker Desktop version: 2.1.0.4 (39773).

I might have some difficulty providing a script that reproduces the whole flow, but the way it works is:

  1. C++ code emits a message to a RabbitMQ queue in JSON format.
  2. Telegraf's AMQP Consumer input reads the queue and parses the JSON.
  3. Telegraf writes the data to InfluxDB v1.

As for the JSON format, I've attached a few JSON samples of the input Telegraf will be receiving; each line would be one entry on the RabbitMQ queue (a rough sketch of the producer side follows below). json_files.txt
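
To make step 1 above concrete, here is a rough sketch, in Go rather than the real C++ producer, of the kind of JSON message that ends up on the RabbitMQ queue for Telegraf's AMQP consumer to pick up. The field names are inferred from the line protocol in the log above; the exact schema, broker URL, and queue name are assumptions:

package main

import (
	"encoding/json"
	"log"

	"github.com/streadway/amqp" // assumed AMQP client; the real producer is C++
)

// frameSample mirrors the fields visible in the line protocol above.
// The exact JSON schema is an assumption, not taken from the real producer.
type frameSample struct {
	Name              string  `json:"name"`      // measurement name ("time" in the log)
	Timestamp         int64   `json:"timestamp"` // unix milliseconds
	Platform          string  `json:"platform"`
	DeviceID          string  `json:"deviceId"`
	SessionID         string  `json:"sessionId"`
	FrameIndex        int     `json:"frameIndex"`
	Msts              int64   `json:"msts"`
	FrameDurationMS   float64 `json:"frameDurationMS"`
	RelativeFrameTime float64 `json:"relativeFrameTime"`
	UnityFrameTime    int64   `json:"unityFrameTime"`
}

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/") // hypothetical broker URL
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}

	// "telegraf" is a hypothetical queue name for this sketch.
	q, err := ch.QueueDeclare("telegraf", true, false, false, false, nil)
	if err != nil {
		log.Fatal(err)
	}

	body, _ := json.Marshal(frameSample{
		Name: "time", Timestamp: 1585033219585,
		Platform: "WIN-2", DeviceID: "random-test", SessionID: "15:44:57 - 03/24/2020",
		FrameIndex: 125, Msts: 1585033219585,
		FrameDurationMS: 19.5751, RelativeFrameTime: 19585.6529, UnityFrameTime: 3687153246071,
	})

	// Publish one message per frame; Telegraf's amqp_consumer reads them from the queue.
	if err := ch.Publish("", q.Name, false, false, amqp.Publishing{
		ContentType: "application/json",
		Body:        body,
	}); err != nil {
		log.Fatal(err)
	}
}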

slopee commented 4 years ago

Is there anything I can do to add more logs or troubleshoot what might be causing the issue? It seems like the exact same input data triggers the problem in what feels like 50% of cases.

russorat commented 4 years ago

@slopee based on the Telegraf config provided in that repo, this is how your JSON data is being transformed into line protocol:

time,deviceId=albert-test,frameIndex=25,host=rsavage.lan,msts=1585119609368,path=./sample_data.txt,platform=WIN-2,sessionId=10:55:14\ -\ 03/25/2020 frameDurationMS=42.621,relativeFrameTime=9368.9386,unityFrameTime=4377240513470 1585119609368000000

Looks like you've got some tags that will cause very high series cardinality in the database. I recommend making msts, frameIndex, and sessionId fields instead.

Here's the updated Telegraf config for JSON parsing:

  data_format = "json"
  json_name_key = "name"
  tag_keys = ["platform","deviceId"]
  json_string_fields = ["platform","deviceId","sessionId"]
  json_time_key = "timestamp"
  json_time_format = "unix_ms"

Which will produce something like this:

time,deviceId=albert-test,host=rsavage.lan,path=./sample_data.txt,platform=WIN-2 relativeFrameTime=9368.9386,unityFrameTime=4377240513470,msts=1585119609368,sessionId="10:55:14 - 03/25/2020",frameDurationMS=42.621,frameIndex=25 1585119609368000000

This doesn't solve your issue, but it might work around it until we can take a look.

slopee commented 4 years ago

I just tested changing those values and so far it seems to work.

I agree that I was creating a lot of cardinality and that msts, frameIndex, and sessionId were not needed as tags. Thank you for the suggestion!

russorat commented 4 years ago

Great to hear!

dilips86 commented 2 years ago

Hi, I have a Telegraf-based service (in a Docker container in Kubernetes) sending a continuous stream of metrics to Wavefront through the Wavefront proxy.

The Telegraf service crashes often with the error below and stops sending metrics to Wavefront.

We are pulling the latest Telegraf tarball in our service Dockerfile as below:

RUN curl --output telegraf.tar.gz https://dl.influxdata.com/telegraf/releases/telegraf-1.20.4_linux_amd64.tar.gz
RUN tar --strip-components=2 -C / -xvvf telegraf.tar.gz

The Telegraf version is telegraf-1.20.4.

panic: runtime error: index out of range [0] with length 0

goroutine 354 [running]:
github.com/wavefronthq/wavefront-sdk-go/senders.sanitizeInternal({0x0, 0x0})
        /go/pkg/mod/github.com/wavefronthq/wavefront-sdk-go@v0.9.7/senders/formatter.go:340 +0x2d5
github.com/wavefronthq/wavefront-sdk-go/senders.MetricLine({0xc00d09eb20, 0xc00b3f9b00}, 0xc00b3f9b00, 0x61ccb037, {0xc00db72900, 0x24}, 0x6000103, {0xc00083c000, 0x2a})
        /go/pkg/mod/github.com/wavefronthq/wavefront-sdk-go@v0.9.7/senders/formatter.go:56 +0x3ae
github.com/wavefronthq/wavefront-sdk-go/senders.(*directSender).SendMetric(0xc00081a0f0, {0xc00d09eb20, 0xc017249e30}, 0x40ead4, 0x0, {0xc00db72900, 0xfa00}, 0x0)
        /go/pkg/mod/github.com/wavefronthq/wavefront-sdk-go@v0.9.7/senders/direct.go:84 +0x48
github.com/influxdata/telegraf/plugins/outputs/wavefront.(*Wavefront).Write(0xc0007bf550, {0xc03e15e000, 0xfa0, 0x0})
        /go/src/github.com/influxdata/telegraf/plugins/outputs/wavefront/wavefront.go:172 +0x1dc
github.com/influxdata/telegraf/models.(*RunningOutput).write(0xc0002bd880, {0xc03e15e000, 0xfa0, 0xfa0})
        /go/src/github.com/influxdata/telegraf/models/running_output.go:244 +0x118
github.com/influxdata/telegraf/models.(*RunningOutput).WriteBatch(0xc0002bd880)
        /go/src/github.com/influxdata/telegraf/models/running_output.go:218 +0x58
github.com/influxdata/telegraf/agent.(*Agent).flushOnce.func1()
        /go/src/github.com/influxdata/telegraf/agent/agent.go:829 +0x29
created by github.com/influxdata/telegraf/agent.(*Agent).flushOnce
        /go/src/github.com/influxdata/telegraf/agent/agent.go:828 +0xb8

Note: this error is not resolved even with the latest Telegraf version, 1.21.1.

It's the check done for the tilde in https://github.com/wavefrontHQ/wavefront-sdk-go/blob/fb604f0e621b07590430f02d07eb85b86c69917a/senders/formatter.go#L343. It happens because of possibly empty or null point tag keys: if you look at the MetricLine function, there is no null/empty check for point tag keys, so when an empty key is passed on to the sanitizeInternal() function it throws an index out of range error. Also, we don't see any of the metrics we ship from our services having a tilde prefix in the metric name.
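
A minimal sketch of that failure mode and of a defensive guard (illustrative only, not the actual wavefront-sdk-go or Telegraf code): the sanitizer inspects the first byte of the key, so an empty key indexes s[0] on a zero-length string, which is exactly the index out of range [0] with length 0 above. Dropping empty tag keys or values before the metric line is built avoids the panic:

package main

import "fmt"

// sanitizeKey mimics the failure mode: the real sanitizer checks the first
// byte of the key (e.g. for a leading '~'), so an empty key panics with
// "index out of range [0] with length 0".
func sanitizeKey(s string) string {
	if s[0] == '~' { // panics when s == ""
		return s
	}
	return "\"" + s + "\""
}

// dropEmptyTags is a hypothetical guard: remove empty tag keys or values
// before building the metric line, so the sanitizer is never handed an
// empty string.
func dropEmptyTags(tags map[string]string) map[string]string {
	clean := make(map[string]string, len(tags))
	for k, v := range tags {
		if k == "" || v == "" {
			continue
		}
		clean[k] = v
	}
	return clean
}

func main() {
	tags := map[string]string{"source": "telegraf", "": "oops"} // empty key from an upstream metric
	for k := range dropEmptyTags(tags) {
		fmt.Println(sanitizeKey(k)) // safe: the empty key was dropped
	}
}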

Please help us resolve this issue.