wolffcm opened this issue 5 years ago
I'm experiencing the same issue with influxdb when running a task. I get:
```
2019-07-03T14:50:15.701517Z info Dispatcher panic {"log_id": "0GPu17TG000", "service": "storage-reads", "component": "dispatcher", "error": "runtime error: index out of range"}
goroutine 75118 [running]:
runtime/debug.Stack(0xc01686ce40, 0x2867e00, 0x242f628)
	/usr/local/Cellar/go/1.12.6/libexec/src/runtime/debug/stack.go:24 +0x9d
github.com/influxdata/flux/execute.(*poolDispatcher).Start.func1.1(0xc01686cea0)
	/Users/georgemac/go/pkg/mod/github.com/influxdata/flux@v0.31.1/execute/dispatcher.go:75 +0x270
panic(0x21ec640, 0x3d1fa70)
	/usr/local/Cellar/go/1.12.6/libexec/src/runtime/panic.go:522 +0x1b5
github.com/apache/arrow/go/arrow/array.(*Int64).Value(...)
	/Users/georgemac/go/pkg/mod/github.com/apache/arrow/go/arrow@v0.0.0-20190426170622-338c62a2a205/array/numeric.gen.go:41
github.com/influxdata/flux/stdlib/universe.(*fixedWindowTransformation).Process.func1(0x28ba600, 0xc0168f1770, 0x0, 0x0)
	/Users/georgemac/go/pkg/mod/github.com/influxdata/flux@v0.31.1/stdlib/universe/window.go:309 +0x5e3
github.com/influxdata/influxdb/storage/reads.(*integerTable).Do(0xc011c126c0, 0xc016cc5580, 0x0, 0x0)
	/Users/georgemac/github/influxdb/storage/reads/table.gen.go:301 +0xc3
github.com/influxdata/flux/stdlib/universe.(*fixedWindowTransformation).Process(0xc011d56820, 0x2a5004a6b8f3226f, 0xf9470924544f98b5, 0x7bdf5e8, 0xc011c126c0, 0xc001635601, 0x10412a3)
	/Users/georgemac/go/pkg/mod/github.com/influxdata/flux@v0.31.1/stdlib/universe/window.go:306 +0x981
github.com/influxdata/flux/execute.processMessage(0x28a20e0, 0xc011d56820, 0x28803e0, 0xc011d4f040, 0xc001635768, 0xc01686cd80, 0xc01686cd80)
	/Users/georgemac/go/pkg/mod/github.com/influxdata/flux@v0.31.1/execute/transport.go:199 +0x1e4
github.com/influxdata/flux/execute.(*consecutiveTransport).processMessages(0xc01686cf00, 0xa)
	/Users/georgemac/go/pkg/mod/github.com/influxdata/flux@v0.31.1/execute/transport.go:156 +0xa1
github.com/influxdata/flux/execute.(*poolDispatcher).run(0xc01686cea0, 0x289b160, 0xc014c16b80)
	/Users/georgemac/go/pkg/mod/github.com/influxdata/flux@v0.31.1/execute/dispatcher.go:126 +0x4b
github.com/influxdata/flux/execute.(*poolDispatcher).Start.func1(0xc01686cea0, 0x289b160, 0xc014c16b80)
	/Users/georgemac/go/pkg/mod/github.com/influxdata/flux@v0.31.1/execute/dispatcher.go:80 +0x95
created by github.com/influxdata/flux/execute.(*poolDispatcher).Start
	/Users/georgemac/go/pkg/mod/github.com/influxdata/flux@v0.31.1/execute/dispatcher.go:61 +0x7e
```
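For anyone triaging: the innermost application frame is `array.(*Int64).Value`, which indexes the Arrow array's backing slice directly, so any out-of-range row index coming from the windowing code trips Go's runtime bounds check and panics. A minimal sketch of that failure mode against the Arrow Go API (illustrative only, not the actual InfluxDB code path):

```go
package main

import (
	"fmt"

	"github.com/apache/arrow/go/arrow/array"
	"github.com/apache/arrow/go/arrow/memory"
)

func main() {
	// Build a small Int64 column, similar to what storage hands to Flux.
	b := array.NewInt64Builder(memory.NewGoAllocator())
	defer b.Release()
	b.AppendValues([]int64{10, 20, 30}, nil)

	col := b.NewInt64Array()
	defer col.Release()

	fmt.Println(col.Value(2)) // in range: prints 30

	// Value does no bounds checking of its own, so an out-of-range index
	// panics exactly as logged above: "runtime error: index out of range".
	fmt.Println(col.Value(col.Len())) // panics
}
```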
Interestingly, this happened after I attempted to retry a task that was failing because of this bug: https://github.com/influxdata/influxdb/issues/14239
The Flux I was attempting to run was the following:

```flux
option task = {name: "Daily Memory Usage", every: 24h, offset: 2m}

data = from(bucket: "primary")
    |> range(start: -task.every)
    |> filter(fn: (r) => r._measurement == "mem")
    |> filter(fn: (r) => r._field == "available")

data
    |> aggregateWindow(every: 1h, fn: mean)
    |> to(bucket: "primary_downsampled", org: "InfluxData")
```
Here `primary` is a bucket containing the metrics from the example system Telegraf config. I hope this helps with debugging.
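Speculating a little from the trace: `window.go:309` appears to index the time column with a row index past the column's length, which could happen if a table's reported row count and the column's actual length disagree. A hypothetical guard of the following shape would surface such a mismatch as an error instead of a panic; `timeAt`, its parameters, and the error text are my assumptions, not the actual Flux source:

```go
package main

import (
	"fmt"

	"github.com/apache/arrow/go/arrow/array"
	"github.com/apache/arrow/go/arrow/memory"
)

// timeAt is a hypothetical accessor (not the actual Flux code): it checks the
// row index against the Arrow column before calling Value, which itself does
// no bounds checking.
func timeAt(times *array.Int64, row, rowCount int) (int64, error) {
	if row < 0 || row >= times.Len() {
		return 0, fmt.Errorf("row %d out of range: table reports %d rows but time column holds %d values",
			row, rowCount, times.Len())
	}
	return times.Value(row), nil
}

func main() {
	b := array.NewInt64Builder(memory.NewGoAllocator())
	defer b.Release()
	b.AppendValues([]int64{1562165415}, nil)

	times := b.NewInt64Array()
	defer times.Release()

	// The table claims 2 rows but the column holds only 1: the guard
	// reports the mismatch instead of panicking.
	if _, err := timeAt(times, 1, 2); err != nil {
		fmt.Println(err)
	}
}
```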
I found this in the transpiler logs in the 2.0 cloud. I'm not sure why it didn't show up on the dashboard: