When transporting collectd metrics to InfluxDB, Fluentd isn't able to insert the data into the InfluxDB database properly and produces multiple errors.

My pipeline looks like this: collectd → Fluent Bit → Fluentd → InfluxDB.

Error Messages:
An error is reported when the events are inserted into InfluxDB:
2020-09-11 14:45:26 -0400 [warn]: #0 fluent/log.rb:348:warn: Skip record '{"type"=>"queue_length", "type_instance"=>"", "time"=>1599849947.5258746, "interval"=>10.0, "plugin"=>"network", "plugin_instance"=>"", "host"=>"agent", "value"=>0.0}' in 'metrics', because either record has no value or at least a value is 'nil' or empty string inside the record.
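For what it's worth, the skip condition named in that warning is visible in the record itself. A quick sketch (my own illustration, not taken from any plugin source) of the nil/empty-string check it describes:

```ruby
# The record from the warning above; a check that rejects records with
# nil or empty-string values trips on the two "" fields here.
record = {
  "type" => "queue_length", "type_instance" => "",
  "time" => 1599849947.5258746, "interval" => 10.0,
  "plugin" => "network", "plugin_instance" => "",
  "host" => "agent", "value" => 0.0
}

# Which values are nil or empty? (the condition named in the warning)
empty_keys = record.select { |_k, v| v.nil? || v == "" }.keys
puts empty_keys.inspect  # => ["type_instance", "plugin_instance"]
```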
Then, shortly after, this stack trace is logged:
2020-09-11 14:45:25 -0400 [debug]: #0 fluent/log.rb:306:debug: taking back chunk for errors. chunk="5af0e12612733dc648753b23e8df102f"
2020-09-11 14:45:25 -0400 [warn]: #0 fluent/log.rb:348:warn: failed to flush the buffer. retry_time=4 next_retry_seconds=2020-09-11 14:45:33 15583002651185026533/274877906944000000000 -0400 chunk="5af0e12612733dc648753b23e8df102f" error_class=InfluxDB::Error error="{\"error\":\"unable to parse 'metrics type=\\\"if_packets\\\",interval=10.0,plugin=\\\"network\\\",host=\\\"agent\\\",rx=0i,tx=31i 1599849877.5258396': bad timestamp\\nunable to parse 'metrics type=\\\"total_values\\\",type_instance=\\\"dispatch-accepted\\\",interval=10.0,plugin=\\\"network\\\",host=\\\"agent\\\",value=0i 1599849877.52585': bad timestamp\\nunable to parse 'metrics type=\\\"total_values\\\",type_instance=\\\"dispatch-rejected\\\",interval=10.0,plugin=\\\"network\\\",host=\\\"agent\\\",value=0i 1599849877.5258567': bad timestamp\\nunable to parse 'metrics type=\\\"total_values\\\",type_instance=\\\"send-accepted\\\",interval=10.0,plugin=\\\"network\\\",host=\\\"agent\\\",value=959i 1599849877.5258632': bad timestamp"}\n"
2020-09-11 14:45:25 -0400 [warn]: #0 plugin/output.rb:1189:rescue in try_flush: suppressed same stacktrace
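A note on the "bad timestamp" part (my own reading of the error, not an official statement): the points being sent end in fractional epoch seconds like 1599849877.5258396, while InfluxDB's line protocol only accepts an integer timestamp in the database's precision. A minimal sketch of the difference:

```ruby
# What was sent (rejected) vs. the integer form line protocol expects.
bad_time  = 1599849877.5258396                # fractional epoch seconds
good_time = (bad_time * 1_000_000_000).to_i   # integer nanoseconds

bad_point  = %Q(metrics type="if_packets",rx=0i,tx=31i #{bad_time})
good_point = %Q(metrics type="if_packets",rx=0i,tx=31i #{good_time})

# A line-protocol timestamp is valid only if it is an integer.
def integer_timestamp?(point)
  point.split(" ").last.match?(/\A-?\d+\z/)
end

puts integer_timestamp?(bad_point)   # => false
puts integer_timestamp?(good_point)  # => true
```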
Config
Relevant config pieces:
Collectd:
Interval 10.0
LoadPlugin cpu
LoadPlugin load
LoadPlugin network
LoadPlugin memory
<Plugin network>
  Server "127.0.0.1" "25826"
  ReportStats true
</Plugin>
...
Fluent Bit (td-agent-bit):
[INPUT]
    Name    collectd
    Tag     metrics
    Listen  127.0.0.1
    Port    25826
    TypesDB /usr/share/collectd/types.db
...
[OUTPUT]
    Name  forward
    Match *
    Host  192.168.xx.yy
    Port  24224
Fluentd server (docker):
It's a custom container based on Ruby 2.7, and the services are installed in the Dockerfile with:
Then, the config that's mounted in the container looks like this:
InfluxDB itself is very basic, with no auth or anything.
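For anyone reproducing this against that bare instance, here's a minimal sketch of writing a point by hand (assuming InfluxDB 1.x on localhost:8086 and a database named metrics — both are my assumptions; adjust to your setup):

```ruby
require "net/http"
require "uri"

# Assumed InfluxDB 1.x endpoint and database name.
db        = "metrics"
precision = "s"  # declare the timestamp unit as epoch seconds
uri = URI("http://localhost:8086/write?db=#{db}&precision=#{precision}")

# Hand-written line-protocol point with an integer epoch-seconds
# timestamp; using the float from the logs here reproduces "bad timestamp".
point = %Q(metrics,host=agent type="if_packets",rx=0i,tx=31i 1599849877)

# Uncomment to actually send (needs a running InfluxDB):
# res = Net::HTTP.post(uri, point)
# puts res.code  # 204 means the point was accepted

puts uri
puts point
```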
Testing
To check my config, I changed the output from InfluxDB to Elasticsearch, and it worked as expected. I don't really want to store all my metrics in ES, though; I'd prefer them in InfluxDB.
My config involves many more data types, and the other pipelines are all working great; unfortunately it's just this one that's causing issues.
Does anybody know if this is a Fluentd bug, or a mistake in my setup or methodology?
Thanks!