Closed: Thrystan01204 closed this issue 1 year ago
Can you please enable debug mode, enable the [[outputs.file]] plugin, and provide the logs from a failing system?
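For reference, a minimal sketch of the requested telegraf.conf changes (the output path here is just an example):

[agent]
  ## Log at debug level.
  debug = true

[[outputs.file]]
  ## Write a copy of all metrics to a local file for inspection.
  files = ["/tmp/metrics.out"]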
Hello, here is everything I was able to get.
I enabled debug mode on Telegraf together with the outputs.file plugin; in the file I got only the metrics from the CPU input plugin.
I also enabled the logfile for Telegraf and got the output below.
Telegraf debug log:
2023-01-03T10:52:16Z I! Starting Telegraf 1.25.0
2023-01-03T10:52:16Z I! Available plugins: 228 inputs, 9 aggregators, 26 processors, 21 parsers, 57 outputs, 2 secret-stores
2023-01-03T10:52:16Z I! Loaded inputs: cpu syslog
2023-01-03T10:52:16Z I! Loaded aggregators:
2023-01-03T10:52:16Z I! Loaded processors:
2023-01-03T10:52:16Z I! Loaded secretstores:
2023-01-03T10:52:16Z I! Loaded outputs: file influxdb_v2
2023-01-03T10:52:16Z I! Tags enabled: host=telegraf
2023-01-03T10:52:16Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"telegraf", Flush Interval:10s
2023-01-03T10:52:16Z D! [agent] Initializing plugins
2023-01-03T10:52:16Z D! [agent] Connecting outputs
2023-01-03T10:52:16Z D! [agent] Attempting connection to [outputs.influxdb_v2]
2023-01-03T10:52:16Z D! [agent] Successfully connected to outputs.influxdb_v2
2023-01-03T10:52:16Z D! [agent] Attempting connection to [outputs.file]
2023-01-03T10:52:16Z D! [agent] Successfully connected to outputs.file
2023-01-03T10:52:16Z D! [agent] Starting service inputs
2023-01-03T10:52:26Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2023-01-03T10:52:26Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2023-01-03T10:52:36Z D! [outputs.file] Wrote batch of 5 metrics in 179.758µs
2023-01-03T10:52:36Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2023-01-03T10:52:36Z E! [outputs.influxdb_v2] Failed to write metric to tlibucket (will be dropped: 411 Length Required): 411 Length Required
2023-01-03T10:52:36Z D! [outputs.influxdb_v2] Wrote batch of 5 metrics in 2.526753ms
2023-01-03T10:52:36Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
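An aside on the error above: HTTP 411 means the server rejected a request that carried no Content-Length header, which typically comes from an intermediary (many proxies reject chunked transfer encoding) rather than from InfluxDB itself. One way to take Telegraf out of the picture is to write a single point with curl, which always sends Content-Length; the org, bucket, and token values below are taken from the config later in this thread, and the hostname is an example:

curl -v "http://influxdb:8086/api/v2/write?org=tli&bucket=tlibucket&precision=ns" \
  --header "Authorization: Token my_token" \
  --data-binary "cpu_debug,source=curl value=1i"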
InfluxDB logs from the container:
ts=2022-12-19T04:26:46.252836Z lvl=info msg="Welcome to InfluxDB" log_id=0er9zxR0000 version=v2.6.0 commit=24a2b621ea build_date=2022-12-15T18:47:00Z log_level=debug
ts=2022-12-19T04:26:46.252914Z lvl=debug msg="loaded config file" log_id=0er9zxR0000 path=/tmp/config.yml
ts=2022-12-19T04:26:46.252921Z lvl=warn msg="nats-port argument is deprecated and unused" log_id=0er9zxR0000
ts=2022-12-19T04:26:46.353107Z lvl=info msg="Resources opened" log_id=0er9zxR0000 service=bolt path=/var/lib/influxdb2/influxd.bolt
ts=2022-12-19T04:26:46.353275Z lvl=info msg="Resources opened" log_id=0er9zxR0000 service=sqlite path=/var/lib/influxdb2/influxd.sqlite
ts=2022-12-19T04:26:46.377654Z lvl=info msg="Bringing up metadata migrations" log_id=0er9zxR0000 service="KV migrations" migration_count=20
ts=2022-12-19T04:26:46.377678Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="initial migration" target_state=up migration_event=started
ts=2022-12-19T04:26:47.433708Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="initial migration" target_state=up migration_event=completed
ts=2022-12-19T04:26:47.433750Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="add index \"userresourcemappingsbyuserindexv1\"" target_state=up migration_event=started
ts=2022-12-19T04:26:47.434962Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="add index \"userresourcemappingsbyuserindexv1\"" target_state=up migration_event=completed
ts=2022-12-19T04:26:47.434994Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="migrate task owner id" target_state=up migration_event=started
ts=2022-12-19T04:26:47.435821Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="migrate task owner id" target_state=up migration_event=completed
ts=2022-12-19T04:26:47.435846Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="create DBRP buckets" target_state=up migration_event=started
ts=2022-12-19T04:26:47.437855Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="create DBRP buckets" target_state=up migration_event=completed
ts=2022-12-19T04:26:47.437874Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="create pkger stacks buckets" target_state=up migration_event=started
ts=2022-12-19T04:26:47.441044Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="create pkger stacks buckets" target_state=up migration_event=completed
ts=2022-12-19T04:26:47.441070Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="delete sessionsv1 bucket" target_state=up migration_event=started
ts=2022-12-19T04:26:47.442907Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="delete sessionsv1 bucket" target_state=up migration_event=completed
ts=2022-12-19T04:26:47.442925Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="Create TSM metadata buckets" target_state=up migration_event=started
ts=2022-12-19T04:26:47.444013Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="Create TSM metadata buckets" target_state=up migration_event=completed
ts=2022-12-19T04:26:47.444031Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="Create Legacy authorization buckets" target_state=up migration_event=started
ts=2022-12-19T04:26:47.445432Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="Create Legacy authorization buckets" target_state=up migration_event=completed
ts=2022-12-19T04:26:47.445451Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="Create legacy auth password bucket" target_state=up migration_event=started
ts=2022-12-19T04:26:47.446687Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="Create legacy auth password bucket" target_state=up migration_event=completed
ts=2022-12-19T04:26:47.446708Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="add index \"telegrafbyorgindexv1\"" target_state=up migration_event=started
ts=2022-12-19T04:26:47.447807Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="add index \"telegrafbyorgindexv1\"" target_state=up migration_event=completed
ts=2022-12-19T04:26:47.447825Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="populate dashboards owner id" target_state=up migration_event=started
ts=2022-12-19T04:26:47.448579Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="populate dashboards owner id" target_state=up migration_event=completed
ts=2022-12-19T04:26:47.448600Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="add index \"dbrpbyorgv1\"" target_state=up migration_event=started
ts=2022-12-19T04:26:47.449686Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="add index \"dbrpbyorgv1\"" target_state=up migration_event=completed
ts=2022-12-19T04:26:47.449705Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="repair DBRP owner and bucket IDs" target_state=up migration_event=started
ts=2022-12-19T04:26:47.450462Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="repair DBRP owner and bucket IDs" target_state=up migration_event=completed
ts=2022-12-19T04:26:47.450492Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="add index \"dbrpbyorgv1\"" target_state=up migration_event=started
ts=2022-12-19T04:26:47.451449Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="add index \"dbrpbyorgv1\"" target_state=up migration_event=completed
ts=2022-12-19T04:26:47.451467Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="record shard group durations in bucket metadata" target_state=up migration_event=started
ts=2022-12-19T04:26:47.452198Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="record shard group durations in bucket metadata" target_state=up migration_event=completed
ts=2022-12-19T04:26:47.452216Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="add annotations and notebooks resource types to operator token" target_state=up migration_event=started
ts=2022-12-19T04:26:47.453290Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="add annotations and notebooks resource types to operator token" target_state=up migration_event=completed
ts=2022-12-19T04:26:47.453306Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="add annotations and notebooks resource types to all-access tokens" target_state=up migration_event=started
ts=2022-12-19T04:26:47.454391Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="add annotations and notebooks resource types to all-access tokens" target_state=up migration_event=completed
ts=2022-12-19T04:26:47.454409Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="repair missing shard group durations" target_state=up migration_event=started
ts=2022-12-19T04:26:47.455133Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="repair missing shard group durations" target_state=up migration_event=completed
ts=2022-12-19T04:26:47.455151Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="add remotes and replications resource types to operator and all-access tokens" target_state=up migration_event=started
ts=2022-12-19T04:26:47.456138Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="add remotes and replications resource types to operator and all-access tokens" target_state=up migration_event=completed
ts=2022-12-19T04:26:47.456156Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="create remotes and replications metrics buckets" target_state=up migration_event=started
ts=2022-12-19T04:26:47.459197Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="KV migrations" migration_name="create remotes and replications metrics buckets" target_state=up migration_event=completed
ts=2022-12-19T04:26:47.459840Z lvl=info msg="Bringing up metadata migrations" log_id=0er9zxR0000 service="SQL migrations" migration_count=8
ts=2022-12-19T04:26:47.459860Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="SQL migrations" migration_name=0001_create_migrations_table.up.sql
ts=2022-12-19T04:26:47.461538Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="SQL migrations" migration_name=0002_create_notebooks_table.up.sql
ts=2022-12-19T04:26:47.463792Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="SQL migrations" migration_name=0003_create_annotations_tables.up.sql
ts=2022-12-19T04:26:47.466091Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="SQL migrations" migration_name=0004_create_remotes_table.up.sql
ts=2022-12-19T04:26:47.468066Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="SQL migrations" migration_name=0005_create_replications_table.up.sql
ts=2022-12-19T04:26:47.470319Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="SQL migrations" migration_name=0006_migrate_replications_foreign_key.up.sql
ts=2022-12-19T04:26:47.475875Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="SQL migrations" migration_name=0007_migrate_replications_add_bucket_name.up.sql
ts=2022-12-19T04:26:47.482075Z lvl=debug msg="Executing metadata migration" log_id=0er9zxR0000 service="SQL migrations" migration_name=0008_migrate_remotes_null_remote_org.up.sql
ts=2022-12-19T04:26:47.490938Z lvl=debug msg="buckets find" log_id=0er9zxR0000 store=new took=0.032ms
ts=2022-12-19T04:26:47.491060Z lvl=info msg="Using data dir" log_id=0er9zxR0000 service=storage-engine service=store path=/var/lib/influxdb2/engine/data
ts=2022-12-19T04:26:47.491156Z lvl=info msg="Compaction settings" log_id=0er9zxR0000 service=storage-engine service=store max_concurrent_compactions=4 throughput_bytes_per_second=50331648 throughput_bytes_per_second_burst=50331648
ts=2022-12-19T04:26:47.491175Z lvl=info msg="Open store (start)" log_id=0er9zxR0000 service=storage-engine service=store op_name=tsdb_open op_event=start
ts=2022-12-19T04:26:47.491229Z lvl=info msg="Open store (end)" log_id=0er9zxR0000 service=storage-engine service=store op_name=tsdb_open op_event=end op_elapsed=0.055ms
ts=2022-12-19T04:26:47.491265Z lvl=info msg="Starting retention policy enforcement service" log_id=0er9zxR0000 service=retention check_interval=30m
ts=2022-12-19T04:26:47.491282Z lvl=info msg="Starting precreation service" log_id=0er9zxR0000 service=shard-precreation check_interval=10m advance_period=30m
ts=2022-12-19T04:26:47.492143Z lvl=info msg="Starting query controller" log_id=0er9zxR0000 service=storage-reads concurrency_quota=1024 initial_memory_bytes_quota_per_query=9223372036854775807 memory_bytes_quota_per_query=9223372036854775807 max_memory_bytes=0 queue_size=1024
ts=2022-12-19T04:26:47.495023Z lvl=info msg="Configuring InfluxQL statement executor (zeros indicate unlimited)." log_id=0er9zxR0000 max_select_point=0 max_select_series=0 max_select_buckets=0
ts=2022-12-19T04:26:47.500639Z lvl=info msg=Starting log_id=0er9zxR0000 service=telemetry interval=8h
ts=2022-12-19T04:26:47.500845Z lvl=info msg=Listening log_id=0er9zxR0000 service=tcp-listener transport=http addr=:9999 port=9999
ts=2022-12-19T04:26:48.136966Z lvl=debug msg=Request log_id=0er9zxR0000 service=http method=GET host=localhost:9999 path=/health query= proto=HTTP/1.1 status_code=200 response_size=137 content_length=0 referrer= remote=127.0.0.1:58742 user_agent=influx took=0.147ms body=
ts=2022-12-19T04:26:48.146643Z lvl=debug msg="is onboarding" log_id=0er9zxR0000 handler=onboard took=0.083ms
ts=2022-12-19T04:26:48.146781Z lvl=debug msg="Onboarding eligibility check finished" log_id=0er9zxR0000 result=true
ts=2022-12-19T04:26:48.147008Z lvl=debug msg=Request log_id=0er9zxR0000 service=http method=GET host=localhost:9999 path=/api/v2/setup query= proto=HTTP/1.1 status_code=200 response_size=20 content_length=0 referrer= remote=127.0.0.1:58748 user_agent=influx took=0.556ms body=
ts=2022-12-19T04:26:48.148793Z lvl=debug msg="user create" log_id=0er9zxR0000 store=new took=0.696ms
ts=2022-12-19T04:26:48.227078Z lvl=debug msg="set password" log_id=0er9zxR0000 store=new took=78.255ms
ts=2022-12-19T04:26:48.227793Z lvl=debug msg="org find by ID" log_id=0er9zxR0000 store=new took=0.068ms
ts=2022-12-19T04:26:48.228352Z lvl=debug msg="bucket create" log_id=0er9zxR0000 store=new took=0.629ms
ts=2022-12-19T04:26:48.229079Z lvl=debug msg="org find by ID" log_id=0er9zxR0000 store=new took=0.044ms
ts=2022-12-19T04:26:48.229525Z lvl=debug msg="bucket create" log_id=0er9zxR0000 store=new took=0.490ms
ts=2022-12-19T04:26:48.230541Z lvl=debug msg="urm create" log_id=0er9zxR0000 store=new took=0.572ms
ts=2022-12-19T04:26:48.230590Z lvl=debug msg="org create" log_id=0er9zxR0000 store=new took=3.451ms
ts=2022-12-19T04:26:48.230980Z lvl=debug msg="urm create" log_id=0er9zxR0000 store=new took=0.378ms
ts=2022-12-19T04:26:48.231071Z lvl=debug msg="org find by ID" log_id=0er9zxR0000 store=new took=0.051ms
ts=2022-12-19T04:26:48.231557Z lvl=debug msg="bucket create" log_id=0er9zxR0000 store=new took=0.537ms
ts=2022-12-19T04:26:48.232083Z lvl=debug msg="user find by ID" log_id=0er9zxR0000 store=new took=0.025ms
ts=2022-12-19T04:26:48.232143Z lvl=debug msg="org find by ID" log_id=0er9zxR0000 store=new took=0.037ms
ts=2022-12-19T04:26:48.232884Z lvl=debug msg="onboard initial user" log_id=0er9zxR0000 handler=onboard took=84.800ms
ts=2022-12-19T04:26:48.232927Z lvl=debug msg="Onboarding setup completed" log_id=0er9zxR0000 results="&{0xc001f32200 0xc001b352c0 0xc001f770e0 0xc001f775f0}"
ts=2022-12-19T04:26:48.233570Z lvl=debug msg=Request log_id=0er9zxR0000 service=http method=POST host=localhost:9999 path=/api/v2/setup query= proto=HTTP/1.1 status_code=201 response_size=5613 content_length=-1 referrer= remote=127.0.0.1:58748 user_agent=Sha took=85.691ms
User Organization Bucket
admin test tristanbucket
ts=2022-12-19T04:26:48.243817Z lvl=debug msg="user find by ID" log_id=0er9zxR0000 store=new took=0.027ms
ts=2022-12-19T04:26:48.243923Z lvl=debug msg="users find" log_id=0er9zxR0000 store=new took=0.027ms
ts=2022-12-19T04:26:48.243954Z lvl=debug msg="Users retrieved" log_id=0er9zxR0000 handler=user users=[0xc0001d6680]
ts=2022-12-19T04:26:48.245016Z lvl=debug msg=Request log_id=0er9zxR0000 service=http method=GET host=localhost:9999 path=/api/v2/users query= proto=HTTP/1.1 status_code=200 response_size=204 content_length=0 referrer= remote=127.0.0.1:58758 user_agent=Sha took=0.618ms body=
ts=2022-12-19T04:26:48.254163Z lvl=debug msg="user find by ID" log_id=0er9zxR0000 store=new took=0.044ms
ts=2022-12-19T04:26:48.254387Z lvl=debug msg="orgs find" log_id=0er9zxR0000 store=new took=0.076ms
ts=2022-12-19T04:26:48.254495Z lvl=debug msg="Orgs retrieved" log_id=0er9zxR0000 handler=org org=[0xc0022f8b40]
ts=2022-12-19T04:26:48.254825Z lvl=debug msg=Request log_id=0er9zxR0000 service=http method=GET host=localhost:9999 path=/api/v2/orgs query="org=test" proto=HTTP/1.1 status_code=200 response_size=699 content_length=0 referrer= remote=127.0.0.1:58762 user_agent=Sha took=0.925ms body=
ts=2022-12-19T04:26:48.263263Z lvl=debug msg="user find by ID" log_id=0er9zxR0000 store=new took=0.023ms
ts=2022-12-19T04:26:48.263538Z lvl=debug msg="org find" log_id=0er9zxR0000 store=new took=0.051ms
ts=2022-12-19T04:26:48.263683Z lvl=debug msg="buckets find" log_id=0er9zxR0000 store=new took=0.197ms
ts=2022-12-19T04:26:48.263712Z lvl=debug msg="Buckets retrieved" log_id=0er9zxR0000 handler=bucket buckets=[0xc0023ad170]
ts=2022-12-19T04:26:48.263836Z lvl=debug msg="bucket find by ID" log_id=0er9zxR0000 store=new took=0.050ms
ts=2022-12-19T04:26:48.263875Z lvl=debug msg="labels for resource find" log_id=0er9zxR0000 handler=labels took=0.092ms
ts=2022-12-19T04:26:48.264248Z lvl=debug msg=Request log_id=0er9zxR0000 service=http method=GET host=localhost:9999 path=/api/v2/buckets query="name=tristanbucket&org=test" proto=HTTP/1.1 status_code=200 response_size=878 content_length=0 referrer= remote=127.0.0.1:58770 user_agent=influx took=1.235ms body=
ts=2022-12-19T04:26:48.673469Z lvl=info msg="Welcome to InfluxDB" log_id=0er9~5tG000 version=v2.6.0 commit=24a2b621ea build_date=2022-12-15T18:47:00Z log_level=debug
ts=2022-12-19T04:26:48.673558Z lvl=debug msg="loaded config file" log_id=0er9~5tG000 path=/etc/influxdb2/config.yml
ts=2022-12-19T04:26:48.673565Z lvl=warn msg="nats-port argument is deprecated and unused" log_id=0er9~5tG000
ts=2022-12-19T04:26:48.674657Z lvl=info msg="Resources opened" log_id=0er9~5tG000 service=bolt path=/var/lib/influxdb2/influxd.bolt
ts=2022-12-19T04:26:48.674738Z lvl=info msg="Resources opened" log_id=0er9~5tG000 service=sqlite path=/var/lib/influxdb2/influxd.sqlite
ts=2022-12-19T04:26:48.677983Z lvl=debug msg="buckets find" log_id=0er9~5tG000 store=new took=0.126ms
ts=2022-12-19T04:26:48.678012Z lvl=info msg="Checking InfluxDB metadata for prior version." log_id=0er9~5tG000 bolt_path=/var/lib/influxdb2/influxd.bolt
ts=2022-12-19T04:26:48.678151Z lvl=info msg="Using data dir" log_id=0er9~5tG000 service=storage-engine service=store path=/var/lib/influxdb2/engine/data
ts=2022-12-19T04:26:48.678184Z lvl=info msg="Compaction settings" log_id=0er9~5tG000 service=storage-engine service=store max_concurrent_compactions=4 throughput_bytes_per_second=50331648 throughput_bytes_per_second_burst=50331648
ts=2022-12-19T04:26:48.678206Z lvl=info msg="Open store (start)" log_id=0er9~5tG000 service=storage-engine service=store op_name=tsdb_open op_event=start
ts=2022-12-19T04:26:48.678314Z lvl=info msg="Open store (end)" log_id=0er9~5tG000 service=storage-engine service=store op_name=tsdb_open op_event=end op_elapsed=0.096ms
ts=2022-12-19T04:26:48.678355Z lvl=info msg="Starting retention policy enforcement service" log_id=0er9~5tG000 service=retention check_interval=30m
ts=2022-12-19T04:26:48.678369Z lvl=info msg="Starting precreation service" log_id=0er9~5tG000 service=shard-precreation check_interval=10m advance_period=30m
ts=2022-12-19T04:26:48.679410Z lvl=info msg="Starting query controller" log_id=0er9~5tG000 service=storage-reads concurrency_quota=1024 initial_memory_bytes_quota_per_query=9223372036854775807 memory_bytes_quota_per_query=9223372036854775807 max_memory_bytes=0 queue_size=1024
ts=2022-12-19T04:26:48.682480Z lvl=info msg="Configuring InfluxQL statement executor (zeros indicate unlimited)." log_id=0er9~5tG000 max_select_point=0 max_select_series=0 max_select_buckets=0
ts=2022-12-19T04:26:48.688703Z lvl=info msg=Starting log_id=0er9~5tG000 service=telemetry interval=8h
ts=2022-12-19T04:26:48.689003Z lvl=info msg=Listening log_id=0er9~5tG000 service=tcp-listener transport=http addr=:8086 port=8086
Hello,
Just wanted to know if someone has an update on this issue?
I am still working on it and don't understand what is wrong; I have changed distributions and config files, but nothing helps.
Can you reproduce this outside of Docker? That is, if you run Telegraf outside of Docker, do you see the same issue?
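A quick way to do that with the packaged Telegraf on the host (a sketch; --test gathers once and prints metrics to stdout, --once runs a single gather-and-flush cycle including the configured outputs):

telegraf --config /etc/telegraf/telegraf.conf --test
telegraf --config /etc/telegraf/telegraf.conf --once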
Hello,
I tried with InfluxDB on Docker (same config as earlier) and Telegraf installed on the machine from the package. With this setup it works.
telegraf.conf:
# Global tags can be specified here in key="value" format.
[global_tags]
# dc = "us-east-1" # will tag all metrics with dc=us-east-1
# rack = "1a"
## Environment variables can be used as tags, and throughout the config file
# user = "$USER"
# Configuration for telegraf agent
[agent]
## Default data collection interval for all inputs
interval = "10s"
## Rounds collection interval to 'interval'
## ie, if interval="10s" then always collect on :00, :10, :20, etc.
round_interval = true
## Telegraf will send metrics to outputs in batches of at most
## metric_batch_size metrics.
## This controls the size of writes that Telegraf sends to output plugins.
metric_batch_size = 1000
## Maximum number of unwritten metrics per output. Increasing this value
## allows for longer periods of output downtime without dropping metrics at the
## cost of higher maximum memory usage.
metric_buffer_limit = 10000
## Collection jitter is used to jitter the collection by a random amount.
## Each plugin will sleep for a random time within jitter before collecting.
## This can be used to avoid many plugins querying things like sysfs at the
## same time, which can have a measurable effect on the system.
collection_jitter = "0s"
## Collection offset is used to shift the collection by the given amount.
## This can be used to avoid many plugins querying constrained devices
## at the same time by manually scheduling them in time.
# collection_offset = "0s"
## Default flushing interval for all outputs. Maximum flush_interval will be
## flush_interval + flush_jitter
flush_interval = "10s"
## Jitter the flush interval by a random amount. This is primarily to avoid
## large write spikes for users running a large number of telegraf instances.
## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
flush_jitter = "0s"
## Collected metrics are rounded to the precision specified. Precision is
## specified as an interval with an integer + unit (e.g. 0s, 10ms, 2us, 4s).
## Valid time units are "ns", "us" (or "µs"), "ms", "s".
##
## By default or when set to "0s", precision will be set to the same
## timestamp order as the collection interval, with the maximum being 1s:
## ie, when interval = "10s", precision will be "1s"
## when interval = "250ms", precision will be "1ms"
##
## Precision will NOT be used for service inputs. It is up to each individual
## service input to set the timestamp at the appropriate precision.
precision = "0s"
## Log at debug level.
debug = true
## Log only error level messages.
# quiet = false
## Log target controls the destination for logs and can be one of "file",
## "stderr" or, on Windows, "eventlog". When set to "file", the output file
## is determined by the "logfile" setting.
logtarget = "file"
## Name of the file to be logged to when using the "file" logtarget. If set to
## the empty string then logs are written to stderr.
logfile = "/tmp/telegraf.log"
## The logfile will be rotated after the time interval specified. When set
## to 0 no time based rotation is performed. Logs are rotated only when
## written to, if there is no log activity rotation may be delayed.
# logfile_rotation_interval = "0h"
## The logfile will be rotated when it becomes larger than the specified
## size. When set to 0 no size based rotation is performed.
logfile_rotation_max_size = "10MB"
## Maximum number of rotated archives to keep, any older logs are deleted.
## If set to -1, no archives are removed.
# logfile_rotation_max_archives = 5
## Pick a timezone to use when logging or type 'local' for local time.
## Example: America/Chicago
# log_with_timezone = ""
## Override default hostname, if empty use os.Hostname()
hostname = ""
## If set to true, do not set the "host" tag in the telegraf agent.
omit_hostname = false
## Method of translating SNMP objects. Can be "netsnmp" (deprecated) which
## translates by calling external programs snmptranslate and snmptable,
## or "gosmi" which translates using the built-in gosmi library.
# snmp_translator = "netsnmp"
# # Configuration for sending metrics to InfluxDB 2.0
[[outputs.influxdb_v2]]
# ## The URLs of the InfluxDB cluster nodes.
# ##
# ## Multiple URLs can be specified for a single cluster, only ONE of the
# ## urls will be written to each interval.
# ## ex: urls = ["https://us-west-2-1.aws.cloud2.influxdata.com"]
urls = ["http://127.0.0.1:8087"]
#
# ## Token for authentication.
token = "my_token"
#
# ## Organization is the name of the organization you wish to write to.
organization = "tli"
#
# ## Destination bucket to write into.
bucket = "tlibucket"
[[inputs.cpu]]
## Whether to report per-cpu stats or not
percpu = true
## Whether to report total system cpu stats or not
totalcpu = true
## If true, collect raw CPU time metrics
collect_cpu_time = false
## If true, compute and report the sum of all non-idle CPU states
report_active = false
## If true and the info is available then add core_id and physical_id tags
core_tags = false
Telegraf logs:
2023-01-04T05:04:28Z I! Using config file: /etc/telegraf/telegraf.conf
2023-01-04T05:04:28Z E! Unable to open /tmp/telegraf.log (open /tmp/telegraf.log: permission denied), using stderr
2023-01-04T05:04:28Z I! Starting Telegraf 1.25.0
2023-01-04T05:04:28Z I! Available plugins: 228 inputs, 9 aggregators, 26 processors, 21 parsers, 57 outputs, 2 secret-stores
2023-01-04T05:04:28Z I! Loaded inputs: cpu diskio file kernel mem processes swap syslog system
2023-01-04T05:04:28Z I! Loaded aggregators:
2023-01-04T05:04:28Z I! Loaded processors:
2023-01-04T05:04:28Z I! Loaded secretstores:
2023-01-04T05:04:28Z I! Loaded outputs: influxdb_v2
2023-01-04T05:04:28Z I! Tags enabled: host=dockertest
2023-01-04T05:04:28Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"dockertest", Flush Interval:10s
2023-01-04T05:04:28Z D! [agent] Initializing plugins
2023-01-04T05:04:28Z D! [agent] Connecting outputs
2023-01-04T05:04:28Z D! [agent] Attempting connection to [outputs.influxdb_v2]
2023-01-04T05:04:28Z D! [agent] Successfully connected to outputs.influxdb_v2
2023-01-04T05:04:28Z D! [agent] Starting service inputs
2023-01-04T05:04:38Z D! [outputs.influxdb_v2] Wrote batch of 13 metrics in 70.31349ms
2023-01-04T05:04:38Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2023-01-04T05:04:48Z D! [outputs.influxdb_v2] Wrote batch of 18 metrics in 5.552015ms
2023-01-04T05:04:48Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2023-01-04T05:04:58Z D! [outputs.influxdb_v2] Wrote batch of 18 metrics in 2.765803ms
2023-01-04T05:04:58Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2023-01-04T05:05:08Z D! [outputs.influxdb_v2] Wrote batch of 18 metrics in 3.667906ms
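(On the "permission denied" line above: the packaged Telegraf runs as its own service user, which cannot open a /tmp/telegraf.log created earlier by root. A possible fix, assuming the package's default telegraf user:

sudo chown telegraf:telegraf /tmp/telegraf.log

Telegraf falls back to stderr in the meantime, so the rest of the log is unaffected.)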
I tried with InfluxDB on Docker (same config as earlier) and Telegraf installed on the machine from the package. With this setup it works.

Great. This tends to indicate it is not an issue with Telegraf itself, but possibly with your Docker or Compose config. I would jump into the Telegraf container and ensure that all your environment variables are set correctly and that the telegraf user can see them. Additionally, I would verify that the required ports work as you expect and that you can reach the InfluxDB server.
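A sketch of those checks from inside the containers (service names are taken from this thread's compose network; wget/curl availability depends on the image):

docker exec -it telegraf env | grep -iE 'influx|proxy'
docker exec -it telegraf wget -qO- http://influxdb:8086/health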
Hello, you're right, it's coming from my container.
I used tcpdump to watch the packets sent to my InfluxDB (172.26.0.2), and nothing is being sent:
root@67b947d5fbbd:/# tcpdump -i any dst 172.26.0.2
tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
Same thing on the InfluxDB container:
root@57d110c04253:/# tcpdump -i any port 8086
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
But it's strange: according to the logs, it seems it can reach the DB:
2023-01-04T05:04:28Z D! [agent] Attempting connection to [outputs.influxdb_v2]
2023-01-04T05:04:28Z D! [agent] Successfully connected to outputs.influxdb_v2
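Two cautions here. First, the capture above filters on 172.26.0.2, while the ping output below shows the containers on 172.27.0.x, so a capture keyed on the port rather than a hard-coded address is safer, e.g.:

tcpdump -i any -nn 'tcp port 8086'

Second, for outputs.influxdb_v2 the "Successfully connected" message is not evidence of traffic: the plugin's connect step only constructs the HTTP client, so it succeeds even when the server is unreachable.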
Moreover, the two containers are able to ping each other:
From InfluxDB:
root@436875037e67:/# ping telegraf
PING telegraf (172.27.0.3) 56(84) bytes of data.
64 bytes from docker_influx-telegraf-1.docker_influx_default (172.27.0.3): icmp_seq=1 ttl=64 time=0.080 ms
64 bytes from docker_influx-telegraf-1.docker_influx_default (172.27.0.3): icmp_seq=2 ttl=64 time=0.082 ms
64 bytes from docker_influx-telegraf-1.docker_influx_default (172.27.0.3): icmp_seq=3 ttl=64 time=0.075 ms
64 bytes from docker_influx-telegraf-1.docker_influx_default (172.27.0.3): icmp_seq=4 ttl=64 time=0.078 ms
^C
--- telegraf ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 81ms
rtt min/avg/max/mdev = 0.075/0.078/0.082/0.011 ms
From Telegraf:
root@1434d609ead7:/# ping influxdb
PING influxdb (172.27.0.2) 56(84) bytes of data.
64 bytes from docker_influx-influxdb-1.docker_influx_default (172.27.0.2): icmp_seq=1 ttl=64 time=0.057 ms
64 bytes from docker_influx-influxdb-1.docker_influx_default (172.27.0.2): icmp_seq=2 ttl=64 time=0.065 ms
64 bytes from docker_influx-influxdb-1.docker_influx_default (172.27.0.2): icmp_seq=3 ttl=64 time=0.056 ms
64 bytes from docker_influx-influxdb-1.docker_influx_default (172.27.0.2): icmp_seq=4 ttl=64 time=0.058 ms
^C
--- influxdb ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3062ms
rtt min/avg/max/mdev = 0.056/0.059/0.065/0.003 ms
Thank you, I will keep looking for the problem now that I have some hints.
Hello,
I made a mistake: I wasn't targeting the right InfluxDB. Even with InfluxDB on Docker and Telegraf from the package it doesn't work.
2023-01-09T08:08:18Z I! Using config file: /etc/telegraf/telegraf.conf
2023-01-09T08:08:18Z I! Starting Telegraf 1.25.0
2023-01-09T08:08:18Z I! Available plugins: 228 inputs, 9 aggregators, 26 processors, 21 parsers, 57 outputs, 2 secret-stores
2023-01-09T08:08:18Z I! Loaded inputs: syslog
2023-01-09T08:08:18Z I! Loaded aggregators:
2023-01-09T08:08:18Z I! Loaded processors:
2023-01-09T08:08:18Z I! Loaded secretstores:
2023-01-09T08:08:18Z I! Loaded outputs: influxdb_v2
2023-01-09T08:08:18Z I! Tags enabled: host=standalonetelegraf
2023-01-09T08:08:18Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"standalonetelegraf", Flush Interval:10s
2023-01-09T08:08:18Z D! [agent] Initializing plugins
2023-01-09T08:08:18Z D! [agent] Connecting outputs
2023-01-09T08:08:18Z D! [agent] Attempting connection to [outputs.influxdb_v2]
2023-01-09T08:08:18Z D! [agent] Successfully connected to outputs.influxdb_v2
2023-01-09T08:08:18Z D! [agent] Starting service inputs
2023-01-09T08:08:28Z E! [outputs.influxdb_v2] Failed to write metric to tlibucket (will be dropped: 411 Length Required): 411 Length Required
2023-01-09T08:08:28Z D! [outputs.influxdb_v2] Wrote batch of 94318 metrics in 1.706911ms
2023-01-09T08:08:28Z D! [outputs.influxdb_v2] Buffer fullness: 63 / 1000000 metrics
2023-01-09T08:08:38Z E! [outputs.influxdb_v2] Failed to write metric to tlibucket (will be dropped: 411 Length Required): 411 Length Required
2023-01-09T08:08:38Z D! [outputs.influxdb_v2] Wrote batch of 93379 metrics in 1.704625ms
It seems to work with urls = ["http://127.0.0.1:8086"] but not with urls = ["http://influxdb:8086"]; here influxdb is the Docker hostname, but even with the Telegraf package, when I put the URL of the Docker InfluxDB it doesn't work.
It seems to work with urls = ["http://127.0.0.1:8086"] but not with urls = ["http://influxdb:8086"]

Are these the same InfluxDB server?
As of now this does not point to an issue with Telegraf, so I would suggest asking for help on our forums or on our Slack to see whether others have ideas on how to resolve this.
Yes, I used the same server each time.
When I use Telegraf on the VM, if I put urls = ["http://127.0.0.1:8086"], it works and writes to InfluxDB.
But if I use the IP of the VM, or the name of the Docker container, it doesn't work and prints the error 411 Length Required.
I will give your forum a try and see if someone has an idea.
Thank you
Hello,
Just to let you know, I found the error. In my organization we have a proxy, and every request Telegraf made was going through the proxy (even inside the Docker network, I don't know why).

every request Telegraf made was going through the proxy

Thanks
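That also explains why 127.0.0.1 worked while the VM's IP and the Docker hostname failed: Go's standard proxy handling, which Telegraf uses, never proxies requests to localhost or loopback addresses, so only the non-loopback URLs went through the proxy, and the proxy answered 411. Two possible remedies, sketched below (the proxy URL and host list are examples):

[[outputs.influxdb_v2]]
  urls = ["http://influxdb:8086"]
  ## HTTP proxy override; if unset, the standard proxy environment
  ## variables (HTTP_PROXY, HTTPS_PROXY, NO_PROXY) are consumed.
  # http_proxy = "http://corporate.proxy:3128"

# Or exclude the InfluxDB host from proxying in Telegraf's environment:
# NO_PROXY=influxdb,localhost,127.0.0.1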
Hello guys, I'm trying to set up InfluxDB + Telegraf on Docker. However, every time I manage to launch them, there is an error I don't understand:
I tried on several machines and it fails only on this one:
Linux dockertests 5.10.0-19-amd64 #1 SMP Debian 5.10.149-2 (2022-10-21) x86_64 GNU/Linux
This is a VM on VMware vSphere 5.5. The others were Windows 11 with Docker Desktop, and Windows 11 + WSL 2 Debian with Docker Desktop integration. I have these docker-compose files:
And the environment variables:
and finally the telegraf conf: