influxdata / telegraf

Agent for collecting, processing, aggregating, and writing metrics, logs, and other arbitrary data.
https://influxdata.com/telegraf

Using plugins/beat to collect filebeat metrics into Prometheus causes Prometheus memory to fill up #10253

Closed yanming-zhang closed 5 months ago

yanming-zhang commented 2 years ago

Relevant telegraf.conf

[[inputs.beat]]
    url = "http://$HOSTIP:5066"
    include = ["beat", "libbeat", "system", "filebeat"]
    timeout = "10s"

[[outputs.prometheus_client]]
    listen = ":9273"

System info

telegraf-1.20.4 centos-7.6 kubernetes-1.16.9

Docker

No response

Steps to reproduce

  1. Use plugins/beat to collect filebeat metrics and expose them to Prometheus
  2. A large number of metric names similar to the following appear: beat_filebeat_harvester_files_0066da78_a3af_4ca2_9360_d8372861e5a7_read_offset, beat_filebeat_harvester_files_0066da78_a3af_4ca2_9360_d8372861e5a7_size
  3. As a result, the memory used by Prometheus keeps growing until system memory is exhausted (see the filtering sketch after this list)
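
One possible mitigation (a minimal sketch, not from the reporter): Telegraf's per-plugin metric filtering can drop the per-file harvester fields before they reach the Prometheus output. This assumes the flattened field names follow the harvester_files_<uuid>_* pattern shown above; fielddrop accepts glob patterns.

[[inputs.beat]]
    url = "http://$HOSTIP:5066"
    include = ["beat", "libbeat", "system", "filebeat"]
    timeout = "10s"
    ## Drop the per-harvested-file fields; each harvested file adds a UUID to the
    ## field name, which becomes a separate Prometheus series (assumed naming).
    fielddrop = ["harvester_files_*"]

Filtering at the input also keeps these series out of the prometheus_client output's registry, so both Telegraf and Prometheus memory usage should stay bounded.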

Expected behavior

I want to collect each type of metric from beat, libbeat, system, and filebeat: https://github.com/influxdata/telegraf/tree/master/plugins/inputs/beat

Actual behavior

(screenshot attached in the original issue)

Additional info

No response

yanming-zhang commented 2 years ago

@powersj

powersj commented 6 months ago

@yanming-zhang,

I apologize that I never saw this ping. Did you ever work out what was going on, or is this still an issue? If it is, could you collect a memory profile?
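
For reference, one way to collect such a profile (a sketch, not from the original thread): Telegraf can expose the standard Go pprof endpoints via its --pprof-addr flag, and a heap profile can then be pulled with go tool pprof. The address and config path below are placeholders.

    # start telegraf with pprof enabled (address/port are arbitrary)
    telegraf --config /etc/telegraf/telegraf.conf --pprof-addr localhost:6060

    # capture a heap profile once memory usage has grown
    go tool pprof http://localhost:6060/debug/pprof/heap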

telegraf-tiger[bot] commented 5 months ago

Hello! I am closing this issue due to inactivity. I hope you were able to resolve your problem. If not, please try posting this question in our Community Slack or Community Forums, or provide additional details in this issue and request that it be re-opened. Thank you!