turboproc opened this issue 2 years ago
@sspaink Have you seen this behavior before? I'm not sure if gjson is using the memory or if there is something telegraf needs to do differently.
Hi @sspaink, no, this started to occur when I included json_v2. Before that I had been using Telegraf for a long time without any issues.
@sspaink what is the purpose of the cartesianProduct function in the JSONv2 parser? If I parse a simple JSON file with a single value, e.g. {"state_num": 7},
memory usage remains low after parsing a message. But as soon as the number of fields grows, as with this bug, memory usage in that function increases dramatically once it and the mergeMetric function are called:
This is a real problem and it still persists. One scenario is pulling metrics from '${ELASTIC_SEARCH_URL}/_stats/_all', which consumes gigabytes of RAM until an out-of-memory error is thrown.
@SudoNova I would highly recommend looking at the xpath parser in the meantime, or go look at our elasticsearch plugin that pulls from that endpoint already!
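For reference, a minimal config sketch using the dedicated elasticsearch input plugin instead of scraping the stats endpoint through the JSON parser (server URL and option values here are placeholders; check the plugin's README for the full option list):

```toml
# Sketch: use the elasticsearch input plugin rather than
# inputs.http + json_v2 against ${ELASTIC_SEARCH_URL}/_stats/_all.
[[inputs.elasticsearch]]
  ## Placeholder server address -- point this at your cluster.
  servers = ["http://localhost:9200"]
  http_timeout = "5s"
  ## Gather stats only from the node the agent connects to.
  local = true
  cluster_health = false
```

The plugin parses the stats response with purpose-built code, so it avoids the generic field-combination step in json_v2 that this issue is about.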
Relevant telegraf.conf
Logs from Telegraf
System info
Telegraf 1.22.3 (git: HEAD ff950615) / PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
Docker
No response
Steps to reproduce
...
Expected behavior
Would expect no significant change in memory usage. In the idle state, stats look like:
Actual behavior
As soon as a JSON message is sent to Telegraf, memory usage increases excessively. Waiting does not resolve the situation; only a restart helps, and only until the next message arrives.
Additional info
The message sent to Telegraf using Postman is as follows:
Also tried this on a different Linux distribution (Ubuntu 20.04), which shows the same behaviour.