apache / apisix

The Cloud-Native API Gateway
https://apisix.apache.org/blog/
Apache License 2.0

the prometheus metrics API is too slow #7353

Closed · hansedong closed this 2 years ago

hansedong commented 2 years ago

Description

I use APISIX in our microservice platform. There are thousands of microservices, which means thousands of Route and Upstream resources in APISIX.

When I switched online traffic to APISIX and our monitoring platform, Prometheus, scraped time-series data from APISIX's metrics API, APISIX took so long to respond that the Prometheus scrape timed out.

To rule out network causes, I fetched the metrics with curl directly on the APISIX node. It was still very slow, so the root of the problem lies in APISIX itself.

curl "http://127.0.0.1:9091/apisix/prometheus/metrics" > /tmp/metrics
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 7092k    0 7092k    0     0   351k      0 --:--:--  0:00:22 --:--:-- 1710k

As shown above:

  1. The metrics payload is only about 7 MB.
  2. The response time of APISIX metrics API is 22 seconds.

How should I troubleshoot this issue?
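One way to start troubleshooting is to find out which metric families dominate the payload, since route-level series multiply quickly with thousands of routes. A minimal sketch (the heredoc creates a tiny stand-in sample; in practice you would run the awk line against the real /tmp/metrics dump):

```shell
# Tally time series per metric family in a Prometheus text-format dump.
# The sample file below is a stand-in; point the awk command at your
# real dump (e.g. /tmp/metrics) to see which families dominate.
cat <<'EOF' > /tmp/metrics_sample
# HELP apisix_http_status HTTP status codes per service in APISIX
# TYPE apisix_http_status counter
apisix_http_status{code="200",route="r1"} 100
apisix_http_status{code="500",route="r1"} 3
apisix_bandwidth{type="egress",route="r1"} 2048
EOF
# Strip comments and blanks, truncate each sample line to its family
# name (everything before the first "{" or space), then count.
awk '!/^#/ && NF { sub(/[{ ].*/, ""); count[$0]++ }
     END { for (m in count) print count[m], m }' /tmp/metrics_sample | sort -rn
```

Families with counts in the tens of thousands are the ones worth trimming first.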

Environment

tzssangglass commented 2 years ago

This is a known issue, same as #7211 and #5755

hansedong commented 2 years ago

@tzssangglass

Thanks for the reply. I have read those two issues, and they were somewhat helpful.

However, they do not fix the problem at its source in APISIX. I plan to modify the exporter.lua script to reduce the metrics dataset and see whether that resolves it. In other words, users have to change the code themselves to work around this, which is not a good way to solve the problem.

The reason I hit this problem is that I am migrating our gateway from Envoy to APISIX. Envoy does not respond slowly even when its metrics data grows to several hundred MB.

I think, as mentioned in the other issues, APISIX's Prometheus plugin could provide some customizable options to help people with similar problems.

Thank you very much for your reply. If there is progress, I will report back in this issue.

tzssangglass commented 2 years ago

> I plan to change the exporter.lua script to reduce the metrics dataset and see if the problem can be solved.

Here's what I've seen work so far: dropping metrics that aren't needed. ref: #4273. No one has claimed that issue yet, so if you want, you can try it.

xuminwlt commented 2 years ago

I enabled the Prometheus plugin in the global rules, which lets it collect route-level metrics. But when I disable it again, the route-level metrics still appear at the /apisix/prometheus/metrics endpoint, and there are a huge number of them. The only thing that works is restarting the APISIX service. Is that the expected behavior?
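For context, toggling the plugin in the global rules is done through the Admin API, roughly like this (a sketch only: the port 9080, rule id 1, and $ADMIN_KEY are assumptions that depend on your config.yaml; it needs a running gateway):

```shell
# Enable the prometheus plugin gateway-wide via a global rule.
curl http://127.0.0.1:9080/apisix/admin/global_rules/1 \
  -H "X-API-KEY: $ADMIN_KEY" -X PUT \
  -d '{"plugins": {"prometheus": {}}}'

# Remove the global rule again. Note: as described above, series already
# registered in the exporter's shared dict are not purged by this.
curl http://127.0.0.1:9080/apisix/admin/global_rules/1 \
  -H "X-API-KEY: $ADMIN_KEY" -X DELETE
```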

hansedong commented 2 years ago

@tzssangglass

I did some further follow-up on this issue. The root of the problem is in Etcd.

When an APISIX node establishes many connections (e.g., more than 200) to the same Etcd node, and APISIX communicates with Etcd over TLS, the problem is reproducible.

This has two consequences:

  1. When APISIX sends an HTTP request to Etcd, Etcd's response time increases dramatically (up to tens of seconds).
  2. Because the APISIX Prometheus plugin queries Etcd's modified index on every scrape, its response time also increases significantly, so Prometheus times out when fetching metrics.
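A quick way to check whether a node is in this regime is to count its established connections to Etcd's client port (a Linux-only sketch; 2379, hex 094B, is assumed to be your Etcd client port):

```shell
# Count established TCP connections to port 2379 (0x094B) by parsing
# /proc/net/tcp and /proc/net/tcp6 directly (field 3 is the remote
# address as IP:PORT in hex; state 01 means ESTABLISHED). A count in
# the hundreds matches the reproduction conditions described above.
awk 'NR > 1 { split($3, rem, ":"); if (rem[2] == "094B" && $4 == "01") n++ }
     END { print n + 0 }' /proc/net/tcp /proc/net/tcp6 2>/dev/null
```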

I created a related issue in the Etcd project and reproduced the problem there. Related issues: #7078, https://github.com/etcd-io/etcd/issues/14185

Etcd 3.5.5 will fix this issue. I rebuilt and deployed Etcd from the fix PR, tested it, and confirmed that it resolves the problem.
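If I read the Etcd fix correctly, it also exposes the limit as a server flag, so on a fixed build the per-connection HTTP/2 stream cap can be raised explicitly. A sketch of the invocation (the flag name and default are my reading of the Etcd PR; verify against your build, and the cert paths are placeholders):

```shell
# Assumes an etcd build containing the fix. --max-concurrent-streams caps
# HTTP/2 streams per client TLS connection; the old hardcoded default was
# the bottleneck when many APISIX workers shared one etcd endpoint.
etcd --name infra0 \
     --cert-file=/etc/etcd/server.crt \
     --key-file=/etc/etcd/server.key \
     --max-concurrent-streams=4294967295
```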

hansedong commented 2 years ago

@xuminwlt In my opinion, the Prometheus plugin retains historical data, so disabling the plugin does not solve the problem; even when an upstream's Nodes change, the plugin keeps the historical Node data.

tzssangglass commented 2 years ago

> Etcd 3.5.5 will fix this issue. I rebuilt and deployed Etcd from the fix PR, tested it, and confirmed that it resolves the problem.

Thank you for your research!

> 2. Because the APISIX Prometheus plugin queries Etcd's modified index on every scrape, its response time also increases significantly, so Prometheus times out when fetching metrics.

This is due to the sheer number of metrics (tens of thousands). I made some optimizations to the upstream nginx-lua-prometheus, but they didn't solve the problem completely.

ref: https://github.com/knyar/nginx-lua-prometheus/pull/139

One idea in the community right now is to provide options controlling which types of metrics are collected, so as to reduce the total number of metrics; see: https://github.com/apache/apisix/issues/7211#issuecomment-1165669868
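The shape of such an option might look something like this (purely a hypothetical sketch of the proposal; these keys do not exist in APISIX at the time of this thread):

```yaml
# config.yaml -- HYPOTHETICAL sketch of the proposed knobs. The idea is
# to let operators opt out of high-cardinality metric families entirely
# instead of patching exporter.lua.
plugin_attr:
  prometheus:
    metrics:
      http_status: true      # keep per-route status counters
      http_latency: false    # drop the (large) latency histogram
      bandwidth: false       # drop per-route bandwidth counters
```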

zuiyangqingzhou commented 2 years ago

This problem does exist when the volume of metrics data is large, and we found that it also leads to abnormally high CPU usage.

So we modified the Prometheus plugin to record only the necessary information, and the streamlined plugin works well.

tokers commented 2 years ago

@zuiyangqingzhou Have you tried the nginx-lua-prometheus optimization introduced by @tzssangglass ?

tzssangglass commented 2 years ago

> @zuiyangqingzhou Have you tried the nginx-lua-prometheus optimization introduced by @tzssangglass ?

This optimization is also limited: some steps cannot be removed, such as sorting tens of thousands of keys, regex matching, and string concatenation. 😅
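For readers unfamiliar with the exporter internals, the hot path being described looks roughly like this. A Python sketch of what any Prometheus text-format exporter must do per scrape (nginx-lua-prometheus does the equivalent in Lua over a shared dict; the metric names and counts are illustrative):

```python
import time

# Simulate a shared dict holding one entry per time series:
# full metric name (with labels) -> value. With thousands of routes,
# the series count easily reaches tens of thousands.
series = {
    f'apisix_http_status{{code="200",route="r{i}",service="s{i % 50}"}}': i
    for i in range(50_000)
}

def render(series: dict) -> str:
    # Per scrape, the exporter must sort every key (the text format
    # groups metric families together) and concatenate one big string.
    # Both steps scale with the series count and cannot be skipped,
    # which is the limitation described above.
    lines = [f"{name} {value}" for name, value in sorted(series.items())]
    return "\n".join(lines)

start = time.perf_counter()
payload = render(series)
print(f"{len(series)} series, {len(payload) / 1e6:.1f} MB payload, "
      f"rendered in {time.perf_counter() - start:.2f}s")
```

Cutting unneeded families shrinks the sort and the payload together, which is why the streamlined-plugin approach above works.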

hansedong commented 2 years ago

@tzssangglass @tokers Sorry for taking so long to reply. The root cause of this problem is a bug in Etcd: its HTTP/2-based HTTPS connections are limited. The fix has not yet been released for the official 3.5 series, but it has been fixed and released in a new 3.4 version. For 3.5, I patched Etcd's source, recompiled it, and have run it stably in production for nearly a month. For details on the Etcd bug, see: https://github.com/etcd-io/etcd/issues/14185

tokers commented 2 years ago

> @tzssangglass @tokers Sorry for taking so long to reply. The root cause of this problem is a bug in Etcd: its HTTP/2-based HTTPS connections are limited. The fix has not yet been released for the official 3.5 series, but it has been fixed and released in a new 3.4 version. For 3.5, I patched Etcd's source, recompiled it, and have run it stably in production for nearly a month. For details on the Etcd bug, see: etcd-io/etcd#14185

@hansedong Would you like to submit a PR to add this important fact to the FAQ?

hansedong commented 2 years ago

@tokers I'd love to do this. How do I add an entry to the FAQ?

tokers commented 2 years ago

> @tokers I'd love to do this. How do I add an entry to the FAQ?

The FAQ page is https://apisix.apache.org/docs/apisix/FAQ/, and you can submit a PR to apisix-website: https://github.com/apache/apisix-website

hansedong commented 2 years ago

@tokers Thanks a lot, I'll give it a try.

hansedong commented 2 years ago

@tokers I'm a little confused: the FAQ page content doesn't seem to live in the apache/apisix-website project but in apache/apisix, specifically https://github.com/apache/apisix/blob/master/docs/en/latest/FAQ.md. Is that right?

tokers commented 2 years ago

Oops, you're right, that's the correct place.

hansedong commented 2 years ago

@tokers I've added an FAQ item in #7906; can you help review it?