rafaelpirolla opened this issue 4 years ago
Yes, we are aware that some fetches are done at the per-config-entry level, which need not be done. But as per the current implementation, it has a dependency on features/config levels. We are working to provide easy stat-level access (to avoid the per-config-entry fetch).
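For illustration, here is a rough sketch of the difference between per-entry and feature-level stat fetches against the Nitro stat API. The URL, credentials, and the way the exporter would call this are my own assumptions, not the exporter's actual code; the endpoint paths and auth headers follow the public Nitro docs.

```python
# Sketch only -- not the exporter's implementation.
import requests

NS_URL = "https://ns.example.com"                      # hypothetical NSIP
HEADERS = {"X-NITRO-USER": "nsroot", "X-NITRO-PASS": "secret"}

def stats_per_entity(names):
    """Per-config-entry style: one HTTP round trip per lbvserver."""
    return [
        requests.get(f"{NS_URL}/nitro/v1/stat/lbvserver/{name}",
                     headers=HEADERS, verify=False).json()["lbvserver"][0]
        for name in names
    ]

def stats_bulk():
    """Feature-level style: all lbvserver stats in a single round trip."""
    resp = requests.get(f"{NS_URL}/nitro/v1/stat/lbvserver",
                        headers=HEADERS, verify=False)
    return resp.json().get("lbvserver", [])
```

With hundreds of LBVS, collapsing the per-entry loop into one bulk call is where most of the scrape time should go away.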
Cool. I'm looking forward to this too. To throw in a data point, our scrape is taking roughly 30s (quite variable; I'm told 15-20 seconds is more the norm, but load varies...), and results in about 27,000 lines in the response.
We're not using our VPXs as a Kubernetes Cluster Ingress Controller, but more as a regular layer-7 load balancer. We have VPXs in our datacentre, our DMZ, and for various significant groups, in either HA mode or as GSLB. In total, we have about 14 or so VPXs, but at the moment my attention is solely on about four of these (and even then only on a small number of the LBVS). The SDXs are currently out of scope for my monitoring.
There are currently over 400 LBVS on one active/passive HA pair of our VPXs.
I haven't compared to querying with SNMP (not sure I want to).
As an optimization, it would perhaps be useful to provide a filtering mechanism for which LBVS etc. to retrieve. The naming conventions that have been used around here could generously be described as 'organic', but a regex could still be useful; something like the sketch below is what I have in mind.
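Purely illustrative: the function name, the `pattern` parameter, and the point where it would hook into the exporter are my own assumptions, but it shows the shape of a regex filter applied to the fetched entities before any per-entity work is done.

```python
# Sketch of a regex filter over lbvserver stat entries (hypothetical helper).
import re

def filter_lbvservers(entities, pattern=None):
    """Keep only the lbvservers whose name matches the optional regex."""
    if not pattern:
        return entities
    rx = re.compile(pattern)
    return [e for e in entities if rx.search(e["name"])]

# e.g. scrape only the vservers for one application group:
# filter_lbvservers(stats_bulk(), pattern=r"^lb_vs_payments_.*")
```

Even with 'organic' naming, being able to exclude the bulk of the 400+ LBVS we don't care about would cut both scrape time and response size.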
Depending on the number of content switches and virtual services, the scraping can take up to ~30s. Maybe there are some easy optimisations that could be made to speed things up?