Sandwich1699975 opened 2 weeks ago
Queueing should be handled in the Python script, and the speed test results should ideally be collected in the background and stored in a CSV file so the scrape itself is quick.
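A minimal sketch of that idea, assuming a plain Python exporter using the speedtest-cli module (the repo may call the Ookla binary instead); the CSV path, column names and interval are placeholders:

```python
# Sketch only: a background worker runs the speed test on a timer and
# appends results to a CSV, so the Prometheus scrape path only has to
# read the last row. Swap run_speedtest() for however this repo
# actually invokes Ookla.
import csv
import threading
import time
from pathlib import Path

import speedtest  # assumes the speedtest-cli package

RESULTS_CSV = Path("speedtest_results.csv")  # assumed location
TEST_INTERVAL_S = 3600                       # one test per hour, per Ookla's guidance


def run_speedtest() -> dict:
    st = speedtest.Speedtest()
    st.get_best_server()
    return {
        "download_bps": st.download(),
        "upload_bps": st.upload(),
        "ping_ms": st.results.ping,
    }


def background_worker() -> None:
    fields = ["timestamp", "download_bps", "upload_bps", "ping_ms"]
    while True:
        row = {"timestamp": int(time.time()), **run_speedtest()}
        write_header = not RESULTS_CSV.exists()
        with RESULTS_CSV.open("a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=fields)
            if write_header:
                writer.writeheader()
            writer.writerow(row)
        time.sleep(TEST_INTERVAL_S)


def latest_result() -> dict | None:
    """Called from the scrape path: reading one CSV row is effectively instant."""
    if not RESULTS_CSV.exists():
        return None
    with RESULTS_CSV.open(newline="") as f:
        rows = list(csv.DictReader(f))
    return rows[-1] if rows else None


threading.Thread(target=background_worker, daemon=True).start()
```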
An ideal solution would be to query Grafana Cloud, look at values within a threshold of around $2 \times$ scrape_interval, and count how many unique values of origin_prometheus there are. That gives you the number of clients. You can then set the timeout for each client to one hour times the number of devices found by that query. No need for scheduling and queues just yet. I think this will work because they are counted as 'on demand'.
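Roughly what that query could look like against the Grafana Cloud Prometheus HTTP API; the endpoint URL, credentials, scrape interval and the `up` metric used for counting are assumptions, not this repo's actual values:

```python
# Sketch only: count distinct clients by asking Grafana Cloud how many
# unique origin_prometheus values reported within ~2 x scrape_interval,
# then spread the hourly Ookla budget across them.
import requests

GRAFANA_PROM_URL = "https://prometheus-prod-XX.grafana.net/api/prom/api/v1/query"  # placeholder
GRAFANA_USER = "123456"       # Grafana Cloud instance ID (placeholder)
GRAFANA_API_KEY = "glc_..."   # API token (placeholder)
SCRAPE_INTERVAL_S = 60        # assumed scrape_interval


def count_clients() -> int:
    window = f"{2 * SCRAPE_INTERVAL_S}s"  # ~2 x scrape_interval threshold
    # Unique origin_prometheus values seen recently; 'up' is a stand-in,
    # any metric every client exports would work.
    query = f"count(count by (origin_prometheus) (last_over_time(up[{window}])))"
    resp = requests.get(
        GRAFANA_PROM_URL,
        params={"query": query},
        auth=(GRAFANA_USER, GRAFANA_API_KEY),
        timeout=30,
    )
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return int(float(result[0]["value"][1])) if result else 0


def per_client_timeout_s() -> int:
    # One hour times the number of devices found: each client then runs a
    # test only once per (clients x 1h), keeping the shared IP under
    # Ookla's one-call-per-hour guidance.
    return 3600 * max(count_clients(), 1)
```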
As suggested by Ookla, you should restrict calls to Ookla to one per hour per IP address to avoid being rate limited.
There should be a way to store the timestamp of the last Speedtest call and to queue calls between clients on the same IP address. Perhaps this is best stored in Grafana and pulled by the Python Prometheus script? The fix would then edit this line so that the timestamp is set correctly.
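A hedged sketch of that check, using the same placeholder Grafana Cloud endpoint as above; speedtest_last_run_timestamp_seconds is a hypothetical metric the exporter would have to publish whenever it sets that timestamp:

```python
# Sketch only: before calling Ookla, ask Grafana Cloud when any client on
# this IP last ran a test, and skip if the shared window has not elapsed.
import time

import requests

GRAFANA_PROM_URL = "https://prometheus-prod-XX.grafana.net/api/prom/api/v1/query"  # placeholder
GRAFANA_AUTH = ("123456", "glc_...")  # instance ID + API token (placeholders)


def last_run_epoch() -> float:
    # Latest timestamp any client reported for its most recent run.
    query = "max(speedtest_last_run_timestamp_seconds)"  # hypothetical metric
    resp = requests.get(
        GRAFANA_PROM_URL,
        params={"query": query},
        auth=GRAFANA_AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0


def may_run_speedtest(min_gap_s: int) -> bool:
    """True if the shared IP has been quiet for at least min_gap_s seconds."""
    return time.time() - last_run_epoch() >= min_gap_s


# Usage: only fire the test when the whole IP is outside Ookla's window,
# e.g. three clients behind one IP -> a three-hour gap per client.
if may_run_speedtest(min_gap_s=3600 * 3):
    ...  # run the speed test, then update speedtest_last_run_timestamp_seconds
```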