By calculating how many requests we need and spreading them evenly
over the first half of the send interval (so we still send the metrics
fairly quickly), we should reduce the risk of throttling caused by
bursting requests too quickly after one another.
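A minimal sketch of that spacing calculation, assuming the names here (`num_metrics`, `batch_size`, `send_interval`) since the actual implementation is not shown:

```python
import math

def request_delay(num_metrics: int, batch_size: int, send_interval: float) -> float:
    """Delay (seconds) between consecutive requests so that all requests
    are spread evenly across the first half of the send interval."""
    # How many requests we need to ship all metrics in batches.
    num_requests = math.ceil(num_metrics / batch_size)
    if num_requests <= 1:
        return 0.0  # a single request needs no spacing
    # Use only the first half of the interval so metrics still arrive promptly.
    return (send_interval / 2) / num_requests

# e.g. 100 metrics in batches of 20 over a 60s interval:
# 5 requests spaced across 30s -> one request every 6s.
```

Dividing by the request count (rather than `num_requests - 1`) leaves a small buffer before the half-interval boundary, which is a conservative choice when the goal is simply to avoid bursts.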