hjr3 / soldr


Retry failed requests per host/origin #50

Open manpreeeeeet opened 10 months ago

manpreeeeeet commented 10 months ago
pub async fn list_failed_requests(pool: &SqlitePool) -> Result<Vec<QueuedRequest>> {
    tracing::trace!("list_failed_requests");
    let mut conn = pool.acquire().await?;

    // FIXME - we currently tick the retry queue every second, so this effectively gives a
    // rate limit of 5 requests per second. This should probably be configurable on a per-origin
    // basis.
    let query = r#"
    SELECT *
    FROM requests
    WHERE state IN (?, ?, ?, ?)
        AND retry_ms_at <= strftime('%s','now') || substr(strftime('%f','now'), 4)
    ORDER BY retry_ms_at ASC
    LIMIT 5;
    "#;...

Currently we only fetch the top 5 failed requests, ordered by earliest retry time. This means that if a certain origin/domain has many failing requests at the same time, it delays failing requests from other origins/domains from being retried. I suggest we fetch x requests per domain/origin, ordered by earliest retry time, to make the system fair. What are your thoughts?
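The fairness idea above can be sketched in plain Rust, independent of the SQL layer: sort the due requests by retry time, then take at most N from each origin so one origin's backlog cannot starve the others. (In practice this would likely live in the query itself, e.g. via SQLite's `ROW_NUMBER() OVER (PARTITION BY origin ...)`, available since SQLite 3.25.) The names here (`Req`, `take_fair`, the `origin` field) are hypothetical; the real `requests` schema isn't shown in the snippet.

```rust
use std::collections::HashMap;

// Hypothetical in-memory model of a queued request; field names mirror
// the snippet above but the real schema may differ.
#[derive(Debug, Clone, PartialEq)]
struct Req {
    origin: String,
    retry_ms_at: u64,
}

// Take at most `per_origin` due requests from each origin, earliest first,
// so that one origin with many failures cannot monopolize the retry batch.
fn take_fair(mut due: Vec<Req>, per_origin: usize) -> Vec<Req> {
    // Earliest retry time first, matching the ORDER BY in the query.
    due.sort_by_key(|r| r.retry_ms_at);
    let mut counts: HashMap<String, usize> = HashMap::new();
    due.into_iter()
        .filter(|r| {
            let c = counts.entry(r.origin.clone()).or_insert(0);
            *c += 1;
            *c <= per_origin
        })
        .collect()
}

fn main() {
    let due = vec![
        Req { origin: "a.com".into(), retry_ms_at: 1 },
        Req { origin: "a.com".into(), retry_ms_at: 2 },
        Req { origin: "a.com".into(), retry_ms_at: 3 },
        Req { origin: "b.com".into(), retry_ms_at: 4 },
    ];
    // With per_origin = 2, b.com still gets a slot despite a.com's backlog,
    // whereas a flat LIMIT could have been filled entirely by a.com.
    for r in take_fair(due, 2) {
        println!("{} {}", r.origin, r.retry_ms_at);
    }
}
```

With a flat `LIMIT 5` the same batch could be consumed entirely by a single origin; the per-origin cap is what restores fairness.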

hjr3 commented 9 months ago

Agreed. That is what my comment

This should probably be configurable on a per-origin basis

is implying. As I think about this more, per-origin configuration may be less important than making this selection logic itself per-origin.

manpreeeeeet commented 8 months ago

I can try exploring this if you haven't already started.

hjr3 commented 8 months ago

@manpreeeeeet you can give it a go!