lycheeverse / lychee

⚑ Fast, async, stream-based link checker written in Rust. Finds broken URLs and mail addresses inside Markdown, HTML, reStructuredText, websites and more!
https://lychee.cli.rs
Apache License 2.0

Add custom delay in between requests (prevent ban) #989

Open tooomm opened 1 year ago

tooomm commented 1 year ago
lychee v0.10.3

I'm seeing many [429] ... | Failed: Network error: Too Many Requests errors.

The issue is that the links we want to check are actually queries to an API, which then returns URLs again. We are doing well over 1,000 requests.

From their docs:

Rate Limits and Good Citizenship

We kindly ask that you insert 50 – 100 milliseconds of delay between the requests you send to the server

Submitting excessive requests to the server may result in a HTTP 429 Too Many Requests status code. Continuing to overload the API after this point may result in a temporary or permanent ban of your IP address.

Is there any default delay? I could not find any information on that.

The lychee docs list various mitigations to work around rate limits: https://lychee.cli.rs/#/troubleshooting/rate-limits?id=rate-limits. But none of them really helps here to prevent getting banned.

mre commented 1 year ago

There is no delay between the requests at the moment. In fact, we run many requests concurrently, even to the same host.

We could add a --request-delay argument, which in combination with --max-concurrency 1 would resolve your issue, but it would just be a band-aid for the underlying problem, which is the rate limiting itself. Delays need to be tweaked by humans to find a sweet spot between throughput and stability, and that can't be solved with a single global delay.

Instead, my proposal is to add better rate-limiting support per website. I wrote https://github.com/mre/rate-limits a while ago and would like to integrate it into lychee. We would keep track of the current rate limits for each host in a HashMap (a concurrent HashMap, actually) or maybe even create one reqwest client per host; I don't know which is the better option right now.

E.g. the hash map could look like this:

use rate_limits::ResetTime;
use std::collections::HashMap;
use time::OffsetDateTime;

fn main() {
    // Map from host name to the time at which its rate limit resets.
    let mut rate_limits = HashMap::new();
    rate_limits.insert(
        "github.com".to_string(),
        ResetTime::DateTime(OffsetDateTime::from_unix_timestamp(1350085394).unwrap()),
    );
}

An entry would be inserted once we get rate-limited. Before every request, we would check whether the current time is after the ResetTime. If not, we'd wait out the difference and finally remove the rate limit entry from the map. This would scale much better than a global delay and would cover more use cases.
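Roughly, the check before each request could look like the sketch below. The helper name is made up and the map stores the reset time directly instead of a ResetTime, so this is just an illustration, not lychee's actual code:

use std::collections::HashMap;
use std::time::Duration;
use time::OffsetDateTime;

// Hypothetical helper: if the host is rate-limited, sleep until the reset
// time has passed, then forget the entry so future requests go through.
async fn wait_for_reset(rate_limits: &mut HashMap<String, OffsetDateTime>, host: &str) {
    if let Some(reset) = rate_limits.get(host) {
        let now = OffsetDateTime::now_utc();
        if *reset > now {
            // Wait out the remaining time until the rate limit resets.
            let secs = (*reset - now).whole_seconds().max(0) as u64;
            tokio::time::sleep(Duration::from_secs(secs + 1)).await;
        }
        rate_limits.remove(host);
    }
}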

What do you think?

tooomm commented 1 year ago

You are absolutely right that the delay would need to be tweaked to fit all queried hosts... and there is likely no common ground. And of course concurrency would need to be set to 1, so everything would be slowed down.

Your proposal sounds like a pretty smart solution! How common is the usage of these headers already? I see that the newest IETF document also covers APIs, but everything is still in draft?

mre commented 1 year ago

It is pretty common for APIs, but not for websites, I would guess. Realistically, we might still need both: the rate-limit headers and a way to configure the delay.

Let's start with rate-limit headers, though, because that's a common way for a website to tell us to slow down. Another common way is the infamous 429. We currently retry those requests with an exponential backoff, which is a start. (We could do better by estimating an optimal request interval based on the response time with various request delays, but let's not get ahead of ourselves.)
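As a rough sketch of how a 429 could feed the per-host map, assuming only the Retry-After seconds form here (the rate-limits crate above also understands the various X-RateLimit-* / RateLimit-* header flavors):

use reqwest::{Response, StatusCode};
use time::{Duration, OffsetDateTime};

// Derive a reset time from a 429 response's Retry-After header (seconds form).
fn reset_from_response(response: &Response) -> Option<OffsetDateTime> {
    if response.status() != StatusCode::TOO_MANY_REQUESTS {
        return None;
    }
    let secs: i64 = response
        .headers()
        .get("retry-after")?
        .to_str()
        .ok()?
        .parse()
        .ok()?;
    Some(OffsetDateTime::now_utc() + Duration::seconds(secs))
}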

kdeldycke commented 1 year ago

Instead, my proposal is to add better rate-limiting support per website.

πŸ‘†This!

Checking my awesome-falsehood project for dead links reveals some false positives for the news.ycombinator.com and twitter.com domains. Of course these two protect themselves from abuse, and concurrent access by lychee is seen as such.

--max-concurrency 1 solves the issue.

But we forfeit all performance. The ideal solution would be to have either a --max-concurrency-per-domain 1 or a --delay-per-domain 1.5s option; a sketch of the idea follows below.
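For illustration only, a per-domain concurrency cap could be one semaphore per host inside lychee. The type and option names here are made up; neither option exists today:

use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::{Mutex, OwnedSemaphorePermit, Semaphore};

// Hypothetical limiter: at most `per_host` requests in flight per domain.
struct HostLimiter {
    per_host: usize,
    semaphores: Mutex<HashMap<String, Arc<Semaphore>>>,
}

impl HostLimiter {
    // Hold the returned permit for the duration of the request.
    async fn acquire(&self, host: &str) -> OwnedSemaphorePermit {
        let sem = {
            let mut map = self.semaphores.lock().await;
            map.entry(host.to_string())
                .or_insert_with(|| Arc::new(Semaphore::new(self.per_host)))
                .clone()
        };
        sem.acquire_owned().await.expect("semaphore closed")
    }
}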

mre commented 1 year ago

Did you manage to check twitter links lately? It's failing on my end, even with our workaround to use nitter instead. Maybe the concurrency is what's killing it for me.

Haven't encountered any issues with HN yet, even though it's probably a matter of not triggering their rate-limiting. Out of curiosity, how many requests to news.ycombinator.com does it take until you encounter any issues?

kdeldycke commented 1 year ago

Did you manage to check twitter links lately?

It's more complicated than that: --max-concurrency 1 fixed it from my machine, but it doesn't from my GitHub Actions workflows. So there is hard rate-limiting by Twitter on requests originating from GitHub.

how many requests to news.ycombinator.com does it take until you encounter any issues?

Around 4 requests:

(Screenshot from 2023-06-12, 19:06:27)

Source: https://github.com/kdeldycke/awesome-falsehood/issues/159

lfrancke commented 10 months ago

I'd like a feature like muffet's rate-limit option, which applies across all requests.

I'm happy to offer a bounty of sorts, 100 € (payable via PayPal or SEPA), for whoever implements this; if multiple people work on it, I'm happy to split the money.

mre commented 10 months ago

That's great to hear! Whoever might be interested in tackling this, feel free to post a comment here. Generally, we can follow the approach of muffet mentioned above.
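In spirit, such a global rate limit could be a single shared throttle that spaces out all requests. A rough sketch, with made-up names rather than an existing lychee type:

use std::time::Duration;
use tokio::sync::Mutex;
use tokio::time::{sleep, Instant};

// Global throttle: consecutive requests are at least `min_delay` apart,
// regardless of which host they go to.
struct GlobalThrottle {
    min_delay: Duration,
    next_allowed: Mutex<Instant>,
}

impl GlobalThrottle {
    async fn wait(&self) {
        let mut next = self.next_allowed.lock().await;
        let now = Instant::now();
        if *next > now {
            sleep(*next - now).await;
        }
        *next = Instant::now() + self.min_delay;
    }
}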

nathany commented 7 months ago

Seeing this a lot from GitHub today -- I don't think it was happening the other day. Even with --max-concurrency 32.

e.g.

βœ— [429] https://github.com/markdown-it/markdown-it-abbr | Failed: Network error: Too Many Requests

There are about 7000 links in the repository I'm scanning for broken links, and a lot of the links are hitting GitHub, so getting rate limited is no surprise. --max-concurrency 1 is working so far, but it's gonna take a long time.

mre commented 7 months ago

Random thoughts:

I didn't encounter any errors today... yet; maybe I'll be affected by the end of the day as well. I'm not aware of any changes on GitHub's side. Do you run lychee with a GitHub token (--github-token or the GITHUB_TOKEN environment variable)? That usually avoids most 429s for github.com links. Hope that helps. πŸ˜…

nathany commented 7 months ago

@mre Doh. I was running lychee from my local computer against the markdown files in an open source GitHub project. I didn't see the GitHub Token option in the README/help. I will try that.

That works much better. Sorry for not reading the readme 🀦🏼

I'm still seeing a few rate limits even with the GitHub token (far fewer, but not zero). This is without a max-concurrency option:

[proposals/p3646.md]:
βœ— [429] https://github.com/rust-lang/rust/issues/60210 | Failed: Network error: Too Many Requests
βœ— [429] https://github.com/intellij-rust/intellij-rust/commit/f82f6cd68567e574bf1e30f5e0d263ee15d1d36e | Failed: Network error: Too Many Requests
βœ— [429] https://github.com/rust-lang/rust/pull/71322 | Failed: Network error: Too Many Requests

When it hits the rate limit, does it give up, so that I need to check those links manually?

I could just add --accept 429, but that's not entirely ideal.

mre commented 7 months ago

Yes, that is the final status output of lychee; there will be no further checks after that. That means you will have to check manually, try again later, reduce the number of concurrent requests, or "shard" the requests (split them into groups). You can do the latter by using two lychee runs, e.g.:

# Throttle GitHub check
lychee --include github --max-concurrency 20 <other settings>

# Don't throttle the rest
lychee --exclude github <other settings>

In the future, I hope that we can fix that once and for all with per-host rate-limiting.