lycheeverse / lychee

⚡ Fast, async, stream-based link checker written in Rust. Finds broken URLs and mail addresses inside Markdown, HTML, reStructuredText, websites and more!
https://lychee.cli.rs
Apache License 2.0
2.18k stars · 132 forks

Add recursive option #78

Open styfle opened 3 years ago

styfle commented 3 years ago

It would be nice to pass a URL and have it crawl the entire website recursively looking for dead links.

In order to avoid crawling the entire internet, it should stop recursing once a request no longer matches the original domain.
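The same-domain stopping rule can be sketched as a small predicate. This is a hypothetical helper, not lychee's actual API; it compares hosts with naive string handling (a real implementation would use a proper URL parser and resolve relative links first):

```rust
// Hypothetical sketch: recurse into a link only if its host matches the
// host of the original root URL, so the crawler never leaves the domain.
fn host_of(url: &str) -> Option<&str> {
    let rest = url.split("://").nth(1)?;
    // The host ends at the first '/', '?', '#', or ':' (port).
    let end = rest
        .find(|c: char| c == '/' || c == '?' || c == '#' || c == ':')
        .unwrap_or(rest.len());
    Some(&rest[..end])
}

fn should_recurse(root: &str, link: &str) -> bool {
    match (host_of(root), host_of(link)) {
        (Some(a), Some(b)) => a.eq_ignore_ascii_case(b),
        // Relative or unparsable links would need to be resolved first.
        _ => false,
    }
}

fn main() {
    assert!(should_recurse("https://example.com/", "https://example.com/about"));
    assert!(!should_recurse("https://example.com/", "https://other.org/page"));
    println!("ok");
}
```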

mre commented 3 years ago

Yeah, it has been discussed a few times already and it's high on the to-do list. It will require quite some restructuring, I guess. Right now the flow is

```
main -> extractor -> channel -> client (link checker) -> main
```

but there is no connection back to the extractor.
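The missing feedback edge can be sketched as a work queue that checked pages feed back into. This is a simplified single-threaded stand-in (the `fetch_and_extract` function is hypothetical, standing in for the real extractor/client pair), not lychee's actual pipeline:

```rust
use std::collections::{HashSet, VecDeque};

// Hypothetical stand-in for the extractor + client: fetch a page and
// return the links found on it.
fn fetch_and_extract(url: &str) -> Vec<String> {
    match url {
        "https://example.com/" => vec!["https://example.com/a".into()],
        _ => vec![],
    }
}

// Sketch of the "back-channel": newly extracted links go back into the
// queue instead of flowing one-way from extractor to client to main.
fn crawl(root: &str) -> Vec<String> {
    let mut visited = HashSet::new();
    let mut queue = VecDeque::from([root.to_string()]);
    let mut checked = Vec::new();
    while let Some(url) = queue.pop_front() {
        if !visited.insert(url.clone()) {
            continue; // already checked this URL
        }
        for link in fetch_and_extract(&url) {
            queue.push_back(link); // the feedback edge to the extractor
        }
        checked.push(url);
    }
    checked
}

fn main() {
    let pages = crawl("https://example.com/");
    assert_eq!(pages.len(), 2);
    println!("checked {} pages", pages.len());
}
```

In the real async pipeline this queue would be a channel shared between the client and the extractor, with the visited set guarding against cycles.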

frederickjh commented 3 years ago

The lack of recursive spidering makes this project unusable for my purpose of checking all links, internal and external, on a website. I am trying to find a replacement for michaeltelford/broken_link_finder. It is written in Ruby, and without superuser access at the new place where this will run, it is impossible to install. I am looking for a "portable" replacement. I worked with michaeltelford to get his project into a much more usable state. Check out that project's issue queue for some of the reasoning that went into the development.

In any case, regarding this issue: the spidering should stop at links found on the current domain, but links to external sources should still be checked.

mre commented 3 years ago

#165 is getting very close to completion. It implements the functionality described. If you'd like to support it, please build the version from that branch and test it. Feedback on the pull request is appreciated.

frederickjh commented 3 years ago

@mre I am new to Rust, but it seemed pretty straightforward how to build from "Working on an Existing Cargo Package". However, I have run into an issue. At first I thought it was a credential issue, but it looks like a 404 issue.


```
Caused by: Unable to update https://github.com/amaurym/async-smtp?branch=am-fast-socks#eac57391
```

That GitHub URL returns a 404, so I am not sure how to proceed with the build.

Rust information:

```
$ rustc --version
rustc 1.50.0 (cb75ad5db 2021-02-10)
$ rustup --version
rustup 1.23.1 (3df2264a9 2020-11-30)
info: This is the version for the rustup toolchain manager, not the rustc compiler.
info: The currently active `rustc` version is `rustc 1.50.0 (cb75ad5db 2021-02-10)`
$ cargo --version
cargo 1.50.0 (f04e7fab7 2021-02-04)
```

The GitHub repository amaurym/async-smtp does not seem to exist anymore. Not sure how to proceed. Please advise. Thanks!

frederickjh commented 3 years ago

I dug through the Cargo.lock file for reacherhq/check-if-email-exists and found this line with the source for async-smtp:

```
source = "git+https://github.com/async-email/async-smtp?branch=master#0f1c4c6a565833f8c7fc314de84c4cbbc8da2b4a"
```

So it looks like the source for async-smtp has moved to async-email/async-smtp.
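For anyone hitting the same build failure, one possible stopgap (a sketch, not verified against this repository) is a Cargo `[patch]` entry in the workspace Cargo.toml that redirects the vanished git source to the maintained fork. Whether it works depends on the pinned branch and revision still resolving against the new repository:

```toml
# Sketch (untested): redirect the dead amaurym/async-smtp git source
# to the maintained async-email/async-smtp repository.
[patch."https://github.com/amaurym/async-smtp"]
async-smtp = { git = "https://github.com/async-email/async-smtp", branch = "master" }
```

The proper fix is still for the dependency chain (check-if-email-exists) to be updated upstream, as discussed below.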

frederickjh commented 3 years ago

Just found #189 which is the same issue I reported here as to why the build fails.

mre commented 3 years ago

Yeah. Related:

We are blocked by upstream at the moment. 😕

frederickjh commented 3 years ago

Ah, upstream reacherhq/check-if-email-exists updated to use upstream instead of his fork three days ago. I am guessing he also deleted the fork then, but this project is still using it.

See here: chore: Update wording around licenses #892

This repository still has references to his fork that no longer exists on both the master and simple-recursion branches in the Cargo.lock files. There is a comment in the Cargo.toml file that says:

```
# Switch back to version on crates.io after
# https://github.com/async-email/async-smtp/pull/36
# is merged and a new version of check-if-email-exists is released
```

So it looks like pull request #36 in the upstream is closed, but the new crate has not been published, as the newest one is dated January 10.

@mre let me know if there is any movement on this and I will then try to build from the simple-recursion branch and test.

frederickjh commented 3 years ago

Upstream is reporting:

This is fixed in 0.8.21

but I still cannot build from the simple-recursion branch, so I think something needs work there too before this will build.

frederickjh commented 3 years ago

So, I think that the version of async-email/async-smtp needs to be upgraded from 0.8.19 to 0.8.21 for the simple-recursion branch to build.

mre commented 3 years ago

Thanks for the info. I'll tackle that once #208 is merged. 😄

frederickjh commented 3 years ago

@mre I see that #208 got merged back in April. Let me know if you get this branch to the point where it will build and I can then test it.

frederickjh commented 3 years ago

@mre I am still willing to test this, but I will be finished with my current job in the second week of June and may not have a need for it after that for a while. I would like to get this set up to replace the current program that we are using to check for broken links, which I cannot easily move to a shared server because it is Ruby. Let me know if you get the version on the simple-recursion branch changed so that it builds, and I will test it.

mre commented 3 years ago

Thanks for your patience. Want to work on this as soon as I find the time. No guarantees this will be soon, though. 😅

frederickjh commented 3 years ago

@mre Patience I have, but time is running out. I finish at my current workplace on June 9. I had hoped to use this to replace a Ruby broken-link checker that is running on an in-house server I need to decommission. I need something I can run on shared hosting without installing a bunch of dependencies I don't have permission to install. So, if you could find a little time this week to get the branch into a shape that builds, I could build and test it.

untitaker commented 2 years ago

I found that muffet and linkcheck serve the recursive use case the best right now, and muffet in particular is very fast at this. What neither does is opportunistically check /sitemap.xml to traverse the site faster and reach efficient parallelization sooner. Lychee could one-up them on performance if that were done by default.
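The sitemap idea is that /sitemap.xml, when present, yields the full URL list up front, so workers can be saturated immediately instead of discovering pages one hop at a time. A minimal sketch of the seeding step (the HTTP fetch is elided; this just pulls the `<loc>` values out of a sitemap body with plain string handling, where a real implementation would use an XML parser):

```rust
// Sketch: extract the <loc> entries from a sitemap.xml body to seed the
// crawl queue before any recursive discovery happens.
fn sitemap_urls(xml: &str) -> Vec<String> {
    xml.split("<loc>")
        .skip(1) // everything before the first <loc> is not a URL
        .filter_map(|part| part.split("</loc>").next())
        .map(|url| url.trim().to_string())
        .collect()
}

fn main() {
    let xml = "<urlset>\
        <url><loc>https://example.com/</loc></url>\
        <url><loc>https://example.com/about</loc></url>\
    </urlset>";
    let urls = sitemap_urls(xml);
    assert_eq!(urls.len(), 2);
    assert_eq!(urls[0], "https://example.com/");
    println!("seeded {} urls", urls.len());
}
```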

mre commented 2 years ago

New PR which tackles this: #465 Will probably go through another round of refactoring before it's ready, but I'm on it.

cipriancraciun commented 2 years ago

I'm unsure how this is actually implemented, perhaps what I am about to say is already covered, so sorry for the duplication.

Recursion is also very important to me, but I would like to allow the user to specify a list of origins (scheme+host+port) to allow recursion for, or a list of regular expressions.

Say, for example, one has both a www and a blog site, but also a docs site. One would like to primarily check www and blog (thus specifying them as arguments), but also to recursively check everything that links to the docs site and other pages from it on any of the three.
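The proposal above amounts to a user-controllable allow-list of origins (scheme + host + port) that gates recursion. A sketch of what that check could look like (hypothetical helpers with naive string parsing; lychee would use a real URL parser):

```rust
use std::collections::HashSet;

// Hypothetical helper: extract the origin (scheme://host[:port]) of a URL.
fn origin_of(url: &str) -> Option<String> {
    let (scheme, rest) = url.split_once("://")?;
    let end = rest
        .find(|c: char| c == '/' || c == '?' || c == '#')
        .unwrap_or(rest.len());
    Some(format!("{}://{}", scheme, &rest[..end]))
}

// Recursion is allowed only for links whose origin is in the allow-list,
// which is seeded from the start URLs by default but user-overridable.
fn recursion_allowed(allowed: &HashSet<String>, link: &str) -> bool {
    origin_of(link).map_or(false, |origin| allowed.contains(&origin))
}

fn main() {
    let allowed: HashSet<String> = ["https://www.example.com", "https://docs.example.com"]
        .iter()
        .map(|s| s.to_string())
        .collect();
    assert!(recursion_allowed(&allowed, "https://docs.example.com/guide"));
    assert!(!recursion_allowed(&allowed, "https://assets.example.com/x"));
    println!("ok");
}
```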

mre commented 2 years ago

Good point. It's not implemented and wasn't mentioned before.

The way I envisioned it was that all links which belong to the same input URI would be followed recursively, while the rest would not. So you could do `lychee --recursive www blog docs`, but it sounds like you only want to check the links pointing to docs, not all of docs. I wonder what the issue is with checking all links in docs, though. Is the site too big? If you want to exclude some URI patterns for docs, you could do `lychee --exclude docs/foo --recursive www blog docs`.

cipriancraciun commented 2 years ago

> So you could do `lychee --recursive www blog docs`, but it sounds like you only want to check the links pointing to docs, but not all of docs. I wonder what's the issue to check all links in docs, though. Is the site too big?

Imagine that instead of docs there is actually an assets domain, that might not even have an index to start from; however this assets domain could contain some HTML files that are perhaps included in <iframe> inside www or blog, and some of these HTML files are somewhat self-contained, thus starting from them one wouldn't reach the entire assets collection. Now if one of these HTML files actually contains broken links, that could affect the initial www and blog sites.

Or, another example: that extra domain could be something hosting example HTML files that might be linked from the main site, and one would like to make sure that every example works as expected.


Or, if the above reasons don't seem convincing enough (granted, they are quite extreme): I assume that inside the code there already exists a set of "allowed" domains or origins for recursion, filled in at startup based on the starting links. Allowing the user to manipulate that set wouldn't be much of a burden, but would also increase flexibility.

mre commented 2 years ago

Hm... the main question is always how to wire that up in the CLI without provoking additional mental overhead. Following our --exclude/--include patterns, we could add an --include-recursive parameter:

```
lychee --recursive --include-recursive docs -- www blog docs
```

frittentheke commented 1 year ago

@mre could you maybe give an update on support for traversing / recursive link checking to tackle a whole website? From what I could find https://github.com/lycheeverse/lychee/pull/465 was the most recent attempt to get the design for this feature down?

mre commented 1 year ago

Yes, sure.

There were a few attempts, but there were always issues with the design. It's a feature which touches on almost all parts of the code and we have to get this right.

I'd love to dedicate more time to it, but it's hard to add that feature next to other responsibilities. I'm currently looking into companies who might be willing to sponsor the feature, as I guess it will be quite some work, but it would have a very positive impact on the usefulness for all users. I know there are companies out there which would really like to have it, but so far there hasn't been a lot of traction with regard to sponsoring. My hope is still to find the free time to work on it at some point, but I wouldn't hold my breath right now unless there's a way to fund it. In the meantime, I encourage others to take a stab at it as well.

styfle commented 1 year ago

I'll close this since I already built a solution.

https://github.com/styfle/links-awakening

https://www.npmjs.com/package/links-awakening

mre commented 1 year ago

Nice package. I would still like to keep the issue open, as I'd like to add recursion support to lychee at some point as well.

Alseenrodelap commented 11 months ago

Still no recursive option in the main branch since 2020? I'm trying to run this great program via Docker but really miss the recursive option...

lfrancke commented 10 months ago

I'm happy to offer a bounty of sorts: €100 (payable via PayPal or SEPA) for whoever implements this. If multiple people work on it, I'm happy to split the money.

I know this won't cover the whole development of this feature.