rajatomar788 / pywebcopy

Locally saves webpages to your hard disk with images, css, js & links as is.
https://rajatomar788.github.io/pywebcopy/

How to limit the crawling depth? #62

Open · cfytrok opened this issue 3 years ago

cfytrok commented 3 years ago

It looks like the scan_level parameter used to control this, but it has been deprecated. What can I do now to limit the depth of the crawl?

rajatomar788 commented 3 years ago

The crawl-depth feature was removed because it was incompatible with the new algorithm. You can try subclassing the crawler class and see what happens.
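
For anyone landing here, below is a minimal sketch of what that subclassing approach could look like. It is not the actual pywebcopy API: the import path, the constructor signature, and especially the `crawl(url, depth)` hook are assumptions. It presumes the parent class has a recursive method that calls `self.crawl(child_url, depth + 1)` for every discovered link; check `pywebcopy/crawler.py` in your installed version and override whichever method really follows links.

```python
# Hypothetical sketch: limit crawl depth by subclassing the crawler.
# NOTE: the import path, DepthLimitedCrawler, and the `crawl()` hook
# are assumptions for illustration only -- inspect the pywebcopy
# source for your installed version and adapt the override to the
# method that actually follows discovered links.
from pywebcopy.crawler import Crawler  # assumed import path


class DepthLimitedCrawler(Crawler):
    """Stops following links once `max_depth` levels have been visited."""

    def __init__(self, *args, max_depth=2, **kwargs):
        super().__init__(*args, **kwargs)
        self.max_depth = max_depth

    def crawl(self, url, depth=0):
        # Prune: skip pages that sit deeper than the configured limit,
        # so neither the page nor its links are fetched.
        if depth > self.max_depth:
            return
        # Delegate the current page to the parent implementation, which
        # (under the assumption above) recurses with depth + 1.
        return super().crawl(url, depth)
```

Starting the crawl would then work the same as with the stock class, just with `max_depth` passed to the constructor; again, match the real constructor arguments and hook names of the pywebcopy version you have installed.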

BradKML commented 1 year ago

What does that mean, @rajatomar788? Fetching a whole website like wget does is the main feature.