That's an awful lot of work for just one page.
On Sun, Jul 19, 2020 at 9:25 AM Peter Krantz notifications@github.com wrote:
- Warcworker is for single-page archiving only right now. There is no crawler or indexer.
- If you want to monitor logs, run `docker-compose logs --tail=100 -t -f`
:-) Well, I use it for mass archiving of URLs collected by a custom crawler from JS-heavy websites. It does its job. For regular archiving, see Heritrix. Sorry if it doesn't match your use case. I have updated the README to clarify this for other potential users.
You could check out the archiving component of warcworker, Squidwarc (https://github.com/N0taN3rd/Squidwarc); it has settings that may help you archive more links of a website (see the Page + Same Domain Links setting).
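For anyone finding this later, a Squidwarc crawl config along those lines might look roughly like the sketch below. The field names and the run-crawler.sh entry point are taken from my reading of the Squidwarc README and may not match the current schema exactly, so treat it as a starting point only.

```sh
# Hypothetical sketch: the config fields ("use", "mode", "depth", "seeds") and the
# run-crawler.sh entry point are assumptions from the Squidwarc README; verify
# against the repo before relying on them.
cat > crawl-config.json <<'EOF'
{
  "use": "chrome",
  "mode": "page-same-domain",
  "depth": 1,
  "seeds": ["https://example.com/"]
}
EOF
./run-crawler.sh -c crawl-config.json
```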
Thanks. Was just looking at that.
If I remember correctly, it only captures the current page and all the links from that page, so it will not capture an entire website. If the website you are archiving does not depend on running scripts in the archiving tool, you could check out HTTrack as well.
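For reference, a basic HTTrack mirror run looks roughly like this (the URL, output directory, and domain filter are placeholders). Note that HTTrack writes a plain file mirror rather than WARC files.

```sh
# Mirror a site with HTTrack; the URL and output directory are placeholders.
# The "+*.example.com/*" filter keeps the crawl on the same domain; -v prints progress.
httrack "https://example.com/" -O "./example-mirror" "+*.example.com/*" -v
```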
Closing