peterk / warcworker

A dockerized, queued, high-fidelity web archiver based on Squidwarc
GNU General Public License v3.0

Only pulled one page #12

Closed (tripleo1 closed this issue 4 months ago)

tripleo1 commented 4 years ago

peterk commented 4 years ago
  1. Warcworker is for single-page archiving only right now - typically for single posts on SPA websites (social media). There is no crawler or indexer. There are better tools if you want to archive a regular website, including crawling.
  2. If you want to monitor logs, run `docker-compose logs --tail=100 -t -f` (see the annotated command below).
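
For reference, the same command with each flag annotated (these are standard docker-compose logs options):

```sh
# Stream warcworker's container logs:
#   --tail=100  start from the last 100 lines of each container's log
#   -t          prefix every line with a timestamp
#   -f          follow, i.e. keep streaming new log output
docker-compose logs --tail=100 -t -f
```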
tripleo1 commented 4 years ago

That's an awful lot of work for just one page.


peterk commented 4 years ago

:-) Well, I use it for mass archiving of URLs collected by a custom crawler from JS-heavy websites. It does its job. For regular website archiving, see Heritrix. Sorry if it doesn't match your use case. I have updated the README to clarify this for other potential users.

peterk commented 4 years ago

You could check out the archiving component of warcworker, Squidwarc; it has settings that may help you archive more of a site's links (see the Page + Same Domain Links setting).
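
For illustration, a single-seed Squidwarc run in that mode could look roughly like the sketch below; the config field names follow Squidwarc's example configs, but verify them against the Squidwarc README before relying on this:

```sh
# Sketch of a Squidwarc crawl, assuming its JSON config format;
# "page-same-domain" is the mode corresponding to "Page + Same Domain Links".
cat > conf.json <<'EOF'
{
  "use": "chrome",
  "headless": true,
  "mode": "page-same-domain",
  "depth": 1,
  "seeds": ["https://example.com/"],
  "warc": { "naming": "url" }
}
EOF
./run-crawler.sh -c conf.json   # runner script from the Squidwarc repo
```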

tripleo1 commented 4 years ago

Thanks. Was just looking at that.


peterk commented 4 years ago

If I remember correctly, it only captures the current page and all the links from that page, so it will not capture an entire website. If the website you are archiving is not dependent on running scripts in the archiving tool, you could check out HTTrack as well.
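
As a rough sketch (the URL, output directory, and filter pattern are placeholders), a basic HTTrack mirror of a script-free site looks like:

```sh
# Mirror a site with HTTrack: -O sets the output directory, and the
# "+..." pattern is a scan filter keeping the crawl on the same domain.
httrack "https://example.com/" -O ./example-mirror "+*.example.com/*"
```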

peterk commented 4 months ago

Closing