openzim / zimit

Make a ZIM file from any Web site and surf offline!
GNU General Public License v3.0

How to perform incremental scraping of websites? #344

Open nish2482 opened 1 month ago

nish2482 commented 1 month ago

I find that scraping a MediaWiki site with zimit takes 5-6 hours. Is there some recommended setting for scraping a MediaWiki site with zimit? To reduce scraping time, how can we do delta scraping with zimit, so that we only scrape the changed web pages and add them to the original ZIM?

benoit74 commented 1 month ago

The problem with MediaWikis is that all revision pages are grabbed one by one. Since this is probably not something you're interested in, you can likely set an exclude parameter to skip revision-history URLs (never tried, but it should work). It is also important to note that, in order to ZIM a MediaWiki, it is preferable to use the mwoffliner scraper, which is specifically tailored to ZIM a MediaWiki.
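To illustrate the exclude idea, here is a small sketch of a regex that could be passed as an exclusion pattern. The pattern itself is a hypothetical example (not tested against zimit), but it targets the standard MediaWiki revision URLs: `action=history` pages and `oldid=` permalinks to old revisions.

```python
import re

# Hypothetical exclude pattern: matches MediaWiki revision-history pages
# ("action=history") and old-revision permalinks ("oldid=<number>").
EXCLUDE = re.compile(r"[?&](?:action=history|oldid=\d+)")

urls = [
    "https://wiki.example.org/index.php?title=Main_Page&action=history",
    "https://wiki.example.org/index.php?title=Main_Page&oldid=12345",
    "https://wiki.example.org/wiki/Main_Page",
]

# Only the regular article URL survives the filter.
kept = [u for u in urls if not EXCLUDE.search(u)]
```

A pattern like this would exclude every revision view while leaving the current article pages in the crawl.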

That being said, the problem of incrementally scraping a site is still relevant for many other cases, and for now there is no real solution in place. It is probably not going to be straightforward to implement.

kelson42 commented 1 month ago

Scraping of MediaWiki sites is recommended to be done via MWoffliner.
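For reference, a minimal MWoffliner invocation looks roughly like the following. This is a sketch: the wiki URL and email are placeholders, and MWoffliner additionally requires Node.js and a running Redis instance (check its README for the current prerequisites and flags).

```shell
# Install MWoffliner globally (requires Node.js; a Redis server must be running).
npm install -g mwoffliner

# Scrape a MediaWiki into a ZIM file.
# --mwUrl: base URL of the wiki (placeholder).
# --adminEmail: contact email sent with requests (placeholder).
mwoffliner --mwUrl="https://wiki.example.org/" --adminEmail="you@example.com"
```

Because MWoffliner talks to the MediaWiki API instead of crawling rendered pages one by one, it avoids the revision-by-revision downloads that make zimit slow on wikis.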