Closed cormip closed 4 years ago
After you have finished awaiting scraper.scrape(), call scraper.destroy(), and create your new instance.
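A minimal sketch of that lifecycle. The real library's constructor options aren't shown in this thread, so the `Scraper` class below is a hypothetical mock that only models the create → await `scrape()` → `destroy()` → recreate sequence being described:

```javascript
// Hypothetical stand-in for the real scraper class: it only models the
// scrape()/destroy() lifecycle, not the actual library's configuration.
class Scraper {
  constructor(url) {
    this.url = url;
    this.destroyed = false;
  }
  async scrape() {
    if (this.destroyed) throw new Error('Scraper instance was destroyed');
    return `scraped ${this.url}`;
  }
  destroy() {
    // Release whatever resources this instance holds.
    this.destroyed = true;
  }
}

async function run() {
  // First job: await the scrape, then release the instance.
  const first = new Scraper('http://www.example.com/page?id=73');
  const resultA = await first.scrape();
  first.destroy();

  // Only after destroy() is it safe to create the next instance.
  const second = new Scraper('http://www.example.com/page?id=129');
  const resultB = await second.scrape();
  second.destroy();

  return [resultA, resultB];
}
```

The point is simply ordering: never construct the second instance until the first has been awaited and destroyed.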
Great! That worked. Thanks
I've uploaded a new version (3.1.0). There's no need to call scraper.destroy() anymore.
I have a non-sequential list of URLs from the same site that I wish to scrape, but I don't see an obvious/optimal way to do it. For example:
http://www.example.com/page?id=73
http://www.example.com/page?id=129
http://www.example.com/page?id=247
http://www.example.com/page?id=341
etc...
Given that the scraper can have only one instance, what would you suggest as the best way to accomplish this?
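One possible approach, applying the earlier advice: process the URLs sequentially, creating, awaiting, and destroying one scraper instance per URL so that only a single instance exists at any time. The `Scraper` class here is again a hypothetical mock standing in for the real library's class:

```javascript
// Hypothetical mock: substitute the real library's scraper class.
class Scraper {
  constructor(url) { this.url = url; }
  async scrape() { return `scraped ${this.url}`; }
  destroy() { /* release resources held by this instance */ }
}

const urls = [
  'http://www.example.com/page?id=73',
  'http://www.example.com/page?id=129',
  'http://www.example.com/page?id=247',
  'http://www.example.com/page?id=341',
];

async function scrapeAll(urls) {
  const results = [];
  for (const url of urls) {
    const scraper = new Scraper(url); // one live instance at a time
    results.push(await scraper.scrape());
    scraper.destroy();                // per the thread, unnecessary from 3.1.0 onward
  }
  return results;
}
```

Since the IDs are non-sequential, the list is simply enumerated explicitly rather than generated from a numeric range.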