bluemont closed this issue 12 years ago
@bluemont Correct, right now wombat would definitely be better classified as a scraper rather than a crawler, since it is not able to navigate through pages. I have plans to add some sort of crawling functionality in the future, but for the time being I'm going to update the documentation to make this clearer. Thanks for the feedback!
Just released version 2.0.0, which addresses this issue mainly via these 2 changes:

- `Crawler#scrape`, which now aliases to `Crawler#crawl`
- a `:follow` option in a property

Gonna add some documentation on how it works soon. For now, it will only crawl 1 level deep by default, which means it won't keep clicking links. The next release will add the ability to specify a custom depth. I think these changes address the issue. Closing it.
@felipecsl was depth control ever added?
@StephenOTT not yet. It sounds like wombat could use that feature though. I'll see if I can get it implemented for the next version.
I am looking to recreate the recursive functionality that FMiner provides: http://www.fminer.com
The ability to use CSS or XPath to select links on a page and follow/iterate through each of the resulting pages, capturing information on each one along the way. Common examples are pages with "Next" buttons or hierarchical data structures that are spread over multiple pages.
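For what it's worth, the recursion being described boils down to a depth-limited crawl loop. Here is a generic sketch of that loop, using nokogiri and open-uri rather than wombat; the URL, selectors, and depth are illustrative:

```ruby
require 'open-uri'
require 'nokogiri'

# Depth-limited recursive crawl: capture data on each page, then follow
# the links matched by link_selector until max_depth is exhausted.
def crawl(url, link_selector, max_depth, visited = {})
  return [] if max_depth < 0 || visited[url]
  visited[url] = true

  doc = Nokogiri::HTML(URI.open(url))
  results = doc.css("h2.title").map(&:text)  # capture info on this page

  doc.css(link_selector).each do |link|
    next unless link["href"]
    next_url = URI.join(url, link["href"]).to_s
    results.concat(crawl(next_url, link_selector, max_depth - 1, visited))
  end
  results
end

# e.g. keep clicking "Next" buttons up to 3 pages deep
crawl("https://example.com/list", "a.next", 3)
```

The `visited` hash guards against cycles, which starts to matter as soon as the crawl goes more than one level deep.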
From what I can tell, wombat isn't a crawler. Is this correct?
Web scraping, to use a minimal definition, is the process of parsing a web document and extracting information from it. You can do web scraping without doing web crawling. (Wikipedia on web scraping)
Web crawling, to use a minimal definition, is the process of iteratively finding and fetching web links starting from a list of seed URLs. Strictly speaking, to do web crawling, you have to do some degree of web scraping (to extract the URLs). (Wikipedia on web crawlers)
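To make the contrast concrete, scraping on its own involves no link-following at all; a minimal sketch, again with nokogiri and an illustrative URL and selector:

```ruby
require 'open-uri'
require 'nokogiri'

# Scraping without crawling: parse one document, extract data, stop.
doc = Nokogiri::HTML(URI.open("https://example.com/articles"))
titles = doc.css("h2.title").map(&:text)
```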