yasserg / crawler4j

Open Source Web Crawler for Java
Apache License 2.0
4.53k stars 1.93k forks

How to count the amount of URLs and collect URLs? #111

Open Mr-Vincent opened 8 years ago

Mr-Vincent commented 8 years ago

I can override the method "onBeforeExit()" to collect the URLs the program crawled, but the WebCrawler instance is destroyed when its thread dies, and the URL list field I defined in my WebCrawler subclass is lost with it, so I cannot collect the crawled URLs. Could you give me a demo that resolves this? BTW, I am a Java learner, thanks for providing such a great open-source project!

shinbuiev commented 8 years ago

```java
public class MyCrawler extends WebCrawler {

    public static AtomicInteger linkCounter = new AtomicInteger(); // or use links.size()
    static List<WebURL> links = new CopyOnWriteArrayList<>();

    ...

    @Override
    public void handlePageStatusCode(WebURL webUrl, int statusCode, String statusDescription) {
        linkCounter.incrementAndGet();
        links.add(webUrl);
    }
}
```

So you can count the amount of URLs and collect them into a collection.
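The key point in the snippet above is that the counter and the list are `static`: they belong to the class, not to any single WebCrawler instance, so they survive after the crawler threads finish. A minimal sketch of that thread-safe sharing pattern, without crawler4j (the class name `CounterDemo` and the URLs are made up for illustration):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicInteger;

// Several worker threads add to static, thread-safe fields; the values
// remain readable after the workers die because the fields belong to
// the class, not to any worker instance (same idea as MyCrawler above).
public class CounterDemo {
    static final AtomicInteger linkCounter = new AtomicInteger();
    static final List<String> links = new CopyOnWriteArrayList<>();

    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[4];
        for (int t = 0; t < workers.length; t++) {
            final int id = t;
            workers[t] = new Thread(() -> {
                for (int i = 0; i < 25; i++) {
                    linkCounter.incrementAndGet();
                    links.add("http://example.com/" + id + "/" + i);
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) {
            w.join();
        }
        // All 100 URLs are still here even though every worker has exited.
        System.out.println(linkCounter.get() + " urls, " + links.size() + " collected");
    }
}
```

`AtomicInteger` and `CopyOnWriteArrayList` avoid races when many crawler threads update the fields concurrently; a plain `int` or `ArrayList` could lose updates.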

Chaiavi commented 8 years ago

Thanks.

This one can be closed
