[Open] Mr-Vincent opened this issue 8 years ago
```java
public class MyCrawler extends WebCrawler {

    public static AtomicInteger linkCounter = new AtomicInteger(); // or use links.size()
    static List<WebURL> links = new CopyOnWriteArrayList<>();
    ...

    @Override
    public void handlePageStatusCode(final WebURL webUrl, int statusCode, String statusDescription) {
        linkCounter.incrementAndGet();
        links.add(webUrl);
    }
}
```

This way you can count the number of URLs and collect them into a collection.
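Because `linkCounter` and `links` are `static`, they belong to the class rather than to any single crawler instance, so they are shared by every crawler thread and survive after those threads exit. A minimal self-contained sketch of that pattern (plain Java, no crawler4j; the `recordUrl` helper is a hypothetical stand-in for `handlePageStatusCode`):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicInteger;

public class LinkCollector {
    // Shared across all worker threads; outlives any single worker.
    static final AtomicInteger linkCounter = new AtomicInteger();
    static final List<String> links = new CopyOnWriteArrayList<>();

    // Stand-in for handlePageStatusCode(): record one crawled URL.
    static void recordUrl(String url) {
        linkCounter.incrementAndGet();
        links.add(url);
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            final int id = i;
            workers[i] = new Thread(() -> {
                for (int j = 0; j < 10; j++) {
                    recordUrl("http://example.com/" + id + "/" + j);
                }
            });
            workers[i].start();
        }
        for (Thread w : workers) {
            w.join(); // all worker threads are dead past this point...
        }
        // ...but the static fields still hold everything they collected.
        System.out.println(linkCounter.get()); // prints 40
        System.out.println(links.size());      // prints 40
    }
}
```

`AtomicInteger` and `CopyOnWriteArrayList` are used so that concurrent crawler threads can update the counter and the list without explicit locking.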
Thanks. This one can be closed.
On Fri, Jan 15, 2016 at 1:33 PM, shinbuiev notifications@github.com wrote:
I can override the onBeforeExit() method to collect the URLs the program crawled. But the WebCrawler instance is destroyed when its thread dies, so a URL-list field defined in my WebCrawler subclass is reset and I cannot collect the URLs the program crawled. Could you give me a demo that resolves my puzzle? By the way, I am a Java learner. Thanks for providing such a great open-source project!
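If I recall the crawler4j API correctly, each WebCrawler can keep its URLs in an instance field and return them from an overridden getMyLocalData(); after controller.start(...) returns, controller.getCrawlersLocalData() hands back one entry per crawler for the main thread to merge (please verify against the version you use). The underlying idea can be sketched without crawler4j: each worker keeps a private list while it runs and hands that list to a shared collector just before it exits, so nothing is lost when the worker object is discarded. All names below (Worker, myLinks, collected) are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;

public class LocalDataDemo {
    // One result bundle per worker, handed over just before the worker exits
    // (the role the controller's local-data aggregation plays after a crawl).
    static final ConcurrentLinkedQueue<List<String>> collected = new ConcurrentLinkedQueue<>();

    static class Worker implements Runnable {
        private final int id;
        // Per-worker state, like a field in a WebCrawler subclass.
        private final List<String> myLinks = new ArrayList<>();

        Worker(int id) { this.id = id; }

        public void run() {
            for (int j = 0; j < 5; j++) {
                myLinks.add("http://example.com/" + id + "/" + j); // "crawl" a page
            }
            collected.add(myLinks); // analogous to publishing data in onBeforeExit()
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[3];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(new Worker(i));
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        // Merge every worker's local data after all crawler threads are gone.
        List<String> all = new ArrayList<>();
        for (List<String> part : collected) {
            all.addAll(part);
        }
        System.out.println(all.size()); // prints 15
    }
}
```

The key point is that the data is moved to a structure owned by the main thread before the worker dies, so it does not matter that the worker instance is destroyed afterwards.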