crawler4j is designed to crawl many pages using a multi-threaded approach, so it
cannot shut down immediately; it waits for about one minute to make sure that
all of its threads have finished their work. If you only want to crawl a
limited number of pages and you know the exact URLs in advance, I suggest
using the downloader example:
http://code.google.com/p/crawler4j/source/browse/src/test/java/edu/uci/ics/crawler4j/examples/localdata/Downloader.java
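For illustration only, here is a minimal sketch of that idea in plain java.net
(this is not the code of the linked Downloader example, and it does not use
crawler4j at all): when the URLs are known in advance you can fetch them
directly and there is no multi-threaded crawler to wait for at shutdown. The
URLs, class name, and timeout values below are placeholders; use the linked
Downloader class if you also need crawler4j's parsing.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class SimpleFetcher {
    public static void main(String[] args) throws Exception {
        // Known URLs (placeholders) -- no frontier, no crawler threads.
        String[] urls = {
            "http://example.com/page1.html",
            "http://example.com/page2.html"
        };
        for (String u : urls) {
            HttpURLConnection conn = (HttpURLConnection) new URL(u).openConnection();
            conn.setConnectTimeout(10000); // fail fast instead of hanging
            conn.setReadTimeout(10000);
            StringBuilder body = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line).append('\n');
                }
            }
            System.out.println(u + " -> " + body.length() + " chars");
        }
    }
}
```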
-Yasser
Original comment by ganjisaffar@gmail.com on 3 Jan 2012 at 7:36
Thank you Yasser!
Original comment by afterbit...@gmail.com on 5 Jan 2012 at 9:30
Original issue reported on code.google.com by afterbit...@gmail.com on 3 Jan 2012 at 11:14