joopies / crawler4j

Automatically exported from code.google.com/p/crawler4j

Hanging on file process #336

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
What steps will reproduce the problem?
1. Set the max download size to something reasonable; I'm using 1 MB (a minimal setup sketch follows after these steps).
2. Start crawling from http://www.ics.uci.edu/~yil8/public_data/PyLOH/?C=S%3BO%3DA
3. Watch the console.
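For reference, a minimal crawler4j 4.0 setup matching these steps might look like the sketch below. The storage folder and the MyCrawler class are placeholders; only the 1 MB limit, the seed URL, and the single crawler thread come from this report.

import edu.uci.ics.crawler4j.crawler.CrawlConfig;
import edu.uci.ics.crawler4j.crawler.CrawlController;
import edu.uci.ics.crawler4j.fetcher.PageFetcher;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;

public class Controller {
    public static void main(String[] args) throws Exception {
        CrawlConfig config = new CrawlConfig();
        config.setCrawlStorageFolder("/tmp/crawler4j"); // placeholder storage folder
        config.setMaxDownloadSize(1048576);             // 1 MB, as in step 1

        PageFetcher pageFetcher = new PageFetcher(config);
        RobotstxtServer robotstxtServer = new RobotstxtServer(new RobotstxtConfig(), pageFetcher);
        CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);

        controller.addSeed("http://www.ics.uci.edu/~yil8/public_data/PyLOH/?C=S%3BO%3DA");
        controller.start(MyCrawler.class, 1);           // MyCrawler: any WebCrawler subclass
    }
}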

What is the expected output? What do you see instead?
It should keep crawling.

Instead it hangs after trying to process (in processPage, I think) normal.bam and/or tumor.bam.

What version of the product are you using?
4.0

Please provide any additional information below.

Original issue reported on code.google.com by Dave.Hir...@gmail.com on 29 Jan 2015 at 8:01

GoogleCodeExporter commented 9 years ago
OK, I tried your scenario, but it works for me.

You are right that it skips both of those pages, as they are bigger than 4 MB, but it does continue...

Please try crawling with the logger configured to show DEBUG logs; do you see anything additional?
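If it helps, one way to get those DEBUG lines without touching config files is to raise the crawler4j log level programmatically at the start of main(). This is only a sketch and assumes logback-classic is the SLF4J binding on your classpath; with another binding, set the level in that backend's configuration instead.

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import org.slf4j.LoggerFactory;

// Raise all crawler4j loggers to DEBUG so the "Skipping ..." messages show up.
Logger crawlerLogger = (Logger) LoggerFactory.getLogger("edu.uci.ics.crawler4j");
crawlerLogger.setLevel(Level.DEBUG);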

Here are my logs:
19:24:47 INFO  [main] - [CrawlController]- Crawler 1 started
19:24:47 INFO  [main] - [CrawlController]- Crawler 2 started
19:24:47 INFO  [main] - [CrawlController]- Crawler 3 started
19:24:49 INFO  [Crawler 1] - [WebCrawler]- URL: http://www.ics.uci.edu/~yil8/public_data/PyLOH/?C=S%3BO%3DA
19:24:49 INFO  [Crawler 2] - [WebCrawler]- URL: http://www.ics.uci.edu/~yil8/public_data/
19:24:50 DEBUG [Crawler 2] - [WebCrawler]- Skipping: http://www.ics.uci.edu/icons/unknown.gif as it contains binary content which you configured not to crawl
19:24:50 WARN  [Crawler 2] - [WebCrawler]- Skipping a URL: http://www.ics.uci.edu/~yil8/public_data/PyLOH/tumor.bam.bai which was bigger ( 4523128 ) than max allowed size
19:24:50 INFO  [Crawler 3] - [WebCrawler]- URL: http://www.ics.uci.edu/~yil8/public_data/PyLOH/
19:24:51 WARN  [Crawler 2] - [WebCrawler]- Skipping a URL: http://www.ics.uci.edu/~yil8/public_data/PyLOH/normal.bam.bai which was bigger ( 4534848 ) than max allowed size
19:24:51 INFO  [Crawler 3] - [WebCrawler]- URL: http://www.ics.uci.edu/~yil8/public_data/?C=D%3BO%3DA
19:24:51 INFO  [Crawler 3] - [WebCrawler]- URL: http://www.ics.uci.edu/~yil8/public_data/?C=N%3BO%3DD
19:24:51 INFO  [Crawler 2] - [WebCrawler]- URL: http://www.ics.uci.edu/~yil8/public_data/PyLOH/?C=N%3BO%3DD
19:24:52 DEBUG [Crawler 2] - [WebCrawler]- Skipping: http://www.ics.uci.edu/icons/back.gif as it contains binary content which you configured not to crawl
19:24:52 INFO  [Crawler 3] - [WebCrawler]- URL: http://www.ics.uci.edu/~yil8/

And it continues on and on...

Original comment by avrah...@gmail.com on 29 Jan 2015 at 5:27

GoogleCodeExporter commented 9 years ago
Ah, I was running one crawler instance, not three.
Also, I was using:
@Override
public WebURL handleUrlBeforeProcess(WebURL curURL) {
    System.out.println("handling " + curURL.getURL());
    return curURL;
}

2015-01-28 23:56:14,063 INFO  [main] - [edu.uci.ics.crawler4j.crawler.CrawlController] - Crawler 1 started
2015-01-28 23:56:14,516 INFO  [Crawler 1] - [edu.uci.ics.crawler4j.crawler.WebCrawler] - 1 URL: http://www.ics.uci.edu/~yil8/public_data/PyLOH/?C=S%3BO%3DA
2015-01-28 23:56:14,626 INFO  [Crawler 1] - [edu.uci.ics.crawler4j.crawler.WebCrawler] - 2 URL: http://www.ics.uci.edu/~yil8/public_data/
2015-01-28 23:56:14,896 WARN  [Crawler 1] - [edu.uci.ics.crawler4j.fetcher.PageFetcher] - Failed: Page Size (4523128) exceeded max-download-size (1048576), at URL: http://www.ics.uci.edu/~yil8/public_data/PyLOH/tumor.bam.bai
2015-01-28 23:56:14,896 WARN  [Crawler 1] - [edu.uci.ics.crawler4j.crawler.WebCrawler] - Skipping a page which was bigger than max allowed size: http://www.ics.uci.edu/~yil8/public_data/PyLOH/tumor.bam.bai
2015-01-28 23:56:15,302 WARN  [Crawler 1] - [edu.uci.ics.crawler4j.fetcher.PageFetcher] - Failed: Page Size (4534848) exceeded max-download-size (1048576), at URL: http://www.ics.uci.edu/~yil8/public_data/PyLOH/normal.bam.bai
2015-01-28 23:56:15,302 WARN  [Crawler 1] - [edu.uci.ics.crawler4j.crawler.WebCrawler] - Skipping a page which was bigger than max allowed size: http://www.ics.uci.edu/~yil8/public_data/PyLOH/normal.bam.bai

(Obviously bad technique; I didn't use the logger for the pre-process hook.) From what I recall, the last line was: "handling http://www.ics.uci.edu/~yil8/public_data/PyLOH/normal.bam" or tumor.bam. Those are huge files, and it hung for at least 45 minutes until I stopped it.

(I am circumventing .bam files for now while I crawl, but I'll see if I can get a better log once my current crawl is done; a sketch of that kind of filter is below.)
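For anyone hitting the same thing, a minimal sketch of that kind of workaround in a crawler4j 4.0 WebCrawler subclass is below; the class name and the regex are illustrative, not the exact code used here.

import java.util.regex.Pattern;

import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.url.WebURL;

public class MyCrawler extends WebCrawler {

    // Illustrative filter: never visit BAM files or their .bai index files.
    private static final Pattern BAM_FILTER = Pattern.compile(".*\\.bam(\\.bai)?$");

    @Override
    public boolean shouldVisit(Page referringPage, WebURL url) {
        String href = url.getURL().toLowerCase();
        return !BAM_FILTER.matcher(href).matches();
    }
}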

Original comment by Dave.Hir...@gmail.com on 29 Jan 2015 at 7:04

GoogleCodeExporter commented 9 years ago
I found the problem: it got caught in my trap-avoidance algorithm. The page was pointing back to itself with some links that use different URLs.
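The trap-avoidance code itself isn't posted here, but the usual fix for this kind of loop is to canonicalize URLs before deciding whether a page has already been seen. A sketch of that idea, assuming it is acceptable to drop the query string entirely (which is all that the Apache directory-index sort links like ?C=N;O=D differ by), is:

import java.net.URI;
import java.net.URISyntaxException;

public final class UrlCanonicalizer {

    // Illustrative helper (not part of crawler4j): maps all sort-order
    // variants of the same directory listing to one canonical URL by
    // dropping the query string and fragment.
    public static String canonicalize(String url) {
        try {
            URI u = new URI(url);
            return new URI(u.getScheme(), u.getAuthority(), u.getPath(), null, null).toString();
        } catch (URISyntaxException e) {
            return url; // leave unparseable URLs untouched
        }
    }
}

Deduplicating on the canonical form (for example inside shouldVisit) keeps ?C=D;O=A, ?C=N;O=D and the plain directory URL from being treated as different pages.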

Original comment by Dave.Hir...@gmail.com on 29 Jan 2015 at 8:30

GoogleCodeExporter commented 9 years ago
Ok.

Thank you for the report though.

Original comment by avrah...@gmail.com on 2 Feb 2015 at 10:56