sawantuday / crawler4j

Automatically exported from code.google.com/p/crawler4j

While crawling a web site, it suddenly stops with an exception #182

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
What steps will reproduce the problem?
1. Try to fetch a site
2. After a few minutes, it throws an exception and stops working

What is the expected output? What do you see instead?
The framework should keep crawling, but instead it throws
"java.lang.IllegalStateException: Can't call Database.sync: Database was closed."

What version of the product are you using?
3.3

Please provide any additional information below.
My configuration uses 10 fetcher threads. The full exception is below:

java.lang.IllegalStateException: Can't call Database.sync: Database was closed.
    at com.sleepycat.je.Database.checkOpen(Database.java:1751)
    at com.sleepycat.je.Database.sync(Database.java:487)
    at edu.uci.ics.crawler4j.frontier.WorkQueues.sync(WorkQueues.java:171)
    at edu.uci.ics.crawler4j.frontier.Frontier.sync(Frontier.java:182)
    at edu.uci.ics.crawler4j.frontier.Frontier.close(Frontier.java:192)
    at edu.uci.ics.crawler4j.crawler.CrawlController$1.run(CrawlController.java:232)
    at java.lang.Thread.run(Thread.java:680)

Original issue reported on code.google.com by kocamane...@gmail.com on 9 Dec 2012 at 12:54
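For context, the trace shows `CrawlController`'s shutdown thread calling `Frontier.close()`, which in turn calls `Database.sync()` after the underlying Berkeley DB database has already been closed. A minimal, self-contained Java sketch of that failure mode (the `FrontierStore` class is hypothetical, not crawler4j code; the real `com.sleepycat.je.Database` performs the same open-state check before syncing):

```java
// Sketch of the failure mode: sync() after close() throws
// IllegalStateException, mirroring the Berkeley DB behaviour in the trace.
public class FrontierStore {
    private volatile boolean closed = false;

    public synchronized void sync() {
        if (closed) {
            throw new IllegalStateException(
                "Can't call Database.sync: Database was closed.");
        }
        // (a real store would flush pending work-queue entries here)
    }

    public synchronized void close() {
        sync();        // flush once, while still open
        closed = true; // any later sync() now fails, as in the report
    }

    public static void main(String[] args) {
        FrontierStore store = new FrontierStore();
        store.close();
        try {
            store.sync(); // a second shutdown path syncing after close
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

The exception in the report therefore indicates that two shutdown paths raced: something closed the database environment before the controller's own cleanup thread finished syncing.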

GoogleCodeExporter commented 9 years ago
Do you happen to have the following in your logs?

Caused by: java.lang.OutOfMemoryError: Java heap space

Original comment by roc...@gmail.com on 28 Jun 2013 at 6:58
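If the logs do show that `OutOfMemoryError`, a plausible explanation is that heap exhaustion tore down the database environment mid-crawl, and the later `sync()` then failed. Raising the JVM heap when launching the crawler is the usual first step (a sketch only; the jar path and main class name here are placeholders):

```shell
# Give the JVM more heap so the Berkeley DB environment is not
# torn down by an OutOfMemoryError mid-crawl.
# "crawler4j.jar" and "MyCrawlController" are placeholder names.
java -Xmx1024m -cp crawler4j.jar:. MyCrawlController
```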

GoogleCodeExporter commented 9 years ago
Not a bug or feature request

Original comment by avrah...@gmail.com on 11 Aug 2014 at 2:18