graphaware / neo4j-framework

GraphAware Neo4j Framework

Lucene AlreadyClosedException while adding labels with tx-executor #37

Closed ikwattro closed 8 years ago

ikwattro commented 8 years ago

@bachmanm Not sure whether this is related to the framework or to Neo4j/Lucene itself.

_Scenario:_

Using IterableInputBatchTransactionExecutor to add labels to the nodes produced by AllNodesWithLabel.

Failing test:

@Test
    public void testLabelsCanBeAddedInBatch() {
        BatchTransactionExecutor batchExecutor = new NoInputBatchTransactionExecutor(database, 1000, 2000000, new UnitOfWork<NullItem>() {
            @Override
            public void execute(GraphDatabaseService database, NullItem input, int batchNumber, int stepNumber) {
                Node node = database.createNode();
                node.addLabel(DynamicLabel.label("FirstLabel"));
            }
        });
        batchExecutor.execute();

        IterableInputBatchTransactionExecutor<Node> executor = new IterableInputBatchTransactionExecutor<Node>(database, 1000,
                new AllNodesWithLabel(database, 1000, DynamicLabel.label("FirstLabel")),
                new UnitOfWork<Node>() {
                    @Override
                    public void execute(GraphDatabaseService database, Node node, int batchNumber, int stepNumber) {
                        node.addLabel(DynamicLabel.label("SecondLabel"));
                    }
                });
        executor.execute();

        AtomicInteger i = new AtomicInteger(0);
        try (Transaction tx = database.beginTx()) {
            ResourceIterator<Node> nodes = database.findNodes(DynamicLabel.label("SecondLabel"));
            while (nodes.hasNext()) {
                i.incrementAndGet();
                nodes.next();
            }

            tx.success();
        }

        assertEquals(2000000, i.get());
    }

The test is available in a dedicated branch: https://github.com/graphaware/neo4j-framework/blob/labels-ibtx/tx-executor/src/test/java/com/graphaware/tx/executor/batch/IterableInputAddingLabelsTest.java

Note that with 2k, 20k, or 200k nodes the error is not thrown.

Stack trace:

2015-11-23 19:42:50.985+0100 WARN  Exception while producing input!
org.apache.lucene.store.AlreadyClosedException: this IndexReader is closed
    at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:245) ~[lucene-core-3.6.2.jar:3.6.2 1423725 - rmuir - 2012-12-18 19:45:40]
    at org.apache.lucene.index.IndexReader.getSequentialSubReaders(IndexReader.java:1602) ~[lucene-core-3.6.2.jar:3.6.2 1423725 - rmuir - 2012-12-18 19:45:40]
    at org.apache.lucene.util.ReaderUtil$Gather.run(ReaderUtil.java:78) ~[lucene-core-3.6.2.jar:3.6.2 1423725 - rmuir - 2012-12-18 19:45:40]
    at org.apache.lucene.util.ReaderUtil$Gather.run(ReaderUtil.java:87) ~[lucene-core-3.6.2.jar:3.6.2 1423725 - rmuir - 2012-12-18 19:45:40]
    at org.apache.lucene.util.ReaderUtil$Gather.run(ReaderUtil.java:70) ~[lucene-core-3.6.2.jar:3.6.2 1423725 - rmuir - 2012-12-18 19:45:40]
    at org.apache.lucene.search.TermQuery$TermWeight.<init>(TermQuery.java:53) ~[lucene-core-3.6.2.jar:3.6.2 1423725 - rmuir - 2012-12-18 19:45:40]
    at org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:199) ~[lucene-core-3.6.2.jar:3.6.2 1423725 - rmuir - 2012-12-18 19:45:40]
    at org.apache.lucene.search.Searcher.createNormalizedWeight(Searcher.java:168) ~[lucene-core-3.6.2.jar:3.6.2 1423725 - rmuir - 2012-12-18 19:45:40]
    at org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:664) ~[lucene-core-3.6.2.jar:3.6.2 1423725 - rmuir - 2012-12-18 19:45:40]
    at org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:319) ~[lucene-core-3.6.2.jar:3.6.2 1423725 - rmuir - 2012-12-18 19:45:40]
    at org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:305) ~[lucene-core-3.6.2.jar:3.6.2 1423725 - rmuir - 2012-12-18 19:45:40]
    at org.neo4j.kernel.api.impl.index.PageOfRangesIterator.fetchNextOrNull(PageOfRangesIterator.java:67) ~[neo4j-lucene-index-2.3.1.jar:2.3.1]
    at org.neo4j.kernel.api.impl.index.PageOfRangesIterator.fetchNextOrNull(PageOfRangesIterator.java:35) ~[neo4j-lucene-index-2.3.1.jar:2.3.1]
    at org.neo4j.helpers.collection.PrefetchingIterator.peek(PrefetchingIterator.java:60) ~[neo4j-kernel-2.3.1.jar:2.3.1,dd67f90]
    at org.neo4j.helpers.collection.PrefetchingIterator.hasNext(PrefetchingIterator.java:46) ~[neo4j-kernel-2.3.1.jar:2.3.1,dd67f90]
    at org.neo4j.collection.primitive.PrimitiveLongCollections$PrimitiveLongConcatingIterator.fetchNext(PrimitiveLongCollections.java:196) ~[neo4j-primitive-collections-2.3.1.jar:2.3.1]
    at org.neo4j.collection.primitive.PrimitiveLongCollections$PrimitiveLongBaseIterator.hasNext(PrimitiveLongCollections.java:56) ~[neo4j-primitive-collections-2.3.1.jar:2.3.1]
    at org.neo4j.collection.primitive.PrimitiveLongCollections$14.hasNext(PrimitiveLongCollections.java:740) ~[neo4j-primitive-collections-2.3.1.jar:2.3.1]
    at org.neo4j.helpers.collection.ResourceClosingIterator.hasNext(ResourceClosingIterator.java:61) ~[neo4j-kernel-2.3.1.jar:2.3.1,dd67f90]
    at com.graphaware.tx.executor.input.TransactionalInput.fetchNextOrNull(TransactionalInput.java:72) ~[classes/:na]
    at org.neo4j.helpers.collection.PrefetchingIterator.peek(PrefetchingIterator.java:60) ~[neo4j-kernel-2.3.1.jar:2.3.1,dd67f90]
    at org.neo4j.helpers.collection.PrefetchingIterator.hasNext(PrefetchingIterator.java:46) ~[neo4j-kernel-2.3.1.jar:2.3.1,dd67f90]
    at com.graphaware.tx.executor.batch.IterableInputBatchTransactionExecutor$1.run(IterableInputBatchTransactionExecutor.java:85) ~[classes/:na]
    at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79]
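Judging from the trace, the label-scan iterator feeding the executor sits on top of a Lucene IndexReader that is tied to the transaction it was opened in; when TransactionalInput apparently commits and reopens transactions between batches, the reader can be closed underneath the still-live iterator. A minimal, self-contained sketch of that failure mode (the toy Resource and iteratorOver below are hypothetical stand-ins, not Neo4j API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Illustration only: an Iterator backed by an external resource becomes
// unusable once that resource is closed, analogous to the Lucene IndexReader
// being closed while the label-scan iterator is still being consumed.
public class ClosedResourceIterator {

    // Toy "resource" the iterator depends on, like an index reader.
    static class Resource {
        private boolean open = true;
        void close() { open = false; }
        void ensureOpen() {
            if (!open) throw new IllegalStateException("this reader is closed");
        }
    }

    // Wrap a plain iterator so every call checks the resource first,
    // the way Lucene's IndexReader.ensureOpen() does.
    static Iterator<Integer> iteratorOver(final Resource resource, List<Integer> data) {
        final Iterator<Integer> inner = data.iterator();
        return new Iterator<Integer>() {
            @Override
            public boolean hasNext() { resource.ensureOpen(); return inner.hasNext(); }
            @Override
            public Integer next() { resource.ensureOpen(); return inner.next(); }
        };
    }

    public static void main(String[] args) {
        Resource reader = new Resource();
        List<Integer> nodes = new ArrayList<Integer>(Arrays.asList(0, 1, 2, 3));
        Iterator<Integer> it = iteratorOver(reader, nodes);

        System.out.println(it.next()); // fine while the resource is open
        reader.close();                // e.g. the transaction commits between batches
        try {
            it.hasNext();
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

This also suggests why small node counts do not trigger the error: if the whole scan fits inside one batch, the iterator never outlives its transaction.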
bachmanm commented 8 years ago

I'm not sure if this is a bug in Neo4j, but it definitely seems to be a change in behaviour. I'll write a test case and report it. In the meantime, the dirty workaround is to set the batchSize in AllNodesWithLabel to Integer.MAX_VALUE :(
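Concretely, the workaround changes the batchSize argument of AllNodesWithLabel in the failing test above, so the whole scan happens inside a single transaction (a sketch of the modified test, not a verified fix):

```java
// Workaround sketch: one huge batch, so the label-scan iterator
// never crosses a transaction boundary.
IterableInputBatchTransactionExecutor<Node> executor = new IterableInputBatchTransactionExecutor<Node>(database, 1000,
        new AllNodesWithLabel(database, Integer.MAX_VALUE, DynamicLabel.label("FirstLabel")),
        new UnitOfWork<Node>() {
            @Override
            public void execute(GraphDatabaseService database, Node node, int batchNumber, int stepNumber) {
                node.addLabel(DynamicLabel.label("SecondLabel"));
            }
        });
executor.execute();
```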

ikwattro commented 8 years ago

Ha, I will test. Thanks for checking.


bachmanm commented 8 years ago

See https://github.com/neo4j/neo4j/issues/6087.