apache / lucene

Apache Lucene open-source search software
https://lucene.apache.org/
Apache License 2.0

minimizeHopcroft OOMEs on smallish (2096 states, finite) automaton [LUCENE-3363] #4436

Closed. asfimport closed this issue 12 years ago.

asfimport commented 13 years ago

Not sure what's up w/ this... if you check out the blocktree branch (#4104) and comment out the @Ignore in TestTermsEnum2.testFiniteVersusInfinite then this should hit OOME: `ant test-core -Dtestcase=TestTermsEnum2 -Dtestmethod=testFiniteVersusInfinite -Dtests.seed=-2577608857970454726:-2463580050179334504`


Migrated from LUCENE-3363 by Michael McCandless (@mikemccand), resolved Oct 30 2011. Attachments: LUCENE-3363.patch

asfimport commented 13 years ago

Uwe Schindler (@uschindler) (migrated from JIRA)

I have no idea either. I hope this is not caused by the minimizeSchindler changes and is the original minimizeHopcroft. Have you tried undoing the relevant commit? It still looks strange, as there must be something totally weird going on. I am away from my computer this weekend, but can look into it on Monday.

asfimport commented 13 years ago

Robert Muir (@rmuir) (migrated from JIRA)

patch for the blocktree branch:

The issue with this automaton:

Originally, AutomatonQuery minimized the automaton in its ctor, but I'm not sure we should do this: the input automaton could be large, and if someone wants minimization, they should do it themselves?

I think my original motivation was to try to fend off any adversaries (some crazy worst-case crap that would make the query slow)... but I think this is obsolete.

The patch changes this to determinize() + removeDeadTransitions() + reduce(); the first two operations are really all we need, but reduce() might help speed up the intersection.
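A sketch of the change described above, using the operation names from this comment (the exact Automaton API on the blocktree branch may differ; this is illustrative, not the actual patch):

```java
// before: full Hopcroft minimization in the AutomatonQuery ctor,
// worst case ~ k * n log n with k proportional to the alphabet size
// MinimizationOperations.minimize(automaton);

// after: the lighter pipeline described in the patch
automaton.determinize();            // needed for intersecting with the terms
automaton.removeDeadTransitions();  // drop states that can never reach an accept state
automaton.reduce();                 // optional: merge transitions, may speed up intersection
```

The key point is that intersection only requires a deterministic automaton, not a minimal one, so the expensive minimization step can be dropped entirely.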

Note: RegExp already minimizes "incrementally" during its parsing, but this is one op at a time, so I think there is no problem here. I tested removing this too and replacing it with det + removeDead + reduce, but it slowed down regex parsing considerably, so I think we should continue to use minimize here.

Additionally, I optimized WildcardQuery here to use the optimized concatenate() to avoid useless determinize() calls when the LHS is a string; before, it was using the concatenate(List) method.

asfimport commented 13 years ago

Robert Muir (@rmuir) (migrated from JIRA)

By the way, for whatever reason the seed never OOMEd for me, but the timings speak for themselves. Before:

junit-sequential:
    [junit] Testsuite: org.apache.lucene.index.TestTermsEnum2
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 211.474 sec
    [junit] 

after:

junit-sequential:
    [junit] Testsuite: org.apache.lucene.index.TestTermsEnum2
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.856 sec
    [junit] 

asfimport commented 13 years ago

Robert Muir (@rmuir) (migrated from JIRA)

Here is the quote from the original Hopcroft paper, explaining why this crazy test that uses lots of random Unicode strings causes the blowup:

An algorithm is given for minimizing the number of states in a finite automaton or for determining if two finite automata are equivalent. The asymptotic running time of the algorithm is bounded by kn log n where k is some constant and n is the number of states. The constant k depends linearly on the size of the input alphabet.
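A back-of-the-envelope illustration (my own numbers, not from the issue) of why the alphabet-dependent constant k matters here: many Hopcroft implementations keep per-(state, symbol) bookkeeping, so over the full Unicode codepoint alphabet even a "smallish" 2096-state automaton implies billions of table cells, which would plausibly explain the OOME:

```java
// Rough memory model for Hopcroft minimization (an assumption for
// illustration, not Lucene's actual implementation): bookkeeping
// that scales with n states * k alphabet symbols.
public class HopcroftCost {
    /** Number of (state, symbol) cells for n states over an alphabet of size k. */
    static long perSymbolCells(long n, long k) {
        return n * k;
    }

    public static void main(String[] args) {
        long n = 2096;            // states in the automaton from this issue
        long unicode = 0x110000;  // all Unicode code points: 1,114,112
        long bytes = 256;         // a byte-level alphabet, for contrast

        long uCells = perSymbolCells(n, unicode); // 2,335,178,752 cells
        long bCells = perSymbolCells(n, bytes);   // 536,576 cells

        System.out.println("unicode cells: " + uCells
            + " (~" + uCells * 4 / (1L << 30) + " GiB as 4-byte ints)");
        System.out.println("byte cells:    " + bCells);
    }
}
```

With a Unicode-sized alphabet the constant k dominates: the 2096 states are harmless on their own, but n * k is already in the billions before the log n factor is even applied.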