Adrien Grand (@jpountz) (migrated from JIRA)
I like the idea. I'm curious if you already have concrete ideas for the match costs of our existing queries? Maybe it should not only measure the cost of the operation but also how likely it is to match? This would make sloppy phrases more "costly" since they are more lenient about positions and thus more likely to match.
Robert Muir (@rmuir) (migrated from JIRA)
I'm curious if you already have concrete ideas for the match costs of our existing queries?
See above in the description: we know the average number of positions per doc (totalTermFreq/docFreq) and so on. So we can compute the amortized cost of reading one position, and it's easy from there.
Maybe it should not only measure the cost of the operation but also how likely it is to match?
I don't agree. You can already get this with Scorer.getApproximation().cost()/Scorer.cost().
Paul Elschot (migrated from JIRA)
From the javadocs of DocIdSetIterator.cost(): This is generally an upper bound of the number of documents this iterator might match, but may be a rough heuristic, hardcoded value, or otherwise completely inaccurate.
Perhaps this cost method can be renamed to, for example, expectedMaxMatchingDocs() with these javadocs: This is an expected upper bound of the number of documents this iterator might match.
Would it make sense to put matchCost() at DocIdSetIterator?
Adrien Grand (@jpountz) (migrated from JIRA)
Perhaps this cost method can be renamed to, for example, expectedMaxMatchingDocs() with these javadocs: This is an expected upper bound of the number of documents this iterator might match.
Personally I don't dislike "cost". Even if it does not convey very well what it measures, it does a pretty good job of conveying what to do with the result of this method: if you have several iterators, you want to consume the least costly ones first.
Would it make sense to put matchCost() at DocIdSetIterator?
The DocIdSetIterator abstraction does not have the concept of "matching", only TwoPhaseIterator has it, so I think it would be awkward to have it on DocIdSetIterator? TwoPhaseIterator feels like a more appropriate place to have this method.
Paul Elschot (migrated from JIRA)
As to TwoPhaseIterator or DocIdSetIterator, I think this boils down to whether the leading iterator in ConjunctionDISI should be chosen using the expected number of matching docs only, or also using the totalTermFreq's somehow. This is for more complex queries, for example a conjunction with at least one phrase or SpanNearQuery.
But for the more complex queries two phase approximation is already in place, so having matchCost() only in the two phase code could be enough even for these queries.
Paul Elschot (migrated from JIRA)
Patch of 12 Oct 2015, starting matchCost for ExactPhraseScorer only, the rest does not compile because it still needs matchDoc. (A little bit too much whitespace was removed by the editor, please ignore the noise.)
Is this the direction to take?
Adrien Grand (@jpountz) (migrated from JIRA)
I think it would make more sense to sum up totalTermFreq/docFreq for each term instead of totalTermFreq/conjunctionDISI.cost(), so that we get the average number of positions per document? But otherwise I think you got the intention right. Something else to be careful with is that TermStatistics.totalTermFreq() may return -1, so we need a fallback for that case. Maybe we could just assume 1 position per document?
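A stand-alone sketch of the computation described above (pure functions with illustrative names, not Lucene API; the actual patches work against index statistics objects):

```java
// Sketch: per-term expected positions per matching document, with the
// suggested fallback when totalTermFreq is unavailable (-1).
public class PhraseMatchCostSketch {

    // totalTermFreq/docFreq, or 1 position per document when stats are missing
    public static float expPositionsPerDoc(long totalTermFreq, long docFreq) {
        if (totalTermFreq < 0 || docFreq <= 0) {
            return 1f; // totalTermFreq may be -1: assume 1 position per doc
        }
        return (float) totalTermFreq / docFreq;
    }

    // Phrase match cost: sum the per-term expectations over all terms
    public static float phraseMatchCost(long[] totalTermFreqs, long[] docFreqs) {
        float cost = 0f;
        for (int i = 0; i < totalTermFreqs.length; i++) {
            cost += expPositionsPerDoc(totalTermFreqs[i], docFreqs[i]);
        }
        return cost;
    }
}
```

For example, a two-term phrase where one term has totalTermFreq=10 and docFreq=4 and the other has no stats would cost 2.5 + 1 = 3.5 positions per document.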
A related question is what definition we should give to matchCost(). The patch does not have the issue yet since it only deals with phrase queries, but eventually we should be able to compare the cost of e.g. a phrase query against a doc values range query even though they perform very different computations. Maybe the javadocs of matchCost could suggest a scale of costs of operations that implementors of matchCost() could use in order to compute the cost of matching the two-phase iterator. It could be something like 1 for nextDoc(), nextPosition(), comparisons and basic arithmetic operations, and e.g. 10 for advance()?
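The suggested scale could look something like the following sketch; the constant names and the example method are invented here for illustration, not part of any patch:

```java
// Hypothetical operation-cost scale per the suggestion above:
// cheap per-position/per-doc operations count 1, advance() counts 10.
public class OpCostScale {
    public static final long NEXT_OP = 1;     // nextDoc(), nextPosition()
    public static final long BASIC_OP = 1;    // comparisons, basic arithmetic
    public static final long ADVANCE_OP = 10; // advance() assumed ~10x costlier

    // e.g. a matches() that reads n positions (one read + one comparison
    // each) and then does a single advance() on another iterator
    public static long exampleMatchCost(int positionsRead) {
        return positionsRead * (NEXT_OP + BASIC_OP) + ADVANCE_OP;
    }
}
```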
Robert Muir (@rmuir) (migrated from JIRA)
As to TwoPhaseIterator or DocIdSetIterator, I think this boils down to whether the leading iterator in ConjunctionDISI should be chosen using the expected number of matching docs only, or also using the totalTermFreq's somehow. This is for more complex queries, for example a conjunction with at least one phrase or SpanNearQuery.
But for the more complex queries two phase approximation is already in place, so having matchCost() only in the two phase code could be enough even for these queries.
Yes, to keep things simple, I imagined this API would just be the cost of calling matches() itself, so I think the two-phase API is the correct place to put it (like in your patch).
We already have a cost() API for DISI for doing things like conjunctions (yes, it's purely based on density and maybe that is imperfect), but I think we should try to narrow the scope of this issue to just the cost of the matches() operation, which can vary wildly depending on query type or document size.
What Adrien says about "likelihood of match" is also interesting but I think we want to defer that too. To me that is just a matter of having a more accurate cost(), and it may not be easy or feasible to improve...
Paul Elschot (migrated from JIRA)
it would make more sense to sum up totalTermFreq/docFreq for each term
I'll change that and change the matchCost() method to return a float instead of a long.
TermStatistics.totalTermFreq() may return -1
I'll add a check for that.
what definition we should give to matchCost()
I'd like to have it reflect an average cost to process a single document, once the two-phase iterator is at the document. That would exclude the cost for next() and advance(), which would be better in the DISI.cost() method for now.
How much of the cost of matches() should be in there I don't know, we'll see. NearSpans also does work after matches() returns true.
And the likelihood of match is the probability that matches() returns true...
Adrien Grand (@jpountz) (migrated from JIRA)
change the matchCost() method to return a float instead of a long
I liked having it as a long, like DISI.cost(). Maybe we could just round?
I'd like to have it reflect an avarage cost to process a single document, once the two phase iterator is at the document. That would exclude the cost for next() and advance(), which would be better in the DISI.cost() method for now.
Indeed this is what it should do! Sorry, I introduced some confusion; the reason why I brought up these methods is ReqExclScorer, whose TwoPhaseIterator calls DocIdSetIterator.advance() on the excluded iterator in order to validate a match. So we need to decide how costly calling advance() is.
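A sketch of what that could mean for a ReqExclScorer-style matchCost; ADVANCE_COST here is a placeholder value, not something decided in this thread:

```java
// Sketch: the required clause's own match cost plus a fixed charge for
// the advance() performed on the excluded iterator to validate a match.
public class ReqExclMatchCostSketch {
    static final float ADVANCE_COST = 10f; // assumed relative cost of advance()

    public static float matchCost(float requiredMatchCost) {
        return requiredMatchCost + ADVANCE_COST;
    }
}
```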
Paul Elschot (migrated from JIRA)
Patch of 13 Oct 2015. No spans yet. Left matchCost() returning float because in many cases the average number of positions in a matching document will be close to 1. Quite a few nocommits at matchCost implementations throwing an Error("not yet implemented").
This includes a first attempt at sorting the DISI's in ConjunctionDISI.
To my surprise, quite a few tests pass, I have not yet tried all of them.
Adrien Grand (@jpountz) (migrated from JIRA)
The change in ConjunctionDISI does not look right to me: we should keep sorting the iterators based on DISI.cost, and only use TwoPhaseIterator.matchCost to sort TwoPhaseConjunctionDISI.twoPhaseIterators.
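The two separate orderings can be illustrated with a small stand-alone sketch (Clause is a stand-in for Lucene's internal wrapper classes, not actual API):

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch: approximations are led by the sparsest iterator (lowest
// DISI.cost()), while the two-phase checks run the cheapest matches()
// first (lowest matchCost()). These are two independent sorts.
public class TwoPhaseSortSketch {
    public static class Clause {
        public final long cost;       // DISI.cost(): estimated matching docs
        public final float matchCost; // TwoPhaseIterator.matchCost()
        public Clause(long cost, float matchCost) {
            this.cost = cost;
            this.matchCost = matchCost;
        }
    }

    public static void sortApproximations(Clause[] clauses) {
        Arrays.sort(clauses, Comparator.comparingLong((Clause c) -> c.cost));
    }

    public static void sortTwoPhases(Clause[] clauses) {
        Arrays.sort(clauses, Comparator.comparingDouble((Clause c) -> c.matchCost));
    }
}
```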
I'm also unhappy about adding a method to TermStatistics, this class should remain as simple as possible. Can we make it private to PhraseWeight?
Paul Elschot (migrated from JIRA)
... unhappy about adding a method to TermStatistics, this class should remain as simple as possible. Can we make it private to PhraseWeight?
Why should TermStatistics remain as simple as possible? Having a method that returns an expected value in a ...Statistics class looks just right to me.
I initially had the code in PhraseWeight, but there it used many getter methods from TermStatistics, so I moved the method to TermStatistics and used the return value as the match cost for a single term in PhraseWeight.
I think we will need the same thing for spans (multiplied by a factor of 4 or so), and in that case the method will have to be public, because the spans are in a different package. Can we hold off on moving the method until the spans are done here?
... only use TwoPhaseIterator.matchCost to sort TwoPhaseConjunctionDISI.twoPhaseIterators.
I missed that, I'll introduce it.
Paul Elschot (migrated from JIRA)
Second patch of 13 Oct 2015: Use matchCost to sort twoPhaseIterators. Add matchCost implementations in test code. Rename method expPositionsPerDoc() to expTermFreqInMatchingDoc().
Paul Elschot (migrated from JIRA)
matchCost is still not implemented for Spans (4 nocommits left), and now some test cases using Spans actually fail.
Paul Elschot (migrated from JIRA)
We could take into account the different costs of advance() and nextDoc(), but in another issue. With cost() as an estimation of the number of matching documents, as it is now, that could become for conjunctions: 2 * (minimum cost()) * (cost of advance), and for disjunctions: (total cost()) * (cost of nextDoc).
ReqExclScorer could use the cost of advance in its matchCost already here, but I have no idea which value to use.
Adrien Grand (@jpountz) (migrated from JIRA)
TermStatistics is a class that we need to maintain backward compatibility for since it's not experimental/internal. So we shouldn't put more methods in there that we only need for implementation details of PhraseQuery/SpanNearQuery. I would rather duplicate the logic. In addition, the current implementation of this method is trappy as it assumes that the average term freq is 1 when totalTermFreq is not available. While this might be ok for the matchCost computation of phrase queries, it might not be for other use-cases.
The changes in ConjunctionDISI look good to me now, thanks.
+ if (w.twoPhaseView != null) {
+ matchCost += w.twoPhaseView.matchCost();
+ } else {
+ assert w.iterator instanceof TermScorer; // zero match cost.
+ }
w.twoPhaseView can be null on any scorer that does not expose an approximation. So it can be not only a TermScorer, but also a conjunction/disjunction of term scorers or even a custom query.
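The fix implied here, sketched with a stand-in interface (not Lucene's TwoPhaseIterator): skip null views without asserting anything about the scorer's type.

```java
// Sketch: sum matchCost over sub-scorers; a null two-phase view simply
// contributes zero, whatever kind of scorer it belongs to.
public class SumMatchCostSketch {
    public interface TwoPhase { float matchCost(); }

    public static float sum(TwoPhase[] views) { // null entries allowed
        float total = 0f;
        for (TwoPhase tp : views) {
            if (tp != null) {
                total += tp.matchCost();
            }
            // null: the scorer exposes no approximation (TermScorer, a
            // conjunction/disjunction of term scorers, a custom query, ...)
        }
        return total;
    }
}
```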
Paul Elschot (migrated from JIRA)
TermStatistics.java has a `@lucene.experimental` javadoc in trunk.
I'll remove the assert w.iterator instanceof TermScorer, I put it there to remind me to check what to do in other cases.
Paul Elschot (migrated from JIRA)
Patch of 14 October 2015. No more NOCOMMITS, existing tests pass. Still no tests to verify that matchCost() is used correctly.
Some FIXME's for the cost values used. Improve javadoc for TermStatistics.expTermFreqInMatchingDoc. In AssertingTwoPhaseView.matchCost() the cost should be non-negative. Small javadoc correction in TwoPhaseIterator.
Robert Muir (@rmuir) (migrated from JIRA)
I agree with Adrien here. TermStatistics and CollectionStatistics are what feed the scoring systems (and already hairy enough as is), so we should keep any of this optimization-related stuff out of them. They were added to allow IndexSearcher to support distributed search.
Paul Elschot (migrated from JIRA)
2nd patch of 15 October 2015.
This adds Spans.positionsCost() as the basic matchCost to be used for Spans. This method has a NOCOMMIT in Spans.java: throw UOE or make it abstract in Spans? I'd prefer to throw a UOE, but an abstract method makes it easier to find the places where positionsCost() really needs to be implemented. For now I left it abstract and have some implementations throw UOE.
Other changes to the previous patch: Use TermStatistics from trunk. Move expTermFreqInMatchingDoc() from TermStatistics into PhraseWeight and a copy into TermSpans. Simplified matchCost() implementations.
Existing tests pass.
This will need improvements, and I hope it works decently for simple cases like a conjunction over a phrase and a SpanNear.
Paul Elschot (migrated from JIRA)
Some TwoPhaseIterators in Solr will need to have matchCost() added with the latest patch. I am not familiar enough with Solr code for that.
Paul Elschot (migrated from JIRA)
Patch of 18 Oct 2015.
Calculate the matchCost per LeafReaderContext, because the sorting by matchCost is done at leaf level.
In the facet, join and spatial modules, add matchCost implementations returning 0, and with a CHECKME comment.
Fixed a bug in the earlier added sort comparator in the TwoPhaseConjunctionDISI constructor: it was comparing the same object.
Dropped the NOCOMMIT for Spans.positionsCost(), prefer this to be an abstract method.
In AssertingSpans the positionsCost should be positive.
Add some toString() implementations for inline subclasses of TwoPhaseIterator to ease debugging.
Paul Elschot (migrated from JIRA)
2nd patch of 18 Oct 2015. More improvements:
David Smiley (@dsmiley) (migrated from JIRA)
This is neat. Couple things...
explain could possibly display the matchCost? It'd be nice to troubleshoot/inspect for diagnostics somehow. Not critical, of course.
Paul Elschot (migrated from JIRA)
I left the matchCosts that I could not easily determine at zero and added a CHECKME. This is more an indication that refinement is possible.
Sorting subscorers/subspans by cost and matchCost is probably better than relying on any given order. Anyway, I don't expect the impact of matchCost on performance to be more than 4-8%, except maybe for really complex queries.
Showing the matchCost in explain will be tricky because it is computed by LeafReaderContext, i.e. by segment.
The matchCost is not yet used for the second phase in disjunctions. Yet another priority queue might be needed for that, so I'd prefer to delay that to another issue.
Paul Elschot (migrated from JIRA)
Another thing to be determined is the relative cost of span queries vs phrase queries. The code for that is in SpanTermQuery here:
/** A guess of
* the relative cost of dealing with the term positions
* when using a SpanNearQuery instead of a PhraseQuery.
*/
private final float PHRASE_TO_SPAN_TERM_POSITIONS_COST = 4.0f;
This is a guess because it is only based on my recollection from a few years ago that the performance of PhraseQuery was about 4 times better than an ordered SpanNear. In the long term it is probably better to make this a configurable parameter.
Paul Elschot (migrated from JIRA)
Talking about priority queues, there is also this one: #7512.
Adrien Grand (@jpountz) (migrated from JIRA)
It will be difficult for many of the 2-phase implementations to calculate a matchCost – particularly the ones not based on the number of term positions. What to do?
Agreed: we need to come up with a very simple definition of matchCost that could be applied regardless of how matches() is implemented. I think we have two options: define it as an expected runtime in nanoseconds, or as an average number of operations that need to be performed in matches(), so that you would add +1 every time you do a comparison, arithmetic operation, consume a PostingsEnum, etc.
Runtimes in nanoseconds could easily vary depending on hardware, JVM version, etc., so I think the 2nd option is more practical.
This is simplistic but I think it would do the job and keep the implementation simple. For instance, a doc values range query would always be confirmed before a geo-distance query.
But I see that the latest BooleanQuery.Builder is not stable due to use of HashSet / MultiSet versus LinkedHashSet which would be stable. What do you think Adrien Grand?
Actually it is: those sets and multisets are only used for equals/hashcode. The creation of scorers is still based on the list of clauses, which maintains the order from the builder.
Showing the matchCost in explain will be tricky because it is computed by LeafReaderContext, i.e. by segment.
+1 to not do it
The matchCost is not yet used for the second phase in disjunctions. Yet another priority queue might be needed for that, so I'd prefer to delay that to another issue.
Feel free to delay, I plan to explore this in #7873.
David Smiley (@dsmiley) (migrated from JIRA)
RE BooleanQuery stable ordering: thanks for correcting me; I'm very glad it's stable. At least this gives the user some control.
I think we may need to percolate the matchCost concept further into other APIs – namely ValueSource/FunctionValues. This way the 2-phase iterator can enclose one of them to fetch its matchCost when composing its aggregate matchCost.
And I question whether a numeric DV lookup or reading a posting is equivalent to one mathematical operation, since those things involve some code behind them that isn't the cheap one-liner that a math operation is. Nonetheless, I get the concept you are suggesting.
I kind of like the time-based approach you suggested better, but what's needed is some automation to aid in establishing a baseline, such that someone on their own machine can get an approximate slower/faster multiplier compared to some baseline server. Like, what if there was an ant target that ran some test based on Wikipedia data and established that the matchCost for some PhraseQuery on the machine it's run on is ____ nanoseconds? Then the output also displays some hard-coded value that we got when running this on some baseline server. Dividing the two yields a relative difference between the local machine and the baseline server. Then when working on a custom query, I could locally and temporarily either modify the timing test to test my new query (perhaps plucking the geo data in Wikipedia out to a lat-lon spatial field) or time it in my own way. Then I apply the multiplier to determine which number to hard-code into the query going into Lucene. Make sense?
There is another aspect of the cost beyond a per-postings iteration cost. The cost of reading the N+1 position is generally going to be much cheaper than reading the very first position, since the first one possibly involves a seek. If only postings-based queries are being compared via matchCost, this is a wash since all of them have this cost, but it'd be different for a doc-values based query. Although perhaps it's a wash there too – assume one disk seek? Worst case, of course.
Paul Elschot (migrated from JIRA)
an average number of operations that need to be performed in matches(), so that you would add +1 every time you do a comparison, arithmetic operation, consume a PostingsEnum, etc.
That sounds doable. The Lucene50PostingsReader takes about 7 such operations for a nextPosition() call in case it does not seek and/or refill its buffer. I assume that would be the cost of consuming a PostingsEnum.
The cost of reading the N+1 position is generally going to be much cheaper than reading the very first position, since the first one possibly involves a seek.
To take that into account an estimation of the seek/refill cost per term could be added in once per document.
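Combining the two points above, a per-term positions cost might be sketched as a once-per-document seek/refill charge plus a per-position charge. Both constants below are illustrative guesses, not decided values (128 and the ~7 ops per nextPosition() figure appear later in the thread):

```java
// Sketch: per-document cost of consuming one term's positions =
// one-time seek/refill charge + (expected positions per doc) * (ops per position)
public class TermPositionsCostSketch {
    static final float SEEK_OPS_PER_DOC = 128f; // guessed once-per-doc seek/refill cost
    static final float OPS_PER_POSITION = 7f;   // ~ops per nextPosition() without refill

    public static float termPositionsCost(long totalTermFreq, long docFreq) {
        float expFreq = (totalTermFreq < 0 || docFreq <= 0)
                ? 1f // totalTermFreq may be -1: assume 1 position per doc
                : (float) totalTermFreq / docFreq;
        return SEEK_OPS_PER_DOC + expFreq * OPS_PER_POSITION;
    }
}
```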
we may need to percolate the matchCost concept further into other APIs – namely ValueSource/FunctionValues
Could that be done in another issue?
David Smiley (@dsmiley) (migrated from JIRA)
To take that into account an estimation of the seek/refill cost per term could be added in once per document.
Right; I just wanted to point this out.
we may need to percolate the matchCost concept further into other APIs – namely ValueSource/FunctionValues
Could that be done in another issue?
Yes, of course.
Paul Elschot (migrated from JIRA)
Patch of 27 October 2015.
Since 7 is rather low and the expected number of positions per document containing the term is just above 1 in many cases, I left matchCost() returning a float.
Adrien Grand (@jpountz) (migrated from JIRA)
Some suggestions:
Paul Elschot (migrated from JIRA)
I basically agree to all of these.
... move the utility methods to compute costs of phrases from TwoPhaseIterator into PhraseWeight/SpanNearQuery. I don't like leaking implementation details of specific TwoPhaseIterators into TwoPhaseIterator.
and make them (package) private I assume? The only disadvantage of that is that some duplication of these methods is needed in the spans package.
The easiest way to avoid such duplication would be when Spans move from o.a.l.search.spans to o.a.l.search. Iirc there was some talk of that not so long ago (Alan's plans for spans iirc), so how about waiting for that, possibly in a separate issue?
It will take a while (at least a week) before I can continue with this. Please feel free to take it on.
Paul Elschot (migrated from JIRA)
Patch of 2 Nov 2015. This addresses all the above concerns. It also precomputes positionsCost in SpanOrQuery, weighted by the cost() in the same way as matchCost.
Adrien Grand (@jpountz) (migrated from JIRA)
Otherwise the change looks good to me, I like the cost definition for conjunctions/disjunctions/phrases and we can tackle other queries in follow-up issues, but I think this is already a great start and will help execute slow queries more efficiently!
Paul Elschot (migrated from JIRA)
Patch of 7 Nov 2015. This addresses all concerns of 3 days ago. termPositionsCost moved from TwoPhaseIterator to PhraseQuery, I left a copy in SpanTermQuery because that is where it is used.
Perhaps the result of ConjunctionSpans.asTwoPhaseIterator() should look more like TwoPhaseConjunctionDISI, at the moment I cannot get my head around this.
Paul Elschot (migrated from JIRA)
I went over the patch and the earlier posts to get an overview of open points, TODO's, etc. There are quite a lot of them, so we'll need to prioritize and/or move/defer to other issues.
lucene core:
ConjunctionDISI matchCost(): give the lower matchCosts a higher weight
PhraseQuery: TERM_POSNS_SEEK_OPS_PER_DOC = 128 (guess), PHRASE_TO_SPAN_TERM_POSITIONS_COST = 4 (guess)
TwoPhaseIterator: Return value of matchCost(): long instead of float?
RandomAccessWeight matchCost(): 10, use cost of matchingDocs.get()
ReqExclScorer matchCost(): also use cost of exclApproximation.advance()
SpanTermQuery: termPositionsCost is copy of PhraseQuery termPositionsCost
SpanOrQuery: add cost of balancing priority queues for positions?
facet module (defer to other issue):
DoubleRange matchCost(): 100, use cost of range.accept()
LongRange matchCost(): 100, use cost of range.accept()
join module (defer to other issue ?):
GlobalOrdinals(WithScore)Query matchCost(): 100, use cost of values.getOrd() and foundOrds.get()
GlobalOrdinals(WithScore)Query 2nd matchCost(): 100, use cost of values.getOrd() and foundOrds.get()
queries module (defer to other issue):
ValueSourceScorer matchCost(): 100, use cost of ValueSourceScorer.this.matches()
spatial module (defer to other issue):
CompositeVerifyQuery matchCost(): 100, use cost of predFuncValues.boolVal()
IntersectsRPTVerifyQuery matchCost(): 100, use cost of exactIterator.advance() and predFuncValues.boolVal()
test-framework module:
RandomApproximationQuery randomMatchCost: between 0 and 200: ok?
solr core:
Filter matchCost(): 10, use cost of bits.get() ?
At this issue:
Performance test based on Wikipedia to estimate guessed values.
tests for matchCost() ?
Check result of ConjunctionSpans.asTwoPhaseIterator: more similar to TwoPhaseConjunctionDISI ?
For other issues:
At #7929 remove copy of SpanTermQuery.termPositionsCost().
SpanOrQuery is getting too big, split off DisjunctionSpans.
cost() implementation of conjunctions and disjunctions could improve: add use of the independence assumption. The result of cost() is used here for weighting, so it should be as good as possible.
Adrien Grand (@jpountz) (migrated from JIRA)
ConjunctionDISI matchCost(): give the lower matchCosts a higher weight
We could use the likelihood of a match, which should be given by Scorer.asTwoPhaseApproximation().approximation().cost()/Scorer.cost(), even though I suspect that most implementations have no way to figure it out (e.g. numeric doc values ranges). But I think we should defer; it's fine to assume the worst case like the patch does today.
TwoPhaseIterator: Return value of matchCost(): long instead of float?
I would be ok with both, but given that matchCost is documented as "an expected cost in number of simple operations", maybe a long makes more sense? It also has the benefit of avoiding issues with ±0, NaNs, infinities, etc.
Performance test based on Wikipedia to estimate guessed values.
I think this change is very hard to benchmark... I'm personally fine with moving on here without performance benchmarks.
For other ones that I did not reply to, I suggest that we defer them: I don't think they should hold this change.
Paul Elschot (migrated from JIRA)
As to long/float: the expected outcome of rolling a die is not a whole number, and as it happens, that has some similarities with the situation here.
Paul Elschot (migrated from JIRA)
I have opened #7952 for the independence assumption for DISI.cost().
Adrien Grand (@jpountz) (migrated from JIRA)
I'm +1 on the patch. I'll do some more testing locally in the next days and commit it.
Adrien Grand (@jpountz) (migrated from JIRA)
Hmm, I'm getting failures with ComplexPhraseQuery:
[junit4] Suite: org.apache.lucene.queryparser.complexPhrase.TestComplexPhraseQuery
[junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestComplexPhraseQuery -Dtests.method=testComplexPhrases -Dtests.seed=E7F242A6F40525AB -Dtests.slow=true -Dtests.locale=in_ID -Dtests.timezone=Africa/Banjul -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
[junit4] FAILURE 0.18s J2 | TestComplexPhraseQuery.testComplexPhrases <<<
[junit4] > Throwable #1: java.lang.AssertionError
[junit4] > at __randomizedtesting.SeedInfo.seed([E7F242A6F40525AB:8259F91767B2FFA3]:0)
[junit4] > at org.apache.lucene.search.spans.SpanOrQuery$SpanOrWeight$1.positionsCost(SpanOrQuery.java:261)
[junit4] > at org.apache.lucene.search.spans.ScoringWrapperSpans.positionsCost(ScoringWrapperSpans.java:88)
[junit4] > at org.apache.lucene.search.spans.FilterSpans$2.matchCost(FilterSpans.java:167)
[junit4] > at org.apache.lucene.search.ConjunctionDISI$TwoPhaseConjunctionDISI.<init>(ConjunctionDISI.java:186)
[junit4] > at org.apache.lucene.search.ConjunctionDISI$TwoPhaseConjunctionDISI.<init>(ConjunctionDISI.java:164)
[junit4] > at org.apache.lucene.search.ConjunctionDISI$TwoPhase.<init>(ConjunctionDISI.java:227)
[junit4] > at org.apache.lucene.search.ConjunctionDISI$TwoPhase.<init>(ConjunctionDISI.java:221)
[junit4] > at org.apache.lucene.search.ConjunctionDISI.intersect(ConjunctionDISI.java:50)
[junit4] > at org.apache.lucene.search.spans.ConjunctionSpans.<init>(ConjunctionSpans.java:43)
[junit4] > at org.apache.lucene.search.spans.NearSpansOrdered.<init>(NearSpansOrdered.java:56)
[junit4] > at org.apache.lucene.search.spans.SpanNearQuery$SpanNearWeight.getSpans(SpanNearQuery.java:223)
[junit4] > at org.apache.lucene.search.spans.SpanWeight.scorer(SpanWeight.java:134)
[junit4] > at org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
[junit4] > at org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:69)
[junit4] > at org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:69)
[junit4] > at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:667)
[junit4] > at org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:92)
[junit4] > at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:474)
[junit4] > at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:593)
[junit4] > at org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:451)
[junit4] > at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:462)
[junit4] > at org.apache.lucene.queryparser.complexPhrase.TestComplexPhraseQuery.checkMatches(TestComplexPhraseQuery.java:116)
[junit4] > at org.apache.lucene.queryparser.complexPhrase.TestComplexPhraseQuery.testComplexPhrases(TestComplexPhraseQuery.java:58)
[junit4] > at java.lang.Thread.run(Thread.java:745)
[junit4] 2> NOTE: test params are: codec=Asserting(Lucene60): {role=PostingsFormat(name=Memory doPackFST= false), name=PostingsFormat(name=LuceneFixedGap), id=PostingsFormat(name=LuceneFixedGap)}, docValues:{}, sim=RandomSimilarityProvider(queryNorm=true,coord=no): {role=DFR I(n)LZ(0.3), name=DFR I(n)3(800.0), id=DFR I(n)3(800.0)}, locale=in_ID, timezone=Africa/Banjul
[junit4] 2> NOTE: Linux 3.13.0-68-generic amd64/Oracle Corporation 1.8.0_66-ea (64-bit)/cpus=8,threads=1,free=187449736,total=253231104
[junit4] 2> NOTE: All tests run in this JVM: [TestNumericRangeQueryBuilder, TestExtendableQueryParser, TestSpanQueryParser, TestExtensions, TestComplexPhraseQuery]
[junit4] Completed [11/27] on J2 in 0.50s, 5 tests, 1 failure <<< FAILURES!
I haven't looked into it yet.
Paul Elschot (migrated from JIRA)
The test failure reproduces here. I'll take a look, thanks.
Paul Elschot (migrated from JIRA)
This failure disappeared after adding asTwoPhaseIterator() to ScoringWrapperSpans. I'll post a new patch later.
Paul Elschot (migrated from JIRA)
Patch of 12 Nov 2015. Adds ScoringWrapperSpans.asTwoPhaseIterator().
This missing method could be a bug in itself.
ASF subversion and git services (migrated from JIRA)
Commit 1714261 from @jpountz in branch 'dev/trunk' https://svn.apache.org/r1714261
LUCENE-6276: Added TwoPhaseIterator.matchCost().
ASF subversion and git services (migrated from JIRA)
Commit 1714266 from @jpountz in branch 'dev/branches/branch_5x' https://svn.apache.org/r1714266
LUCENE-6276: Added TwoPhaseIterator.matchCost().
Adrien Grand (@jpountz) (migrated from JIRA)
I just committed the changes. Thanks Paul!
We could add a method like TwoPhaseDISI.matchCost(), defined as something like an estimate in nanoseconds or similar.
ConjunctionScorer could use this method to sort its 'twoPhaseIterators' array so that cheaper ones are called first. Today it has no idea if one scorer is a simple phrase scorer on a short field vs another that might do some geo calculation or more expensive stuff.
PhraseScorers could implement this based on index statistics (e.g. totalTermFreq/maxDoc)
Migrated from LUCENE-6276 by Robert Muir (@rmuir), resolved Nov 13 2015 Attachments: LUCENE-6276.patch (versions: 8), LUCENE-6276-ExactPhraseOnly.patch, LUCENE-6276-NoSpans.patch, LUCENE-6276-NoSpans2.patch