Reading Explanations (i.e., the breakdown of scores for a particular query and result, say via Solr's &debugQuery) is often a cryptic and difficult undertaking. I often say people suffer from "explain blindness" from staring at explanation results for too long. We could add a layer of explanation helpers above the core Explain functionality that helps people better understand what is going on. The goal is to give higher-level tools to people who aren't necessarily well versed in all the underpinnings of Lucene's scoring mechanisms but still want information about why something didn't match.
For instance (brainstorming some things that might be doable):
Explain Diff Tool – Given one or more explanations, quickly highlight the key things that differentiate the results (e.g., fieldNorm is higher, etc.)
Given a query and any document, give a friendlier reason why it ranks lower than others, without the need to parse through all the pieces of the score. For instance, could you programmatically say something like: this document scored lower compared to your top 10 because it had no values in the foo field?
We could even return codes for these reasons, which could then be hooked into actual user messages.
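To make the brainstorm a bit more concrete, here is a minimal sketch of what an explain-diff helper might look like. Everything below is hypothetical: `ExplainNode` is a stand-in for Lucene's `Explanation` tree (a description, a value, and nested details), not an existing API, and the diff logic is just one possible approach — flatten each tree into component/value pairs, then report the components that differ or are missing.

```java
import java.util.*;

/** Hypothetical stand-in for Lucene's Explanation tree (description, value, children). */
class ExplainNode {
    final String description;
    final double value;
    final List<ExplainNode> details;

    ExplainNode(String description, double value, ExplainNode... details) {
        this.description = description;
        this.value = value;
        this.details = Arrays.asList(details);
    }
}

/** Sketch of an explain-diff helper: flatten each tree, then report differing components. */
class ExplainDiff {
    /**
     * Returns description -> {valueInA, valueInB} for every scoring component
     * whose value differs between the two explanations. A component present in
     * one explanation but absent from the other (e.g. "no values in the foo
     * field") is reported with NaN on the missing side.
     */
    static Map<String, double[]> diff(ExplainNode a, ExplainNode b) {
        Map<String, Double> flatA = new LinkedHashMap<>();
        Map<String, Double> flatB = new LinkedHashMap<>();
        flatten(a, flatA);
        flatten(b, flatB);
        Map<String, double[]> differing = new LinkedHashMap<>();
        for (Map.Entry<String, Double> e : flatA.entrySet()) {
            Double other = flatB.get(e.getKey());
            if (other == null) {
                differing.put(e.getKey(), new double[] { e.getValue(), Double.NaN });
            } else if (Math.abs(e.getValue() - other) > 1e-9) {
                differing.put(e.getKey(), new double[] { e.getValue(), other });
            }
        }
        return differing;
    }

    private static void flatten(ExplainNode node, Map<String, Double> into) {
        into.put(node.description, node.value);
        for (ExplainNode child : node.details) {
            flatten(child, into);
        }
    }
}
```

The map this produces (e.g. "fieldNorm: 0.5 vs 0.25") is exactly the kind of thing that could be mapped to the return codes mentioned above, which a client app could then translate into user-facing messages.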
I don't have anything concrete patch-wise here, but am putting this up as a way to capture the idea and potentially spur others to think about it.
Migrated from LUCENE-3118 by Grant Ingersoll (@gsingers), updated May 20 2011
Linked issues:
4087 captures a lot of ideas about making explanations easier to consume/use in client apps ... I think a lot of the ideas here are dependent on some of the ideas there.