This project includes tools to help evaluate relevance ranking. This code has been tested with Solr 4.x, 7.x and 8.x, and ES 6.x and 7.x.
This project is not intended to compete with existing relevance evaluation tools, such as Splainer, Quepid, Rated Ranking Evaluator, or Luigi's Box. Rather, this project was developed for use cases not currently covered by open source software packages. The author encourages collaboration among these projects.
NOTE: This project is under construction and is quite dynamic. There will be breaking changes before the first major release.
While the name of this project may change in the future, we selected quaerite -- Latin imperative "seek", root of English "query" -- to allude not only to the challenges of creating queries, but also to the challenges of tuning search engines. One can spend a great deal of time tuning countless parameters. In the end, we hope that invenietis ("you will find") with slightly less effort than without this project. For the pronunciation, see this link.
Similarities and Differences between the Genetic Algorithm (GA) in Quaerite and Learning to Rank
In the research literature, the application of a GA or Genetic Programming (GP) is one method for learning to rank (see, e.g. Andrew Trotman on GP).
However, for integrators and developers who work in the Lucene ecosystem, "Learning to Rank" (LTR) connotes a specific methodology/module initially added to Apache Solr by Bloomberg and then offered as a plugin for ElasticSearch by Doug Turnbull and colleagues at OpenSource Connections, Wikimedia Foundation and Snagajob Engineering. In the following, I use LTR to refer to this Lucene-ecosystem-specific module and methodology.
In no way do I see this implementation of GA as a competitor to LTR; rather, it is another tool that might help complement LTR and/or other tuning methodologies.
After an initial retrieval of, say, the top n documents, LTR is typically applied to carry out more costly calculations on this smaller subset of documents to re-rank the results based on the models built offline. The goal of this implementation of GA (and of the other tools in Quaerite) is to help tune the parameters used in the initial search system's ranking, NOT to act as part of a secondary re-ranking step.

As of this writing, Quaerite allows for experimentation with the following parameters:
`bf`, `bq`, `qf`, `pf`, `pf2`, `pf3`, `ps`, `ps2`, `ps3`, `q.op` (and `mm`), `solr url` (so that you can run experiments against different cores and/or different versions of Solr), `customHandler` (so that you can compare different customized handlers), and `tie`.
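For illustration, here is a minimal sketch of how several of these Solr parameters might be combined in a single edismax request. The core name, field names, and weights below are hypothetical, not taken from the quaerite-examples configs:

```python
from urllib.parse import urlencode

# Hypothetical edismax request showing several of the tunable parameters.
# Field names, weights, and the core name are made up for illustration.
params = {
    "defType": "edismax",
    "q": "space exploration",
    "q.op": "AND",
    "mm": "2<-1 5<80%",              # min-should-match
    "qf": "title^3 body^1",          # query fields with boosts
    "pf": "title^5 body^2",          # phrase fields
    "pf2": "title^2",                # bigram phrase fields
    "ps": 2,                         # phrase slop
    "tie": 0.1,                      # tiebreaker across fields
    "bf": "recip(ms(NOW,date),3.16e-11,1,1)",  # boost function
}
query_string = urlencode(params)
url = "http://localhost:8983/solr/mycore/select?" + query_string
```

Quaerite's experiments vary exactly these kinds of values (e.g. the boosts in `qf` and `pf`, or `tie`) and score each combination against relevance judgments.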
For ES, specifically, parameters include: `boost`, `fuzziness`, and `multi_match_type` (e.g. `best_fields`, `most_fields`, `cross_fields`, and `phrase`).
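A corresponding Elasticsearch query body might look like the sketch below; the field names and query text are illustrative assumptions, not taken from the quaerite-examples configs:

```python
import json

# Hypothetical Elasticsearch query body showing the ES-side parameters:
# per-field boosts, fuzziness, and the multi_match type. Field names
# ("title", "body") are made up for illustration.
es_query = {
    "query": {
        "multi_match": {
            "query": "space exploration",
            "type": "best_fields",          # or most_fields, cross_fields, phrase
            "fields": ["title^3", "body"],  # "^3" is a per-field boost
            "fuzziness": "AUTO",
        }
    }
}
body = json.dumps(es_query)
```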
See the `quaerite-examples` module and its README.
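The GA-based tuning described above can be sketched in miniature as follows. This is NOT Quaerite's actual code (which is Java and scores experiments against relevance judgments, e.g. with NDCG); the field names and the toy fitness function are stand-ins to show the shape of the loop: evaluate a population of candidate boost settings, keep the best, and breed new candidates by crossover and mutation.

```python
import random

FIELDS = ["title", "body", "keywords"]  # hypothetical qf fields

def fitness(weights):
    # Stand-in for running an experiment and scoring it (e.g. NDCG@10).
    # Here we simply pretend the ideal boosts are title=3, body=1, keywords=2.
    ideal = {"title": 3.0, "body": 1.0, "keywords": 2.0}
    return -sum((weights[f] - ideal[f]) ** 2 for f in FIELDS)

def random_individual(rng):
    return {f: rng.uniform(0.0, 5.0) for f in FIELDS}

def crossover(a, b, rng):
    # Uniform crossover: each field's boost comes from one parent or the other.
    return {f: a[f] if rng.random() < 0.5 else b[f] for f in FIELDS}

def mutate(ind, rng, rate=0.2):
    # Occasionally nudge a boost by Gaussian noise.
    return {f: (w + rng.gauss(0, 0.5)) if rng.random() < rate else w
            for f, w in ind.items()}

def evolve(pop_size=20, generations=30, seed=42):
    rng = random.Random(seed)
    pop = [random_individual(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]   # truncation selection
        children = [mutate(crossover(rng.choice(parents),
                                     rng.choice(parents), rng), rng)
                    for _ in range(pop_size - len(parents))]
        pop = parents + children         # elitism: the best half survives
    return max(pop, key=fitness)

best = evolve()
```

In Quaerite, the expensive step hidden inside `fitness` is running each candidate parameter set against the search engine and scoring the results with judgments, which is why the GA runs offline as a tuning tool rather than at query time.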
High priorities

- … (`bq`, `bf`) as needed

Planned Releases
Copyright (c) 2019, The MITRE Corporation. All rights reserved.
Approved for Public Release; Distribution Unlimited. Case Number 18-3138-7.
Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at