Reports on Search Quality Experiments with Lucene
This page is for Lucene users and developers to report on experiments of measuring or improving Lucene search quality.
Search Quality?
The first question is how to define search quality. While each experiment reported here may define its own measures, a few standard ones are:
- MAP - Mean Average Precision.
- MRR - Mean Reciprocal Rank.
- P@n - Precision at n; interesting values of n are typically 1, 5, 10, and 20.
See also the Wikipedia article on Information Retrieval.
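For readers who prefer code to definitions, here is a minimal, illustrative sketch of how these per-query quantities can be computed from a ranked result list. It is not taken from Lucene; the class name, the use of string document ids, and the hypothetical "relevant" judgment set are assumptions made only for this example. MAP is the mean of averagePrecision over all queries, and MRR is the mean of reciprocalRank.

    import java.util.List;
    import java.util.Set;

    public class MetricsSketch {

      /** Precision at cutoff n: fraction of the top n results that are relevant. */
      static double precisionAt(List<String> ranked, Set<String> relevant, int n) {
        int hits = 0;
        for (int i = 0; i < n && i < ranked.size(); i++) {
          if (relevant.contains(ranked.get(i))) hits++;
        }
        return (double) hits / n;
      }

      /** Average precision for one query: mean of P@k over the ranks k of the relevant results. */
      static double averagePrecision(List<String> ranked, Set<String> relevant) {
        int hits = 0;
        double sum = 0.0;
        for (int i = 0; i < ranked.size(); i++) {
          if (relevant.contains(ranked.get(i))) {
            hits++;
            sum += (double) hits / (i + 1);
          }
        }
        return relevant.isEmpty() ? 0.0 : sum / relevant.size();
      }

      /** Reciprocal rank: 1 / rank of the first relevant result, or 0 if none is found. */
      static double reciprocalRank(List<String> ranked, Set<String> relevant) {
        for (int i = 0; i < ranked.size(); i++) {
          if (relevant.contains(ranked.get(i))) return 1.0 / (i + 1);
        }
        return 0.0;
      }
    }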
How to Measure?
Lucene's contrib benchmark includes a search quality package that can be used for quality tests. The package comes with ready-to-use TREC evaluation and query-parsing code, as well as creation of submission reports for submitting to TREC, and it is open for extension to other evaluation data and queries.
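As an illustration of how such a test might be wired together, below is a minimal sketch along the lines of the example in the documentation of the org.apache.lucene.benchmark.quality package. The file names (topics.txt, qrels.txt, submission.txt, the index directory), the field names ("title", "body", "docname"), and the way the searcher is opened are assumptions for this sketch, and exact constructors and signatures may differ between Lucene versions.

    import java.io.BufferedReader;
    import java.io.File;
    import java.io.FileReader;
    import java.io.PrintWriter;

    import org.apache.lucene.benchmark.quality.Judge;
    import org.apache.lucene.benchmark.quality.QualityBenchmark;
    import org.apache.lucene.benchmark.quality.QualityQuery;
    import org.apache.lucene.benchmark.quality.QualityQueryParser;
    import org.apache.lucene.benchmark.quality.QualityStats;
    import org.apache.lucene.benchmark.quality.trec.TrecJudge;
    import org.apache.lucene.benchmark.quality.trec.TrecTopicsReader;
    import org.apache.lucene.benchmark.quality.utils.SimpleQQParser;
    import org.apache.lucene.benchmark.quality.utils.SubmissionReport;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.store.FSDirectory;

    public class QualityTestSketch {
      public static void main(String[] args) throws Exception {
        PrintWriter logger = new PrintWriter(System.out, true);

        // Read TREC-style topics (queries) and relevance judgments (qrels).
        TrecTopicsReader qReader = new TrecTopicsReader();
        QualityQuery[] qqs = qReader.readQueries(new BufferedReader(new FileReader("topics.txt")));
        Judge judge = new TrecJudge(new BufferedReader(new FileReader("qrels.txt")));
        judge.validateData(qqs, logger);

        // Parse the "title" part of each topic into a Lucene query against the "body" field.
        QualityQueryParser qqParser = new SimpleQQParser("title", "body");

        // Open a searcher over an existing index (pre-4.0 style shown here);
        // "docname" is the stored field holding the external document id.
        IndexSearcher searcher = new IndexSearcher(FSDirectory.open(new File("index")));
        QualityBenchmark qrun = new QualityBenchmark(qqs, qqParser, searcher, "docname");

        // Execute the benchmark, writing a TREC-style submission report as it goes.
        SubmissionReport submitLog = new SubmissionReport(new PrintWriter("submission.txt"), "lucene");
        QualityStats[] stats = qrun.execute(judge, submitLog, logger);

        // Print an averaged summary, which includes MAP, MRR and P@n values.
        QualityStats avg = QualityStats.average(stats);
        avg.log("SUMMARY", 2, logger, "  ");
      }
    }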
The Experiments
These are the experiments reported so far. Please add yours!