Offline metrics are generally created from relevance judgment sessions where the judges score the quality of the search results. Both binary (relevant/non-relevant) and multi-level (e.g., relevance from 0 to 5) scales can be used to score each document returned in response to a query. In practice, queries may be
ill-posed, and there may be different shades of relevance. For instance, there is ambiguity in the query "mars": the judge does not know if the user is searching for the planet Mars, the Mars chocolate bar, the singer Bruno Mars, or the Roman deity Mars.
=== Precision ===
Precision is the fraction of the documents retrieved that are relevant to the user's information need.

:\mbox{precision}=\frac{|\{\mbox{relevant documents}\}\cap\{\mbox{retrieved documents}\}|}{|\{\mbox{retrieved documents}\}|}

In binary classification, precision is analogous to positive predictive value. Precision takes all retrieved documents into account. It can also be evaluated considering only the topmost results returned by the system, using Precision@k. Note that the meaning and usage of "precision" in the field of information retrieval differs from the definition of accuracy and precision in other branches of science and statistics.
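Below is a minimal Python sketch of the set-based definition above; the document identifiers are hypothetical examples.

<syntaxhighlight lang="python">
# Set-based precision: fraction of retrieved documents that are relevant.
# The document identifiers are hypothetical.
retrieved = {"d1", "d2", "d3", "d4", "d5"}
relevant = {"d1", "d3", "d6"}

precision = len(retrieved & relevant) / len(retrieved)
print(precision)  # 2 relevant among 5 retrieved -> 0.4
</syntaxhighlight>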
=== Recall ===
Recall is the fraction of the documents that are relevant to the query that are successfully retrieved.

:\mbox{recall}=\frac{|\{\mbox{relevant documents}\}\cap\{\mbox{retrieved documents}\}|}{|\{\mbox{relevant documents}\}|}

In binary classification, recall is often called sensitivity. It can thus be viewed as the probability that a relevant document is retrieved by the query. It is trivial to achieve a recall of 100% by returning all documents in response to any query. Recall alone is therefore not enough; one also needs to measure the number of non-relevant documents retrieved, for example by computing the precision.
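A matching sketch for recall, reusing the same hypothetical sets:

<syntaxhighlight lang="python">
# Set-based recall: fraction of the relevant documents that were retrieved.
retrieved = {"d1", "d2", "d3", "d4", "d5"}
relevant = {"d1", "d3", "d6"}

recall = len(retrieved & relevant) / len(relevant)
print(recall)  # 2 of 3 relevant documents retrieved -> ~0.667
</syntaxhighlight>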
=== Fall-out ===
Fall-out is the proportion of non-relevant documents that are retrieved, out of all non-relevant documents available:

:\mbox{fall-out}=\frac{|\{\mbox{non-relevant documents}\}\cap\{\mbox{retrieved documents}\}|}{|\{\mbox{non-relevant documents}\}|}

In binary classification, fall-out is the complement of specificity and is equal to (1-\mbox{specificity}). It can be viewed as the probability that a non-relevant document is retrieved by the query. It is trivial to achieve a fall-out of 0% by returning zero documents in response to any query.
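A sketch of fall-out, assuming a small hypothetical collection of ten documents:

<syntaxhighlight lang="python">
# Fall-out: fraction of the non-relevant documents that were retrieved.
# Assumes a small hypothetical collection of ten documents.
collection = {f"d{i}" for i in range(1, 11)}
retrieved = {"d1", "d2", "d3", "d4", "d5"}
relevant = {"d1", "d3", "d6"}

non_relevant = collection - relevant
fall_out = len(retrieved & non_relevant) / len(non_relevant)
print(fall_out)  # 3 of 7 non-relevant documents retrieved -> ~0.43
</syntaxhighlight>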
=== F-score / F-measure ===
The weighted harmonic mean of precision and recall, the traditional F-measure or balanced F-score, is:

:F = \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}

This is also known as the F_1 measure, because recall and precision are evenly weighted. The general formula for non-negative real \beta is:

:F_\beta = \frac{(1 + \beta^2) \cdot \mathrm{precision} \cdot \mathrm{recall}}{\beta^2 \cdot \mathrm{precision} + \mathrm{recall}}

Two other commonly used F-measures are the F_2 measure, which weights recall twice as much as precision, and the F_{0.5} measure, which weights precision twice as much as recall. The F-measure was derived by van Rijsbergen (1979) so that F_\beta "measures the effectiveness of retrieval with respect to a user who attaches \beta times as much importance to recall as precision". It is based on van Rijsbergen's effectiveness measure

:E = 1 - \frac{1}{\frac{\alpha}{P} + \frac{1-\alpha}{R}}

Their relationship is F_\beta = 1 - E, where \alpha = \frac{1}{1 + \beta^2}. Because the F-measure combines precision and recall, it represents overall performance in a single number rather than two.
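A small sketch of the general F_\beta formula; the precision and recall values passed in are hypothetical:

<syntaxhighlight lang="python">
# General F_beta: weighted harmonic mean of precision and recall.
# The precision/recall values used below are hypothetical.
def f_beta(precision: float, recall: float, beta: float = 1.0) -> float:
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(f_beta(0.4, 0.667))       # F1: precision and recall weighted evenly
print(f_beta(0.4, 0.667, 2.0))  # F2: recall weighted twice as much as precision
print(f_beta(0.4, 0.667, 0.5))  # F0.5: precision weighted twice as much as recall
</syntaxhighlight>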
=== Average precision ===
Precision and recall are single-value metrics based on the whole list of documents returned by the system. For systems that return a ranked sequence of documents, it is desirable to also consider the order in which the returned documents are presented. By computing precision and recall at every position in the ranked sequence of documents, one can plot a precision-recall curve, plotting precision p(r) as a function of recall r. Average precision computes the average value of p(r) over the interval from r=0 to r=1:

:\operatorname{AveP} = \int_0^1 p(r)\,dr

That is, it is the area under the precision-recall curve. In practice, this integral is replaced with a finite sum over every position in the ranked sequence of documents:

:\operatorname{AveP} = \sum_{k=1}^n P(k) \Delta r(k)

where k is the rank in the sequence of retrieved documents, n is the number of retrieved documents, P(k) is the precision at cut-off k in the list, and \Delta r(k) is the change in recall from item k-1 to item k. Note that the average is over the relevant documents among the top-k retrieved documents, and relevant documents that are not retrieved receive a precision score of zero.

Some authors choose to interpolate the p(r) function to reduce the impact of "wiggles" in the curve. For example, the PASCAL Visual Object Classes challenge (a benchmark for computer vision object detection) until 2010 computed the average precision by averaging the precision over a set of eleven evenly spaced recall levels \{0, 0.1, 0.2, \ldots, 1.0\}:

:\operatorname{AveP} = \frac{1}{11} \sum_{r \in \{0, 0.1, \ldots, 1.0\}} p_{\operatorname{interp}}(r)

where p_{\operatorname{interp}}(r) is the interpolated precision, taken as the maximum precision over all recall levels of at least r: p_{\operatorname{interp}}(r) = \max_{\tilde{r} \ge r} p(\tilde{r}).

The minimum achievable AveP for a given classification task is given by:

:\frac{1}{n_{pos}}\sum_{k=1}^{n_{pos}}\frac{k}{k+n_{neg}}
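A sketch of the finite-sum form of average precision for a single ranked list; the relevance flags and the total number of relevant documents are hypothetical:

<syntaxhighlight lang="python">
# Finite-sum average precision: AveP = sum_k P(k) * delta_r(k).
# The relevance flags and total relevant count are hypothetical.
def average_precision(ranked_relevance, num_relevant):
    hits = 0
    avep = 0.0
    for k, is_relevant in enumerate(ranked_relevance, start=1):
        if is_relevant:
            hits += 1
            # delta_r(k) = 1 / num_relevant whenever a relevant document appears at rank k
            avep += (hits / k) / num_relevant
    return avep

# Relevance of the top five results; three relevant documents exist in total,
# so the relevant document that is never retrieved contributes zero precision.
print(average_precision([True, False, True, False, False], num_relevant=3))  # ~0.556
</syntaxhighlight>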
=== Precision at k ===
For modern (web-scale) information retrieval, recall is no longer a meaningful metric, as many queries have thousands of relevant documents, and few users will be interested in reading all of them. Precision at k documents (P@k) is still a useful metric (e.g., P@10 or "Precision at 10" is the fraction of relevant results among the top 10 retrieved documents), but it fails to take into account the positions of the relevant documents among the top k. Another shortcoming is that, on a query with fewer relevant results than k, even a perfect system will have a score less than 1. Empirically, this measure is often highly correlated with mean average precision. A small computational sketch of P@k is given after the following list of related measures:

• GMAP - geometric mean of (per-topic) average precision
• Hit Rate
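The following sketch illustrates P@k for a hypothetical ranked list of relevance flags:

<syntaxhighlight lang="python">
# Precision at k: fraction of relevant documents among the top k results.
# The relevance flags below are hypothetical.
def precision_at_k(ranked_relevance, k):
    return sum(ranked_relevance[:k]) / k

results = [True, False, True, False, False, True, False, False, False, False]
print(precision_at_k(results, 10))  # P@10 = 0.3
print(precision_at_k(results, 5))   # P@5 = 0.4
</syntaxhighlight>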
=== Visualization ===
Visualizations of information retrieval performance include:

• Graphs which chart precision on one axis and recall on the other (a plotting sketch is given after this list)
• Histograms of average precision over various topics
• Receiver operating characteristic (ROC curve)
• Confusion matrix
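As a sketch of the first kind of plot, the following snippet traces precision against recall for one hypothetical ranked result list using matplotlib:

<syntaxhighlight lang="python">
# Trace precision against recall for one hypothetical ranked result list.
import matplotlib.pyplot as plt

ranked_relevance = [True, False, True, False, True, False, False, True]
num_relevant = 4  # assumed total number of relevant documents

precisions, recalls = [], []
hits = 0
for k, is_relevant in enumerate(ranked_relevance, start=1):
    hits += is_relevant
    precisions.append(hits / k)
    recalls.append(hits / num_relevant)

plt.plot(recalls, precisions, marker="o")
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.title("Precision-recall curve")
plt.show()
</syntaxhighlight>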
== Non-relevance measures ==