Evaluation measures (information retrieval)

Evaluation measures for an information retrieval (IR) system assess how well an index, search engine, or database returns results from a collection of resources that satisfy a user's query. They are therefore fundamental to the success of information systems and digital platforms.

Background
Indexing and classification methods to assist with information retrieval have a long history dating back to the earliest libraries and collections. Systematic evaluation of their effectiveness began in earnest in the 1950s, with the rapid expansion in research production across military, government and education and the introduction of computerised catalogues. At this time there were a number of different indexing, classification and cataloguing systems in operation, which were expensive to produce, and it was unclear which was the most effective. Cyril Cleverdon, Librarian of the College of Aeronautics, Cranfield, England, began a series of experiments on print indexing and retrieval methods in what is known as the Cranfield paradigm, or Cranfield tests, which set the standard for IR evaluation measures for many years. Cleverdon developed a test called 'known-item searching' to check whether an IR system returned the documents that were known to be relevant or correct for a given search. Cleverdon's experiments established a number of key aspects required for IR evaluation: a test collection, a set of queries and a set of pre-determined relevant items, which combined would determine precision and recall. Cleverdon's approach formed a blueprint for the successful Text Retrieval Conference series that began in 1992.
Applications
Evaluation of IR systems is central to the success of any search engine, including internet search, website search, databases and library catalogues. Evaluation measures are used in studies of information behaviour, usability testing, business costs and efficiency assessments. Measuring the effectiveness of IR systems has been the main focus of IR research, based on test collections combined with evaluation measures. A number of academic conferences have been established that focus specifically on evaluation measures, including the Text Retrieval Conference (TREC), the Conference and Labs of the Evaluation Forum (CLEF) and NTCIR.
Online measures
Online metrics are generally created from search logs. The metrics are often used to determine the success of an A/B test.

Session abandonment rate
Session abandonment rate is the ratio of search sessions which do not result in a click.

Click-through rate
Click-through rate (CTR) is the ratio of users who click on a specific link to the number of total users who view a page, email, or advertisement. It is commonly used to measure the success of an online advertising campaign for a particular website as well as the effectiveness of email campaigns.

Session success rate
Session success rate measures the ratio of user sessions that lead to a success. Defining "success" is often dependent on context, but for search a successful result is often measured using dwell time as a primary factor along with secondary user interactions; for instance, the user copying the result URL is considered a success, as is copying and pasting from the snippet.

Zero result rate
Zero result rate (ZRR) is the ratio of Search Engine Results Pages (SERPs) which return zero results. The metric either indicates a recall issue or that the information being searched for is not in the index.
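A minimal sketch of how these ratios might be computed from a search log follows. The session record, its field names and the toy data are illustrative assumptions, not the schema of any particular system.

```python
from dataclasses import dataclass

@dataclass
class SearchSession:
    """One logged search session (assumed schema for illustration)."""
    num_results: int   # size of the returned SERP
    clicks: int        # number of results the user clicked

def session_abandonment_rate(sessions):
    """Fraction of sessions that produced no click at all."""
    return sum(1 for s in sessions if s.clicks == 0) / len(sessions)

def zero_result_rate(sessions):
    """Fraction of SERPs that came back with zero results."""
    return sum(1 for s in sessions if s.num_results == 0) / len(sessions)

def click_through_rate(impressions, clicks):
    """Clicks on a link divided by the number of times it was shown."""
    return clicks / impressions

log = [SearchSession(10, 2), SearchSession(0, 0), SearchSession(8, 0)]
print(session_abandonment_rate(log))  # 2 of 3 sessions had no click
print(zero_result_rate(log))          # 1 of 3 SERPs was empty
print(click_through_rate(1000, 37))   # 0.037 CTR for a single link
```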
Offline metrics
Offline metrics are generally created from relevance judgment sessions where the judges score the quality of the search results. Both binary (relevant/non-relevant) and multi-level (e.g., relevance from 0 to 5) scales can be used to score each document returned in response to a query. In practice, queries may be ill-posed, and there may be different shades of relevance. For instance, there is ambiguity in the query "mars": the judge does not know whether the user is searching for the planet Mars, the Mars chocolate bar, the singer Bruno Mars, or the Roman deity Mars.

Precision
Precision is the fraction of the documents retrieved that are relevant to the user's information need.

:\mbox{precision}=\frac{|\{\mbox{relevant documents}\}\cap\{\mbox{retrieved documents}\}|}{|\{\mbox{retrieved documents}\}|}

In binary classification, precision is analogous to positive predictive value. Precision takes all retrieved documents into account. It can also be evaluated considering only the topmost results returned by the system, using Precision@k. Note that the meaning and usage of "precision" in the field of information retrieval differs from the definition of accuracy and precision within other branches of science and statistics.

Recall
Recall is the fraction of the documents that are relevant to the query that are successfully retrieved.

:\mbox{recall}=\frac{|\{\mbox{relevant documents}\}\cap\{\mbox{retrieved documents}\}|}{|\{\mbox{relevant documents}\}|}

In binary classification, recall is often called sensitivity. It can be looked at as the probability that a relevant document is retrieved by the query. It is trivial to achieve recall of 100% by returning all documents in response to any query. Recall alone is therefore not enough; one also needs to measure the number of non-relevant documents retrieved, for example by computing the precision.

Fall-out
Fall-out is the proportion of non-relevant documents that are retrieved, out of all non-relevant documents available:

:\mbox{fall-out}=\frac{|\{\mbox{non-relevant documents}\}\cap\{\mbox{retrieved documents}\}|}{|\{\mbox{non-relevant documents}\}|}

In binary classification, fall-out is the complement of specificity and is equal to (1-\mbox{specificity}). It can be looked at as the probability that a non-relevant document is retrieved by the query. It is trivial to achieve fall-out of 0% by returning zero documents in response to any query.

F-score / F-measure
The weighted harmonic mean of precision and recall, the traditional F-measure or balanced F-score, is:

:F = \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}

This is also known as the F_1 measure, because recall and precision are evenly weighted. The general formula for non-negative real \beta is:

:F_\beta = \frac{(1 + \beta^2) \cdot (\mathrm{precision} \cdot \mathrm{recall})}{\beta^2 \cdot \mathrm{precision} + \mathrm{recall}}

Two other commonly used F measures are the F_{2} measure, which weights recall twice as much as precision, and the F_{0.5} measure, which weights precision twice as much as recall. The F-measure was derived by van Rijsbergen (1979) so that F_\beta "measures the effectiveness of retrieval with respect to a user who attaches \beta times as much importance to recall as precision". It is based on van Rijsbergen's effectiveness measure

:E = 1 - \frac{1}{\frac{\alpha}{P} + \frac{1-\alpha}{R}}

Their relationship is F_\beta = 1 - E, where \alpha=\frac{1}{1 + \beta^2}. Since the F-measure combines information from both precision and recall, it is a way to represent overall performance with a single number rather than two.
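A minimal sketch of these set-based measures follows, assuming documents are identified by arbitrary string IDs and that relevance judgments are available as a set; the toy data is purely illustrative.

```python
def precision(retrieved, relevant):
    """Fraction of retrieved documents that are relevant."""
    return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0

def recall(retrieved, relevant):
    """Fraction of relevant documents that were retrieved."""
    return len(retrieved & relevant) / len(relevant) if relevant else 0.0

def fallout(retrieved, relevant, collection):
    """Fraction of non-relevant documents that were retrieved."""
    non_relevant = collection - relevant
    return len(retrieved & non_relevant) / len(non_relevant) if non_relevant else 0.0

def f_beta(p, r, beta=1.0):
    """Weighted harmonic mean of precision and recall (F1 when beta=1)."""
    if p == 0 and r == 0:
        return 0.0
    return (1 + beta**2) * p * r / (beta**2 * p + r)

# Toy judgment data: document IDs are arbitrary labels for illustration.
collection = {f"d{i}" for i in range(1, 21)}
relevant   = {"d1", "d2", "d3", "d4", "d5"}
retrieved  = {"d1", "d2", "d6", "d7"}

p, r = precision(retrieved, relevant), recall(retrieved, relevant)
print(p, r)                                      # 0.5, 0.4
print(fallout(retrieved, relevant, collection))  # 2/15, about 0.133
print(f_beta(p, r))                              # F1, about 0.444
print(f_beta(p, r, beta=2))                      # F2 weights recall more
```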
Average precision
Precision and recall are single-value metrics based on the whole list of documents returned by the system. For systems that return a ranked sequence of documents, it is desirable to also consider the order in which the returned documents are presented. By computing precision and recall at every position in the ranked sequence of documents, one can plot a precision-recall curve, plotting precision p(r) as a function of recall r. Average precision computes the average value of p(r) over the interval from r=0 to r=1:

:\operatorname{AveP} = \int_0^1 p(r)\,dr

That is, the area under the precision-recall curve. In practice this integral is replaced with a finite sum over every position in the ranked sequence of documents:

:\operatorname{AveP} = \sum_{k=1}^n P(k) \Delta r(k)

where k is the rank in the sequence of retrieved documents, n is the number of retrieved documents, P(k) is the precision at cut-off k in the list, and \Delta r(k) is the change in recall from item k-1 to k. Note that the average is taken over the relevant documents among the top-k retrieved documents, and relevant documents that are not retrieved receive a precision score of zero.

Some authors choose to interpolate the p(r) function to reduce the impact of "wiggles" in the curve. For example, the PASCAL Visual Object Classes challenge (a benchmark for computer vision object detection) until 2010 computed the average precision by averaging the precision over a set of eleven evenly spaced recall levels {0, 0.1, 0.2, ..., 1.0}:

:\operatorname{AveP} = \frac{1}{11} \sum_{r \in \{0, 0.1, \ldots, 1.0\}} p_{\operatorname{interp}}(r)

where p_{\operatorname{interp}}(r) = \max_{\tilde{r} \ge r} p(\tilde{r}) is the precision interpolated to the maximum precision attained at any recall level at or above r.

The minimum achievable AveP for a given classification task is given by:

:\frac{1}{n_{pos}}\sum_{k=1}^{n_{pos}}\frac{k}{k+n_{neg}}

where n_{pos} and n_{neg} are the numbers of relevant and non-relevant documents, respectively.

Precision at k
For modern (web-scale) information retrieval, recall is no longer a meaningful metric, as many queries have thousands of relevant documents, and few users will be interested in reading all of them. Precision at k documents (P@k) is still a useful metric (e.g., P@10 or "Precision at 10" corresponds to the number of relevant results among the top 10 retrieved documents), but it fails to take into account the positions of the relevant documents among the top k. Another shortcoming is that on a query with fewer relevant results than k, even a perfect system will have a score less than 1. Empirically, this measure is often highly correlated with mean average precision.

Other measures
• GMAP - geometric mean of (per-topic) average precision
• Hit rate

Visualization
Visualizations of information retrieval performance include:
• Graphs which chart precision on one axis and recall on the other
• Histograms of average precision over various topics
• Receiver operating characteristic (ROC) curve
• Confusion matrix
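A minimal sketch of the ranked measures described above (average precision as a finite sum, and P@k) follows; the input format, a list of per-rank relevance flags for a single query, is an assumption made for illustration.

```python
def precision_at_k(ranking, k):
    """P@k: fraction of the top-k retrieved documents that are relevant.

    `ranking` is a list of booleans in rank order, True where the judges
    marked the retrieved document relevant (assumed input format).
    """
    return sum(ranking[:k]) / k

def average_precision(ranking, total_relevant):
    """Average of P@k over the ranks k at which a relevant document appears.

    Relevant documents that were never retrieved contribute zero, so the
    sum is divided by the total number of relevant documents in the
    collection, not just those that were retrieved.
    """
    hits, score = 0, 0.0
    for k, is_relevant in enumerate(ranking, start=1):
        if is_relevant:
            hits += 1
            score += hits / k          # precision at this cut-off
    return score / total_relevant if total_relevant else 0.0

# Ranked results for one query: relevant documents at ranks 1, 3 and 6,
# with 5 relevant documents known to exist in the collection.
ranking = [True, False, True, False, False, True, False]
print(precision_at_k(ranking, 5))     # 2/5 = 0.4
print(average_precision(ranking, 5))  # (1/1 + 2/3 + 3/6) / 5, about 0.433
```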
Non-relevance measures
Queries per time
Measuring how many queries are performed on the search system per unit of time (month, day, hour, minute or second) tracks the utilization of the search system. It can be used for diagnostics, to indicate an unexpected spike in queries, or simply as a baseline when comparing with other metrics, like query latency. For example, a spike in query traffic may be used to explain a spike in query latency.
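A minimal sketch of bucketing logged query timestamps per minute follows; the ISO-8601 log format and the toy data are assumptions for illustration, as real search logs differ per system.

```python
from collections import Counter
from datetime import datetime

def queries_per_minute(timestamps):
    """Count queries per minute from a list of ISO-8601 query timestamps."""
    return Counter(
        datetime.fromisoformat(ts).strftime("%Y-%m-%d %H:%M")
        for ts in timestamps
    )

log = [
    "2024-05-01T12:00:03",
    "2024-05-01T12:00:41",
    "2024-05-01T12:01:10",
]
print(queries_per_minute(log))  # 2 queries in the 12:00 minute, 1 in 12:01
```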