Question answering is dependent on a good search corpus; without documents containing the answer, there is little any question answering system can do. Larger collections generally mean better question answering performance, unless the question domain is orthogonal to the collection. Data redundancy in massive collections, such as the web, means that nuggets of information are likely to be phrased in many different ways in differing contexts and documents, leading to two benefits:
• If the right information appears in many forms, the question answering system needs to perform fewer complex NLP techniques to understand the text.
• Correct answers can be filtered from false positives, because the system can rely on versions of the correct answer appearing more often in the corpus than incorrect ones (a sketch of this idea follows below).
Some question answering systems rely heavily on automated reasoning.
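The redundancy-based filtering described in the list above can be as simple as counting how often each candidate answer recurs across retrieved documents. The following is a minimal sketch of that idea; the candidate strings and their normalisation are illustrative and assumed to have been extracted by earlier pipeline stages.

<syntaxhighlight lang="python">
from collections import Counter

def vote_on_candidates(candidates):
    """Rank candidate answers by how often they recur across retrieved
    documents: redundancy makes the correct answer surface more often."""
    counts = Counter(c.strip().lower() for c in candidates)
    return counts.most_common()

# Candidates as they might be extracted from many web snippets (illustrative).
extracted = ["1 October", "October 1", "1 october", "4 July", "1 October"]
print(vote_on_candidates(extracted))
# [('1 october', 3), ('october 1', 1), ('4 july', 1)]
</syntaxhighlight>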
== Open-domain ==
In information retrieval, an open-domain question answering system tries to return an answer in response to the user's question. The returned answer is in the form of short texts rather than a list of relevant documents. The system finds answers by using a combination of techniques from computational linguistics, information retrieval, and knowledge representation. The system takes a natural language question as input rather than a set of keywords, for example: "When is the national day of China?" It then transforms this input sentence into a query in its logical form.
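The logical form is system-specific; purely as an illustration, the running example might be represented as a structured query with a typed answer variable. The predicate and field names below are hypothetical and not taken from any particular system.

<syntaxhighlight lang="python">
# Hypothetical logical form for "When is the national day of China?".
logical_form = {
    "predicate": "national_day",          # relation the question asks about
    "arguments": {"country": "China"},    # known entities from the question
    "answer_variable": "?date",           # unknown to be filled in
    "answer_type": "Date",                # expected type of the answer
}
</syntaxhighlight>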
Accepting natural language questions makes the system more user-friendly, but harder to implement, as there are a variety of question types and the system has to identify the correct one in order to give a sensible answer. Assigning a question type to the question is a crucial task; the entire answer extraction process relies on finding the correct question type and hence the correct answer type. Keyword extraction is the first step in identifying the input question type. In some cases, words clearly indicate the question type, e.g., "Who", "Where", "When", or "How many"; these words suggest to the system that the answers should be of type "Person", "Location", "Date", or "Number", respectively.
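A minimal, rule-based sketch of this keyword-to-type mapping follows; the word lists and type labels are illustrative, and real systems use far richer question taxonomies.

<syntaxhighlight lang="python">
# Map interrogative words to expected answer types (illustrative rules only).
QUESTION_WORD_TO_TYPE = {
    "who": "Person",
    "where": "Location",
    "when": "Date",
    "how many": "Number",
}

def classify_question(question):
    """Return the expected answer type based on the interrogative word."""
    q = question.lower()
    for cue, answer_type in QUESTION_WORD_TO_TYPE.items():
        if q.startswith(cue) or f" {cue} " in q:
            return answer_type
    return "Unknown"   # "Which"/"What"/"How" need further analysis

print(classify_question("When is the national day of China?"))   # Date
print(classify_question("Who wrote Hamlet?"))                    # Person
</syntaxhighlight>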
POS (part-of-speech) tagging and syntactic parsing techniques can also determine the answer type. In the example above, the subject is "Chinese National Day", the predicate is "is", and the adverbial modifier is "when"; therefore, the answer type is "Date". Unfortunately, some interrogative words like "Which", "What", or "How" do not correspond to unambiguous answer types: each can represent more than one type. In situations like this, other words in the question need to be considered. A lexical dictionary such as WordNet can be used for understanding the context.
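For example, the focus noun after an ambiguous "Which" or "What" can be looked up in WordNet to guess the answer type. This is a minimal sketch assuming NLTK with its tokenizer, tagger, and WordNet data installed; the mapping from WordNet lexicographer categories to answer types is illustrative.

<syntaxhighlight lang="python">
import nltk
from nltk.corpus import wordnet as wn

# Assumes the NLTK tokenizer, POS-tagger and WordNet data have been downloaded.
# Illustrative mapping from WordNet lexicographer categories to answer types.
LEXNAME_TO_TYPE = {
    "noun.person": "Person",
    "noun.location": "Location",
    "noun.time": "Date",
    "noun.quantity": "Number",
}

def answer_type_from_focus(question):
    """Guess the answer type of an ambiguous "Which"/"What" question from the
    first noun after the interrogative word (the question focus)."""
    tagged = nltk.pos_tag(nltk.word_tokenize(question))
    for word, tag in tagged[1:]:          # skip the interrogative word itself
        if tag.startswith("NN"):          # first noun is taken as the focus
            for synset in wn.synsets(word, pos=wn.NOUN):
                answer_type = LEXNAME_TO_TYPE.get(synset.lexname())
                if answer_type:
                    return answer_type
            break
    return "Unknown"

print(answer_type_from_focus("Which city hosted the 2008 Olympics?"))   # Location
</syntaxhighlight>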
Once the system identifies the question type, it uses an information retrieval system to find a set of documents that contain the correct keywords.
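The retrieval step can range from simple keyword matching to a full search engine. Below is a toy sketch of keyword-based document retrieval; the documents and scoring are illustrative, and production systems use inverted indexes and ranking functions such as BM25.

<syntaxhighlight lang="python">
def retrieve(documents, keywords, top_n=2):
    """Rank documents by how many of the question keywords they contain."""
    keywords = {k.lower() for k in keywords}
    scored = []
    for doc in documents:
        tokens = set(doc.lower().split())
        scored.append((len(keywords & tokens), doc))
    scored.sort(reverse=True)
    return [doc for score, doc in scored[:top_n] if score > 0]

docs = [
    "The national day of China is celebrated on 1 October.",
    "Beijing is the capital of China.",
    "The national day of France is 14 July.",
]
print(retrieve(docs, ["national", "day", "China"]))
</syntaxhighlight>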
A tagger and NP/Verb Group chunker can verify whether the correct entities and relations are mentioned in the found documents. For questions such as "Who" or "Where", a named-entity recogniser finds relevant "Person" and "Location" names in the retrieved documents.
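A short sketch of this step using spaCy follows, assuming the en_core_web_sm model is installed (any NER tool would do); the DATE and GPE entities it extracts become candidate answers for "When" and "Where" questions.

<syntaxhighlight lang="python">
import spacy

# Assumes the small English pipeline has been downloaded:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("The national day of China is celebrated on 1 October every year.")
for ent in doc.ents:
    print(ent.text, ent.label_)   # e.g. "China" GPE, "1 October" DATE
</syntaxhighlight>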
A vector space model can classify the candidate answers. The system then checks whether each candidate is of the correct type, as determined in the question-type analysis stage, and an inference technique can validate the candidates. A score is then given to each candidate according to the number of question words it contains and how close these words are to the candidate: the more and the closer, the better. The answer is then translated by parsing into a compact and meaningful representation. In the previous example, the expected output answer is "1st Oct."
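A minimal sketch of the vector-space step is shown below, ranking candidate passages against the question with TF-IDF cosine similarity; scikit-learn, the passages, and the choice of model are illustrative rather than prescribed by any particular system.

<syntaxhighlight lang="python">
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

question = "When is the national day of China?"
candidates = [
    "China's National Day is celebrated on 1 October each year.",
    "The Great Wall of China attracts millions of visitors.",
]

# Vector space model: represent the question and each candidate passage as
# TF-IDF vectors and rank the candidates by cosine similarity to the question.
matrix = TfidfVectorizer().fit_transform([question] + candidates)
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()

for score, passage in sorted(zip(scores, candidates), reverse=True):
    print(f"{score:.2f}  {passage}")
</syntaxhighlight>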
== Mathematics ==
An open-source, math-aware question answering system called MathQA, based on Ask Platypus and Wikidata, was published in 2018. MathQA takes an English or Hindi natural language question as input and returns a mathematical formula retrieved from Wikidata as a succinct answer, translated into a computable form that allows the user to insert values for the variables. Names and values of variables and common constants are retrieved from Wikidata if they are available. It is claimed that the system outperforms a commercial computational mathematical knowledge engine on a test set.
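What such a "computable form" means in practice can be sketched as follows. This is not MathQA's own code; it is an illustration using SymPy, with the mass-energy formula standing in for a formula retrieved from Wikidata.

<syntaxhighlight lang="python">
import sympy as sp

# Stand-in for a formula retrieved from Wikidata (mass-energy equivalence).
E, m, c = sp.symbols("E m c")
formula = sp.Eq(E, m * c**2)

# Computable form: the user supplies values for the variables.
energy = formula.rhs.subs({m: 1.0, c: 299792458})
print(energy)   # energy in joules for a mass of 1 kg
</syntaxhighlight>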
MathQA methods need to combine natural and formula language. One possible approach is to perform supervised annotation via entity linking.

The "ARQMath Task" at CLEF 2020 was launched to address the problem of linking newly posted questions from the platform Math Stack Exchange to existing ones that were already answered by the community. Providing hyperlinks to already answered, semantically related questions helps users get answers earlier, but is a challenging problem because semantic relatedness is not trivial. The lab was motivated by the fact that 20% of mathematical queries in general-purpose search engines are expressed as well-formed questions. The challenge contained two separate sub-tasks: Task 1, "Answer retrieval", matched old post answers to newly posed questions, and Task 2, "Formula retrieval", matched old post formulae to new questions. Starting with the domain of mathematics, which involves formula language, the goal is to later extend the task to other domains (e.g., STEM disciplines such as chemistry or biology) that employ other types of special notation (e.g., chemical formulae). The formulae retrieved in this way can then be rearranged to generate a set of formula variants; subsequently, the variables are substituted with random values to generate a large number of different questions suitable for individual student tests.

== Applications ==