==Inference engines==
The first AI systems provided automated logical inference; these were once extremely popular research topics, leading to industrial applications in the form of
expert systems and later
business rule engines. More recent work on
automated theorem proving has had a stronger basis in formal logic. An inference system's job is to extend a knowledge base automatically. The
knowledge base (KB) is a set of propositions that represent what the system knows about the world. Several techniques can be used by such a system to extend the KB by means of valid inferences. An additional requirement is that the conclusions the system arrives at are
relevant to its task. The term 'inference' has also been applied to the process of generating predictions from trained
neural networks. In this context, an 'inference engine' refers to the system or hardware performing these operations. This type of inference is widely used in applications ranging from
image recognition to
natural language processing.
==Prolog engine==
Prolog (for "Programming in Logic") is a
programming language based on a
subset of
predicate calculus. Its main job is to check whether a certain proposition can be inferred from a KB (knowledge base) using an algorithm called
backward chaining. Let us return to our
Socrates syllogism. We enter into our knowledge base the following piece of code: mortal(X) :- man(X). man(socrates). (Here
:- can be read as "if". Generally, if P → Q (if P then Q), then in Prolog we would code
Q:-P (Q if P).) This states that all men are mortal and that Socrates is a man. Now we can ask the Prolog system about Socrates: ?- mortal(socrates). (where
?- signifies a query: Can
mortal(socrates). be deduced from the KB using the rules) gives the answer "Yes". On the other hand, asking the Prolog system the following: ?- mortal(plato). gives the answer "No". This is because
Prolog does not know anything about
Plato, and hence defaults to any property about Plato being false (the so-called
closed world assumption). Finally, ?- mortal(X) (is anything mortal?) would result in "Yes" (and in some implementations: "Yes": X=socrates).
Prolog can be used for vastly more complicated inference tasks. See the corresponding article for further examples.
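The backward-chaining procedure sketched above can be illustrated in Python. This is a minimal, illustrative interpreter, not how a real Prolog engine is implemented: the string-based fact representation, the single variable X, and the function name prove are all invented for this example.

```python
# Minimal backward-chaining sketch over ground facts and one-premise rules.
# A fact is a string like "man(socrates)"; a rule pairs a conclusion
# pattern with a premise pattern, using "X" as the only variable.
facts = {"man(socrates)"}
rules = [("mortal(X)", "man(X)")]  # mortal(X) :- man(X).

def prove(goal):
    """Return True if `goal` follows from the KB; False otherwise
    (the closed-world assumption: what cannot be derived is false)."""
    if goal in facts:
        return True
    for head, body in rules:
        # Unify the goal with the rule head, e.g. mortal(socrates)
        # against mortal(X), binding X := socrates.
        pred, _, _ = head.partition("(")
        if goal.startswith(pred + "(") and goal.endswith(")"):
            binding = goal[len(pred) + 1:-1]
            if prove(body.replace("X", binding)):
                return True
    return False

print(prove("mortal(socrates)"))  # True  (?- mortal(socrates). -> Yes)
print(prove("mortal(plato)"))     # False (?- mortal(plato).    -> No)
```

Note how mortal(plato) comes out False not because the KB asserts Plato is immortal, but because nothing about Plato can be derived — exactly the closed-world behaviour described above.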
==Semantic web==
Recently, automated reasoners have found a new field of application in the semantic web. Being based upon
description logic, knowledge expressed using one variant of
OWL can be logically processed, i.e., inferences can be made upon it.
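The kind of inference such a reasoner performs can be shown with a toy example. The ontology below is invented, and real OWL reasoners handle far richer constructs than this; the sketch only propagates class membership through subclass links, one of the simplest description-logic inferences.

```python
# Toy RDFS/OWL-style inference: propagate rdf:type through rdfs:subClassOf.
subclass_of = {          # invented toy ontology
    "Dog": "Mammal",
    "Mammal": "Animal",
}
types = {"rex": {"Dog"}}  # asserted fact: rex is a Dog

# Inference rule: if x has type C and C is a subclass of D,
# then x also has type D. Iterate to a fixed point.
changed = True
while changed:
    changed = False
    for individual, classes in types.items():
        for c in list(classes):
            parent = subclass_of.get(c)
            if parent and parent not in classes:
                classes.add(parent)
                changed = True

print(sorted(types["rex"]))  # ['Animal', 'Dog', 'Mammal']
```

The asserted knowledge says only that rex is a Dog; the inferred triples (rex is a Mammal, rex is an Animal) were never stated explicitly.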
==Bayesian statistics and probability logic==
Philosophers and scientists who follow the
Bayesian framework for inference use the mathematical rules of
probability to find the best explanation. The Bayesian view has a number of desirable features—one of them is that it embeds deductive (certain) logic as a subset (this prompts some writers to call Bayesian probability "probability logic", following
E. T. Jaynes). Bayesians identify probabilities with degrees of belief, with certainly true propositions having probability 1, and certainly false propositions having probability 0. To say that "it's going to rain tomorrow" has a 0.9 probability is to say that you consider the possibility of rain tomorrow as extremely likely. Through the rules of probability, the probability of a conclusion and of alternatives can be calculated. The best explanation is most often identified with the most probable (see
Bayesian decision theory). A central rule of Bayesian inference is
Bayes' theorem.
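A concrete update with Bayes' theorem, P(H|E) = P(E|H)·P(H) / P(E), looks like this (the numbers here are invented for illustration):

```python
# Updating a degree of belief with Bayes' theorem.
# Hypothetical numbers: prior belief that it will rain tomorrow,
# and how likely a falling barometer reading is under each hypothesis.
p_rain = 0.3             # prior P(rain)
p_drop_given_rain = 0.9  # likelihood P(barometer drops | rain)
p_drop_given_dry = 0.2   # likelihood P(barometer drops | no rain)

# Total probability of the evidence: P(drop), summed over hypotheses.
p_drop = p_drop_given_rain * p_rain + p_drop_given_dry * (1 - p_rain)

# Posterior degree of belief P(rain | drop) by Bayes' theorem.
posterior = p_drop_given_rain * p_rain / p_drop
print(round(posterior, 3))  # 0.659
```

Observing the evidence raises the degree of belief in rain from 0.3 to about 0.66 — the same propositions, but a revised probability assignment, which is exactly what Bayesian inference amounts to.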
==Fuzzy logic==

==Non-monotonic logic==
A relation of inference is
monotonic if the addition of premises does not undermine previously reached conclusions; otherwise the relation is
non-monotonic. Deductive inference is monotonic: if a conclusion is reached on the basis of a certain set of premises, then that conclusion still holds if more premises are added. By contrast, everyday reasoning is mostly non-monotonic because it involves risk: we jump to conclusions from deductively insufficient premises. We know when it is worthwhile or even necessary (e.g. in medical diagnosis) to take the risk. Yet we are also aware that such inference is defeasible—that new information may undermine old conclusions. Various kinds of defeasible but remarkably successful inference have traditionally captured the attention of philosophers (theories of induction, Peirce's theory of
abduction, inference to the best explanation, etc.). More recently, logicians have begun to approach the phenomenon from a formal point of view. The result is a large body of theories at the interface of philosophy, logic and artificial intelligence.

==See also==