Logic programs enjoy a rich variety of semantics and problem solving methods, as well as a wide range of applications in programming, databases, knowledge representation and problem solving.

===Algorithm = Logic + Control===

The procedural interpretation of logic programs, which uses backward reasoning to reduce goals to subgoals, is a special case of the use of a problem-solving strategy to
control the use of a declarative,
logical representation of knowledge to obtain the behaviour of an
algorithm. More generally, different problem-solving strategies can be applied to the same logical representation to obtain different algorithms. Alternatively, different algorithms can be obtained with a given problem-solving strategy by using different logical representations. The two main problem-solving strategies are
backward reasoning (goal reduction) and
forward reasoning, also known as top-down and bottom-up reasoning, respectively. In the simple case of a propositional Horn clause program and a top-level atomic goal, backward reasoning determines an
and-or tree, which constitutes the search space for solving the goal. The top-level goal is the root of the tree. Given any node in the tree and any clause whose head matches the node, there exists a set of child nodes corresponding to the sub-goals in the body of the clause. These child nodes are grouped together by an "and". The alternative sets of children corresponding to alternative ways of solving the node are grouped together by an "or". Any search strategy can be used to search this space. Prolog uses a sequential, last-in-first-out, backtracking strategy, in which only one alternative and one sub-goal are considered at a time. However, other strategies are also possible: for example, subgoals can be solved in parallel, and clauses can also be tried in parallel. The first strategy is called and-parallelism, and the second strategy is called or-parallelism. Other search strategies, such as intelligent backtracking, or best-first search to find an optimal solution, are also possible. In the more general, non-propositional case, where sub-goals can share variables, other strategies can be used, such as choosing the subgoal that is most highly instantiated or that is sufficiently instantiated so that only one procedure applies. Such strategies are used, for example, in
concurrent logic programming. In most cases, backward reasoning from a query or goal is more efficient than forward reasoning. But with Datalog and Answer Set Programming, there may be no query that is separate from the set of clauses as a whole, and then generating all the facts that can be derived from the clauses is a sensible problem-solving strategy. Forward reasoning can also beat backward reasoning in more conventional computational tasks, as in the following example, where the goal ?- fibonacci(n, Result) is to find the nth Fibonacci number:

 fibonacci(0, 0).
 fibonacci(1, 1).
 fibonacci(N, Result) :-
     N > 1,
     N1 is N - 1, N2 is N - 2,
     fibonacci(N1, F1), fibonacci(N2, F2),
     Result is F1 + F2.

Here the relation fibonacci(N, M) stands for the function fibonacci(N) = M, and the predicate N is Expression is Prolog notation for instantiating the variable N to the value of Expression. Given the goal of computing the Fibonacci number of n, backward reasoning reduces the goal to the two subgoals of computing the Fibonacci numbers of n-1 and n-2. It reduces the subgoal of computing the Fibonacci number of n-1 to the two subgoals of computing the Fibonacci numbers of n-2 and n-3, redundantly computing the Fibonacci number of n-2. This process of reducing one Fibonacci subgoal to two Fibonacci subgoals continues until it reaches the numbers 0 and 1. Its complexity is of the order 2^n. In contrast, forward reasoning generates the sequence of Fibonacci numbers, starting from 0 and 1, without any recomputation, and its complexity is linear in n. Prolog cannot perform forward reasoning directly. But it can achieve the effect of forward reasoning within the context of backward reasoning by means of
tabling: Subgoals are maintained in a table, along with their solutions. If a subgoal is re-encountered, it is solved directly by using the solutions already in the table, instead of re-solving the subgoals redundantly.
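The effect of tabling on the fibonacci program can be sketched in Python (an illustrative analogue, not Prolog's actual tabling machinery): naive recursion mirrors plain backward reasoning, while a table of solved subgoals avoids the redundant recomputation.

```python
def fib_naive(n):
    # Mirrors plain backward reasoning: the recursive clause spawns two
    # subgoals, which are re-solved redundantly. Runs in O(2^n) time.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_tabled(n, table=None):
    # Subgoals are kept in a table with their solutions; a re-encountered
    # subgoal is answered from the table instead of being re-solved,
    # giving linear time in n.
    if table is None:
        table = {0: 0, 1: 1}
    if n not in table:
        table[n] = fib_tabled(n - 1, table) + fib_tabled(n - 2, table)
    return table[n]
```

Both functions agree on every input, but only the tabled version remains practical for large n, just as the tabled Prolog program does.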
===Relationship with functional programming===

Logic programming can be viewed as a generalisation of functional programming, in which functions are a special case of relations. For example, the function mother(X) = Y (every X has exactly one mother Y) can be represented by the relation mother(X, Y). In this respect, logic programs are similar to
relational databases, which also represent functions as relations. Compared with relational syntax, functional syntax is more compact for nested functions. For example, in functional syntax the definition of maternal grandmother can be written in the nested form:

 maternal_grandmother(X) = mother(mother(X)).

The same definition in relational notation needs to be written in the unnested, flattened form:

 maternal_grandmother(X, Y) :- mother(X, Z), mother(Z, Y).

However, nested syntax can be regarded as syntactic sugar for unnested syntax.
Ciao Prolog, for example, transforms functional syntax into relational form and executes the resulting logic program using the standard Prolog execution strategy. Moreover, the same transformation can be used to execute nested relations that are not functional. For example:

 grandparent(X) := parent(parent(X)).
 parent(X) := mother(X).
 parent(X) := father(X).

 mother(charles) := elizabeth.
 father(charles) := phillip.
 mother(harry) := diana.
 father(harry) := charles.

 ?- grandparent(X, Y).
 X = harry, Y = elizabeth.
 X = harry, Y = phillip.
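The relational reading of this family example can be sketched in Python (an illustration of the flattening idea, not Ciao Prolog's actual transformation): functions such as mother(X) become binary relations of (child, mother) pairs, and the nested call parent(parent(X)) becomes a join over a shared intermediate variable.

```python
# Relations as sets of (child, value) pairs, mirroring mother(X, Y).
mother = {("charles", "elizabeth"), ("harry", "diana")}
father = {("charles", "phillip"), ("harry", "charles")}

# parent(X) := mother(X).  parent(X) := father(X).
# Two clauses for the same predicate become a union of relations.
parent = mother | father

# grandparent(X) := parent(parent(X)), flattened into the unnested form
# grandparent(X, Y) :- parent(X, Z), parent(Z, Y): a join on Z.
grandparent = {(x, y)
               for (x, z1) in parent
               for (z2, y) in parent
               if z1 == z2}
```

Enumerating the `grandparent` relation yields the same pairs as the Prolog query ?- grandparent(X, Y) above.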
===Relationship with relational programming===

The term
relational programming has been used to cover a variety of programming languages that treat functions as a special case of relations. Some of these languages, such as
miniKanren, are logic programming languages in the sense of this article. However, the relational language RML is an imperative programming language whose core construct is a relational expression, which is similar to an expression in first-order predicate logic. Other relational programming languages are based on the relational calculus or relational algebra.
===Semantics of Horn clause programs===

Viewed in purely logical terms, there are two approaches to the declarative semantics of Horn clause logic programs: One approach is the original
logical consequence semantics, which understands solving a goal as showing that the goal is a theorem that is true in all
models of the program. In this approach, computation is
theorem-proving in
first-order logic; and both
backward reasoning, as in SLD resolution, and
forward reasoning, as in hyper-resolution, are correct and complete theorem-proving methods. Sometimes such theorem-proving methods are also regarded as providing a separate
proof-theoretic (or operational) semantics for logic programs. But from a logical point of view, they are proof methods, rather than semantics. The other approach to the declarative semantics of Horn clause programs is the
satisfiability semantics, which understands solving a goal as showing that the goal is true (or satisfied) in some
intended (or standard) model of the program. For Horn clause programs, there always exists such a standard model: It is the unique
minimal model of the program. Informally speaking, a minimal model is a model that, when it is viewed as the set of all (variable-free) facts that are true in the model, contains no smaller set of facts that is also a model of the program. For example, the following facts represent the minimal model of the family relationships example in the introduction of this article. All other variable-free facts are false in the model:

 mother_child(elizabeth, charles).
 father_child(charles, william).
 father_child(charles, harry).
 parent_child(elizabeth, charles).
 parent_child(charles, william).
 parent_child(charles, harry).
 grandparent_child(elizabeth, william).
 grandparent_child(elizabeth, harry).

The satisfiability semantics also has an alternative, more mathematical characterisation as the
least fixed point of the function that uses the rules in the program to derive new facts from existing facts in one step of inference. Remarkably, the same problem-solving methods of forward and backward reasoning, which were originally developed for the logical consequence semantics, are equally applicable to the satisfiability semantics: Forward reasoning generates the minimal model of a Horn clause program, by deriving new facts from existing facts, until no new additional facts can be generated. Backward reasoning, which succeeds by reducing a goal to subgoals, until all subgoals are solved by facts, ensures that the goal is true in the minimal model, without generating the model explicitly. The difference between the two declarative semantics can be seen with the definitions of addition and multiplication in
successor arithmetic, which represents the natural numbers 0, 1, 2, ... as a sequence of terms of the form 0, s(0), s(s(0)), .... In general, the term s(X) represents the successor of X, namely X + 1. Here are the standard definitions of addition and multiplication in functional notation:

 X + 0 = X.
 X + s(Y) = s(X + Y).    i.e. X + (Y + 1) = (X + Y) + 1.
 X × 0 = 0.
 X × s(Y) = X + (X × Y). i.e. X × (Y + 1) = X + (X × Y).

Here are the same definitions as a logic program, using add(X, Y, Z) to represent X + Y = Z, and multiply(X, Y, Z) to represent X × Y = Z:

 add(X, 0, X).
 add(X, s(Y), s(Z)) :- add(X, Y, Z).

 multiply(X, 0, 0).
 multiply(X, s(Y), W) :- multiply(X, Y, Z), add(X, Z, W).

The two declarative semantics both give the same answers for the same existentially quantified conjunctions of addition and multiplication goals. For example, 2 × 2 = X has the solution X = 4; and X × X = X + X has two solutions, X = 0 and X = 2:

 ?- multiply(s(s(0)), s(s(0)), X).
 X = s(s(s(s(0)))).

 ?- multiply(X, X, Y), add(X, X, Y).
 X = 0, Y = 0.
 X = s(s(0)), Y = s(s(s(s(0)))).

However, with the logical-consequence semantics, there are non-standard models of the program in which, for example, add(s(s(0)), s(s(0)), s(s(s(s(s(0)))))), i.e. 2 + 2 = 5, is true. But with the satisfiability semantics, there is only one model, namely the standard model of arithmetic, in which 2 + 2 = 5 is false. In both semantics, the goal ?- add(s(s(0)), s(s(0)), s(s(s(s(s(0)))))) fails. In the satisfiability semantics, the failure of the goal means that the truth value of the goal is false. But in the logical consequence semantics, the failure means that the truth value of the goal is unknown.
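The successor-arithmetic program can be transcribed directly into Python (an illustrative sketch, not a Prolog engine), with numerals represented as nested ("s", ...) terms: 0, ("s", 0), ("s", ("s", 0)), and so on. Each function clause of the logic program becomes one branch of a recursive definition.

```python
def s(x):
    # The successor term s(X).
    return ("s", x)

def num(n):
    # Build the successor-term numeral for a non-negative integer n.
    return 0 if n == 0 else s(num(n - 1))

def add(x, y):
    # add(X, 0, X).  add(X, s(Y), s(Z)) :- add(X, Y, Z).
    return x if y == 0 else s(add(x, y[1]))

def multiply(x, y):
    # multiply(X, 0, 0).  multiply(X, s(Y), W) :- multiply(X, Y, Z), add(X, Z, W).
    return 0 if y == 0 else add(x, multiply(x, y[1]))
```

For example, multiply(num(2), num(2)) evaluates to num(4), matching the query ?- multiply(s(s(0)), s(s(0)), X) above; and add(num(2), num(2)) is num(4), never num(5), reflecting the standard model in which 2 + 2 = 5 is false.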
===Negation as failure===

Negation as failure (NAF), as a way of concluding that a negative condition not p holds by showing that the positive condition p fails to hold, was already a feature of early Prolog systems. The resulting extension of
SLD resolution is called
SLDNF. A similar construct, called "thnot", also existed in
Micro-Planner. The logical semantics of NAF was unresolved until
Keith Clark showed that, under certain natural conditions, NAF is an efficient, correct (and sometimes complete) way of reasoning with the logical consequence semantics using the
completion of a logic program in first-order logic. Completion amounts roughly to regarding the set of all the program clauses with the same predicate in the head, say:

 A :- Body1.
 ...
 A :- Bodyk.

as a definition of the predicate:

 A iff (Body1 or ... or Bodyk)

where iff means "if and only if". The completion also includes axioms of equality, which correspond to
unification. Clark showed that proofs generated by SLDNF are structurally similar to proofs generated by a natural deduction style of reasoning with the completion of the program. Consider, for example, the following program:

 should_receive_sanction(X, punishment) :-
     is_a_thief(X),
     not should_receive_sanction(X, rehabilitation).

 should_receive_sanction(X, rehabilitation) :-
     is_a_thief(X),
     is_a_minor(X),
     not is_violent(X).

 is_a_thief(tom).

Given the goal of determining whether tom should receive a sanction, the first rule succeeds in showing that tom should be punished:

 ?- should_receive_sanction(tom, Sanction).
 Sanction = punishment.

This is because tom is a thief, and it cannot be shown that tom should be rehabilitated. It cannot be shown that tom should be rehabilitated, because it cannot be shown that tom is a minor. If, however, we receive new information that tom is indeed a minor, the previous conclusion that tom should be punished is replaced by the new conclusion that tom should be rehabilitated:

 is_a_minor(tom).

 ?- should_receive_sanction(tom, Sanction).
 Sanction = rehabilitation.

This property of withdrawing a conclusion when new information is added is called non-monotonicity, and it makes logic programming a
non-monotonic logic. But if we are now told that tom is violent, the conclusion that tom should be punished will be reinstated:

 is_violent(tom).

 ?- should_receive_sanction(tom, Sanction).
 Sanction = punishment.

The completion of this program is:

 should_receive_sanction(X, Sanction) iff
     Sanction = punishment, is_a_thief(X),
     not should_receive_sanction(X, rehabilitation)
     or
     Sanction = rehabilitation, is_a_thief(X),
     is_a_minor(X), not is_violent(X).

 is_a_thief(X) iff X = tom.
 is_a_minor(X) iff X = tom.
 is_violent(X) iff X = tom.

The notion of completion is closely related to
John McCarthy's circumscription semantics for default reasoning, and to
Ray Reiter's closed world assumption. The completion semantics for negation is a logical consequence semantics, for which SLDNF provides a proof-theoretic implementation. However, in the 1980s, the satisfiability semantics became more popular for logic programs with negation. In the satisfiability semantics, negation is interpreted according to the classical definition of truth in an intended or standard model of the logic program. In the case of logic programs with negative conditions, there are two main variants of the satisfiability semantics: In the
well-founded semantics, the intended model of a logic program is a unique, three-valued, minimal model, which always exists. The well-founded semantics generalises the notion of
inductive definition in mathematical logic.
XSB Prolog implements the well-founded semantics using SLG resolution. In an argumentation interpretation of negation, the initial argument that tom should be punished because he is a thief, is attacked by the argument that he should be rehabilitated because he is a minor. But the fact that tom is violent undermines the argument that tom should be rehabilitated and reinstates the argument that tom should be punished.
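The non-monotonic behaviour of the sanction example can be sketched in Python (an illustration of the closed-world reading of NAF, not SLDNF itself): "not p" succeeds exactly when the attempt to establish p against the current set of facts fails.

```python
def sanction(person, facts):
    # facts is a set of (predicate, person) pairs, e.g. ("thief", "tom").
    # should_receive_sanction(X, rehabilitation) :-
    #     is_a_thief(X), is_a_minor(X), not is_violent(X).
    # NAF: "not is_violent(X)" holds when the fact is absent.
    rehabilitate = (("thief", person) in facts
                    and ("minor", person) in facts
                    and ("violent", person) not in facts)
    # should_receive_sanction(X, punishment) :-
    #     is_a_thief(X), not should_receive_sanction(X, rehabilitation).
    if ("thief", person) in facts and not rehabilitate:
        return "punishment"
    if rehabilitate:
        return "rehabilitation"
    return None
```

Adding the fact that tom is a minor withdraws the conclusion "punishment" in favour of "rehabilitation", and adding the fact that tom is violent reinstates "punishment", mirroring the sequence of queries above.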
===Metalogic programming===

Metaprogramming, in which programs are treated as data, was already a feature of early Prolog implementations. For example, the Edinburgh DEC10 implementation of Prolog included "an interpreter and a compiler, both written in Prolog itself".
Paul Thagard includes logic and
rules as alternative approaches to modelling human thinking. He argues that rules, which have the form
IF condition THEN action, are "very similar" to logical conditionals, but they are simpler and have greater psychological plausibility (page 51). Among other differences between logic and rules, he argues that logic uses deduction, but rules use search (page 45) and can be used to reason either forward or backward (page 47). Sentences in logic "have to be interpreted as
universally true", but rules can be
defaults, which admit exceptions (page 44). He states that "unlike logic, rule-based systems can also easily represent strategic information about what to do" (page 45). For example, "IF you want to go home for the weekend, and you have bus fare, THEN you can catch a bus". He does not observe that the same strategy of reducing a goal to subgoals can be interpreted, in the manner of logic programming, as applying backward reasoning to a logical conditional:

 can_go(you, home) :- have(you, bus_fare), catch(you, bus).

All of these characteristics of rule-based systems - search, forward and backward reasoning, default reasoning, and goal-reduction - are also defining characteristics of logic programming. This suggests that Thagard's conclusion (page 56) that:

 Much of human knowledge is naturally described in terms of rules, and many kinds of thinking such as planning can be modeled by rule-based systems.

also applies to logic programming. Other arguments showing how logic programming can be used to model aspects of human thinking are presented by
Keith Stenning and
Michiel van Lambalgen in their book, Human Reasoning and Cognitive Science. They show how the non-monotonic character of logic programs can be used to explain human performance on a variety of psychological tasks. They also show (page 237) that "closed–world reasoning in its guise as logic programming has an appealing neural implementation, unlike classical logic." In The Proper Treatment of Events, Michiel van Lambalgen and Fritz Hamm investigate the use of constraint logic programming to code "temporal notions in natural language by looking at the way human beings construct time".
===Knowledge representation===

The use of logic to represent procedural knowledge and strategic information was one of the main goals contributing to the early development of logic programming. Moreover, it continues to be an important feature of the Prolog family of logic programming languages today. However, many applications of logic programming, including Prolog applications, increasingly focus on the use of logic to represent purely declarative knowledge. These applications include both the representation of general
commonsense knowledge and the representation of domain specific
expertise. Commonsense includes knowledge about cause and effect, as formalised, for example, in the
situation calculus,
event calculus and
action languages. Here is a simplified example, which illustrates the main features of such formalisms. The first clause states that a fact holds immediately after an event initiates (or causes) the fact. The second clause is a
frame axiom, which states that a fact that holds at a time continues to hold at the next time unless it is terminated by an event that happens at that time. This formulation allows more than one event to occur at the same time:

 holds(Fact, Time2) :-
     happens(Event, Time1),
     Time2 is Time1 + 1,
     initiates(Event, Fact).

 holds(Fact, Time2) :-
     happens(Event, Time1),
     Time2 is Time1 + 1,
     holds(Fact, Time1),
     not(terminated(Fact, Time1)).

 terminated(Fact, Time) :-
     happens(Event, Time),
     terminates(Event, Fact).

Here holds is a meta-predicate, similar to solve above. However, whereas solve has only one argument, which applies to general clauses, the first argument of holds is a fact and the second argument is a time (or state). The atomic formula holds(Fact, Time) expresses that the Fact holds at the Time. Such time-varying facts are also called
fluents. The atomic formula happens(Event, Time) expresses that the Event happens at the Time. The following example illustrates how these clauses can be used to reason about causality in a toy
blocks world. Here, in the initial state at time 0, a green block is on a table and a red block is stacked on the green block (like a traffic light). At time 0, the red block is moved to the table. At time 1, the green block is moved onto the red block. Moving an object onto a place terminates the fact that the object is on any place, and initiates the fact that the object is on the place to which it is moved:

 holds(on(green_block, table), 0).
 holds(on(red_block, green_block), 0).

 happens(move(red_block, table), 0).
 happens(move(green_block, red_block), 1).

 initiates(move(Object, Place), on(Object, Place)).
 terminates(move(Object, Place2), on(Object, Place1)).

 ?- holds(Fact, Time).
 Fact = on(green_block, table), Time = 0.
 Fact = on(red_block, green_block), Time = 0.
 Fact = on(green_block, table), Time = 1.
 Fact = on(red_block, table), Time = 1.
 Fact = on(green_block, red_block), Time = 2.
 Fact = on(red_block, table), Time = 2.

Forward reasoning and backward reasoning generate the same answers to the goal holds(Fact, Time). But forward reasoning generates fluents
progressively in temporal order, and backward reasoning generates fluents
regressively, as in the domain-specific use of
regression in the
situation calculus. Logic programming has also proved to be useful for representing domain-specific expertise in
expert systems. But human expertise, like general-purpose commonsense, is mostly implicit and
tacit, and it is often difficult to represent such implicit knowledge in explicit rules. This difficulty does not arise, however, when logic programs are used to represent the existing, explicit rules of a business organisation or legal authority. For example, here is a representation of a simplified version of the first sentence of the British Nationality Act, which states that a person who is born in the UK becomes a British citizen at the time of birth if a parent of the person is a British citizen at the time of birth: initiates(birth(Person), citizen(Person, uk)):- time_of(birth(Person), Time), place_of(birth(Person), uk), parent_child(Another_Person, Person), holds(citizen(Another_Person, uk), Time). Historically, the representation of a large portion of the British Nationality Act as a logic program in the 1980s was "hugely influential for the development of computational representations of legislation, showing how logic programming enables intuitively appealing representations that can be directly deployed to generate automatic inferences". More recently, the PROLEG system, initiated in 2009 and consisting of approximately 2500 rules and exceptions of civil code and supreme court case rules in Japan, has become possibly the largest legal rule base in the world. ==Variants and extensions==