The presentation of natural deduction so far has concentrated on the nature of propositions without giving a formal definition of a proof. To formalise the notion of proof, we alter the presentation of hypothetical derivations slightly. We label the antecedents with proof variables (from some countable set V of variables), and decorate the succedent with the actual proof. The antecedents or hypotheses are separated from the succedent by means of a turnstile (⊢). This modification sometimes goes under the name of localised hypotheses. Schematically, a hypothetical derivation of A from hypotheses A1, …, An becomes the judgment u1:A1, …, un:An ⊢ π : A. The collection of hypotheses will be written as Γ when their exact composition is not relevant. To make proofs explicit, we move from the proof-less judgment "A" to the judgment "π is a proof of A", written symbolically as "π : A". Following the standard approach, proofs are specified with their own formation rules for the judgment "π proof". The simplest possible proof is the use of a labelled hypothesis; in this case the evidence is the label itself. Let us re-examine some of the connectives with explicit proofs. For conjunction, we look at the introduction rule ∧I to discover the form of proofs of conjunction: they must be a pair of proofs of the two conjuncts. Thus, if π1 is a proof of A and π2 is a proof of B, then the pair (π1, π2) is a proof of A ∧ B. The elimination rules ∧E1 and ∧E2 select either the left or the right conjunct; the corresponding proofs are the two projections, first (fst) and second (snd). For implication, the introduction form localises or binds the hypothesis, written using a λ; this corresponds to the discharged label: if π is a proof of B under the additional hypothesis u:A, then λu. π is a proof of A ⊃ B. In the rule, "Γ, u:A" stands for the collection of hypotheses Γ, together with the additional hypothesis u. With proofs available explicitly, one can manipulate and reason about proofs. The key operation on proofs is the substitution of one proof for an assumption used in another proof. This is commonly known as a substitution theorem, and can be proved by induction on the depth (or structure) of the second judgment.
Substitution theorem: If Γ ⊢ π1 : A and Γ, u:A ⊢ π2 : B, then Γ ⊢ [π1/u] π2 : B.
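As a minimal sketch of this operation (in Haskell, which the text does not itself use; the constructor names and the helper subst are illustrative choices), proof terms for the conjunction and implication fragment can be represented as a data type, and [π1/u]π2 computed by structural recursion. The sketch assumes the convention that bound labels of the target proof are distinct from u and from the labels free in the substituted proof, so no renaming is needed; the application case (⊃E) is included only for completeness.

<syntaxhighlight lang="haskell">
-- Proof terms for the fragment with hypotheses, conjunction and implication.
-- (Names are illustrative, not from the article.)
data Proof
  = Hyp String          -- use of a labelled hypothesis u
  | Pair Proof Proof    -- ∧I: a pair of proofs
  | Fst Proof           -- ∧E1: first projection
  | Snd Proof           -- ∧E2: second projection
  | Lam String Proof    -- ⊃I: bind the hypothesis u
  | App Proof Proof     -- ⊃E: apply a proof of A ⊃ B to a proof of A
  deriving Show

-- subst p u q computes [p/u]q: replace the hypothesis u by the proof p throughout q.
-- Assumes bound labels in q are distinct from u and from the labels free in p.
subst :: Proof -> String -> Proof -> Proof
subst p u q = case q of
  Hyp v      -> if v == u then p else Hyp v
  Pair a b   -> Pair (subst p u a) (subst p u b)
  Fst a      -> Fst (subst p u a)
  Snd a      -> Snd (subst p u a)
  Lam v body -> if v == u then Lam v body else Lam v (subst p u body)
  App a b    -> App (subst p u a) (subst p u b)
</syntaxhighlight>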
A" has had a purely logical interpretation. In
type theory, the logical view is exchanged for a more computational view of objects. Propositions in the logical interpretation are now viewed as
types, and proofs as programs in the
lambda calculus. Thus the interpretation of "π :
A" is "
the program π has type
A". The logical connectives are also given a different reading: conjunction is viewed as
product (×), implication as the function
arrow (→), etc. The differences are only cosmetic, however. Type theory has a natural deduction presentation in terms of formation, introduction and elimination rules; in fact, the reader can easily reconstruct what is known as
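As a small illustration of this reading (a sketch in Haskell, which the text does not prescribe): a pair type plays the role of conjunction and a function type plays the role of implication, so the program below is, under this interpretation, a proof that A ∧ B implies B ∧ A.

<syntaxhighlight lang="haskell">
-- Conjunction as product: a proof of A ∧ B is a pair of proofs.
-- Implication as the function arrow: a proof of A ⊃ B is a program
-- taking proofs of A to proofs of B.
andCommutes :: (a, b) -> (b, a)
andCommutes p = (snd p, fst p)
</syntaxhighlight>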
Type theory has a natural deduction presentation in terms of formation, introduction and elimination rules; in fact, the reader can easily reconstruct what is known as simple type theory from the previous sections. The difference between logic and type theory is primarily a shift of focus from the types (propositions) to the programs (proofs). Type theory is chiefly interested in the convertibility or reducibility of programs. For every type, there are canonical programs of that type which are irreducible; these are known as canonical forms or values. If every program reduces to some canonical form, then the type theory is said to be normalising (or weakly normalising); if every reduction sequence terminates in a canonical form, the theory is said to be strongly normalising. Most non-trivial type theories fail to be normalising, which is a big departure from the logical world. (Recall that almost every logical derivation has an equivalent normal derivation.) To sketch the reason: in type theories that admit recursive definitions, it is possible to write programs that never reduce to a value; such looping programs can generally be given any type. In particular, a looping program has type ⊥, although there is no logical proof of "⊥". For this reason, the "propositions as types, proofs as programs" paradigm only works in one direction, if at all: interpreting a type theory as a logic generally gives an inconsistent logic.
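A concrete sketch of the problem (again in Haskell, used here only as an example of a language with general recursion): a looping definition typechecks at every type, including the empty type Void, which plays the role of ⊥.

<syntaxhighlight lang="haskell">
import Data.Void (Void)

-- A looping program: it typechecks at any type, but never reduces to a value.
loop :: a
loop = loop

-- In particular it inhabits the empty type Void (the analogue of ⊥),
-- even though ⊥ has no logical proof; read as a logic, the system is inconsistent.
bottom :: Void
bottom = loop
</syntaxhighlight>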
===Example: Dependent Type Theory===
Like logic, type theory has many extensions and variants, including first-order and higher-order versions. One branch, known as dependent type theory, is used in a number of computer-assisted proof systems. Dependent type theory allows quantifiers to range over programs themselves. These quantified types are written as Π and Σ instead of ∀ and ∃: the type Πx:A. B generalises the function arrow and the type Σx:A. B generalises the product, as witnessed by their introduction and elimination rules; in both cases the body B may mention the bound program x. Dependent type theory in full generality is very powerful: it is able to express almost any conceivable property of programs directly in the types of the program. This generality comes at a steep price: either typechecking is undecidable (extensional type theory), or extensional reasoning is more difficult (intensional type theory). For this reason, some dependent type theories do not allow quantification over arbitrary programs, but rather restrict to programs of a given decidable index domain, for example integers, strings, or linear programs.
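The following sketch illustrates this restricted form of dependency (the example, the names, and the use of Haskell's DataKinds and GADTs extensions are illustrative assumptions, not part of the text): the vector type below is indexed by a decidable domain, natural numbers used as lengths, so the type of a vector records how many elements it contains.

<syntaxhighlight lang="haskell">
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Natural numbers, used here only as a type-level index domain.
data Nat = Zero | Succ Nat

-- 'Vec n a' is a list of exactly n elements; the length lives in the type.
data Vec (n :: Nat) a where
  Nil  :: Vec 'Zero a
  Cons :: a -> Vec n a -> Vec ('Succ n) a

-- Taking the head is only defined for non-empty vectors,
-- so the "head of an empty list" error is ruled out at compile time.
safeHead :: Vec ('Succ n) a -> a
safeHead (Cons x _) = x
</syntaxhighlight>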
Since dependent type theories allow types to depend on programs, a natural question to ask is whether it is possible for programs to depend on types, or any other combination. There are many kinds of answers to such questions. A popular approach in type theory is to allow programs to be quantified over types, also known as parametric polymorphism; of this there are two main kinds: if types and programs are kept separate, then one obtains a somewhat more well-behaved system called predicative polymorphism; if the distinction between program and type is blurred, one obtains the type-theoretic analogue of higher-order logic, also known as impredicative polymorphism.
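To illustrate programs quantified over types, here is a brief Haskell sketch (an illustrative assumption; Haskell's ordinary polymorphism is predicative, and the RankNTypes extension used below is not something the text prescribes):

<syntaxhighlight lang="haskell">
{-# LANGUAGE RankNTypes #-}

-- The polymorphic identity: one program quantified over every type.
identity :: forall a. a -> a
identity x = x

-- A higher-rank type: the argument must itself be polymorphic,
-- so it can be used at two different types in the body.
useAtTwoTypes :: (forall a. a -> a) -> (Int, Bool)
useAtTwoTypes f = (f 3, f True)
</syntaxhighlight>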
Various combinations of dependency and polymorphism have been considered in the literature, the most famous being the lambda cube of Henk Barendregt. The intersection of logic and type theory is a vast and active research area. New logics are usually formalised in a general type theoretic setting, known as a logical framework. Popular modern logical frameworks such as the calculus of constructions and LF are based on higher-order dependent type theory, with various trade-offs in terms of decidability and expressive power. These logical frameworks are themselves always specified as natural deduction systems, which is a testament to the versatility of the natural deduction approach.

==Classical and modal logics==