One way of defining linear logic is as a
sequent calculus. We use the letters Γ and Δ to range over finite
lists of propositions A₁, …, Aₙ, also called
contexts. A
sequent places a context to the left and the right of the
turnstile, written Γ ⊢ Δ. Intuitively, the sequent asserts that the conjunction of
Γ entails the disjunction of Δ (though we mean the "multiplicative" conjunction and disjunction, as explained below). Girard describes classical linear logic using only
one-sided sequents (where the left-hand context is empty), and we follow here that more economical presentation. This is possible because any premises to the left of a turnstile can always be moved to the other side and dualised. We now give
inference rules describing how to build proofs of sequents. First, to formalize the fact that we do not care about the order of propositions inside a context, we add the structural rule of
exchange:

  ⊢ Γ, A, B, Δ
  ─────────────
  ⊢ Γ, B, A, Δ

Note that we do
not add the structural rules of weakening and contraction, because we do care about the absence of propositions in a sequent, and the number of copies present. Next we add
initial sequents:

  ─────────
   ⊢ A, A⊥

and cuts:

  ⊢ Γ, A    ⊢ Δ, A⊥
  ─────────────────
       ⊢ Γ, Δ

The cut rule can be seen as a way of composing proofs, and initial sequents serve as the
units for composition. In a certain sense these rules are redundant: as we introduce additional rules for building proofs below, we will obtain the property that arbitrary initial sequents can be derived from atomic initial sequents, and that whenever a sequent is provable it can be given a cut-free proof. Ultimately, this
canonical form property (which can be divided into the
completeness of atomic initial sequents and the
cut-elimination theorem, inducing a notion of
analytic proof) lies behind the applications of linear logic in computer science, since it allows the logic to be used in proof search and as a resource-aware
lambda-calculus. Now, we explain the connectives by giving
logical rules. Typically in sequent calculus one gives both "right-rules" and "left-rules" for each connective, essentially describing two modes of reasoning about propositions involving that connective (e.g., verification and falsification). In a one-sided presentation, one instead makes use of negation: the right-rules for a connective (say ⅋) effectively play the role of left-rules for its dual (⊗). So, we should expect a certain
"harmony" between the rule(s) for a connective and the rule(s) for its dual.
==Multiplicatives==

The rules for multiplicative conjunction (⊗) and disjunction (⅋):

  ⊢ Γ, A    ⊢ Δ, B
  ─────────────────
    ⊢ Γ, Δ, A ⊗ B

  ⊢ Γ, A, B
  ──────────
  ⊢ Γ, A ⅋ B

and for their units:

  ─────
   ⊢ 1

   ⊢ Γ
  ──────
  ⊢ Γ, ⊥

Observe that the rules for multiplicative conjunction and disjunction are
admissible for plain
conjunction and
disjunction under a classical interpretation (i.e., they are admissible rules in
LK).
==Additives==

The rules for additive conjunction (&) and disjunction (⊕):

  ⊢ Γ, A    ⊢ Γ, B
  ─────────────────
     ⊢ Γ, A & B

   ⊢ Γ, A
  ──────────
  ⊢ Γ, A ⊕ B

   ⊢ Γ, B
  ──────────
  ⊢ Γ, A ⊕ B

and for their units:

  ──────
  ⊢ Γ, ⊤

(the unit 0 has no rule). Observe that the rules for additive conjunction and disjunction are again admissible under a classical interpretation. But now we can explain the basis for the multiplicative/additive distinction in the rules for the two different versions of conjunction: for the multiplicative connective (⊗), the context of the conclusion (Γ, Δ) is split up between the premises, whereas for the additive connective (&) the context of the conclusion (Γ) is carried whole into both premises.
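This difference in context handling shows up directly in proof search: a rule for ⊗ must try every way of splitting the remaining context between its two premises, while a rule for & simply copies the whole context into both. The following is an illustrative (and deliberately naive) provability check for one-sided sequents of the multiplicative-additive fragment, again using a hypothetical tuple encoding of formulas:

```python
from itertools import combinations

def is_atomic(f):
    """Atoms are strings; ('neg', a) is a negated atom."""
    return isinstance(f, str) or f[0] == 'neg'

def neg_atom(a):
    """Dual of an atom: 'a' <-> ('neg', 'a')."""
    return a[1] if isinstance(a, tuple) else ('neg', a)

def provable(seq):
    """Naive bottom-up search for a cut-free proof of the one-sided sequent ⊢ seq."""
    seq = list(seq)
    if ('top',) in seq:                 # ⊤ proves any sequent containing it
        return True
    # axiom: ⊢ a, a⊥ for an atom a (exchange is built in by checking both orders)
    if len(seq) == 2 and is_atomic(seq[0]) and seq[1] == neg_atom(seq[0]):
        return True
    for i, f in enumerate(seq):         # pick a principal (non-atomic) formula
        if is_atomic(f):
            continue
        rest = seq[:i] + seq[i + 1:]
        tag = f[0]
        if tag == 'one' and not rest:   # ⊢ 1 needs an empty remaining context
            return True
        if tag == 'bot' and provable(rest):
            return True
        if tag == 'par' and provable(rest + [f[1], f[2]]):
            return True
        # additive &: the whole context `rest` is shared by both premises
        if tag == 'with' and provable(rest + [f[1]]) and provable(rest + [f[2]]):
            return True
        if tag == 'plus' and (provable(rest + [f[1]]) or provable(rest + [f[2]])):
            return True
        # multiplicative ⊗: try every split of `rest` between the premises
        if tag == 'tensor':
            idx = range(len(rest))
            for k in range(len(rest) + 1):
                for left in combinations(idx, k):
                    gamma = [rest[j] for j in left]
                    delta = [rest[j] for j in idx if j not in left]
                    if provable(gamma + [f[1]]) and provable(delta + [f[2]]):
                        return True
    return False
```

For example, ⊢ a⊥, a & a is provable (the context a⊥ is copied into both premises), while ⊢ a⊥, a ⊗ a is not, since the single copy of a⊥ can only be sent to one premise.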
==Exponentials==

The exponentials are used to give controlled access to weakening and contraction. Specifically, we add structural rules of weakening and contraction for ?'d propositions:

    ⊢ Γ
  ───────
  ⊢ Γ, ?A

  ⊢ Γ, ?A, ?A
  ───────────
    ⊢ Γ, ?A

and use the following logical rules, in which ?Γ stands for a list of propositions each prefixed with ?:

  ⊢ Γ, A
  ───────
  ⊢ Γ, ?A

  ⊢ ?Γ, A
  ────────
  ⊢ ?Γ, !A

One might observe that the rules for the exponentials follow a different pattern from the rules for the other connectives, resembling the inference rules governing modalities in sequent calculus formalisations of the
normal modal logic S4, and that there is no longer such a clear symmetry between the duals ! and ?. This situation is remedied in alternative presentations of CLL (e.g., the
LU presentation).

==Remarkable formulas==