Epistemic logic is a
modal logic dealing with the notions of knowledge and belief. As a
logic, it is concerned with understanding the process of
reasoning about knowledge and belief: which principles relating the notions of knowledge and belief are intuitively plausible? Like epistemology, it stems from the Greek word ἐπιστήμη (‘episteme’), meaning knowledge.
Epistemology is nevertheless more concerned with analyzing the very
nature and
scope of knowledge, addressing questions such as “What is the definition of knowledge?” or “How is knowledge acquired?”. In fact, epistemic logic grew out of epistemology in the Middle Ages thanks to the efforts of Burley and Ockham. The formal work, based on modal logic, that inaugurated contemporary research into epistemic logic dates back only to 1962 and is due to
Hintikka. It sparked discussions in the 1960s about the principles of knowledge and belief, and many axioms for these notions were proposed and discussed. For example, the interaction axioms K p\rightarrow B p and B p\rightarrow KB p are often considered to be intuitive principles: if an agent Knows p then (s)he also Believes p, and if an agent Believes p, then (s)he Knows that (s)he Believes p. More recently, these kinds of philosophical theories were taken up by researchers in
economics,
artificial intelligence and
theoretical computer science, where reasoning about knowledge is a central topic. In these new settings, new perspectives and new features such as computability issues were added to the research agenda of epistemic logic.
== Syntax ==
In the sequel, AGTS=\{1,\ldots,n\} is a
finite set whose elements are called agents and PROP is a set of propositional letters. The epistemic language is an extension of the basic multi-modal language of modal logic with a
common knowledge operator C_{A} and a
distributed knowledge operator D_{A}. Formally, the
epistemic language \mathcal{L}_{\textsf{EL}}^{C} is defined inductively by the following
grammar in
BNF: \mathcal{L}_{\textsf{EL}}^{C}:\phi~~::=~~ p~\mid~\neg\phi~\mid~(\phi\land\phi)~\mid~ K_j\phi~\mid~ C_{A}\phi ~\mid~ D_{A}\phi where p\in PROP, j\in {AGTS} and A\subseteq {AGTS}. The
basic epistemic language \mathcal{L}_{\textsf{EL}} is the language \mathcal{L}_{\textsf{EL}}^{C} without the common knowledge and distributed knowledge operators. The formula \bot is an abbreviation for \neg p \land p (for a given p\in PROP), \langle K_{j}\rangle\phi is an abbreviation for \neg K_j\neg\phi, E_{A}\phi is an abbreviation for \bigwedge\limits_{j\in A} K_j\phi and C\phi an abbreviation for C_{AGTS}\phi.
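As an illustration, the grammar above can be rendered as a small algebraic datatype; the class names and helper functions below are our own choices for this sketch, not notation from the literature.

```python
from dataclasses import dataclass
from functools import reduce

# Formulas of L_EL^C, mirroring the BNF:
#   phi ::= p | ¬phi | (phi ∧ phi) | K_j phi | C_A phi | D_A phi

@dataclass(frozen=True)
class Prop:
    name: str

@dataclass(frozen=True)
class Not:
    sub: object

@dataclass(frozen=True)
class And:
    left: object
    right: object

@dataclass(frozen=True)
class K:
    agent: int
    sub: object

@dataclass(frozen=True)
class C:
    agents: frozenset
    sub: object

@dataclass(frozen=True)
class D:
    agents: frozenset
    sub: object

# The abbreviations defined in the text:
def bot(p: str = "p") -> And:
    """⊥ := ¬p ∧ p for some fixed p ∈ PROP (written here as p ∧ ¬p up to order)."""
    return And(Prop(p), Not(Prop(p)))

def diamond(j: int, phi) -> Not:
    """<K_j>phi := ¬K_j¬phi."""
    return Not(K(j, Not(phi)))

def E(agents, phi):
    """E_A phi := ⋀_{j∈A} K_j phi (general knowledge)."""
    return reduce(And, (K(j, phi) for j in sorted(agents)))
```

Since the dataclasses are frozen, formulas compare by structure, so, for example, E(\{1,2\}, p) is literally the conjunction K_1 p \land K_2 p.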
Group notions: general, common and distributed knowledge. In a multi-agent setting there are three important epistemic concepts: general knowledge, distributed knowledge and common knowledge. The notion of common knowledge was first studied by
Lewis in the context of conventions. It was then applied to distributed systems and to game theory, where it allows one to express that the rationality of the players, the rules of the game and the set of players are commonly known.
General knowledge. General knowledge of \phi means that everybody in the group of agents {AGTS} knows that \phi. Formally, this corresponds to the following formula: E\phi:=\underset{j\in {AGTS}}\bigwedge K_j\phi.
Common knowledge. Common knowledge of \phi means that everybody knows \phi but also that everybody knows that everybody knows \phi, that everybody knows that everybody knows that everybody knows \phi, and so on
ad infinitum. Formally, this corresponds to the following formula C\phi:=E\phi\land E E\phi\land E E E\phi\land\ldots As we do not allow infinite conjunction the notion of common knowledge will have to be introduced as a primitive in our language. Before defining the language with this new operator, we are going to give an example introduced by
Lewis that illustrates the difference between the notions of general knowledge and common knowledge. Lewis wanted to know what kind of knowledge is needed so that the statement p: “every driver must drive on the right” becomes a convention among a group of agents. In other words, he wanted to know what kind of knowledge is needed so that everybody feels safe to drive on the right. Suppose there are only two agents i and j. Then everybody knowing p (formally E p) is not enough. Indeed, it might still be possible that agent i considers it possible that agent j does not know p (formally \neg K_i K_j p). In that case agent i will not feel safe to drive on the right because he might consider that agent j, not knowing p, could drive on the left. To avoid this problem, we could then assume that everybody knows that everybody knows p (formally E E p). This is again not enough to ensure that everybody feels safe to drive on the right. Indeed, it might still be possible that agent i considers it possible that agent j considers it possible that agent i does not know p (formally \neg K_i K_j K_i p). In that case and from i’s point of view, j considers it possible that i, not knowing p, will drive on the left. So from i’s point of view, j might drive on the left as well (by the same argument as above). So i will not feel safe to drive on the right. Reasoning by induction, Lewis showed that for any k\in \mathbb{N}, the conjunction E^1 p\land E^2 p\land \ldots \land E^k p is not enough for the drivers to feel safe to drive on the right. In fact what we need is an infinite conjunction. In other words, we need common knowledge of p: C p.
Distributed knowledge. Distributed knowledge of \phi means that if the agents pooled their knowledge together, they would know that \phi holds. In other words, the knowledge of \phi is
distributed among the agents. The formula D_{A}\phi reads as ‘it is distributed knowledge among the set of agents A that \phi holds’.
== Semantics ==
Epistemic logic is a modal logic. So, what we call an
epistemic model \mathcal{M}=(W, R_1,\ldots, R_n,I) is just a
Kripke model as defined in modal logic. The set W is a non-empty set whose elements are called
possible worlds and the
interpretation I:W\rightarrow 2^{PROP} is a
function specifying which propositional facts (such as ‘Ann has the red card’) are true in each of these worlds. The
accessibility relations R_j\subseteq W\times W are
binary relations, one for each agent j\in AGTS; they are intended to capture the uncertainty of each agent (about the actual world and about the other agents' uncertainty). Intuitively, we have (w,v)\in R_j when the world v is compatible with agent j’s information in world w or, in other words, when, standing in world w, agent j considers it possible that v is the actual world. By abuse of notation, we write w\in\mathcal{M} for w\in W, and R_j(w) denotes the set of worlds \{v\in W \mid (w,v)\in R_j\}. Intuitively, a
pointed epistemic model (\mathcal{M},w), where w\in\mathcal{M}, represents from an external point of view how the actual world w is perceived by the agents {AGTS}. For every epistemic model \mathcal{M}, every w\in \mathcal{M} and every \phi\in\mathcal{L}_{\textsf{EL}}^{C}, we define \mathcal{M},w\models\phi inductively by the following truth conditions:

\mathcal{M},w\models p iff p\in I(w)
\mathcal{M},w\models\neg\phi iff it is not the case that \mathcal{M},w\models\phi
\mathcal{M},w\models\phi\land\psi iff \mathcal{M},w\models\phi and \mathcal{M},w\models\psi
\mathcal{M},w\models K_j\phi iff for all v\in R_j(w), \mathcal{M},v\models\phi
\mathcal{M},w\models C_{A}\phi iff for all v\in\left(\underset{j\in A}{\bigcup}R_j\right)^+(w), \mathcal{M},v\models\phi
\mathcal{M},w\models D_{A}\phi iff for all v\in\left(\underset{j\in A}{\bigcap}R_j\right)(w), \mathcal{M},v\models\phi

where \left(\underset{j\in A}{\bigcup}R_j\right)^+ is the transitive closure of \underset{j\in A}{\bigcup}R_j: we have that v\in\left(\underset{j\in A}{\bigcup}R_j\right)^+(w) if, and only if, there are w_0,\ldots,w_m\in\mathcal{M} and j_1,\ldots,j_m\in A such that w_0=w, w_m=v and for all i\in\{1,\ldots,m\}, w_{i-1} R_{j_i} w_i. Despite the fact that the notion of common knowledge has to be introduced as a primitive in the language, we can notice that the definition of epistemic models does not have to be modified in order to give truth value to the common knowledge and distributed knowledge operators.
Card Example: Players A, B and C (standing for Ann, Bob and Claire) play a card game with three cards: a red one, a green one and a blue one. Each of them has a single card but they do not know the cards of the other players. Ann has the red card, Bob has the green card and Claire has the blue card. This example is depicted in the pointed epistemic model (\mathcal{M},w) represented below. In this example, AGTS:=\{A,B,C\} and PROP:=\{{\color{red}{A}},{\color{green}{B}},{\color{blue}{C}},{\color{red}{B}},{\color{green}{C}},{\color{blue}{A}},{\color{red}{C}},{\color{green}{A}},{\color{blue}{B}}\}, where {\color{red}{A}} stands for “A has the red card”, {\color{blue}{C}} stands for “C has the blue card”, {\color{green}{B}} stands for “B has the green card”, and so on. Each world is labelled by the propositional letters which are true in this world and w corresponds to the actual world. There is an arrow indexed by agent j\in\{A,B,C\} from a possible world u to a possible world v when (u,v)\in R_j. Reflexive arrows are omitted, which means that for all j\in \{A,B,C\} and all v\in \mathcal{M}, we have that (v,v)\in R_j. When the accessibility relations are equivalence relations (as in this example) and we have that (w,v)\in R_j, we say that agent j
cannot distinguish world w from world v (or world w is indistinguishable from world v for agent j). So, for example, A cannot distinguish the actual world w from the possible world where B has the blue card ({\color{blue}{B}}), C has the green card ({\color{green}{C}}) and A still has the red card ({\color{red}{A}}). In particular, the following statements hold: \mathcal{M},w\models({\color{red}{A}}\land K_A{\color{red}{A}})\land({\color{blue}{C}}\land K_C{\color{blue}{C}})\land ({\color{green}{B}}\land K_B{\color{green}{B}}) 'All the agents know the color of their card'. \mathcal{M},w\models K_A({\color{blue}{B}}\vee{\color{green}{B}})\land K_A({\color{blue}{C}}\vee{\color{green}{C}}) 'A knows that B has either the blue or the green card and that C has either the blue or the green card'. \mathcal{M},w\models E({\color{red}{A}}\vee{\color{blue}{A}}\vee{\color{green}{A}})\land C({\color{red}{A}}\vee{\color{blue}{A}}\vee{\color{green}{A}}) 'Everybody knows that A has either the red, green or blue card and this is even common knowledge among all agents'.
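The truth conditions can be made concrete as a small model checker, run here on the card example. This is a sketch of our own: the encoding of formulas as nested tuples, the function names and the use of short letters such as 'Ar' for “A has the red card” are purely illustrative, not standard notation.

```python
from itertools import permutations, product

def closure(pairs):
    """Transitive closure of a binary relation given as a set of pairs."""
    closed = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(closed), repeat=2):
            if b == c and (a, d) not in closed:
                closed.add((a, d))
                changed = True
    return closed

def sat(M, w, phi):
    """M,w |= phi for M = (W, R, I): R maps each agent to a set of pairs,
    I maps each world to its true letters. Formulas are nested tuples;
    'or' is included as the derived connective ¬(¬φ ∧ ¬ψ)."""
    W, R, I = M
    op = phi[0]
    if op == 'prop':
        return phi[1] in I[w]
    if op == 'not':
        return not sat(M, w, phi[1])
    if op == 'and':
        return sat(M, w, phi[1]) and sat(M, w, phi[2])
    if op == 'or':
        return sat(M, w, phi[1]) or sat(M, w, phi[2])
    if op == 'K':   # phi holds in every world v with (w,v) in R_j
        return all(sat(M, v, phi[2]) for (u, v) in R[phi[1]] if u == w)
    if op == 'C':   # phi holds in every world reachable via (⋃_{j∈A} R_j)^+
        union = set().union(*(R[j] for j in phi[1]))
        return all(sat(M, v, phi[2]) for (u, v) in closure(union) if u == w)
    if op == 'D':   # phi holds in every world in ⋂_{j∈A} R_j(w)
        inter = set.intersection(*(R[j] for j in phi[1]))
        return all(sat(M, v, phi[2]) for (u, v) in inter if u == w)
    raise ValueError(f"unknown operator {op!r}")

# The card example: a world is a triple (card of A, card of B, card of C).
W = list(permutations("rgb"))
R = {a: {(u, v) for u in W for v in W if u[i] == v[i]}   # j cannot distinguish
     for i, a in enumerate("ABC")}                        # worlds agreeing on j's card
I = {u: {a + u[i] for i, a in enumerate("ABC")} for u in W}  # e.g. 'Ar' = A has red
M, w = (W, R, I), ('r', 'g', 'b')

assert sat(M, w, ('K', 'A', ('prop', 'Ar')))              # A knows her own card
assert sat(M, w, ('K', 'A', ('or', ('prop', 'Bb'), ('prop', 'Bg'))))
assert sat(M, w, ('C', 'ABC',                             # common knowledge
                  ('or', ('prop', 'Ar'), ('or', ('prop', 'Ab'), ('prop', 'Ag')))))
assert sat(M, w, ('D', 'AB', ('prop', 'Cb')))             # pooling A's and B's info
```

The last assertion illustrates distributed knowledge: neither A nor B individually knows that C has the blue card, but the intersection of their relations pins down the actual world, so D_{\{A,B\}}{\color{blue}{C}} holds.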
== Knowledge versus Belief ==
We use the same notation K_j for both knowledge and belief. Hence, depending on the context, K_j\phi will either read ‘the agent j
Knows that \phi holds’ or ‘the agent j
Believes that \phi holds’. A crucial difference is that, unlike knowledge, beliefs can be
wrong: the axiom K_j\phi\rightarrow \phi holds only for knowledge, but not necessarily for belief. This axiom, called axiom T (for Truth), states that if the agent knows a proposition, then this proposition is true. It is often considered to be the hallmark of knowledge and it has not been subjected to any serious attack since its introduction in the
Theaetetus by
Plato. The notion of knowledge may obey some other constraints (or axioms) such as K_j\phi\rightarrow K_j K_j\phi: if agent j knows something, she knows that she knows it. These constraints are mirrored by the accessibility relations R_j, which may then satisfy some extra properties. So, we are now going to define some particular classes of epistemic models that all add some extra constraints on the accessibility relations R_j. These constraints are matched by particular axioms for the knowledge operator K_j. Below, each property is paired with the axiom which defines the class of epistemic frames that fulfill this property. (K\phi stands for K_j\phi for any j\in AGTS.)

reflexivity of R_j (axiom T): K\phi\rightarrow\phi
transitivity of R_j (axiom 4): K\phi\rightarrow KK\phi
Euclidicity of R_j (axiom 5): \neg K\phi\rightarrow K\neg K\phi
symmetry of R_j (axiom B): \phi\rightarrow K\neg K\neg\phi
seriality of R_j (axiom D): K\phi\rightarrow\neg K\neg\phi

We discuss the axioms above. Axiom 4 states that if the agent knows a proposition, then she knows that she knows it (this axiom is also known as the “KK-principle” or “KK-thesis”). In epistemology, axiom 4 tends to be accepted by
internalists, but not by
externalists. Axiom 4 is nevertheless widely accepted by computer scientists (but also by many philosophers, including
Plato,
Aristotle,
Saint Augustine,
Spinoza and
Schopenhauer, as
Hintikka recalls). A more controversial axiom for the logic of knowledge is axiom 5 for Euclidicity: this axiom states that if the agent does not know a proposition, then she knows that she does not know it. Most philosophers (including Hintikka) have attacked this axiom, since numerous examples from everyday life seem to invalidate it. In general, axiom 5 is invalidated when the agent has mistaken beliefs, which can be due for example to misperceptions, lies or other forms of deception. Axiom B states that it cannot be the case that the agent considers it possible that she knows a false proposition (that is, \neg(\neg\phi\land\neg K\neg K\phi)). If we assume that axioms T and 4 are valid, then axiom B falls prey to the same attack as the one for axiom 5, since axiom 5 is then derivable. Axiom D states that the agent's beliefs are consistent. In combination with axiom K (where the knowledge operator is replaced by a belief operator), axiom D is in fact equivalent to a simpler axiom D' which conveys, maybe more explicitly, the fact that the agent's beliefs cannot be inconsistent: \neg B \bot. The other, more intricate axioms .2, .3, .3.2 and .4 were introduced by epistemic logicians such as Lenzen and von Kutschera in the 1970s.
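The correspondence between axioms and properties of the accessibility relations can be checked mechanically. Below is a small sketch (the function names and the two sample relations are our own illustrations): it tests an equivalence relation, which validates all the axioms above, and a serial, transitive and Euclidean but non-reflexive relation, a typical relation for belief, on which axiom T fails.

```python
def reflexive(W, R):
    return all((w, w) in R for w in W)                     # axiom T: Kφ → φ

def transitive(W, R):
    return all((u, y) in R
               for (u, v) in R for (x, y) in R if v == x)  # axiom 4: Kφ → KKφ

def euclidean(W, R):
    return all((v, y) in R
               for (u, v) in R for (x, y) in R if u == x)  # axiom 5: ¬Kφ → K¬Kφ

def symmetric(W, R):
    return all((v, u) in R for (u, v) in R)                # axiom B

def serial(W, R):
    return all(any(u == w for (u, v) in R) for w in W)     # axiom D

# An equivalence relation (partition {{1,2},{3}}): validates T, 4, 5, B and D.
W1 = {1, 2, 3}
R1 = {(1, 1), (1, 2), (2, 1), (2, 2), (3, 3)}
assert all(f(W1, R1) for f in (reflexive, transitive, euclidean, symmetric, serial))

# A serial, transitive and Euclidean relation where world 1 is not reflexive:
# axiom T fails there, so the agent may believe something false at world 1.
W2 = {1, 2}
R2 = {(1, 2), (2, 2)}
assert serial(W2, R2) and transitive(W2, R2) and euclidean(W2, R2)
assert not reflexive(W2, R2) and not symmetric(W2, R2)
```

The second relation is exactly the kind of structure used for belief in the proof system \textsf{KD45} discussed below: consistent (serial) and introspective (transitive and Euclidean), but possibly mistaken.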
== Axiomatization ==
The Hilbert
proof system K for the basic modal logic is defined by the following
axioms and
inference rules: for all j\in AGTS,

(Prop) all instances of propositional tautologies
(K) K_j(\phi\rightarrow\psi)\rightarrow(K_j\phi\rightarrow K_j\psi)
(MP) from \phi and \phi\rightarrow\psi, infer \psi
(Nec) from \phi, infer K_j\phi

The axioms of an epistemic logic obviously display the way the agents reason. For example, the axiom K together with the rule of inference Nec entail that if I know \phi (K\phi) and I know that \phi implies \psi (K(\phi\rightarrow \psi)) then I know that \psi (K\psi). Stronger constraints can be added. The following
proof systems for \mathcal{L}_{\textsf{EL}} are often used in the literature. We define the set of proof systems \mathbb{L}_{\textsf{EL}}:=\{\textsf{K}, \textsf{KD45},\textsf{S4},\textsf{S4.2},\textsf{S4.3},\textsf{S4.3.2},\textsf{S4.4},\textsf{S5}\}. Moreover, for all \mathcal{H}\in\mathbb{L}_{\textsf{EL}}, we define the proof system \mathcal{H}^{\textsf{C}} by adding the following
axiom schemes and
rules of inference to those of \mathcal{H}. For all A\subseteq AGTS,

(Dis) D_{\{j\}}\phi\leftrightarrow K_j\phi, and D_{A}\phi\rightarrow D_{B}\phi whenever A\subseteq B
(K_D) D_{A}(\phi\rightarrow\psi)\rightarrow(D_{A}\phi\rightarrow D_{A}\psi)
(Fix) C_{A}\phi\leftrightarrow E_{A}(\phi\land C_{A}\phi)
(Ind) from \phi\rightarrow E_{A}(\psi\land\phi), infer \phi\rightarrow C_{A}\psi

The relative strength of the proof systems for knowledge is as follows: \textsf{S4}\subset \textsf{S4.2}\subset \textsf{S4.3}\subset\textsf{S4.3.2}\subset\textsf{S4.4}\subset \textsf{S5}. So, all the
theorems of \textsf{S4.2} are also theorems of \textsf{S4.3}, \textsf{S4.3.2}, \textsf{S4.4} and \textsf{S5}. Many philosophers claim that in the most general cases, the logic of knowledge is \textsf{S4.2} or \textsf{S4.3}. Typically, in computer science and in many of the theories developed in artificial intelligence, the logic of belief (
doxastic logic) is taken to be \textsf{KD45} and the logic of knowledge (
epistemic logic) is taken to be \textsf{S5}, even if \textsf{S5} is only suitable for situations where the agents do not have mistaken beliefs. For all \mathcal{H}\in\mathbb{L}_{\textsf{EL}}, the
class of \mathcal{H}–models or
\mathcal{H}^{\textsf{C}}–models is the class of epistemic models whose accessibility relations satisfy the properties defined by the axioms of \mathcal{H} or \mathcal{H}^{\textsf{C}}. Then, for all \mathcal{H}\in\mathbb{L}_{\textsf{EL}}, \mathcal{H} is
sound and
strongly complete for \mathcal{L}_{\textsf{EL}} w.r.t. the class of \mathcal{H}–models, and \mathcal{H}^{\textsf{C}} is
sound and
strongly complete for \mathcal{L}_{\textsf{EL}}^{\textsf{C}} w.r.t. the class of \mathcal{H}^{\textsf{C}}–models.
== Decidability and Complexity ==
The satisfiability problem for all the logics introduced is decidable. For a single agent (n=1), the satisfiability problem is NP-complete for \textsf{S5} and \textsf{KD45} and PSPACE-complete for \textsf{K} and \textsf{S4}; in the single-agent NP-complete cases it even becomes linear in time if there are only finitely many propositional letters in the language. For n\geq 2 agents, the satisfiability problem is PSPACE-complete for \textsf{K}, \textsf{S4}, \textsf{KD45} and \textsf{S5}, and EXPTIME-complete once the common knowledge operator is added. For n\geq 2, if we restrict to formulas with bounded nesting of modal operators, then the satisfiability problem is
NP-complete for all the modal logics considered. If we then further restrict the language to having only finitely many primitive propositions, the complexity goes down to linear in time in all cases. The computational complexity of the
model checking problem is in
P in all cases.

== Adding dynamics ==