=== Definitions ===

In Lamport's original paper, the entering variable is known as choosing, and the following conditions apply:

* Words choosing[i] and number[i] are in the memory of process i, and are initially zero.
* The range of values of number[i] is unbounded.
* A process may fail at any time. We assume that when it fails, it immediately goes to its noncritical section and halts. There may then be a period when reading from its memory gives arbitrary values. Eventually, any read from its memory must give a value of zero.
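The per-process state described by these conditions can be sketched in Java. The names choosing and number follow the paper; the fixed bound NUM_PROCESSES is an illustrative assumption, not part of the original (the algorithm works for any number of processes):

```java
// Sketch of the shared state from the definitions above; not the paper's code.
// NUM_PROCESSES is an illustrative assumption.
public class BakeryState {
    static final int NUM_PROCESSES = 4;

    // choosing[i] and number[i] live logically in process i's memory:
    // every process may read them, but only process i ever writes them.
    static final boolean[] choosing = new boolean[NUM_PROCESSES]; // initially all false ("zero")
    static final int[] number = new int[NUM_PROCESSES];           // initially all zero

    // number[i] is unbounded in principle: it grows while processes contend,
    // and is reset to zero when process i leaves (or fails and halts, after
    // which reads from its memory eventually return zero).
}
```

Java's default initialization of arrays to false/0 matches the "initially zero" condition above.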
=== Code examples ===

==== Pseudocode ====

In this example, all threads execute the same "main" function, Thread. In real applications, different threads often have different "main" functions. Note that, as in the original paper, the thread checks itself before entering the critical section. Since the loop conditions will evaluate as false, this does not cause much delay.

 // declaration and initial values of global variables
 Entering: array [1..NUM_THREADS] of bool = {false};
 Number: array [1..NUM_THREADS] of integer = {0};
 
 lock(integer i) {
     Entering[i] = true;
     Number[i] = 1 + max(Number[1], ..., Number[NUM_THREADS]);
     Entering[i] = false;
     for (integer j = 1; j <= NUM_THREADS; j++) {
         // Wait until thread j receives its number:
         while (Entering[j]) { /* nothing */ }
         // Wait until all threads with smaller numbers or with the same
         // number, but with higher priority, finish their work:
         while ((Number[j] != 0) && ((Number[j], j) < (Number[i], i))) { /* nothing */ }
     }
 }
 
 unlock(integer i) {
     Number[i] = 0;
 }
 
 Thread(integer i) {
     while (true) {
         lock(i);
         // The critical section goes here...
         unlock(i);
         // non-critical section...
     }
 }

Each thread only writes its own storage; only reads are shared. It is remarkable that this algorithm is not built on top of some lower-level "atomic" operation, e.g.
compare-and-swap. The original proof shows that, for overlapping reads and writes to the same storage cell, only the write must be correct; the read operation can return an arbitrary number. Therefore, this algorithm can be used to implement mutual exclusion on memory that lacks synchronisation primitives, e.g., a simple
SCSI disk shared between two computers. The necessity of the variable
Entering might not be obvious, as there is no "lock" around the assignment to Number[i] and the waiting loop that follows it. However, suppose the variable was removed and two processes computed the same Number[i]. If the higher-priority process were preempted before setting Number[i], the lower-priority process would see that the other process has a number of zero and enter the critical section; later, the higher-priority process would ignore the equal Number[i] of lower-priority processes and also enter the critical section. As a result, two processes could be in the critical section at the same time. The bakery algorithm uses the Entering variable to make the assignment to Number[i] look as if it were atomic; process i will never see a number equal to zero for a process j that is going to pick the same number as i.

When implementing the pseudocode in a single-process system or under
cooperative multitasking, it is better to replace the "do nothing" sections with code that notifies the
operating system to immediately switch to the next thread. This primitive is often referred to as yield. Lamport's bakery algorithm assumes a
sequential consistency memory model. Few, if any, languages or multi-core processors implement such a memory model. Therefore, correct implementation of the algorithm typically requires inserting memory fences to inhibit reordering.

==== PlusCal code ====

We declare N to be the number of processes, and we assume that N is a natural number.

 CONSTANT N
 ASSUME N \in Nat

We define P to be the set {1, 2, ..., N} of processes.

 P == 1..N

The variables num and flag are declared as global.

 --algorithm AtomicBakery {
 variable num = [i \in P |-> 0], flag = [i \in P |-> FALSE];

The following defines LL(j, i) to be true iff <<num[j], j>> is less than or equal to <<num[i], i>> in the usual lexicographical ordering.

 define {
   LL(j, i) == \/ num[j] < num[i]
               \/ /\ num[i] = num[j]
                  /\ j =< i
 }

For each element in P there is a process with local variables unread, max and nxt. Steps between consecutive labels p1, ..., p7, cs are considered atomic. The statement with (x \in S) { body } sets x to a nondeterministically chosen element of the set S and then executes body. A step containing the statement await expr can be executed only when the value of expr is TRUE.

 process (p \in P)
   variables unread \in SUBSET P, max \in Nat, nxt \in P;
 {
 p1: while (TRUE) {
       unread := P \ {self};
       max := 0;
       flag[self] := TRUE;
 p2:   while (unread # {}) {
         with (i \in unread) {
           unread := unread \ {i};
           if (num[i] > max) { max := num[i]; }
         }
       };
 p3:   num[self] := max + 1;
 p4:   flag[self] := FALSE;
       unread := P \ {self};
 p5:   while (unread # {}) {
         with (i \in unread) { nxt := i; };
         await ~flag[nxt];
 p6:     await \/ num[nxt] = 0
               \/ LL(self, nxt);
         unread := unread \ {nxt};
       };
 cs:   skip; \* the critical section
 p7:   num[self] := 0;
     }
 }
 }
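The algorithm above can be checked with the TLC model checker. A minimal sketch, under the following assumptions (none of which appear in the original listing): the translated module is named AtomicBakery, the translation's usual Spec == Init /\ [][Next]_vars formula is present, and a MutualExclusion invariant is added after the translation (pc is the program-counter variable that the PlusCal translator generates):

```tla
\* Added to the module after the PlusCal translation (hypothetical name):
MutualExclusion == \A i, j \in P : (i # j) => ~(pc[i] = "cs" /\ pc[j] = "cs")
```

A corresponding TLC configuration file (AtomicBakery.cfg) might then read:

```tla
CONSTANT N = 3
SPECIFICATION Spec
INVARIANT MutualExclusion
```

TLC then exhaustively explores all interleavings of the labeled atomic steps for three processes and reports any state in which two processes are at label cs simultaneously.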
==== Java code ====

We use the AtomicIntegerArray class not for its built-in atomic operations but because its get and set methods work like volatile reads and writes. Under the Java Memory Model this ensures that writes are immediately visible to all threads.

 AtomicIntegerArray ticket = new AtomicIntegerArray(threads); // ticket for threads in line, n - number of threads
 // Java initializes each element of 'ticket' to 0
 
 AtomicIntegerArray entering = new AtomicIntegerArray(threads); // 1 when thread entering in line
 // Java initializes each element of 'entering' to 0

 public void lock(int pid) // thread ID
 {
     entering.set(pid, 1);
     int max = 0;
     for (int i = 0; i < threads; i++)
     {
         int current = ticket.get(i);
         if (current > max)
         {
             max = current;
         }
     }
     ticket.set(pid, 1 + max);
     entering.set(pid, 0);
     for (int i = 0; i < ticket.length(); ++i)
     {
         if (i != pid)
         {
             // Wait while thread i picks its ticket:
             while (entering.get(i) == 1) { Thread.yield(); }
             // Wait while threads with a smaller ticket number, or with the
             // same ticket number but a smaller id, are ahead of us:
             while (ticket.get(i) != 0 &&
                    (ticket.get(i) < ticket.get(pid) ||
                     (ticket.get(i) == ticket.get(pid) && i < pid)))
             {
                 Thread.yield();
             }
         }
     }
 }

 public void unlock(int pid)
 {
     ticket.set(pid, 0);
 }

== See also ==