The above construction uses the halting oracle, so the two sets A, B might not be enumerable. The theorem can be strengthened: it can be proved by constructing a Turing machine without oracle such that:
• It streams out a sequence of finite sets A_0, B_0, A_1, B_1, \dots such that A_n \uparrow A and B_n \uparrow B.
• All requirements R_{0,A}, R_{0,B}, R_{1,A}, R_{1,B}, \dots are satisfied.
• R_{i,A} states that "the i-th Turing machine, equipped with B as oracle, fails to decide A". Similarly for R_{i,B}.
Each R_{i,A} can be satisfied in two ways, as before: negatively, without a witness, or positively, with a witness.
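To make the positive mode concrete: R_{i,A} holds positively at a witness w when the i-th machine, run with oracle B, answers the question "w \in A?" incorrectly. A minimal sketch, under the simplifying assumption that a "machine" is just a total Python function (oracle, x) -> {0, 1} — real oracle Turing machines need not halt, which is what the step-bounded simulation in the proof below handles:

```python
def diagonalized(machine, oracle, A, w):
    """R holds positively at witness w iff the machine, run with the
    given oracle, answers the membership question "w in A?" wrongly."""
    answer = machine(oracle, w)          # 0 or 1; halting is assumed here
    return answer != (1 if w in A else 0)

# The machine that parrots its oracle is wrong exactly where A and the
# oracle disagree:
parrot = lambda oracle, x: 1 if x in oracle else 0
assert not diagonalized(parrot, frozenset(), set(), 0)   # both say "no"
assert diagonalized(parrot, frozenset({7}), set(), 7)    # says "yes", but 7 not in A
```

The construction's job is to force such a disagreement at some witness for every machine.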
Finite injury method
The idea of the finite injury method is that we optimistically hope that the sets constructed so far are good enough, and procrastinate on change for as long as possible, until a requirement gets "injured" (as some eventually must). The injury is positive evidence that our optimism has failed. So we reluctantly update the sets, healing the requirement, update some witnesses, then optimistically hope all over again. Because there are infinitely many requirements, healing a requirement might
clobber other requirements. Specifically, if we ever change our mind about what should go into A, then that would change the behavior of TM_i^A for some
i, which might then
injure some previously satisfied requirements. To bypass this difficulty, the finite injury method ensures:
• Every requirement can be healed infinitely often, because it has an infinite pool of witnesses to choose from.
• Every requirement can be injured only finitely often, because we arrange the requirements in a well-ordered set of priorities R_{0,A} \prec R_{0,B} \prec R_{1,A} \prec R_{1,B} \prec \cdots, and ensure that every requirement can only be clobbered by requirements prior to it.
Thus, all requirements will be satisfied in the end.
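The "infinite pool of witnesses" can be realized by a computable partition of \N into disjoint infinite rows, one row per requirement. A minimal sketch; the closed form below is an assumption chosen to match the repeated-bisection pattern used in the proof:

```python
def r(i, j):
    """j-th element of the i-th witness pool, by repeated bisection:
    row i holds the numbers congruent to 2**i - 1 modulo 2**(i + 1).
    Rows are pairwise disjoint, each row is infinite, and together
    they cover all of N, so every requirement owns an infinite pool."""
    return (2 ** (i + 1)) * j + (2 ** i - 1)

assert [r(0, j) for j in range(4)] == [0, 2, 4, 6]
assert [r(1, j) for j in range(4)] == [1, 5, 9, 13]
assert [r(2, j) for j in range(4)] == [3, 11, 19, 27]
```

Since r is computable, "assign a fresh witness from pool i" is a computable operation, which is what the stages below rely on.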
Proof
We computably partition \N into infinitely many infinite sets. For example, we can define them by repeated bisection: \{0, 2, 4, 6, \dots\} \cup \{1, 5, 9, 13, \dots\} \cup \{3, 11, 19, 27, \dots\} \cup \cdots. We enumerate them by r_{ij} for i, j \in \N. The idea is that r_{i0}, r_{i1}, \dots is a list of candidate witnesses for R_{i,A} and R_{i,B}.
• At stage 0, we initialize the algorithm by outputting A_0 := \emptyset, B_0 := \emptyset, and assigning witnesses to all requirements:
  • for R_{i,A}, assign witness w_{i,A} \leftarrow r_{i0};
  • for R_{i,B}, assign witness w_{i,B} \leftarrow r_{i0}.
  • Note that, even though we are performing an infinite number of assignments, this can be done, because the entire assignment is computable. Later, we will update this infinite list of assignments, but we will ensure that every update is still computable.
• At stage 2n:
  • We check the health of the requirements R_{i,A} for all i \in \{0, \dots, 2n-1\}, by simulating TM_i^{B_{2n-1}}(w_{i,A}) for up to 2n-1 steps.
  • If every TM_i^{B_{2n-1}}(w_{i,A}) \neq 0 at step 2n-1, either because it has not halted yet, or because it halted with a different value, then we have no positive evidence that we need to change anything. So we change nothing: keep all witnesses the same, and output A_{2n} := A_{2n-1}, B_{2n} := B_{2n-1}.
  • Otherwise, find the lowest i such that TM_i^{B_{2n-1}}(w_{i,A}) = 0. The requirement R_{i,A} is the highest-priority injured requirement.
  • Output A_{2n} := A_{2n-1} \cup \{w_{i,A}\} to heal it, and simultaneously update all w_{i,B}, w_{i+1,B}, \dots by w_{j,B} \leftarrow \min\{ r_{jk} : k \in \N, r_{jk} > \max(w_{j,B}, \phi_i^{B_{2n-1}}(w_{i,A}))\}, where \phi_i^{B_{2n-1}}(w_{i,A}) denotes the use of the computation. This ensures that, if we ever need to heal one of the requirements R_{i,B}, R_{i+1,B}, \dots by adding a witness into B, the new witness lies beyond every oracle query of the computation, so it will not thereby injure R_{i,A}. Nothing else needs to change, so we output B_{2n} := B_{2n-1} and keep all other witnesses the same.
  • In order to compute the new assignment, we simply apply all updates in sequence, as if applying a list of software patches, one patch after another. Since each update is computable, the whole witness assignment at this step is still computable.
• At stage 2n+1, the construction goes almost exactly the same way, with the roles of A and B swapped.
  • The only difference is that, if R_{i,B} is injured, then we must simultaneously update all w_{i\color{red}+1\color{black},A}, w_{i+2,A}, \dots. This avoids an infinite mutual injury loop between R_{i,A} and R_{i,B}. The construction ensures that R_{i,A} can only be injured by attempts to heal R_{0,B}, R_{1,B}, \dots, R_{i-1,B}, and R_{i,B} only by attempts to heal R_{0,A}, R_{1,A}, \dots, R_{i,A}.
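The even/odd stage loop above can be sketched as a toy simulation. Everything here is a simplifying assumption: a "machine" is a total Python function (oracle, x) -> {0, 1} rather than a step-bounded oracle Turing machine, only two toy machines are used, and witness bumping just advances to the next pool element instead of skipping past the use \phi of the computation. The point is the bookkeeping — healing, and the asymmetric witness updates — not the full theorem:

```python
def r(i, j):
    # disjoint infinite witness pools, one row per index i (repeated bisection)
    return (2 ** (i + 1)) * j + (2 ** i - 1)

# Toy stand-ins for oracle machines: total functions (oracle, x) -> {0, 1}.
machines = [
    lambda oracle, x: 0,                       # always claims "x not in the set"
    lambda oracle, x: 1 if x in oracle else 0, # parrots its oracle
]
N = len(machines)

A, B = set(), set()
wA = [r(i, 0) for i in range(N)]   # current witness for each R_{i,A}
wB = [r(i, 0) for i in range(N)]   # current witness for each R_{i,B}
nxtA = [1] * N                     # index of the next spare witness in pool i
nxtB = [1] * N

for stage in range(20):
    if stage % 2 == 0:                      # even stage: try to heal some R_{i,A}
        for i in range(N):
            # skip already-healed requirements; the real construction instead
            # protects them via the use of the computation
            if wA[i] not in A and machines[i](frozenset(B), wA[i]) == 0:
                A.add(wA[i])                # diagonalize: TM_i^B is now wrong at wA[i]
                for j in range(i, N):       # protect R_{i,A}: move B-witnesses up
                    wB[j] = r(j, nxtB[j]); nxtB[j] += 1
                break
    else:                                   # odd stage: symmetric for R_{i,B}
        for i in range(N):
            if wB[i] not in B and machines[i](frozenset(A), wB[i]) == 0:
                B.add(wB[i])
                for j in range(i + 1, N):   # note the i+1, as in the text
                    wA[j] = r(j, nxtA[j]); nxtA[j] += 1
                break
```

In this toy run the sets stabilize after a few stages, and each of the four requirements ends up diagonalized: every machine, given the opposite set as oracle, answers 0 at a witness that the construction has placed into the set.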
Variants
The same idea allows us to construct variants or stronger versions of the Friedberg–Muchnik theorem.

We say that A \subset \N is autoreducible by a Turing machine TM_i iff for all k, either
• TM_i^{A \setminus \{k\}}(k) = 0 and k \not\in A, or
• TM_i^{A \setminus \{k\}}(k) = 1 and k \in A.
In other words, each question "k \in A?" can be settled by TM_i using an oracle that answers every question "j \in A?" as long as j \neq k. We say A is autoreducible iff it is autoreducible by some TM_i, and not autoreducible otherwise.

Then there exists an enumerable but not autoreducible set A. We construct it thus:
• We ensure non-autoreducibility with an infinite list of requirements. Let R_i be the requirement that TM_i does not autoreduce A.
• We ensure every requirement can be injured only finitely often, by well-ordering the requirements R_0 \prec R_1 \prec \cdots.
• We ensure every requirement can be healed infinitely often, by assigning an infinite list of potential witnesses r_{i0}, r_{i1}, \dots to each requirement R_i. We also ensure that these lists are mutually disjoint.
The previous construction then works in the same way, because it is impossible for any requirement to cause self-injury.

Given an infinite and enumerable set C, we can construct C \supset A_0 \supset A_1 \supset \cdots, such that A_0, A_1, \dots are enumerable, mutually irreducible, and every inclusion is
sparse. We construct it by generalizing the previous construction:
• We ensure mutual irreducibility with an infinite list of requirements. Let R_{ijk} be the requirement that TM_i^{A_j} does not decide A_k. Here, j \neq k.
• We ensure every requirement can be injured only finitely often, by well-ordering the requirements. Any computable well-ordering will work. For example, we can use the Cantor zig-zag function f: \N^3 \to \N, then define R_{ijk} \prec R_{i'j'k'} \iff f(i,j,k) < f(i',j',k').
• We ensure every requirement can be healed infinitely often by assigning, for each i, j, k such that j \neq k, an infinite list of potential witnesses r_{ijk0}, r_{ijk1}, \dots that may eventually be inserted into A_k to witness R_{ijk}. For each k, these lists are mutually disjoint.
• We ensure C \supset A_0 \supset A_1 \supset \cdots with sparse inclusions by an iterative process of sparsification and partition:
  • Enumerate an ascending list of elements c_0 < c_1 < c_2 < \cdots of C. This is possible since C is infinite and enumerable.
  • At stage 0, sparsify the list to c_0, c_2, c_4, c_6, \dots. Call this list L_0. Then partition L_0 into a doubly-infinite family of lists \{r_{ij00}, r_{ij01}, \dots\}, one for each pair (i, j).
  • At stage 1, sparsify L_0 into L_1. Then partition L_1 into a doubly-infinite family of lists \{r_{ij10}, r_{ij11}, \dots\}.
  • And so on.
Combining the above two constructions, we can make all A_0, A_1, \dots non-autoreducible as well.

== Friedberg–Muchnik theorem below C ==