==Self-application and recursion==

SII is an expression that takes an argument and applies that argument to itself:

:SIIα = Iα(Iα) = αα

This is also known as the U combinator, Ux = xx. One interesting property of it is that its self-application is irreducible:

:SII(SII) = I(SII)(I(SII)) = SII(I(SII)) = SII(SII)

Or, using the equation Ux = xx as its definition directly, we immediately get U U = U U. It also allows one to write a function that applies one thing to the self-application of another thing:
:(S(Kα)(SII))β = Kαβ(SIIβ) = α(Iβ(Iβ)) = α(ββ)

or it can be seen as defining yet another combinator directly, Hxy = x(yy). This function can be used to achieve recursion.
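Reductions like these can be machine-checked with a small interpreter. The following is a sketch of a normal-order SKI reducer in Python; the tuple encoding and all helper names are assumptions of this example, not part of the calculus:

```python
# Terms: 'S', 'K', 'I' or a lowercase variable name; application is a pair (f, x).
def step(t):
    """One leftmost (normal-order) reduction step, or None if t is normal."""
    if not isinstance(t, tuple):
        return None
    f, x = t
    if f == 'I':                                   # Ix -> x
        return x
    if isinstance(f, tuple) and f[0] == 'K':       # Kyx -> y
        return f[1]
    if (isinstance(f, tuple) and isinstance(f[0], tuple)
            and f[0][0] == 'S'):                   # Sabx -> ax(bx)
        a, b = f[0][1], f[1]
        return ((a, x), (b, x))
    r = step(f)                                    # otherwise reduce a subterm
    if r is not None:
        return (r, x)
    r = step(x)
    return (f, r) if r is not None else None

def normalize(t, limit=100):
    """Reduce until normal form, or give up after `limit` steps."""
    for _ in range(limit):
        r = step(t)
        if r is None:
            break
        t = r
    return t

def show(t):
    if not isinstance(t, tuple):
        return t
    f, x = t
    s = show(x)
    return show(f) + ('(' + s + ')' if isinstance(x, tuple) else s)

SII = (('S', 'I'), 'I')
print(show(normalize((SII, 'a'))))     # aa
H_a = (('S', ('K', 'a')), SII)         # S(Ka)(SII)
print(show(normalize((H_a, 'b'))))     # a(bb)
```

The step limit matters because, as shown above, terms such as SII(SII) never reach a normal form.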
If β is the function that applies α to the self-application of something else,

:β = Hα = S(Kα)(SII)

then the self-application of this β is the fixed point of that α:

:SIIβ = ββ = α(ββ) = α(α(ββ)) = ...

Or, directly again from the derived definition, Hα(Hα) = α(Hα(Hα)).

If α expresses a "computational step" computed by αρν for some ρ and ν, which assumes that ρν′ expresses "the rest of the computation" (for some ν′ that α will "compute" from ν), then its fixed point ββ expresses the whole recursive computation, since using the same function ββ for the "rest of the computation" call (with ββν = α(ββ)ν) is the very definition of recursion: ρν′ = ββν′ = α(ββ)ν′ = ... . The term α must employ some kind of conditional to stop at some "base case" and not make the recursive call then, to avoid divergence.

This can be formalized, with
:β = Hα = S(Kα)(SII) = S(KS)Kα(SII) = S(S(KS)K)(K(SII))α

as

:Yα = SIIβ = SII(Hα) = S(K(SII))Hα = S(K(SII))(S(S(KS)K)(K(SII)))α

which gives us one possible encoding of the Y combinator. A shorter variation replaces its two leading subterms with just SSI, since Hα(Hα) = SHHα = SSIHα. This becomes much shorter with the additional use of the B, C, W combinators, as the equivalent

:Yα = S(KU)(SB(KU))α = U(BαU) = BU(CBU)α = SSI(CBU)α

And with a pseudo-Haskell syntax it becomes the exceptionally short Y = U . (. U).

Following this approach, other fixed-point combinator definitions are possible. Thus,
• This Y, by Haskell Curry:
:Hgx = g(xx) ;  Yg = Hg(Hg) ;  Y = S(KU)(SB(KU)) = SS(S(S(KS)K))(K(SII))
• Turing's Θ:
:Hhg = g(hhg) ;  Θg = HHg ;  Θ = U(B(SI)U) = SII(S(K(SI))(SII))
• Y′ (with SK-encoding by John Tromp):
:Hgh = g(hgh) ;  Y′g = HgH ;  Y′ = WC(SB(C(WC))) = SSK(S(K(SS(S(SSK))))K)
• Θ4 by R. Statman:
:Hgyz = g(yyz) ;  Θ4g = Hg(Hg)(Hg) ;  Θ4 = B(WW)(BW(BBB))
• or, in general,
:Hsomething = g(hsomething) ;  YHg = H_____ H__ g
:(where anything goes instead of "_")
i.e. any other intermediary H combinator's definition, with its corresponding YH definition to jump-start it correctly. In particular, one construction, due to Jan Klop, is
:Labcdefghijklmnopqstuvwxyzr = r(thisisafixedpointcombinator) ;  YK = LLLLLLLLLLLLLLLLLLLLLLLLLL
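The first two definitions above can be tested numerically. Python evaluates eagerly, so the self-application inside each H is η-expanded with a wrapper lambda (otherwise Hg(Hg) would loop before g is ever called); the Fibonacci step function is an assumed example, not from the text:

```python
# Curry's Y:  Hgx = g(xx);  Yg = Hg(Hg).  The self-application x(x)
# is wrapped as "lambda v: ..." so that strict Python delays it.
H_curry = lambda g: lambda x: g(lambda v: x(x)(v))
Y = lambda g: H_curry(g)(H_curry(g))

# Turing's Theta:  Hhg = g(hhg);  Theta g = HHg, eta-expanded the same way.
H_turing = lambda h: lambda g: g(lambda v: h(h)(g)(v))
Theta = lambda g: H_turing(H_turing)(g)

# An assumed example step function: Fibonacci, with base cases to stop.
step = lambda rec: lambda n: n if n < 2 else rec(n - 1) + rec(n - 2)

print(Y(step)(10), Theta(step)(10))  # 55 55
```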
In a strict programming language the Y combinator will expand until stack overflow, or never halt in the case of tail call optimization. The Z combinator will work in strict languages (also called eager languages), where the applicative evaluation order is in effect. It differs from the Y combinator in that it uses Qx = B(Ux)I = S(K(Ux))I = S(BS(BKU))(KI)x, the η-expanded form of Y's plain Ux. Whereas Y = BU(CBU), we have Z = BU(CBQ) = S(KU)(SB(KQ)):

:\begin{align}
Z & = \lambda f. (\lambda x. f (\lambda v. x x v))(\lambda x. f (\lambda v. x x v)) \\
& = \lambda f. U (\lambda x. f (\lambda v. U x v)) \\
& = S(\lambda f. U)(\lambda f. \lambda x. f (\lambda v. U x v)) \\
& = S(KU)(\lambda f. S(\lambda x. f)(\lambda x. \lambda v. U x v)) \\
& = S(KU)(\lambda f. S(K f)(\lambda x. \lambda v. U x v)) \\
& = S(KU)(S(\lambda f. S(K f))(\lambda f. \lambda x. \lambda v. U x v)) \\
& = S(KU)(S(S(\lambda f. S)(\lambda f. K f))(K(\lambda x. \lambda v. U x v))) \\
& = S(KU)(S(S(KS)K)(K(\lambda x. \lambda v. U x v))) \\
& = S(KU)(S(S(KS)K)(K(\lambda x. S({\color{Red}\lambda v. U x})(\lambda v. v)))) \\
& = S(KU)(S(S(KS)K)(K(\lambda x. S(K(U x))I))) \\
& = S(KU)(S(S(KS)K)(K(S(\lambda x. S(K(U x)))(\lambda x. I)))) \\
& = S(KU)(S(S(KS)K)(K(S(\lambda x. S(K(U x)))(KI)))) \\
& = S(KU)(S(S(KS)K)(K(S(S(\lambda x. S)(\lambda x. K(U x)))(KI)))) \\
& = S(KU)(S(S(KS)K)(K(S(S(KS)(S(\lambda x. K)(\lambda x. U x)))(KI)))) \\
& = S(KU)(S(S(KS)K)(K(S(S(KS)(S(KK)U))(KI))))
\end{align}
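The λ-term at the top of this derivation transcribes directly into Python, a strict language, where Z terminates while Y would not; the factorial step function is an assumed example:

```python
# Z = λf.(λx. f(λv. x x v))(λx. f(λv. x x v)), written with Python lambdas.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# An assumed example step function: factorial, with a base case to stop.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # 120
```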
The reversal expression S(K(SI))K reverses the two terms following it:

:S(K(SI))Kαβ →
:K(SI)α(Kα)β →
:SI(Kα)β →
:Iβ(Kαβ) →
:Iβα →
:βα

It is thus equivalent to CI. And in general, S(K(Sf))K is equivalent to Cf, for any f.
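This reversal can be tried directly with the combinators as Python functions (a sketch; the sample string and function are assumed for illustration):

```python
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x
I = lambda x: x

rev = S(K(S(I)))(K)     # S(K(SI))K

# rev a b reduces to b a: the two terms following it are swapped.
print(rev('ski')(lambda s: s.upper()))  # SKI
```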
==Boolean logic==

SKI combinator calculus can also implement Boolean logic in the form of an if-then-else structure. An if-then-else structure consists of a Boolean expression that is either true (T) or false (F) and two arguments, such that:

:Txy = x

and

:Fxy = y

The key is in defining the two Boolean expressions. The first works just like one of our basic combinators:

:T = K
:Kxy = x

The second is also fairly simple:

:F = SK
:SKxy = Ky(xy) = y

Once true and false are defined, all Boolean logic can be implemented in terms of Booleans acting as if-then-else structures.
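These definitions behave as described when rendered as Python lambdas (a sketch): T = K selects its first argument directly, while F = SK must be applied to callable arguments under Python's strict evaluation, because SKxy computes x(y) before discarding it.

```python
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x
I = lambda x: x

T = K       # Txy = x
F = S(K)    # Fxy = SKxy = Ky(xy) = y

print(T('yes')('no'))   # yes
# Under strict evaluation F's first argument must itself be callable,
# since SKxy evaluates x(y); combinators as arguments are safe:
print(F(I)(K) is K)     # True
```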
Boolean NOT (which returns the opposite of a given Boolean) works the same as the if-then-else structure, with F and T as the second and third values:

:NOT b = bFT = S(SI(KF))(KT)b

If this is put in an if-then-else structure, it has the expected result:

:NOT(T) = (T)FT = F
:NOT(F) = (F)FT = T
Boolean OR (which returns T if either of its two Boolean argument values is T) works the same as an if-then-else structure with T as the second value:

:OR ab = aTb = SI(KT)ab

If this is put in an if-then-else structure, it has the expected result:

:OR(T)(T) = (T)T(T) = T
:OR(T)(F) = (T)T(F) = T
:OR(F)(T) = (F)T(T) = T
:OR(F)(F) = (F)T(F) = F
Boolean AND (which returns T if both its Boolean argument values are T) works the same as an if-then-else structure with F as the third value:

:AND ab = abF = Sa(KF)b = SS(K(KF))ab

If this is put in an if-then-else structure, it has the expected result:

:AND(T)(T) = (T)(T)F = T
:AND(T)(F) = (T)(F)F = F
:AND(F)(T) = (F)(T)F = F
:AND(F)(F) = (F)(F)F = F

This proves that the SKI system can fully express Boolean logic.
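With the Booleans encoded as above, the three connectives can be checked as Python lambdas. Since T and F are themselves functions, strict evaluation causes no trouble when the operands are Booleans, and results can be compared by object identity (a sketch):

```python
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x
T, F = K, S(K)                        # T = K, F = SK

NOT = lambda b: b(F)(T)               # NOT b  = bFT
OR  = lambda a: lambda b: a(T)(b)     # OR ab  = aTb
AND = lambda a: lambda b: a(b)(F)     # AND ab = abF

print(NOT(T) is F, NOT(F) is T)        # True True
print(OR(T)(F) is T, OR(F)(F) is F)    # True True
print(AND(T)(T) is T, AND(F)(T) is F)  # True True
```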
SKI calculus is complete, so any other logical combinator can be expressed in it as well, e.g.:

:XOR ab = OR (AND a (NOT b)) (AND (NOT a) b)

so that

:XOR abxy = (a(bFT)F)T((aFT)bF)xy = a(byx)(bxy)

as well as

:NOT bxy = bFTxy = b(Fxy)(Txy) = byx
:OR abxy = aTbxy = a(Txy)(bxy) = ax(bxy)
:AND abxy = abFxy = a(bxy)(Fxy) = a(bxy)y

==Connection to intuitionistic logic==