== Chain rule of partial derivatives of composite functions ==

Fundamental to automatic differentiation is the decomposition of differentials provided by the chain rule of partial derivatives of composite functions. For the simple composition
\begin{align}
y &= f(g(h(x))) = f(g(h(w_0))) = f(g(w_1)) = f(w_2) = w_3 \\
w_0 &= x \\
w_1 &= h(w_0) \\
w_2 &= g(w_1) \\
w_3 &= f(w_2) = y
\end{align}
the chain rule gives
\frac{\partial y}{\partial x} = \frac{\partial y}{\partial w_2} \frac{\partial w_2}{\partial w_1} \frac{\partial w_1}{\partial x} = \frac{\partial f(w_2)}{\partial w_2} \frac{\partial g(w_1)}{\partial w_1} \frac{\partial h(w_0)}{\partial x}
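For example, with the illustrative choice h(x) = x^2, g(w_1) = \sin w_1, and f(w_2) = \exp w_2 (so that y = \exp(\sin x^2)), the chain rule evaluates to
\frac{\partial y}{\partial x} = \exp(w_2) \cdot \cos(w_1) \cdot 2x = \exp(\sin x^2) \, \cos(x^2) \, 2x.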
== Two types of automatic differentiation ==

Usually, two distinct modes of automatic differentiation are presented.

• forward accumulation (also called bottom-up, forward mode, or tangent mode)
• reverse accumulation (also called top-down, reverse mode, or adjoint mode)

Forward accumulation specifies that one traverses the chain rule from inside to outside (that is, first compute \frac{\partial w_1}{\partial x} and then \frac{\partial w_2}{\partial w_1} and lastly \frac{\partial y}{\partial w_2}), while reverse accumulation traverses from outside to inside (first compute \frac{\partial y}{\partial w_2} and then \frac{\partial w_2}{\partial w_1} and lastly \frac{\partial w_1}{\partial x}). More succinctly,

• forward accumulation computes the recursive relation \frac{\partial w_i}{\partial x} = \frac{\partial w_i}{\partial w_{i-1}} \frac{\partial w_{i-1}}{\partial x} \quad\text{with } w_3 = y,
• reverse accumulation computes the recursive relation \frac{\partial y}{\partial w_i} = \frac{\partial y}{\partial w_{i+1}} \frac{\partial w_{i+1}}{\partial w_{i}} \quad\text{with } w_0 = x.
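Unrolled for the three-step composition above, the two recursions visit the same three factors in opposite order:
\begin{align}
\text{forward:}\quad & \frac{\partial w_1}{\partial x}, \quad \frac{\partial w_2}{\partial x} = \frac{\partial w_2}{\partial w_1}\frac{\partial w_1}{\partial x}, \quad \frac{\partial y}{\partial x} = \frac{\partial y}{\partial w_2}\frac{\partial w_2}{\partial x} \\
\text{reverse:}\quad & \frac{\partial y}{\partial w_2}, \quad \frac{\partial y}{\partial w_1} = \frac{\partial y}{\partial w_2}\frac{\partial w_2}{\partial w_1}, \quad \frac{\partial y}{\partial x} = \frac{\partial y}{\partial w_1}\frac{\partial w_1}{\partial x}
\end{align}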
The value of the partial derivative, called the seed, is propagated forward or backward and is initially \frac{\partial x}{\partial x}=1 or \frac{\partial y}{\partial y}=1. Forward accumulation evaluates the function and calculates the derivative with respect to one independent variable in one pass. For each independent variable x_1,x_2,\dots,x_n a separate pass is therefore necessary in which the derivative with respect to that independent variable is set to one (\frac{\partial x_1}{\partial x_1}=1) and of all others to zero (\frac{\partial x_2}{\partial x_1}= \dots = \frac{\partial x_n}{\partial x_1} = 0). In contrast, reverse accumulation requires the evaluated partial functions for the partial derivatives. Reverse accumulation therefore evaluates the function first and calculates the derivatives with respect to all independent variables in an additional pass.

Which of these two types should be used depends on the sweep count. The computational complexity of one sweep is proportional to the complexity of the original code.

• Forward accumulation is more efficient than reverse accumulation for functions f : \mathbb{R}^n \to \mathbb{R}^m with n \ll m, as only n sweeps are necessary, compared to m sweeps for reverse accumulation.
• Reverse accumulation is more efficient than forward accumulation for functions f : \mathbb{R}^n \to \mathbb{R}^m with n \gg m, as only m sweeps are necessary, compared to n sweeps for forward accumulation.
Backpropagation of errors in multilayer perceptrons, a technique used in machine learning, is a special case of reverse accumulation. Seppo Linnainmaa published reverse accumulation in 1976.
== Forward accumulation ==

In forward accumulation AD, one first fixes the independent variable with respect to which differentiation is performed and computes the derivative of each sub-expression recursively. In a pen-and-paper calculation, this involves repeatedly substituting the derivative of the inner functions in the chain rule:
\begin{align}
\frac{\partial y}{\partial x}
&= \frac{\partial y}{\partial w_{n-1}} \frac{\partial w_{n-1}}{\partial x} \\[6pt]
&= \frac{\partial y}{\partial w_{n-1}} \left(\frac{\partial w_{n-1}}{\partial w_{n-2}} \frac{\partial w_{n-2}}{\partial x}\right) \\[6pt]
&= \frac{\partial y}{\partial w_{n-1}} \left(\frac{\partial w_{n-1}}{\partial w_{n-2}} \left(\frac{\partial w_{n-2}}{\partial w_{n-3}} \frac{\partial w_{n-3}}{\partial x}\right)\right) \\[6pt]
&= \cdots
\end{align}
This can be generalized to multiple variables as a matrix product of Jacobians.

Compared to reverse accumulation, forward accumulation is natural and easy to implement as the flow of derivative information coincides with the order of evaluation. Each variable w_i is augmented with its derivative \dot w_i (stored as a numerical value, not a symbolic expression),
\dot w_i = \frac{\partial w_i}{\partial x}
as denoted by the dot. The derivatives are then computed in sync with the evaluation steps and combined with other derivatives via the chain rule. Using the chain rule, if w_i has predecessors in the computational graph:
\dot w_i = \sum_{j \in \{\text{predecessors of } i\}} \frac{\partial w_i}{\partial w_j} \dot w_j

As an example, consider the function:
\begin{align}
y &= f(x_1, x_2) \\
&= x_1 x_2 + \sin x_1 \\
&= w_1 w_2 + \sin w_1 \\
&= w_3 + w_4 \\
&= w_5
\end{align}
For clarity, the individual sub-expressions have been labeled with the variables w_i. The choice of the independent variable with respect to which differentiation is performed affects the seed values \dot w_1 and \dot w_2. Given interest in the derivative of this function with respect to x_1, the seed values should be set to:
\begin{align}
\dot w_1 = \frac{\partial w_1}{\partial x_1} = \frac{\partial x_1}{\partial x_1} = 1 \\
\dot w_2 = \frac{\partial w_2}{\partial x_1} = \frac{\partial x_2}{\partial x_1} = 0
\end{align}
Figure 2 shows a pictorial depiction of this process as a computational graph. With the seed values set, the values propagate using the chain rule as shown below.
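Applying the predecessor recurrence above to each sub-expression gives:
\begin{align}
w_1 &= x_1 & \dot w_1 &= 1 \text{ (seed)} \\
w_2 &= x_2 & \dot w_2 &= 0 \text{ (seed)} \\
w_3 &= w_1 w_2 & \dot w_3 &= w_2 \dot w_1 + w_1 \dot w_2 \\
w_4 &= \sin w_1 & \dot w_4 &= \cos w_1 \cdot \dot w_1 \\
w_5 &= w_3 + w_4 & \dot w_5 &= \dot w_3 + \dot w_4
\end{align}
so \frac{\partial y}{\partial x_1} = \dot w_5 = w_2 + \cos w_1 = x_2 + \cos x_1.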
To compute the gradient of this example function, which requires not only \frac{\partial y}{\partial x_1} but also \frac{\partial y}{\partial x_2}, an additional sweep is performed over the computational graph using the seed values \dot w_1 = 0; \dot w_2 = 1.
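With these seeds, the same recurrence yields
\begin{align}
\dot w_3 &= w_2 \dot w_1 + w_1 \dot w_2 = w_1 \\
\dot w_4 &= \cos w_1 \cdot \dot w_1 = 0 \\
\dot w_5 &= \dot w_3 + \dot w_4 = w_1
\end{align}
so \frac{\partial y}{\partial x_2} = \dot w_5 = x_1.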
=== Implementation ===

==== Pseudocode ====

Forward accumulation calculates the function and the derivative (but only for one independent variable each) in one pass. The associated method call expects the expression Z to be derived with regard to a variable V. The method returns a pair of the evaluated function and its derivative. The method traverses the expression tree recursively until a variable is reached. If the derivative with respect to this variable is requested, its derivative is 1, and 0 otherwise. Then the partial function as well as the partial derivative are evaluated.

tuple evaluateAndDerive(Expression Z, Variable V) {
   if isVariable(Z)
      if (Z = V) return {valueOf(Z), 1};
      else return {valueOf(Z), 0};
   else if (Z = A + B)
      {a, a'} = evaluateAndDerive(A, V);
      {b, b'} = evaluateAndDerive(B, V);
      return {a + b, a' + b'};
   else if (Z = A - B)
      {a, a'} = evaluateAndDerive(A, V);
      {b, b'} = evaluateAndDerive(B, V);
      return {a - b, a' - b'};
   else if (Z = A * B)
      {a, a'} = evaluateAndDerive(A, V);
      {b, b'} = evaluateAndDerive(B, V);
      return {a * b, b * a' + a * b'};
}
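The example function above also uses the sine. Unary functions could be handled with additional cases in the same style; a sketch for sine, following the pattern above:

   else if (Z = sin(A))
      {a, a'} = evaluateAndDerive(A, V);
      return {sin(a), cos(a) * a'};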
==== C++ ====

#include <iostream>

struct ValueAndPartial { float value, partial; };

struct Variable;

struct Expression {
   virtual ValueAndPartial evaluateAndDerive(Variable &variable) = 0;
};

struct Variable: public Expression {
   float value;
   Variable(float value): value(value) {}
   ValueAndPartial evaluateAndDerive(Variable &variable) {
      float partial = (this == &variable) ? 1.0f : 0.0f;
      return {value, partial};
   }
};

struct Plus: public Expression {
   Expression &a, &b;
   Plus(Expression &a, Expression &b): a(a), b(b) {}
   ValueAndPartial evaluateAndDerive(Variable &variable) {
      auto [valueA, partialA] = a.evaluateAndDerive(variable);
      auto [valueB, partialB] = b.evaluateAndDerive(variable);
      return {valueA + valueB, partialA + partialB};
   }
};

struct Multiply: public Expression {
   Expression &a, &b;
   Multiply(Expression &a, Expression &b): a(a), b(b) {}
   ValueAndPartial evaluateAndDerive(Variable &variable) {
      auto [valueA, partialA] = a.evaluateAndDerive(variable);
      auto [valueB, partialB] = b.evaluateAndDerive(variable);
      return {valueA * valueB, valueB * partialA + valueA * partialB};
   }
};

int main () {
   // Example: Finding the partials of z = x * (x + y) + y * y at (x, y) = (2, 3)
   Variable x(2), y(3);
   Plus p1(x, y);
   Multiply m1(x, p1);
   Multiply m2(y, y);
   Plus z(m1, m2);
   float xPartial = z.evaluateAndDerive(x).partial;
   float yPartial = z.evaluateAndDerive(y).partial;
   std::cout << "∂z/∂x = " << xPartial << ", ∂z/∂y = " << yPartial << std::endl;
   // Output: ∂z/∂x = 7, ∂z/∂y = 8
   return 0;
}
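Differentiating the earlier example y = x_1 x_2 + \sin x_1 with this interface would additionally require a sine node. A minimal sketch, assuming the Expression, ValueAndPartial, and Variable definitions above plus #include <cmath>:

struct Sin: public Expression {
   Expression &a;
   Sin(Expression &a): a(a) {}
   ValueAndPartial evaluateAndDerive(Variable &variable) {
      auto [valueA, partialA] = a.evaluateAndDerive(variable);
      // Chain rule: d(sin a)/dx = cos(a) * da/dx
      return {std::sin(valueA), std::cos(valueA) * partialA};
   }
};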
== Reverse accumulation ==

In reverse accumulation AD, the dependent variable to be differentiated is fixed and the derivative is computed with respect to each sub-expression recursively. In a pen-and-paper calculation, the derivative of the outer functions is repeatedly substituted in the chain rule:
\begin{align}
\frac{\partial y}{\partial x}
&= \frac{\partial y}{\partial w_1} \frac{\partial w_1}{\partial x} \\[6pt]
&= \left(\frac{\partial y}{\partial w_2} \frac{\partial w_2}{\partial w_1}\right) \frac{\partial w_1}{\partial x} \\[6pt]
&= \left(\left(\frac{\partial y}{\partial w_3} \frac{\partial w_3}{\partial w_2}\right) \frac{\partial w_2}{\partial w_1}\right) \frac{\partial w_1}{\partial x} \\[6pt]
&= \cdots
\end{align}
In reverse accumulation, the quantity of interest is the adjoint, denoted with a bar \bar w_i; it is the derivative of a chosen dependent variable with respect to a subexpression w_i:
\bar w_i = \frac{\partial y}{\partial w_i}
Using the chain rule, if w_i has successors in the computational graph:
\bar w_i = \sum_{j \in \{\text{successors of } i\}} \bar w_j \frac{\partial w_j}{\partial w_i}

Reverse accumulation traverses the chain rule from outside to inside, or in the case of the computational graph in Figure 3, from top to bottom. The example function is scalar-valued, and thus there is only one seed for the derivative computation, and only one sweep of the computational graph is needed to calculate the (two-component) gradient. This is only half the work when compared to forward accumulation, but reverse accumulation requires the storage of the intermediate variables as well as the instructions that produced them in a data structure known as a "tape" or a Wengert list (however, Wengert published forward accumulation, not reverse accumulation).

=== Implementation ===

==== Pseudocode ====

In the pseudocode below, the expression has already been evaluated, so that valueOf(A) and valueOf(B) are available; the derive method propagates the seed backwards through the expression tree, accumulating each variable's partial derivative:

void derive(Expression Z, float seed) {
   if isVariable(Z)
      partialDerivativeOf(Z) += seed;
   else if (Z = A + B)
      derive(A, seed);
      derive(B, seed);
   else if (Z = A - B)
      derive(A, seed);
      derive(B, -seed);
   else if (Z = A * B)
      derive(A, valueOf(B) * seed);
      derive(B, valueOf(A) * seed);
}
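Applied to the example function y = x_1 x_2 + \sin x_1 from the forward-accumulation section, the successor rule with seed \bar w_5 = 1 gives:
\begin{align}
\bar w_5 &= 1 \text{ (seed)} \\
\bar w_4 &= \bar w_5 \frac{\partial w_5}{\partial w_4} = \bar w_5 \\
\bar w_3 &= \bar w_5 \frac{\partial w_5}{\partial w_3} = \bar w_5 \\
\bar w_2 &= \bar w_3 \frac{\partial w_3}{\partial w_2} = \bar w_3 w_1 \\
\bar w_1 &= \bar w_3 \frac{\partial w_3}{\partial w_1} + \bar w_4 \frac{\partial w_4}{\partial w_1} = \bar w_3 w_2 + \bar w_4 \cos w_1
\end{align}
so the single sweep yields both components of the gradient, \frac{\partial y}{\partial x_1} = \bar w_1 = x_2 + \cos x_1 and \frac{\partial y}{\partial x_2} = \bar w_2 = x_1.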
==== C++ ====

#include <iostream>

struct Expression {
   float value;
   virtual void evaluate() = 0;
   virtual void derive(float seed) = 0;
};

struct Variable: public Expression {
   float partial;
   Variable(float value) {
      this->value = value;
      partial = 0.0f;
   }
   void evaluate() {}
   void derive(float seed) {
      partial += seed;
   }
};

struct Plus: public Expression {
   Expression &a, &b;
   Plus(Expression &a, Expression &b): a(a), b(b) {}
   void evaluate() {
      a.evaluate();
      b.evaluate();
      value = a.value + b.value;
   }
   void derive(float seed) {
      a.derive(seed);
      b.derive(seed);
   }
};

struct Multiply: public Expression {
   Expression &a, &b;
   Multiply(Expression &a, Expression &b): a(a), b(b) {}
   void evaluate() {
      a.evaluate();
      b.evaluate();
      value = a.value * b.value;
   }
   void derive(float seed) {
      a.derive(b.value * seed);
      b.derive(a.value * seed);
   }
};

int main () {
   // Example: Finding the partials of z = x * (x + y) + y * y at (x, y) = (2, 3)
   Variable x(2), y(3);
   Plus p1(x, y);
   Multiply m1(x, p1);
   Multiply m2(y, y);
   Plus z(m1, m2);
   z.evaluate();
   std::cout << "z = " << z.value << std::endl;
   // Output: z = 19
   z.derive(1);
   std::cout << "∂z/∂x = " << x.partial << ", ∂z/∂y = " << y.partial << std::endl;
   // Output: ∂z/∂x = 7, ∂z/∂y = 8
   return 0;
}
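Note the two-phase structure: evaluate must run before derive, since the reverse pass reads the stored values (a.value, b.value) of the intermediate nodes, and the top-level seed passed to z.derive is \frac{\partial z}{\partial z} = 1. Because derive accumulates into partial with +=, computing derivatives at a different point would additionally require resetting each variable's partial to zero.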
== Beyond forward and reverse accumulation ==

Forward and reverse accumulation are just two (extreme) ways of traversing the chain rule. The problem of computing a full Jacobian of f : \mathbb{R}^n \to \mathbb{R}^m with a minimum number of arithmetic operations is known as the optimal Jacobian accumulation (OJA) problem, which is NP-complete. Central to this proof is the idea that algebraic dependencies may exist between the local partials that label the edges of the graph. In particular, two or more edge labels may be recognized as equal. The complexity of the problem is still open if it is assumed that all edge labels are unique and algebraically independent.

== Automatic differentiation using dual numbers ==