==Piecewise-linear variants==
===Leaky ReLU===
Leaky ReLU (2014) allows a small positive gradient when the unit is inactive:
: f(x) = \begin{cases} x & x > 0, \\ \alpha x & x \le 0, \end{cases} \qquad f'(x) = \begin{cases} 1 & x > 0, \\ \alpha & x \le 0. \end{cases}
The same function can also be expressed without the piecewise notation as:
: f(x) = \frac{1+\alpha}{2} x+\frac{1-\alpha}{2} |x|
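The equivalence of the two forms can be checked numerically. The following is a minimal NumPy sketch (illustrative only, not taken from the cited work):
<syntaxhighlight lang="python">
import numpy as np

def leaky_relu(x, alpha=0.01):
    # Piecewise form: x for x > 0, alpha * x otherwise.
    return np.where(x > 0, x, alpha * x)

def leaky_relu_closed_form(x, alpha=0.01):
    # Equivalent non-piecewise form: (1 + alpha)/2 * x + (1 - alpha)/2 * |x|.
    return (1 + alpha) / 2 * x + (1 - alpha) / 2 * np.abs(x)

x = np.linspace(-5.0, 5.0, 11)
assert np.allclose(leaky_relu(x), leaky_relu_closed_form(x))
</syntaxhighlight>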
===Parametric ReLU===
Parametric ReLU (PReLU, 2016) takes this idea further by making \alpha a learnable parameter trained along with the other network parameters. Note that for \alpha \le 1, this is equivalent to
: f(x) = \max(x, \alpha x)
and thus has a relation to "maxout" networks.

===Concatenated ReLU===
Concatenated ReLU (CReLU, 2016) preserves both the positive and the negative phase of its input by concatenating two rectifications:
: f(x) = [\operatorname{ReLU}(x), \operatorname{ReLU}(-x)].
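A minimal NumPy sketch of both ideas (illustrative only; in practice \alpha would be a trained parameter of whatever framework is in use):
<syntaxhighlight lang="python">
import numpy as np

def prelu(x, alpha):
    # For alpha <= 1 this equals the piecewise definition: max(x, alpha * x).
    return np.maximum(x, alpha * x)

def crelu(x):
    # Concatenate the positive and negative phases along the last axis,
    # doubling the number of output channels.
    return np.concatenate([np.maximum(x, 0.0), np.maximum(-x, 0.0)], axis=-1)

x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
print(prelu(x, alpha=0.25))  # [-0.5 -0.125 0. 1. 3.]
print(crelu(x))              # positive part followed by rectified negative part
</syntaxhighlight>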
==Smooth variants==
===Softplus===
A smooth approximation to the rectifier is the analytic function
: f(x) = \ln(1 + e^x),\qquad f'(x) = \frac{e^{x}}{1 + e^{x}} = \frac{1}{1 + e^{-x}}
which is called the softplus (2000). For large negative x it is roughly \ln 1, so just above 0, while for large positive x it is roughly \ln(e^x), so just above x. This function can be approximated as:
: \ln\left(1 + e^x \right) \approx \begin{cases} \ln 2, & x=0,\\[6pt] \frac x {1-e^{-x/\ln 2}}, & x\neq 0 \end{cases}
By making the change of variables x = y\ln(2), this is equivalent to
: \log_2(1 + 2^y) \approx \begin{cases} 1,& y=0,\\[6pt] \frac{y}{1-e^{-y}}, & y\neq 0\end{cases}
A sharpness parameter k may be included:
: f(x) = \frac{\ln(1 + e^{kx})} k, \qquad f'(x) = \frac{e^{kx}}{1 + e^{kx}} = \frac{1}{1 + e^{-kx}}
The derivative of softplus is the
logistic function. This in turn can be viewed as a smooth approximation of the derivative of the rectifier, the
Heaviside step function. The multivariable generalization of single-variable softplus is the
LogSumExp with the first argument set to zero:
: \operatorname{LSE}_0^+(x_1, \dots, x_n) := \operatorname{LSE}(0, x_1, \dots, x_n) = \ln(1 + e^{x_1} + \cdots + e^{x_n})
The LogSumExp function is
: \operatorname{LSE}(x_1, \dots, x_n) = \ln(e^{x_1} + \cdots + e^{x_n})
and its gradient is the
softmax; the softmax with the first argument set to zero is the multivariable generalization of the logistic function. Both LogSumExp and softmax are used in machine learning.
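A minimal NumPy sketch of a numerically stable softplus and of its derivative, the logistic function (illustrative only):
<syntaxhighlight lang="python">
import numpy as np

def softplus(x, k=1.0):
    # ln(1 + e^(k x)) / k, rewritten as max(x, 0) + ln(1 + e^(-k |x|)) / k
    # so that e^(k x) is never evaluated for large positive x (no overflow).
    return np.maximum(x, 0.0) + np.log1p(np.exp(-k * np.abs(x))) / k

def softplus_grad(x, k=1.0):
    # The derivative is the logistic (sigmoid) function of k x.
    return 1.0 / (1.0 + np.exp(-k * x))

x = np.array([-50.0, -1.0, 0.0, 1.0, 50.0])
print(softplus(x))       # approaches 0 on the left and x on the right
print(softplus_grad(x))  # approaches 0 and 1, respectively
</syntaxhighlight>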
===ELU===
Exponential linear units (2015) smoothly allow negative values. This is an attempt to make the mean activations closer to zero, which speeds up learning. It has been shown that ELUs can obtain higher classification accuracy than ReLUs.
: f(x) = \begin{cases} x & x > 0, \\ \alpha \left(e^x - 1\right) & x \le 0 \end{cases} \qquad f'(x) = \begin{cases} 1 & x > 0, \\ \alpha e^x & x \le 0 \end{cases}
In these formulas, \alpha is a
hyperparameter to be tuned with the constraint \alpha \geq 0. Given the same interpretation of \alpha, ELU can be viewed as a smoothed version of a shifted ReLU (SReLU), which has the form f(x) = \max(- \alpha, x).
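A minimal NumPy sketch of ELU and of the shifted ReLU it can be seen as smoothing (illustrative only):
<syntaxhighlight lang="python">
import numpy as np

def elu(x, alpha=1.0):
    # x for x > 0, alpha * (e^x - 1) for x <= 0; continuous at 0.
    # Clamping the exponent avoids overflow warnings for large positive x,
    # whose branch is discarded by np.where anyway.
    return np.where(x > 0, x, alpha * (np.exp(np.minimum(x, 0.0)) - 1.0))

def shifted_relu(x, alpha=1.0):
    # SReLU: max(-alpha, x).
    return np.maximum(-alpha, x)

x = np.array([-5.0, -1.0, 0.0, 2.0])
print(elu(x))           # [-0.9933 -0.6321 0. 2.]
print(shifted_relu(x))  # [-1. -1. 0. 2.]
</syntaxhighlight>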
===Gaussian-error linear unit (GELU)===
GELU (2016) is a smooth approximation to the rectifier:
: f(x) = x \Phi(x) = x\cdot\frac{1}{2}\left[1+\operatorname{erf}\left(x/\sqrt{2}\right)\right],
: f'(x) = x \Phi'(x) + \Phi(x)
where \Phi(x) = P(X \leqslant x) is the
cumulative distribution function of the standard
normal distribution and \operatorname{erf}(z) is the
error function. This activation function is illustrated in the figure at the start of this article. It is non-monotonic: it has a "bump" with negative derivative to the left of x = 0. There are two common approximations to x \Phi(x). The first uses the hyperbolic tangent:
: f(x)\approx \frac{1}{2}x\left(1+\tanh\left[\sqrt{2/\pi}\left(x+0.044715x^3\right)\right]\right)
The second, less-precise approximation uses the
sigmoid (logistic) function as f(x)\approx x\cdot\operatorname{sigmoid}(1.702x), whose formula resembles SiLU (see below).
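A minimal sketch of the exact form and of the two approximations, using NumPy and SciPy's error function (illustrative only):
<syntaxhighlight lang="python">
import numpy as np
from scipy.special import erf

def gelu_exact(x):
    # x * Phi(x), with the standard normal CDF written via erf.
    return x * 0.5 * (1.0 + erf(x / np.sqrt(2.0)))

def gelu_tanh(x):
    # Tanh-based approximation.
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def gelu_sigmoid(x):
    # Coarser approximation: x * sigmoid(1.702 x).
    return x / (1.0 + np.exp(-1.702 * x))

x = np.linspace(-4.0, 4.0, 81)
print(np.max(np.abs(gelu_exact(x) - gelu_tanh(x))))     # small error
print(np.max(np.abs(gelu_exact(x) - gelu_sigmoid(x))))  # noticeably larger error
</syntaxhighlight>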
===SiLU===
The SiLU (sigmoid linear unit) or swish function is another smooth approximation to the rectifier, defined as
: f(x) = x \cdot \operatorname{sigmoid}(x),
where \operatorname{sigmoid}(x) is the logistic sigmoid.

===Mish===
The mish function can also be used as a smooth approximation to the rectifier. It is defined as
: f(x) = x \tanh\big(\operatorname{softplus}(x)\big),
where \tanh(x) is the
hyperbolic tangent, and \operatorname{softplus}(x) is the
softplus function. Mish was obtained by experimenting with functions similar to Swish (SiLU, see above). It is non-monotonic (has a "bump") like Swish. The main new feature is that it exhibits a "self-regularizing" behavior attributed to a term in its first derivative.
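A minimal NumPy sketch of both functions (illustrative only):
<syntaxhighlight lang="python">
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def silu(x):
    # SiLU / swish: x * sigmoid(x).
    return x * sigmoid(x)

def mish(x):
    # Mish: x * tanh(softplus(x)), with softplus written in a stable form.
    softplus = np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))
    return x * np.tanh(softplus)

x = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])
print(silu(x))  # dips below zero for moderately negative x (the "bump")
print(mish(x))
</syntaxhighlight>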
===Squareplus===
Squareplus (2021) is the function
: f(x) = \frac{x + \sqrt{x^2 + b}}{2}
where b \geq 0 is a hyperparameter that determines the "size" of the curved region near x = 0. (For example, letting b = 0 yields ReLU, and letting b = 4 yields the metallic mean function.) Squareplus shares many properties with softplus: it is
monotonic, strictly
positive, approaches 0 as x \to -\infty, approaches the identity as x \to +\infty, and is C^\infty
smooth. However, squareplus can be computed using only
algebraic functions, making it well-suited for settings where computational resources or instruction sets are limited. Additionally, squareplus requires no special consideration to ensure numerical stability when x is large.
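A minimal NumPy sketch (illustrative only); note that only algebraic operations are needed:
<syntaxhighlight lang="python">
import numpy as np

def squareplus(x, b=4.0):
    # (x + sqrt(x^2 + b)) / 2: no exp or log, and stable for large |x|.
    return (x + np.sqrt(x * x + b)) / 2.0

x = np.array([-1.0e6, -2.0, 0.0, 2.0, 1.0e6])
print(squareplus(x))          # ~0 for large negative x, ~x for large positive x
print(squareplus(x, b=0.0))   # b = 0 reduces squareplus to ReLU
</syntaxhighlight>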
===DELU===
The ExtendeD Exponential Linear Unit (DELU, 2023) is an activation function that is smoother in the neighborhood of zero and sharper for larger values, which is intended to allow a better allocation of neurons during learning. Thanks to this design, it has been shown that DELU may obtain higher classification accuracy than ReLU and ELU.
: f(x) = \begin{cases} x & x > x_c, \\ (e^{ax} - 1)/b & x \le x_c \end{cases} \qquad f'(x) = \begin{cases} 1 & x > x_c, \\ (a / b) e^{ax} & x \le x_c \end{cases}
In these formulas, a, b and x_c are hyperparameters; the original work uses the default values a = 1, b = 2 and x_c = 1.25643, which make the two pieces join continuously at x_c.
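A minimal NumPy sketch using the default constants (illustrative only):
<syntaxhighlight lang="python">
import numpy as np

def delu(x, a=1.0, b=2.0, xc=1.25643):
    # x for x > x_c, (e^(a x) - 1) / b otherwise; the exponent is clamped
    # at x_c so the discarded branch cannot overflow for large x.
    return np.where(x > xc, x, (np.exp(a * np.minimum(x, xc)) - 1.0) / b)

# The two branches agree (to rounding) at x_c with the default constants:
print((np.exp(1.25643) - 1.0) / 2.0)            # ~1.25645
print(delu(np.array([-2.0, 0.0, 1.0, 3.0])))    # exponential below x_c, identity above
</syntaxhighlight>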
==See also==