
Error function

In mathematics, the error function, often denoted by erf, is a function \operatorname{erf} : \mathbb{C} \to \mathbb{C} defined as: \operatorname{erf}(z) = \frac{2}{\sqrt\pi}\int_0^z e^{-t^2}\,dt.

Name
The name "error function" and its abbreviation erf were proposed by J. W. L. Glaisher in 1871 on account of its connection with "the theory of probability, and notably the theory of errors". The error function complement was also discussed by Glaisher in a separate publication in the same year. For the "law of facility" of errors whose density is given by f(x) = \left(\frac{c}{\pi}\right)^{1/2} e^{-c x^2} (the normal distribution), Glaisher calculates the probability of an error lying between p and q as \left(\frac{c}{\pi}\right)^\frac{1}{2} \int_p^q e^{-cx^2}\,dx = \frac{1}{2} \big(\operatorname{erf}(q\sqrt{c}) - \operatorname{erf}(p\sqrt{c})\big).
Applications
When the results of a series of measurements are described by a normal distribution with standard deviation σ and expected value 0, then \operatorname{erf}\left(\tfrac{a}{\sigma\sqrt{2}}\right) is the probability that the error of a single measurement lies between −a and +a, for positive a. This is useful, for example, in determining the bit error rate of a digital communication system.

The error and complementary error functions occur, for example, in solutions of the heat equation when boundary conditions are given by the Heaviside step function.

The error function and its approximations can be used to estimate results that hold with high probability or with low probability. Given a random variable X \sim \operatorname{Norm}[\mu,\sigma] (a normal distribution with mean μ and standard deviation σ) and a constant L < \mu, it can be shown via integration by substitution: \begin{align} \Pr[X\leq L] &= \frac{1}{2} + \frac{1}{2} \operatorname{erf}\left(\frac{L-\mu}{\sqrt{2}\sigma}\right) \\ &\approx A \exp \left(-B \left(\frac{L-\mu}{\sigma}\right)^2\right) \end{align} where A and B are certain numeric constants. If L is sufficiently far from the mean, specifically \mu - L \geq \sigma\sqrt{\ln k}, then: \Pr[X\leq L] \leq A \exp (-B \ln(k)) = \frac{A}{k^B} so the probability goes to 0 as k \to \infty.

The probability for X being in the interval [L_a, L_b] can be derived as \begin{align} \Pr[L_a\leq X \leq L_b] &= \int_{L_a}^{L_b} \frac{1}{\sqrt{2\pi}\sigma} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) \, dx \\ &= \frac{1}{2}\left(\operatorname{erf}\left(\frac{L_b-\mu}{\sqrt{2}\sigma}\right) - \operatorname{erf}\left(\frac{L_a-\mu}{\sqrt{2}\sigma}\right)\right).\end{align}
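The probability formulas above translate directly into code. The following minimal sketch uses the standard library's math.erf; the helper names prob_below and prob_between are illustrative, not standard.

```python
from math import erf, sqrt

def prob_below(L, mu, sigma):
    """Pr[X <= L] for X ~ Normal(mu, sigma), via the error function."""
    return 0.5 + 0.5 * erf((L - mu) / (sqrt(2) * sigma))

def prob_between(La, Lb, mu, sigma):
    """Pr[La <= X <= Lb] as a difference of two erf evaluations."""
    return 0.5 * (erf((Lb - mu) / (sqrt(2) * sigma))
                  - erf((La - mu) / (sqrt(2) * sigma)))

# Sanity checks against well-known normal-distribution facts:
print(prob_below(0.0, 0.0, 1.0))            # median of a centered normal: 0.5
print(prob_between(-1.0, 1.0, 0.0, 1.0))    # one-sigma interval, about 0.68
```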
Properties
The property \operatorname{erf}(-z) = -\operatorname{erf}(z) means that the error function is an odd function. This directly results from the fact that the integrand e^{-t^2} is an even function (the antiderivative of an even function which is zero at the origin is an odd function, and vice versa).

Since the error function is an entire function which maps real numbers to real numbers, for any complex number z: \operatorname{erf}(\overline{z}) = \overline{\operatorname{erf}(z)} where \overline{z} denotes the complex conjugate of z. The integrand f = \exp(-z^2) and f = \operatorname{erf}(z) are shown in the complex z-plane in the figures at right with domain coloring.

The error function at +∞ is exactly 1 (see Gaussian integral). At the real axis, \operatorname{erf}(z) approaches unity at z \to +\infty and −1 at z \to -\infty. At the imaginary axis, it tends to \pm i\infty.

Taylor series
The error function is an entire function; it has no singularities (except that at infinity) and its Taylor expansion always converges. For large real arguments, however, cancellation of leading terms makes the Taylor expansion impractical. The defining integral cannot be evaluated in closed form in terms of elementary functions (see Liouville's theorem), but by expanding the integrand into its Maclaurin series, integrating term by term, and using the fact that \operatorname{erf}(0)=0, one obtains the error function's Maclaurin series as: \begin{align} \operatorname{erf}(z) &= \frac{2}{\sqrt\pi}\sum_{n=0}^\infty\frac{(-1)^n z^{2n+1}}{n! (2n+1)} \\[6pt] &= \frac{2}{\sqrt\pi} \left(z-\frac{z^3}{3}+\frac{z^5}{10}-\frac{z^7}{42}+\frac{z^9}{216}-\cdots\right) \end{align} which holds for every complex number z. The denominator terms are sequence A007680 in the OEIS. It is a special case of Kummer's confluent hypergeometric function: \operatorname{erf}(z) = \frac{2z}{\sqrt\pi}\,{}_1F_1\left(\tfrac{1}{2};\tfrac{3}{2};-z^2\right).
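The Maclaurin series can be evaluated directly in a few lines; the function name erf_maclaurin is illustrative. Truncating at 40 terms already reproduces math.erf to near machine precision for moderate arguments.

```python
from math import erf, pi, sqrt

def erf_maclaurin(z, terms=40):
    """erf(z) from its Maclaurin series (2/sqrt(pi)) * sum (-1)^n z^(2n+1) / (n! (2n+1))."""
    total = 0.0
    factorial = 1.0  # n!, built up incrementally
    for n in range(terms):
        if n > 0:
            factorial *= n
        total += (-1) ** n * z ** (2 * n + 1) / (factorial * (2 * n + 1))
    return 2.0 / sqrt(pi) * total

for z in (0.1, 0.5, 1.0, 2.0):
    print(z, erf_maclaurin(z), erf(z))
```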
For iterative calculation of the above series, the following alternative formulation may be useful: \begin{align} \operatorname{erf}(z) &= \frac{2}{\sqrt\pi}\sum_{n=0}^\infty\left(z \prod_{k=1}^n {\frac{-(2k-1) z^2}{k (2k+1)}}\right) \\[6pt] &= \frac{2}{\sqrt\pi} \sum_{n=0}^\infty \frac{z}{2n+1} \prod_{k=1}^n \frac{-z^2}{k} \end{align} because \frac{-(2k-1) z^2}{k (2k+1)} expresses the multiplier to turn the kth term into the (k+1)th term (considering z as the first term).

The imaginary error function has a very similar Maclaurin series, which is: \begin{align} \operatorname{erfi}(z) &= \frac{2}{\sqrt\pi}\sum_{n=0}^\infty\frac{z^{2n+1}}{n! (2n+1)} \\[6pt] &=\frac{2}{\sqrt\pi} \left(z+\frac{z^3}{3}+\frac{z^5}{10}+\frac{z^7}{42}+\frac{z^9}{216}+\cdots\right) \end{align} which holds for every complex number z.

Derivative and integral
The derivative of the error function follows immediately from its definition: \frac{d}{dz}\operatorname{erf}(z) =\frac{2}{\sqrt\pi} e^{-z^2}. From this, the derivative of the imaginary error function is also immediate: \frac{d}{dz}\operatorname{erfi}(z) =\frac{2}{\sqrt\pi} e^{z^2}. Higher-order derivatives are given by \operatorname{erf}^{(k)}(z) = \frac{2 (-1)^{k-1}}{\sqrt\pi} \mathit{H}_{k-1}(z) e^{-z^2} = \frac{2}{\sqrt\pi} \frac{d^{k-1}}{dz^{k-1}} \left(e^{-z^2}\right),\qquad k=1, 2, \dots where \mathit{H}_k are the physicists' Hermite polynomials.

An antiderivative of the error function, obtainable by integration by parts, is \int \operatorname{erf}(z) dz = z\operatorname{erf}(z) + \frac{e^{-z^2}}{\sqrt\pi}+C. An antiderivative of the imaginary error function, also obtainable by integration by parts, is \int \operatorname{erfi}(z) dz = z\operatorname{erfi}(z) - \frac{e^{z^2}}{\sqrt\pi}+C.
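The iterative formulation avoids recomputing factorials and powers: each term is obtained from the previous one by a single multiplication. A minimal sketch (the name erf_iterative is ours):

```python
from math import erf, pi, sqrt

def erf_iterative(z, terms=40):
    """erf(z) via the term-to-term recurrence: each term is the previous
    one multiplied by -(2k-1) z^2 / (k (2k+1)), starting from z."""
    term = z       # term for n = 0
    total = term
    for k in range(1, terms):
        term *= -(2 * k - 1) * z * z / (k * (2 * k + 1))
        total += term
    return 2.0 / sqrt(pi) * total

print(erf_iterative(1.0), erf(1.0))
```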
Bürmann series
An expansion which converges more rapidly for all real values of x than a Taylor expansion is obtained by using Hans Heinrich Bürmann's theorem: \begin{align} \operatorname{erf}(x) &= \frac{2}{\sqrt\pi} \sgn(x) \cdot \sqrt{1-e^{-x^2}} \left( 1-\frac{1}{12} \left (1-e^{-x^2} \right ) -\frac{7}{480} \left (1-e^{-x^2} \right )^2 -\frac{5}{896} \left (1-e^{-x^2} \right )^3-\frac{787}{276 480} \left (1-e^{-x^2} \right )^4 - \cdots \right) \\[10pt] &= \frac{2}{\sqrt\pi} \sgn(x) \cdot \sqrt{1-e^{-x^2}} \left(\frac{\sqrt\pi}{2} + \sum_{k=1}^\infty c_k e^{-kx^2} \right) \end{align} where \sgn is the sign function. By keeping only the first two coefficients and choosing c_1 = \tfrac{31}{200} and c_2 = -\tfrac{341}{8000}, the resulting approximation shows its largest relative error at x = \pm 1.3796, where it is less than 0.0034361: \operatorname{erf}(x) \approx \frac{2}{\sqrt\pi}\sgn(x) \cdot \sqrt{1-e^{-x^2}} \left(\frac{\sqrt{\pi}}{2} + \frac{31}{200}e^{-x^2}-\frac{341}{8000} e^{-2x^2}\right).

Inverse functions
Given a complex number z, there is not a unique complex number w satisfying \operatorname{erf}(w) = z, so a true inverse function would be multivalued. However, for -1 < x < 1, there is a unique real number denoted \operatorname{erf}^{-1}(x) satisfying \operatorname{erf}\left(\operatorname{erf}^{-1}(x)\right) = x. The inverse error function is usually defined with domain (-1, 1), and it is restricted to this domain in many computer algebra systems. However, it can be extended to the disk |z| < 1 of the complex plane, using the Maclaurin series \operatorname{erf}^{-1}(z)=\sum_{k=0}^\infty\frac{c_k}{2k+1}\left (\frac{\sqrt\pi}{2}z\right )^{2k+1}, where c_0 = 1 and \begin{align} c_k & =\sum_{m=0}^{k-1}\frac{c_m c_{k-1-m}}{(m+1)(2m+1)} \\[1ex] &= \left\{1,1,\frac{7}{6},\frac{127}{90},\frac{4369}{2520},\frac{34807}{16200},\ldots\right\}.
\end{align} So we have the series expansion (common factors have been canceled from numerators and denominators): \operatorname{erf}^{-1}(z) = \frac{\sqrt{\pi}}{2} \left (z + \frac{\pi}{12}z^3 + \frac{7\pi^2}{480}z^5 + \frac{127\pi^3}{40320}z^7 + \frac{4369\pi^4}{5806080} z^9 + \frac{34807\pi^5}{182476800}z^{11} + \cdots\right ). (After cancellation the numerator and denominator values are entries A092676 and A092677 in the OEIS, respectively; without cancellation the numerator terms are values in entry A002067.) The error function's value at \pm\infty is equal to \pm 1. For |z| < 1, we have \operatorname{erf}\left(\operatorname{erf}^{-1}(z)\right) = z.

The inverse complementary error function is defined as \operatorname{erfc}^{-1}(1-z) = \operatorname{erf}^{-1}(z). For real x, there is a unique real number \operatorname{erfi}^{-1}(x) satisfying \operatorname{erfi}\left(\operatorname{erfi}^{-1}(x)\right) = x; the inverse imaginary error function is defined accordingly. For any real x, Newton's method can be used to compute \operatorname{erfi}^{-1}(x), and for -1 \leq x \leq 1, the following Maclaurin series converges: \operatorname{erfi}^{-1}(z) =\sum_{k=0}^\infty\frac{(-1)^k c_k}{2k+1} \left( \frac{\sqrt\pi}{2} z \right)^{2k+1}, where c_k is defined as above.

Asymptotic expansion
A useful asymptotic expansion of the complementary error function (and therefore also of the error function) for large real x is \begin{align} \operatorname{erfc}(x) &= \frac{e^{-x^2}}{x\sqrt{\pi}}\left(1 + \sum_{n=1}^\infty (-1)^n \frac{1\cdot3\cdot5\cdots(2n - 1)}{\left(2x^2\right)^n}\right) \\[6pt] &= \frac{e^{-x^2}}{x\sqrt{\pi}}\sum_{n=0}^\infty (-1)^n \frac{(2n - 1)!!}{\left(2x^2\right)^n}, \end{align} where (2n-1)!! is the double factorial of (2n-1), which is the product of all odd numbers up to (2n-1). This series diverges for every finite x, and its meaning as asymptotic expansion is that for any integer N \geq 1 one has \operatorname{erfc}(x) = \frac{e^{-x^2}}{x\sqrt{\pi}}\sum_{n=0}^{N-1} (-1)^n \frac{(2n - 1)!!}{\left(2x^2\right)^n} + R_N(x) where the remainder is R_N(x) := \frac{(-1)^N \, (2 N - 1)!!}{\sqrt{\pi} \cdot 2^{N - 1}} \int_x^\infty t^{-2N}e^{-t^2}\, dt, which follows easily by induction, writing e^{-t^2} = -\frac{1}{2 t} \, \frac{d}{dt} e^{-t^2} and integrating by parts.
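The Newton iteration mentioned for the inverse imaginary error function applies just as well to the real inverse error function, since the derivative of erf is known in closed form. A minimal sketch (the name erfinv is ours, not the standard library's):

```python
from math import erf, exp, pi, sqrt

def erfinv(y, tol=1e-14):
    """erf^{-1}(y) for -1 < y < 1 by Newton's method,
    using d/dz erf(z) = (2/sqrt(pi)) e^{-z^2}."""
    z = 0.0
    for _ in range(100):
        # Newton step: z -= (erf(z) - y) / erf'(z)
        z_new = z - (erf(z) - y) * sqrt(pi) / 2.0 * exp(z * z)
        if abs(z_new - z) < tol:
            return z_new
        z = z_new
    return z

print(erfinv(0.5), erf(erfinv(0.5)))
```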
The asymptotic behavior of the remainder term, in Landau notation, is R_N(x) = O\left(x^{- (1 + 2N)} e^{-x^2}\right) as x \to \infty. This can be found by R_N(x) \propto \int_x^\infty t^{-2N}e^{-t^2}\, dt = e^{-x^2} \int_0^\infty (t+x)^{-2N}e^{-t^2-2tx}\,dt\leq e^{-x^2} \int_0^\infty x^{-2N} e^{-2tx}\,dt \propto x^{-(1+2N)}e^{-x^2}. For large enough values of x, only the first few terms of this asymptotic expansion are needed to obtain a good approximation of \operatorname{erfc}(x) (while for not too large values of x, the above Taylor expansion at 0 provides a very fast convergence).

Continued fraction expansion
A continued fraction expansion of the complementary error function was found by Laplace: \operatorname{erfc}(z) = \frac{z}{\sqrt\pi}e^{-z^2} \cfrac{1}{z^2+ \cfrac{a_1}{1+\cfrac{a_2}{z^2+ \cfrac{a_3}{1+\dotsb}}}},\qquad a_m = \frac{m}{2}.

Factorial series
The inverse factorial series \begin{align} \operatorname{erfc}(z) &= \frac{e^{-z^2}}{\sqrt{\pi}\,z} \sum_{n=0}^\infty \frac{\left(-1\right)^n Q_n}{{\left(z^2+1\right)}^{\bar{n}}} \\[1ex] &= \frac{e^{-z^2}}{\sqrt{\pi}\,z} \left[1 -\frac{1}{2}\frac{1}{(z^2+1)} + \frac{1}{4}\frac{1}{\left(z^2+1\right) \left(z^2+2\right)} - \cdots \right] \end{align} converges for \operatorname{Re}(z^2) > 0. Here \begin{align} Q_n &\overset{\text{def}}{{}={}} \frac{1}{\Gamma{\left(\frac{1}{2}\right)}} \int_0^\infty \tau(\tau-1)\cdots(\tau-n+1)\tau^{-\frac{1}{2}} e^{-\tau} \,d\tau \\[1ex] &= \sum_{k=0}^n \frac{s(n,k)}{2^{\bar{k}}}, \end{align} z^{\bar{n}} denotes the rising factorial, and s(n,k) denotes a signed Stirling number of the first kind. The Taylor series can be written in terms of the double factorial: \operatorname{erf}(z) = \frac{2}{\sqrt\pi} \sum_{n=0}^\infty \frac{(-2)^n(2n-1)!!}{(2n+1)!}z^{2n+1}
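The truncated asymptotic series is easy to evaluate because each term follows from the previous one by multiplying with -(2n+1)/(2x^2). A minimal sketch (the name erfc_asymptotic is ours); the error of the truncation is bounded by the first omitted term:

```python
from math import erfc, exp, pi, sqrt

def erfc_asymptotic(x, N=5):
    """Truncated asymptotic series
    e^{-x^2}/(x sqrt(pi)) * sum_{n=0}^{N-1} (-1)^n (2n-1)!! / (2x^2)^n."""
    s = 0.0
    t = 1.0  # term (-1)^n (2n-1)!! / (2 x^2)^n, starting at n = 0
    for n in range(N):
        s += t
        t *= -(2 * n + 1) / (2.0 * x * x)
    return exp(-x * x) / (x * sqrt(pi)) * s

# already quite accurate for moderately large x, and excellent for x = 5
print(erfc_asymptotic(3.0), erfc(3.0))
print(erfc_asymptotic(5.0), erfc(5.0))
```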
Bounds and numerical approximations
Approximation with elementary functions

Abramowitz and Stegun give several approximations of varying accuracy (equations 7.1.25–28). This allows one to choose the fastest approximation suitable for a given application. In order of increasing accuracy, they are:

\operatorname{erf}(x) \approx 1 - \frac{1}{\left(1 + a_1x + a_2x^2 + a_3x^3 + a_4x^4\right)^4}, \qquad x \geq 0

(maximum error: 5\times10^{-4}) where a_1 = 0.278393, a_2 = 0.230389, a_3 = 0.000972, a_4 = 0.078108

\operatorname{erf}(x) \approx 1 - \left(a_1t + a_2t^2 + a_3t^3\right)e^{-x^2},\quad t=\frac{1}{1 + px}, \qquad x \geq 0

(maximum error: 2.5\times10^{-5}) where p = 0.47047, a_1 = 0.3480242, a_2 = -0.0958798, a_3 = 0.7478556

\operatorname{erf}(x) \approx 1 - \frac{1}{\left(1 + a_1x + a_2x^2 + \cdots + a_6x^6\right)^{16}}, \qquad x \geq 0

(maximum error: 3\times10^{-7}) where a_1 = 0.0705230784, a_2 = 0.0422820123, a_3 = 0.0092705272, a_4 = 0.0001520143, a_5 = 0.0002765672, a_6 = 0.0000430638

\operatorname{erf}(x) \approx 1 - \left(a_1t + a_2t^2 + \cdots + a_5t^5\right)e^{-x^2},\quad t = \frac{1}{1 + px}

(maximum error: 1.5\times10^{-7}) where p = 0.3275911, a_1 = 0.254829592, a_2 = -0.284496736, a_3 = 1.421413741, a_4 = -1.453152027, a_5 = 1.061405429

One can improve the accuracy of the A&S approximation by extending it with three extra parameters, \operatorname{erf}(x) \approx 1 - \left(a_1t + a_2t^2 + \cdots + a_5t^5+a_6t^6+a_7t^7\right)e^{-x^2},\quad t = \frac{1}{1 + p_1x+p_2x^2} where p1 = 0.406742016006509, p2 = 0.0072279182302319, a1 = 0.316879890481381, a2 = -0.138329314150635, a3 = 1.08680830347054, a4 = -1.11694155120396, a5 = 1.20644903073232, a6 = -0.393127715207728, a7 = 0.0382613542530727. The maximum error of this extended approximation is reported by the fitting script below. The parameters are obtained by fitting the extended approximation to accurate values of the error function using the following Python code.

import numpy as np
from math import erf
from scipy.optimize import least_squares

# Extended A&S approximation:
#   erf(x) ~ 1 - t * exp(-x^2) * (a1 + a2*t + a3*t^2 + ... + a7*t^6)
#   where now t = 1 / (1 + p1*x + p2*x^2).
# We fit the parameters p1, p2, a1..a7 over x in [0, 10].

def approx_erf(params, x):
    p1 = params[0]
    p2 = params[1]
    a = params[2:]
    t = 1.0 / (1.0 + p1 * x + p2 * x * x)
    poly = np.zeros_like(x)
    tt = np.ones_like(x)  # t^0
    # polynomial: a1*t^0 + a2*t^1 + ... + a7*t^6
    for ak in a:
        poly += ak * tt
        tt *= t
    return 1.0 - t * np.exp(-x * x) * poly

def residuals(params, xs, ys):
    return approx_erf(params, xs) - ys

# Prepare data for fitting
N = 300
xmin = 0
xmax = 10
xs = np.linspace(xmin, xmax, N)
ys = np.array([erf(x) for x in xs], dtype=float)

# Initial guess for parameters:
# start from the original A&S values and extend them conservatively
p1_0 = 0.3275911  # original A&S p
p2_0 = 0.0        # new denominator parameter
# original A&S 5 coefficients, plus two new => 7 in total
a0 = [
    0.254829592,
    -0.284496736,
    1.421413741,
    -1.453152027,
    1.061405429,
    0.0,  # new term
    0.0,  # another new term
]
params0 = np.array([p1_0, p2_0] + a0, dtype=float)

# Fit using nonlinear least squares
result = least_squares(
    residuals, params0, args=(xs, ys),
    xtol=1e-14, ftol=1e-14, gtol=1e-14, max_nfev=5000
)
params = result.x
p1_fit = params[0]
p2_fit = params[1]
a_fit = params[2:]

# Print fitted parameters
print("\nFitted parameters:")
print(f"p1 = {p1_fit:.15g},")
print(f"p2 = {p2_fit:.15g},")
for i, ai in enumerate(a_fit, 1):
    print(f"a{i} = {ai:.15g},")

# Evaluate approximation error
approx_vals = approx_erf(params, xs)
abs_err = np.abs(approx_vals - ys)
print(f"\nMaximum absolute error on [{xmin},{xmax}]:", np.max(abs_err))
print("RMS error:", np.sqrt(np.mean(abs_err**2)))
print("Done.")

All of these approximations are valid for x \geq 0. To use these approximations for negative x, use the fact that \operatorname{erf}(x) is an odd function, so \operatorname{erf}(x) = -\operatorname{erf}(-x).

Exponential bounds and a pure exponential approximation for the complementary error function are given by \begin{align} \operatorname{erfc}(x) &\leq \frac{1}{2}e^{-2 x^2} + \frac{1}{2}e^{- x^2} \leq e^{-x^2}, &&x > 0 \\[1.5ex] \operatorname{erfc}(x) &\approx \frac{1}{6}e^{-x^2} + \frac{1}{2}e^{-\frac{4}{3} x^2}, &&x > 0 .
\end{align} The above have been generalized to sums of N exponentials with increasing accuracy in terms of N, so that \operatorname{erfc}(x) can be accurately approximated or bounded by 2\tilde{Q}(\sqrt{2}x), where \tilde{Q}(x) = \sum_{n=1}^N a_n e^{-b_n x^2}. In particular, there is a systematic methodology to solve the numerical coefficients \{(a_n,b_n)\}_{n=1}^{N} that yield a minimax approximation or bound for the closely related Q-function: Q(x) \approx \tilde{Q}(x), Q(x) \leq \tilde{Q}(x), or Q(x) \geq \tilde{Q}(x) for x \geq 0. The coefficients \{(a_n,b_n)\}_{n=1}^{N} for many variations of the exponential approximations and bounds up to N = 25 have been released to open access as a comprehensive dataset.

A tight approximation of the complementary error function for x \in [0,\infty) is given by Karagiannidis & Lioumpas (2007), who showed for the appropriate choice of parameters \{A,B\} that \operatorname{erfc}(x) \approx \frac{\left(1 - e^{-Ax}\right)e^{-x^2}}{B\sqrt{\pi} x}. They determined \{A,B\} = \{1.98, 1.135\}, which gave a good approximation for all x \geq 0. Alternative coefficients are also available for tailoring accuracy for a specific application or transforming the expression into a tight bound.

A single-term lower bound is \operatorname{erfc}(x) \geq \sqrt{\frac{2 e}{\pi}} \frac{\sqrt{\beta - 1}}{\beta} e^{- \beta x^2}, \qquad x \ge 0,\quad \beta > 1, where the parameter \beta can be picked to minimize error on the desired interval of approximation.

Another approximation is given by Sergei Winitzki using his "global Padé approximations": \operatorname{erf}(x) \approx \sgn x \cdot \sqrt{1 - \exp\left(-x^2\frac{\frac{4}{\pi} + ax^2}{1 + ax^2}\right)} where a = \frac{8(\pi - 3)}{3\pi(4 - \pi)} \approx 0.140012. This is designed to be very accurate in the neighborhoods of 0 and infinity, and the relative error is less than 0.00035 for all real x. Using the alternate value a \approx 0.147 reduces the maximum relative error to about 0.00013.
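Winitzki's formula is short enough to verify numerically against math.erf. The sketch below (function name erf_winitzki is ours) scans a grid of positive arguments and confirms that the relative error stays in the stated range:

```python
from math import copysign, erf, exp, pi, sqrt

A = 8.0 * (pi - 3.0) / (3.0 * pi * (4.0 - pi))  # ~ 0.140012

def erf_winitzki(x, a=A):
    """Winitzki's global Pade-style approximation of erf(x)."""
    s = copysign(1.0, x)
    x2 = x * x
    return s * sqrt(1.0 - exp(-x2 * (4.0 / pi + a * x2) / (1.0 + a * x2)))

# scan relative error over x in (0, 10)
worst = 0.0
for i in range(1, 1000):
    x = i / 100.0
    rel = abs(erf_winitzki(x) - erf(x)) / erf(x)
    worst = max(worst, rel)
print("max relative error on grid:", worst)
```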
The extended "global Padé" approximation, \operatorname{erf}(x) \approx \sgn x \cdot \sqrt{1 - \exp\left(-x^2 \frac{ 4+0.880877880079853x^2+0.144026670907584x^4+0.0077581300270021x^6 }{ \pi+0.786235558186528x^2+0.128368576906837x^4+0.00773380006014367x^6} \right)}\,, provides a still smaller maximum error, as demonstrated by the following Python script.

import math
import numpy
from scipy.optimize import least_squares

# approximation to erf(x)
def approx_erf(p, x):
    frac = (4 + p[0] * x**2 + p[1] * x**4 + p[2] * x**6) / \
           (math.pi + p[3] * x**2 + p[4] * x**4 + p[5] * x**6)
    return numpy.sign(x) * numpy.sqrt(1 - numpy.exp(-x * x * frac))

def residuals(params, xs, ys):
    return approx_erf(params, xs) - ys

# data for fitting
N = 200
xmin = 0
xmax = 9
xs = numpy.linspace(xmin, xmax, N)
ys = numpy.array([math.erf(x) for x in xs], dtype=float)
params0 = numpy.array([0.9, 0.1, 0.008, 0.8, 0.1, 0.008], dtype=float)

# fitting
result = least_squares(
    residuals, params0, args=(xs, ys),
    xtol=1e-14, ftol=1e-14, gtol=1e-14, max_nfev=5000
)
params = result.x

# print out fitted parameters
print("\nFitted parameters:")
for i, pi in enumerate(params, 0):
    print(f"p{i} = {pi:.15g},")

# evaluate approximation error
approx_vals = approx_erf(params, xs)
abs_err = numpy.abs(approx_vals - ys)
print(f"\nMaximum absolute error on [{xmin},{xmax}]:", numpy.max(abs_err))
print("RMS error:", numpy.sqrt(numpy.mean(abs_err**2)))
print("Done.")

Winitzki's approximation can be inverted to obtain an approximation for the inverse error function: \operatorname{erf}^{-1}(x) \approx \sgn x \cdot \sqrt{\sqrt{\left(\frac{2}{\pi a} + \frac{\ln\left(1 - x^2\right)}{2}\right)^2 - \frac{\ln\left(1 - x^2\right)}{a}} -\left(\frac{2}{\pi a} + \frac{\ln\left(1 - x^2\right)}{2}\right)}.
An approximation with a maximal error of 1.2\times10^{-7} for any real argument is: \operatorname{erf}(x) = \begin{cases} 1-\tau, & x\ge 0\\ \tau-1, & x < 0 \end{cases} with \tau = t\cdot\exp\left(-x^2 - 1.26551223 + 1.00002368 t + 0.37409196 t^2 + 0.09678418 t^3 - 0.18628806 t^4 + 0.27886807 t^5 - 1.13520398 t^6 + 1.48851587 t^7 - 0.82215223 t^8 + 0.17087277 t^9\right) and t = \frac{1}{1 + \frac{1}{2}|x|}.

An approximation of \operatorname{erfc} with a maximum relative error less than 2^{-53} \left(\approx 1.1 \times 10^{-16}\right) in absolute value is: for x \geq 0, \begin{aligned} \operatorname{erfc} \left(x\right) & = \left(\frac{0.56418958354775629}{x+2.06955023132914151}\right) \left(\frac{x^2+2.71078540045147805 x+5.80755613130301624}{x^2+3.47954057099518960 x+12.06166887286239555}\right) \\ & \left(\frac{x^2+3.47469513777439592 x+12.07402036406381411}{x^2+3.72068443960225092 x+8.44319781003968454}\right) \left(\frac{x^2+4.00561509202259545 x+9.30596659485887898}{x^2+3.90225704029924078 x+6.36161630953880464}\right) \\ & \left(\frac{x^2+5.16722705817812584 x+9.12661617673673262}{x^2+4.03296893109262491 x+5.13578530585681539}\right) \left(\frac{x^2+5.95908795446633271 x+9.19435612886969243}{x^2+4.11240942957450885 x+4.48640329523408675}\right) e^{-x^2} \\ \end{aligned} and for x < 0, \operatorname{erfc} \left(x\right) = 2 - \operatorname{erfc} \left(-x\right).

A simple approximation for real-valued arguments can be done through hyperbolic functions: \operatorname{erf} \left(x\right) \approx z(x) = \tanh\left(\frac{2}{\sqrt{\pi}}\left(x+\frac{11}{123}x^3\right)\right) which keeps the absolute difference \left|\operatorname{erf} \left(x\right)-z(x)\right| below about 3.6\times10^{-4} for all real x.

Since the error function and the Gaussian Q-function are closely related through the identity \operatorname{erfc}(x) = 2 Q(\sqrt{2} x), or equivalently Q(x) = \frac{1}{2} \operatorname{erfc}\left(\frac{x}{\sqrt{2}}\right), bounds developed for the Q-function can be adapted to approximate the complementary error function.
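The hyperbolic-tangent approximation is a one-liner, which makes it easy to check its accuracy on a grid. A minimal sketch (the name erf_tanh is ours):

```python
from math import erf, pi, sqrt, tanh

def erf_tanh(x):
    """Hyperbolic-tangent approximation tanh((2/sqrt(pi)) (x + (11/123) x^3))."""
    return tanh(2.0 / sqrt(pi) * (x + 11.0 / 123.0 * x ** 3))

# absolute error over x in [-5, 5]
worst = max(abs(erf_tanh(i / 100.0) - erf(i / 100.0)) for i in range(-500, 501))
print("max absolute error on grid:", worst)
```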
A pair of tight lower and upper bounds on the Gaussian Q-function for positive arguments x \in [0, \infty) was introduced by Abreu (2012), based on a simple algebraic expression with only two exponential terms: for x \geq 0, \frac{1}{12} e^{-x^2} + \frac{1}{\sqrt{2\pi}\,(x + 1)} e^{-x^2 / 2} \;\leq\; \frac{1}{2} \operatorname{erfc}\left(\frac{x}{\sqrt{2}}\right) \;\leq\; \frac{1}{50} e^{-x^2} + \frac{1}{2 (x + 1)} e^{-x^2 / 2}, or, in terms of the complementary error function, \frac{1}{6} e^{-2x^2} + \frac{1}{2\sqrt{2\pi} (x + 1)} e^{-x^2} \;\leq\; \operatorname{erfc}(x) \;\leq\; \frac{1}{25} e^{-2x^2} + \frac{1}{x + 1} e^{-x^2}. These bounds stem from a unified form Q_{\mathrm{B}}(x; a, b) = \frac{\exp(-x^2)}{a} + \frac{\exp(-x^2 / 2)}{b (x + 1)}, where the parameters a and b are selected to ensure the bounding properties: for the lower bound, a_{\mathrm{L}} = 12 and b_{\mathrm{L}} = \sqrt{2\pi}, and for the upper bound, a_{\mathrm{U}} = 50 and b_{\mathrm{U}} = 2. These expressions maintain simplicity and tightness, providing a practical trade-off between accuracy and ease of computation. They are particularly valuable in theoretical contexts, such as communication theory over fading channels, where both functions frequently appear. Additionally, the original Q-function bounds can be extended to Q^n(x) for positive integers n via the binomial theorem, suggesting potential adaptability for powers of \operatorname{erfc}(x), though this is less commonly required in error function applications.

Table of values
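The two-exponential form is simple enough to sanity-check numerically. The sketch below (helper names Q and q_bound are ours) evaluates the unified bound with the lower- and upper-bound parameter pairs on a grid of positive arguments:

```python
from math import erfc, exp, pi, sqrt

def Q(x):
    """Gaussian Q-function via the complementary error function."""
    return 0.5 * erfc(x / sqrt(2.0))

def q_bound(x, a, b):
    """Unified two-exponential form Q_B(x; a, b)."""
    return exp(-x * x) / a + exp(-x * x / 2.0) / (b * (x + 1.0))

# lower bound: a = 12, b = sqrt(2*pi); upper bound: a = 50, b = 2
ok = all(q_bound(x, 12.0, sqrt(2.0 * pi)) <= Q(x) <= q_bound(x, 50.0, 2.0)
         for x in (i / 10.0 for i in range(0, 100)))
print("bounds hold on grid:", ok)
```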
Related functions
Complementary error function The complementary error function, denoted erfc, is defined as \begin{align} \operatorname{erfc}(x) &= 1 - \operatorname{erf}(x) \\ &= \frac{2}{\sqrt\pi} \int_x^\infty e^{-t^2}\,dt \\ &= e^{-x^2} \operatorname{erfcx}(x), \end{align} which also defines erfcx, the scaled complementary error function (which can be used instead of erfc to avoid arithmetic underflow). Another form of \operatorname{erfc}(x) for x \geq 0 is known as Craig's formula, after its discoverer: \operatorname{erfc} (x \mid x\ge 0) = \frac{2}{\pi} \int_0^\frac{\pi}{2} \exp\left(-\frac{x^2}{\sin^2 \theta}\right) \,d\theta. This expression is valid only for positive values of x, but it can be used in conjunction with \operatorname{erfc}(x) = 2 - \operatorname{erfc}(-x) to obtain \operatorname{erfc}(x) for negative values. This form is advantageous in that the range of integration is fixed and finite. An extension of this expression for the \operatorname{erfc} of the sum of two non-negative variables is \operatorname{erfc}(x + y \mid x, y \ge 0) = \frac{2}{\pi} \int_0^\frac{\pi}{2} \exp\left(-\frac{x^2}{\sin^2 \theta} - \frac{y^2}{\cos^2 \theta}\right) \,d\theta.
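Because Craig's formula has a fixed, finite integration range, even a crude quadrature rule reproduces erfc well. A minimal sketch using the midpoint rule (the name erfc_craig is ours):

```python
from math import erfc, exp, pi, sin

def erfc_craig(x, n=10000):
    """erfc(x) for x >= 0 via midpoint-rule integration of Craig's formula."""
    h = (pi / 2.0) / n
    total = 0.0
    for k in range(n):
        theta = (k + 0.5) * h
        total += exp(-x * x / sin(theta) ** 2)
    return 2.0 / pi * total * h

print(erfc_craig(1.0), erfc(1.0))
```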
Imaginary error function The imaginary error function, denoted erfi, is defined as \begin{align} \operatorname{erfi}(x) &= -i\operatorname{erf}(ix) \\ &= \frac{2}{\sqrt\pi} \int_0^x e^{t^2}\,dt \\ &= \frac{2}{\sqrt\pi} e^{x^2} D(x), \end{align} where D(x) is the Dawson function (which can be used instead of erfi to avoid arithmetic overflow).

Iterated integrals of the complementary error function The iterated integrals of the complementary error function are defined as \begin{align} i^n\!\operatorname{erfc}(z) &= \int_z^\infty i^{n-1}\!\operatorname{erfc}(\zeta)\,d\zeta \\[6pt] i^0\!\operatorname{erfc}(z) &= \operatorname{erfc}(z) \\ i^1\!\operatorname{erfc}(z) &= \operatorname{ierfc}(z) = \frac{1}{\sqrt\pi} e^{-z^2} - z \operatorname{erfc}(z) \\ i^2\!\operatorname{erfc}(z) &= \tfrac{1}{4} \left( \operatorname{erfc}(z) -2 z \operatorname{ierfc}(z) \right) \\ \end{align} The general recurrence formula is 2 n \cdot i^n\!\operatorname{erfc}(z) = i^{n-2}\!\operatorname{erfc}(z) -2 z \cdot i^{n-1}\!\operatorname{erfc}(z) They have the power series i^n\!\operatorname{erfc}(z) =\sum_{j=0}^\infty \frac{(-z)^j}{2^{n-j}j! \,\Gamma \left( 1 + \frac{n-j}{2}\right)}, from which follow the symmetry properties i^{2m}\!\operatorname{erfc}(-z) =-i^{2m}\!\operatorname{erfc}(z) +\sum_{q=0}^m \frac{z^{2q}}{2^{2(m-q)-1}(2q)! (m-q)!} and i^{2m+1}\!\operatorname{erfc}(-z) =i^{2m+1}\!\operatorname{erfc}(z) +\sum_{q=0}^m \frac{z^{2q+1}}{2^{2(m-q)-1}(2q+1)! (m-q)!}.
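The recurrence makes the iterated integrals cheap to evaluate. A minimal sketch (the name ierfc_n is ours), seeded with i^{-1} erfc(z) = (2/sqrt(pi)) e^{-z^2}, an assumption consistent with the n = 1 case given above:

```python
from math import erfc, exp, pi, sqrt

def ierfc_n(n, z):
    """i^n erfc(z) via the recurrence
    2n * i^n erfc(z) = i^(n-2) erfc(z) - 2z * i^(n-1) erfc(z),
    seeded with i^(-1) erfc(z) = (2/sqrt(pi)) e^{-z^2} and i^0 erfc(z) = erfc(z)."""
    prev2 = 2.0 / sqrt(pi) * exp(-z * z)  # i^{-1} erfc(z)
    prev1 = erfc(z)                        # i^{0} erfc(z)
    if n == -1:
        return prev2
    if n == 0:
        return prev1
    for k in range(1, n + 1):
        cur = (prev2 - 2.0 * z * prev1) / (2.0 * k)
        prev2, prev1 = prev1, cur
    return prev1

# i^1 erfc(z) should match the closed form e^{-z^2}/sqrt(pi) - z erfc(z)
print(ierfc_n(1, 0.5), exp(-0.25) / sqrt(pi) - 0.5 * erfc(0.5))
```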
Implementations
As real function of a real argument
• In POSIX-compliant operating systems, the header math.h shall declare, and the mathematical library libm shall provide, the functions erf and erfc (double precision) as well as their single precision and extended precision counterparts erff, erfl and erfcf, erfcl.
• The GNU Scientific Library provides erf, erfc, log(erf), and scaled error functions.

As complex function of a complex argument
• libcerf, a numeric C library for complex error functions, provides the complex functions cerf, cerfc, cerfcx and the real functions erfi, erfcx with approximately 13–14 digits precision, based on the Faddeeva function as implemented in the MIT Faddeeva Package.
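In Python, the same double-precision implementations are exposed through the standard library's math module, which wraps the C library functions:

```python
from math import erf, erfc

# erf and erfc as exposed by Python's C-library-backed math module;
# they satisfy erfc(x) = 1 - erf(x) and the odd symmetry of erf
print(erf(1.0))
print(erfc(1.0))
print(erf(-1.0))
```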