Plot of the error function
In mathematics, the error function (also called the Gauss error function), often denoted by erf, is a complex function of a complex variable defined as:

\operatorname{erf}(z) = \frac{2}{\sqrt{\pi}} \int_0^z e^{-t^2}\,dt.
In statistics, for non-negative values of x, the error function has the following interpretation: for a random variable Y that is normally distributed with mean 0 and variance 1/2, erf(x) is the probability that Y falls in the range [-x, x].
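A minimal numerical check of this interpretation, using only the Python standard library (the helper names are illustrative, not from the original article): for Y ~ N(0, 1/2), P(-x <= Y <= x) should reproduce erf(x).

import math
from statistics import NormalDist

Y = NormalDist(mu=0.0, sigma=math.sqrt(0.5))  # mean 0, variance 1/2

for x in (0.5, 1.0, 2.0):
    prob = Y.cdf(x) - Y.cdf(-x)   # P(-x <= Y <= x)
    print(f"x={x}:  P(|Y|<=x)={prob:.10f}  erf(x)={math.erf(x):.10f}")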
Two closely related functions are the complementary error function (erfc), defined as

\operatorname{erfc}(z) = 1 - \operatorname{erf}(z) = \frac{2}{\sqrt{\pi}} \int_z^{\infty} e^{-t^2}\,dt,
and the imaginary error function (erfi), defined as

\operatorname{erfi}(z) = -i \operatorname{erf}(iz).
The name "error function" and its abbreviation erf were proposed by J. W. L. Glaisher in 1871 on account of its connection with "the theory of Probability, and notably the theory of Errors." The error function complement was also discussed by Glaisher in a separate publication in the same year.
For the "law of facility" of errors whose density is given by
When the results of a series of measurements are described by a normal distribution with standard deviation σ and expected value 0, then erf(a/(σ√2)) is the probability that the error of a single measurement lies between -a and +a, for positive a. This is useful, for example, in determining the bit error rate of a digital communication system.
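As an illustrative sketch (the function name is hypothetical, not part of the article), the probability that a zero-mean Gaussian error with standard deviation sigma lies in [-a, a] is erf(a/(sigma*sqrt(2))):

import math

def prob_error_within(a: float, sigma: float) -> float:
    # P(|error| <= a) for error ~ N(0, sigma^2)
    return math.erf(a / (sigma * math.sqrt(2.0)))

print(prob_error_within(1.0, 1.0))   # ~0.6827, the familiar one-sigma probability
print(prob_error_within(3.0, 1.0))   # ~0.9973, the three-sigma probability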
The integrand f = exp(−z²) and f = erf(z) are shown in the complex z-plane in figures 2 and 3. The level Im(f) = 0 is shown with a thick green line. Negative integer values of Im(f) are shown with thick red lines, and positive integer values of Im(f) with thick blue lines. Intermediate levels of Im(f) = constant are shown with thin green lines. Intermediate levels of Re(f) = constant are shown with thin red lines for negative values and with thin blue lines for positive values.
The error function at +∞ is exactly 1 (see Gaussian integral). On the real axis, erf(z) approaches unity as z → +∞ and −1 as z → −∞. On the imaginary axis, it tends to ±i∞.
The error function is an entire function; it has no singularities (except that at infinity) and its Taylor expansion

\operatorname{erf}(z) = \frac{2}{\sqrt{\pi}} \sum_{n=0}^{\infty} \frac{(-1)^n z^{2n+1}}{n!\,(2n+1)}

always converges, but it is famously known "[...] for its bad convergence if x > 1."
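A minimal sketch of summing this Maclaurin series term by term in Python (illustrative helper name; the recurrence simply advances z^(2n+1)/n! from one term to the next):

import math

def erf_taylor(x: float, terms: int = 50) -> float:
    total = 0.0
    term = x                      # z^(2n+1)/n! for n = 0
    for n in range(terms):
        total += term / (2 * n + 1)
        term *= -x * x / (n + 1)  # advance to the next term, including the sign
    return 2.0 / math.sqrt(math.pi) * total

for x in (0.5, 1.0, 2.0):
    print(x, erf_taylor(x), math.erf(x))   # compare with the library value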
An expansion that converges more rapidly for all real values of x than a Taylor expansion is obtained by using Hans Heinrich Bürmann's theorem:

\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \operatorname{sgn}(x) \sqrt{1 - e^{-x^2}} \left( \frac{\sqrt{\pi}}{2} + \sum_{k=1}^{\infty} c_k\, e^{-k x^2} \right).
By keeping only the first two coefficients and choosing c₁ = 31/200 and c₂ = −341/8000, the resulting approximation

\operatorname{erf}(x) \approx \frac{2}{\sqrt{\pi}} \operatorname{sgn}(x) \sqrt{1 - e^{-x^2}} \left( \frac{\sqrt{\pi}}{2} + \frac{31}{200} e^{-x^2} - \frac{341}{8000} e^{-2x^2} \right)

shows its largest relative error at x ≈ ±1.3796, where it is less than 3.7 × 10⁻³.
Inverse error function
Given a complex number z, there is not a unique complex number w satisfying erf(w) = z, so a true inverse function would be multivalued. However, for -1 < x < 1, there is a unique real number denoted erf⁻¹(x) satisfying

\operatorname{erf}\left(\operatorname{erf}^{-1}(x)\right) = x.
The inverse error function is usually defined with domain (-1, 1), and it is restricted to this domain in many computer algebra systems. However, it can be extended to the disk |z| < 1 of the complex plane, using the Maclaurin series

\operatorname{erf}^{-1}(z) = \sum_{k=0}^{\infty} \frac{c_k}{2k+1} \left( \frac{\sqrt{\pi}}{2} z \right)^{2k+1},
where c₀ = 1 and

c_k = \sum_{m=0}^{k-1} \frac{c_m\, c_{k-1-m}}{(m+1)(2m+1)} = \left\{ 1, 1, \frac{7}{6}, \frac{127}{90}, \ldots \right\}.
So we have the series expansion (common factors have been canceled from numerators and denominators):

\operatorname{erf}^{-1}(z) = \frac{\sqrt{\pi}}{2} \left( z + \frac{\pi}{12} z^3 + \frac{7\pi^2}{480} z^5 + \frac{127\pi^3}{40320} z^7 + \cdots \right).
(After cancellation the numerator and denominator fractions appear as entries in the OEIS; without cancellation the numerator terms are given in a separate OEIS entry.) The error function's value at ±∞ is equal to ±1.
For |z| < 1, we have erf(erf⁻¹(z)) = z.
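A sketch of evaluating this Maclaurin series in Python, using the c_k recurrence quoted above (illustrative helper name; convergence is slow near z = ±1):

import math

def erfinv_series(z: float, terms: int = 60) -> float:
    c = [1.0]
    for k in range(1, terms):
        c.append(sum(c[m] * c[k - 1 - m] / ((m + 1) * (2 * m + 1)) for m in range(k)))
    u = math.sqrt(math.pi) / 2.0 * z
    return sum(ck / (2 * k + 1) * u ** (2 * k + 1) for k, ck in enumerate(c))

for z in (0.1, 0.5, 0.9):
    x = erfinv_series(z)
    print(z, x, math.erf(x))   # erf(erfinv(z)) should round-trip back to z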
The inverse complementary error function is defined as

\operatorname{erfc}^{-1}(1 - z) = \operatorname{erf}^{-1}(z).
For real x, there is a unique real number erfi⁻¹(x) satisfying erfi(erfi⁻¹(x)) = x. The inverse imaginary error function is defined as erfi⁻¹(x).
For any real x, Newton's method can be used to compute erfi⁻¹(x), and for -1 ≤ x ≤ 1, the following Maclaurin series converges:

\operatorname{erfi}^{-1}(z) = \sum_{k=0}^{\infty} \frac{(-1)^k c_k}{2k+1} \left( \frac{\sqrt{\pi}}{2} z \right)^{2k+1},
where ck is defined as above.
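The same Newton idea can be sketched for the real inverse error function, since math.erf is in the Python standard library (erfi itself would need its own series evaluation; the helper name is illustrative). The iteration solves erf(w) = y using erf'(w) = 2/√π · e^(−w²):

import math

def erfinv_newton(y: float, tol: float = 1e-15, max_iter: int = 60) -> float:
    w = 0.0
    for _ in range(max_iter):
        err = math.erf(w) - y
        if abs(err) < tol:
            break
        w -= err / (2.0 / math.sqrt(math.pi) * math.exp(-w * w))   # Newton step
    return w

w = erfinv_newton(0.5)
print(w, math.erf(w))   # round-trips back to 0.5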
A useful asymptotic expansion of the complementary error function (and therefore also of the error function) for large real x is

\operatorname{erfc}(x) = \frac{e^{-x^2}}{x\sqrt{\pi}} \left( 1 + \sum_{n=1}^{\infty} (-1)^n \frac{(2n-1)!!}{(2x^2)^n} \right),
where (2n - 1)!! is the double factorial of (2n - 1), the product of all odd numbers up to (2n - 1). This series diverges for every finite x, and its meaning as an asymptotic expansion is that, for any integer N ≥ 1, one has

\operatorname{erfc}(x) = \frac{e^{-x^2}}{x\sqrt{\pi}} \sum_{n=0}^{N-1} (-1)^n \frac{(2n-1)!!}{(2x^2)^n} + R_N(x), \qquad R_N(x) = O\!\left(x^{-(2N+1)} e^{-x^2}\right) \text{ as } x \to \infty.
For large enough values of x, only the first few terms of this asymptotic expansion are needed to obtain a good approximation of erfc(x) (while for not too large values of x, the above Taylor expansion at 0 provides a very fast convergence).
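A small sketch of this behaviour in Python (illustrative helper name): truncating the asymptotic series after a few terms already matches math.erfc well for moderately large x.

import math

def erfc_asymptotic(x: float, n_terms: int = 4) -> float:
    s, term = 1.0, 1.0
    for n in range(1, n_terms):
        term *= -(2 * n - 1) / (2.0 * x * x)   # (-1)^n (2n-1)!! / (2x^2)^n, built incrementally
        s += term
    return math.exp(-x * x) / (x * math.sqrt(math.pi)) * s

for x in (2.0, 3.0, 5.0):
    print(x, erfc_asymptotic(x), math.erfc(x))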
Abramowitz and Stegun give several approximations of varying accuracy (equations 7.1.25-28), allowing one to choose the fastest approximation suitable for a given application. The most accurate of these (their 7.1.26, with maximum error 1.5 × 10⁻⁷) is

\operatorname{erf}(x) \approx 1 - \left( a_1 t + a_2 t^2 + a_3 t^3 + a_4 t^4 + a_5 t^5 \right) e^{-x^2}, \qquad t = \frac{1}{1 + p x},

where p = 0.3275911, a₁ = 0.254829592, a₂ = -0.284496736, a₃ = 1.421413741, a₄ = -1.453152027, a₅ = 1.061405429.
All of these approximations are valid for x >= 0. To use these approximations for negative x, use the fact that erf(x) is an odd function, so erf(x) = -erf(-x).
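A minimal Python sketch of this fit (illustrative helper name), using the coefficients quoted above together with the odd-symmetry extension for negative arguments:

import math

P = 0.3275911
A = (0.254829592, -0.284496736, 1.421413741, -1.453152027, 1.061405429)

def erf_as(x: float) -> float:
    sign = -1.0 if x < 0 else 1.0
    x = abs(x)
    t = 1.0 / (1.0 + P * x)
    poly = sum(a * t ** (k + 1) for k, a in enumerate(A))   # a1*t + ... + a5*t^5
    return sign * (1.0 - poly * math.exp(-x * x))

for x in (-2.0, -0.5, 0.5, 2.0):
    print(x, erf_as(x), math.erf(x))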
Exponential bounds and a pure exponential approximation for the complementary error function are given by

\operatorname{erfc}(x) \le \tfrac{1}{2} e^{-2x^2} + \tfrac{1}{2} e^{-x^2} \le e^{-x^2}, \qquad x > 0,

\operatorname{erfc}(x) \approx \tfrac{1}{6} e^{-x^2} + \tfrac{1}{2} e^{-\frac{4}{3} x^2}, \qquad x > 0.
The above have been generalized to sums of N exponentials with increasing accuracy in terms of N, so that erfc(x) can be accurately approximated or bounded by 2Q̃(√2 x), where

\tilde{Q}(x) = \sum_{n=1}^{N} a_n e^{-b_n x^2}.
In particular, there is a systematic methodology to solve for the numerical coefficients {(aₙ, bₙ)} that yield a minimax approximation or bound for the closely related Q-function: Q(x) ≈ Q̃(x), Q(x) ≤ Q̃(x), or Q(x) ≥ Q̃(x) for x ≥ 0. The coefficients for many variations of the exponential approximations and bounds have been released to open access as a comprehensive dataset.
A tight approximation of the complementary error function for x ∈ [0, ∞) is given by Karagiannidis & Lioumpas (2007), who showed for an appropriate choice of parameters {A, B} that

\operatorname{erfc}(x) \approx \frac{\left(1 - e^{-A x}\right) e^{-x^2}}{B \sqrt{\pi}\, x}.
They determined {A, B} = {1.98, 1.135}, which gives a good approximation for all x ≥ 0.
A single-term lower bound is

\operatorname{erfc}(x) \ge \sqrt{\frac{2e}{\pi}} \frac{\sqrt{\beta - 1}}{\beta} e^{-\beta x^2}, \qquad x \ge 0,\ \beta > 1,

where the parameter β can be picked to minimize error on the desired interval of approximation.
Another approximation is given by Sergei Winitzki using his "global Padé approximations" (pp. 2-3):

\operatorname{erf}(x) \approx \operatorname{sgn}(x) \sqrt{1 - \exp\left( -x^2 \, \frac{\frac{4}{\pi} + a x^2}{1 + a x^2} \right)}, \qquad a = \frac{8(\pi - 3)}{3\pi(4 - \pi)} \approx 0.140012.
This is designed to be very accurate in a neighborhood of 0 and a neighborhood of infinity, and the relative error is less than 0.00035 for all real x. Using the alternate value a ≈ 0.147 reduces the maximum relative error to about 0.00013.
This approximation can be inverted to obtain an approximation for the inverse error function:

\operatorname{erf}^{-1}(x) \approx \operatorname{sgn}(x) \sqrt{ \sqrt{ \left( \frac{2}{\pi a} + \frac{\ln\left(1 - x^2\right)}{2} \right)^2 - \frac{\ln\left(1 - x^2\right)}{a} } - \left( \frac{2}{\pi a} + \frac{\ln\left(1 - x^2\right)}{2} \right) }.
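A sketch of the Winitzki approximation and its inversion in Python, using the alternate constant a ≈ 0.147 mentioned above (illustrative helper names):

import math

A = 0.147

def erf_winitzki(x: float) -> float:
    s = math.copysign(1.0, x)
    x2 = x * x
    return s * math.sqrt(1.0 - math.exp(-x2 * (4.0 / math.pi + A * x2) / (1.0 + A * x2)))

def erfinv_winitzki(y: float) -> float:
    s = math.copysign(1.0, y)
    L = math.log(1.0 - y * y)
    t = 2.0 / (math.pi * A) + L / 2.0
    return s * math.sqrt(math.sqrt(t * t - L / A) - t)

for x in (0.5, 1.0, 2.0):
    print(x, erf_winitzki(x), math.erf(x), erfinv_winitzki(math.erf(x)))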
An approximation with a maximal error of 1.2 × 10⁻⁷ for any real argument is:

\operatorname{erf}(x) = \begin{cases} 1 - \tau, & x \ge 0, \\ \tau - 1, & x < 0, \end{cases}

with

\tau = t \exp\big( -x^2 - 1.26551223 + 1.00002368\,t + 0.37409196\,t^2 + 0.09678418\,t^3 - 0.18628806\,t^4 + 0.27886807\,t^5 - 1.13520398\,t^6 + 1.48851587\,t^7 - 0.82215223\,t^8 + 0.17087277\,t^9 \big), \qquad t = \frac{1}{1 + \tfrac{1}{2}|x|}.
Table of values
Complementary error function
The complementary error function, denoted erfc, is defined as

\operatorname{erfc}(x) = 1 - \operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_x^{\infty} e^{-t^2}\,dt = e^{-x^2} \operatorname{erfcx}(x),
which also defines erfcx(x) = e^{x²} erfc(x), the scaled complementary error function (which can be used instead of erfc to avoid arithmetic underflow). Another form of erfc(x) for non-negative x is known as Craig's formula, after its discoverer:

\operatorname{erfc}(x) = \frac{2}{\pi} \int_0^{\pi/2} \exp\left( -\frac{x^2}{\sin^2\theta} \right) d\theta.
This expression is valid only for positive values of x, but it can be used in conjunction with erfc(x) = 2 - erfc(-x) to obtain erfc(x) for negative values. This form is advantageous in that the range of integration is fixed and finite. An extension of this expression for the erfc of the sum of two non-negative variables is as follows:

\operatorname{erfc}(x + y) = \frac{2}{\pi} \int_0^{\pi/2} \exp\left( -\frac{x^2}{\sin^2\theta} - \frac{y^2}{\cos^2\theta} \right) d\theta, \qquad x, y \ge 0.
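A small sketch verifying Craig's finite-range integral numerically in Python (illustrative helper name; a simple midpoint rule over the fixed interval (0, π/2) is enough for a sanity check):

import math

def erfc_craig(x: float, n: int = 2000) -> float:
    h = (math.pi / 2.0) / n
    total = 0.0
    for k in range(n):
        theta = (k + 0.5) * h
        total += math.exp(-x * x / math.sin(theta) ** 2)
    return 2.0 / math.pi * total * h

for x in (0.5, 1.0, 2.0):
    print(x, erfc_craig(x), math.erfc(x))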
Imaginary error function
The imaginary error function, denoted erfi, is defined as

\operatorname{erfi}(x) = -i \operatorname{erf}(ix) = \frac{2}{\sqrt{\pi}} \int_0^x e^{t^2}\,dt.
Despite the name "imaginary error function", erfi(x) is real when x is real.
When the error function is evaluated for arbitrary complex arguments z, the resulting complex error function is usually discussed in scaled form as the Faddeeva function:

w(z) = e^{-z^2} \operatorname{erfc}(-iz) = \operatorname{erfcx}(-iz).
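A sketch of this relation on the real axis, where erfc(-ix) = 1 + i·erfi(x), assuming SciPy is available (scipy.special.wofz is the Faddeeva function and scipy.special.erfi the imaginary error function):

import numpy as np
from scipy import special

for x in (0.3, 1.0, 2.5):
    lhs = special.wofz(x)                                  # Faddeeva function w(x)
    rhs = np.exp(-x**2) * (1.0 + 1j * special.erfi(x))     # e^{-x^2} * erfc(-ix)
    print(x, lhs, rhs)   # the two values agree to floating-point accuracy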
Cumulative distribution function
The error function is essentially identical to the standard normal cumulative distribution function, denoted Φ, also named norm(x) by some software languages, as they differ only by scaling and translation. Indeed,

\Phi(x) = \frac{1}{2} \left( 1 + \operatorname{erf}\left( \frac{x}{\sqrt{2}} \right) \right) = \frac{1}{2} \operatorname{erfc}\left( -\frac{x}{\sqrt{2}} \right),
or rearranged for erf and erfc:

\operatorname{erf}(x) = 2\Phi\left(x\sqrt{2}\right) - 1, \qquad \operatorname{erfc}(x) = 2\Phi\left(-x\sqrt{2}\right) = 2\left(1 - \Phi\left(x\sqrt{2}\right)\right).
Consequently, the error function is also closely related to the Q-function, which is the tail probability of the standard normal distribution. The Q-function can be expressed in terms of the error function as

Q(x) = \frac{1}{2} - \frac{1}{2} \operatorname{erf}\left( \frac{x}{\sqrt{2}} \right) = \frac{1}{2} \operatorname{erfc}\left( \frac{x}{\sqrt{2}} \right).
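A minimal Python check of these scaling relations against the standard-normal CDF from the standard library (illustrative only):

import math
from statistics import NormalDist

std_normal = NormalDist()   # mean 0, standard deviation 1

for x in (-1.0, 0.0, 1.5):
    phi_via_erf = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))   # Phi(x)
    q_via_erfc = 0.5 * math.erfc(x / math.sqrt(2.0))            # Q(x)
    print(x, phi_via_erf, std_normal.cdf(x), q_via_erfc, 1.0 - std_normal.cdf(x))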
Graph of generalised error functions Eₙ(x): grey curve: E₁(x) = (1 − e^{−x})/√π; red curve: E₂(x) = erf(x); green curve: E₃(x); blue curve: E₄(x); gold curve: E₅(x).
Some authors discuss the more general functions:

E_n(x) = \frac{n!}{\sqrt{\pi}} \int_0^x e^{-t^n}\,dt = \frac{n!}{\sqrt{\pi}} \sum_{p=0}^{\infty} (-1)^p \frac{x^{np+1}}{(np+1)\,p!}.
Notable cases are:
E₀(x) is a straight line through the origin: E_0(x) = \frac{x}{e\sqrt{\pi}}.
E2(x) is the error function, erf(x).
After division by n!, all the En for odd n look similar (but not identical) to each other. Similarly, the En for even n look similar (but not identical) to each other after a simple division by n!. All generalised error functions for n > 0 look similar on the positive x side of the graph.
libcerf, a numeric C library for complex error functions, provides the complex functions cerf, cerfc, cerfcx and the real functions erfi, erfcx with approximately 13-14 digits precision, based on the Faddeeva function as implemented in the MIT Faddeeva Package.
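For comparison, and assuming SciPy is installed, Python exposes analogous routines (erf, erfc, the scaled erfcx, the imaginary erfi, and wofz for the Faddeeva function); this is an illustrative sketch, not part of libcerf itself:

from scipy import special

x = 1.5
print(special.erf(x), special.erfc(x), special.erfcx(x), special.erfi(x))
print(special.wofz(1.0 + 0.5j))   # Faddeeva function of a complex argument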
^ H. M. Schöpf and P. H. Supancic, "On Bürmann's Theorem and Its Application to Problems of Linear and Nonlinear Heat Transfer and Diffusion", The Mathematica Journal, 2014. doi:10.3888/tmj.16-11
^ Zeng, Caibin; Chen, YangQuan (2015). "Global Padé approximations of the generalized Mittag-Leffler function and its inverse". Fractional Calculus and Applied Analysis. 18 (6): 1492-1506. arXiv:1310.5592. doi:10.1515/fca-2015-0086. S2CID 118148950. "Indeed, Winitzki provided the so-called global Padé approximation."
^ Winitzki, Sergei (6 February 2008). "A handy approximation for the error function and its inverse".
^ Numerical Recipes in Fortran 77: The Art of Scientific Computing (ISBN 0-521-43064-X), 1992, page 214, Cambridge University Press.
^ Behnad, Aydin (2020). "A Novel Extension to Craig's Q-Function Formula and Its Application in Dual-Branch EGC Performance Analysis". IEEE Transactions on Communications. 68 (7): 4117-4125. doi:10.1109/TCOMM.2020.2986209. S2CID 216500014.