If h has a fixed (non-zero) value instead of approaching zero, then the right-hand side of the above equation would be written

Δ_h[f](x) / h = (f(x + h) − f(x)) / h.
Hence, the forward difference divided by h approximates the derivative when h is small. The error in this approximation can be derived from Taylor's theorem. Assuming that f is twice differentiable, we have

Δ_h[f](x) / h − f′(x) = O(h)  as h → 0.
The same formula holds for the backward difference:

∇_h[f](x) / h − f′(x) = O(h).
However, the central (also called centered) difference yields a more accurate approximation. If f is three times differentiable,

δ_h[f](x) / h − f′(x) = O(h²).
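A minimal numerical check of these error orders (a Python sketch assuming NumPy; the test function sin and the step sizes are illustrative, not from the article):

```python
import numpy as np

f, df = np.sin, np.cos  # test function and its exact derivative
x = 1.0
for h in (1e-1, 1e-2, 1e-3):
    fwd = (f(x + h) - f(x)) / h            # forward difference,  error O(h)
    bwd = (f(x) - f(x - h)) / h            # backward difference, error O(h)
    ctr = (f(x + h) - f(x - h)) / (2 * h)  # central difference,  error O(h^2)
    print(f"h={h:.0e}  fwd={abs(fwd - df(x)):.1e}  "
          f"bwd={abs(bwd - df(x)):.1e}  ctr={abs(ctr - df(x)):.1e}")
```

Each tenfold reduction of h should cut the one-sided errors by roughly a factor of 10 and the central error by roughly a factor of 100.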
The main problem with the central difference method, however, is that oscillating functions can yield a zero estimate of the derivative. If f(nh) = 1 for n odd and f(nh) = 2 for n even, then f′(nh) = 0 if it is calculated with the central difference scheme, since the samples at (n − 1)h and (n + 1)h are equal. This is particularly troublesome if the domain of f is discrete. See also Symmetric derivative.
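This aliasing effect can be reproduced in a few lines (same illustrative Python setting; the alternating samples are the ones from the text):

```python
# Samples of the function from the text: f(nh) = 1 for odd n, 2 for even n.
h = 1.0
f = {n: (1 if n % 2 else 2) for n in range(-2, 3)}

n = 0
central = (f[n + 1] - f[n - 1]) / (2 * h)  # only sees the odd samples -> 0.0
forward = (f[n + 1] - f[n]) / h            # sees the oscillation     -> -1.0
print(central, forward)
```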
Authors for whom finite differences mean finite difference approximations define the forward/backward/central differences as the quotients given in this section (instead of employing the definitions given in the previous section).
In an analogous way, one can obtain finite difference approximations to higher order derivatives and differential operators. For example, by using the above central difference formula for f′(x + h/2) and f′(x − h/2) and applying a central difference formula for the derivative of f′ at x, we obtain the central difference approximation of the second derivative of f:

Second order central

f″(x) ≈ δ_h²[f](x) / h² = (f(x + h) − 2 f(x) + f(x − h)) / h².
Similarly we can apply other differencing formulas in a recursive manner.
Second order forward

f″(x) ≈ Δ_h²[f](x) / h² = (f(x + 2h) − 2 f(x + h) + f(x)) / h²

Second order backward

f″(x) ≈ ∇_h²[f](x) / h² = (f(x) − 2 f(x − h) + f(x − 2h)) / h²
More generally, the nth order forward, backward, and central differences are given by, respectively,

Δ_hⁿ[f](x) = Σ_{i=0}^{n} (−1)ⁿ⁻ⁱ C(n, i) f(x + ih),
∇_hⁿ[f](x) = Σ_{i=0}^{n} (−1)ⁱ C(n, i) f(x − ih),
δ_hⁿ[f](x) = Σ_{i=0}^{n} (−1)ⁱ C(n, i) f(x + (n/2 − i)h),

where C(n, i) denotes the binomial coefficient.
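These sums transcribe directly into code (a Python sketch; the helper names are mine, not from the article):

```python
from math import comb

def forward_diff(f, x, h, n):
    """n-th order forward difference as a binomially weighted sum of samples."""
    return sum((-1) ** (n - i) * comb(n, i) * f(x + i * h) for i in range(n + 1))

def central_diff(f, x, h, n):
    """n-th order central difference; for odd n it samples at half-integer offsets."""
    return sum((-1) ** i * comb(n, i) * f(x + (n / 2 - i) * h) for i in range(n + 1))

# Sanity check: the second forward difference of x^2 with h = 1 is the constant 2,
# i.e. the familiar [1, -2, 1] stencil applied to f(x), f(x+1), f(x+2).
print(forward_diff(lambda t: t * t, 0.0, 1.0, 2))  # -> 2.0
```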
Note that the central difference will, for odd n, have h multiplied by non-integers. This is often a problem because it amounts to changing the interval of discretization. The problem may be remedied by taking the average of δⁿ[f](x − h/2) and δⁿ[f](x + h/2).
Forward differences applied to a sequence are sometimes called the binomial transform of the sequence, and have a number of interesting combinatorial properties. Forward differences may be evaluated using the Nörlund–Rice integral. The integral representation for these types of series is interesting, because the integral can often be evaluated using asymptotic expansion or saddle-point techniques; by contrast, the forward difference series can be extremely hard to evaluate numerically, because the binomial coefficients grow rapidly for large n.
The relationship of these higher-order differences with the respective derivatives is straightforward,

dⁿf/dxⁿ (x) = Δ_hⁿ[f](x) / hⁿ + O(h) = ∇_hⁿ[f](x) / hⁿ + O(h) = δ_hⁿ[f](x) / hⁿ + O(h²).
Higher-order differences can also be used to construct better approximations. As mentioned above, the first-order difference approximates the first-order derivative up to a term of order h. However, the combination

(Δ_h[f](x) − ½ Δ_h²[f](x)) / h = −(f(x + 2h) − 4 f(x + h) + 3 f(x)) / (2h)

approximates f′(x) up to a term of order h². This can be proven by expanding the above expression in Taylor series, or by using the calculus of finite differences, explained below.
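A quick check of the claimed order (illustrative Python, same assumptions as the sketches above):

```python
import numpy as np

f, df = np.exp, np.exp  # exp is its own derivative, which makes checking easy
x = 0.5
for h in (1e-1, 1e-2, 1e-3):
    approx = -(f(x + 2 * h) - 4 * f(x + h) + 3 * f(x)) / (2 * h)
    print(f"h={h:.0e}  err={abs(approx - df(x)):.1e}")  # error shrinks like h^2
```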
If necessary, the finite difference can be centered about any point by mixing forward, backward, and central differences.
Arbitrarily sized kernels
Using linear algebra, one can construct finite difference approximations that use an arbitrary number of points to the left and a (possibly different) number of points to the right of the evaluation point, for a derivative of any order. This involves solving a linear system such that the Taylor expansion of the sum of those points around the evaluation point best approximates the Taylor expansion of the desired derivative. Such formulas can be represented graphically on a hexagonal or diamond-shaped grid.
This is useful for differentiating a function on a grid, where, as one approaches the edge of the grid, one must sample fewer and fewer points on one side.
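A sketch of this construction (Python with NumPy; the function name fd_weights and its conventions are mine): given sample offsets s_0, ..., s_N in units of h, solve the Vandermonde system that matches the Taylor coefficients of the desired m-th derivative.

```python
import numpy as np
from math import factorial

def fd_weights(offsets, m):
    """Weights c such that sum_i c[i] * f(x + offsets[i] * h) / h**m
    approximates the m-th derivative of f at x."""
    s = np.asarray(offsets, dtype=float)
    n = len(s)
    A = np.vander(s, n, increasing=True).T  # A[j, i] = s[i]**j (Taylor coefficient rows)
    b = np.zeros(n)
    b[m] = factorial(m)  # match the m-th Taylor coefficient, zero out the others
    return np.linalg.solve(A, b)

# Classic stencils fall out as special cases:
print(fd_weights([-1, 0, 1], 2))  # -> [ 1. -2.  1.]   (second-order central)
print(fd_weights([0, 1, 2], 1))   # -> [-1.5  2. -0.5] (one-sided, O(h^2) accurate)
```

The one-sided stencil in the last line is exactly the kind needed at the edge of a grid, where points are available on one side only.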
The Newton series consists of the terms of the Newton forward difference equation, named after Isaac Newton; in essence, it is the Newton interpolation formula, first published in his Principia Mathematica in 1687, namely the discrete analog of the continuous Taylor expansion,

f(x) = Σ_{k=0}^{∞} (Δᵏ[f](a) / k!) (x − a)_k = Σ_{k=0}^{∞} C(x − a, k) Δᵏ[f](a),
which holds for any polynomial function f and for many (but not all) analytic functions. (It does not hold when f is of exponential type π. This is easily seen, as the sine function vanishes at integer multiples of π; the corresponding Newton series is identically zero, as all finite differences are zero in this case. Yet clearly, the sine function is not zero.) Here, the expression

C(x, k) = (x)_k / k!

is the binomial coefficient, and

(x)_k = x(x − 1)(x − 2) ⋯ (x − k + 1)

is the "falling factorial" or "lower factorial", while the empty product (x)_0 is defined to be 1. In this particular case, there is an assumption of unit steps for the changes in the values of x, h = 1, of the generalization below.
To illustrate how one may use Newton's formula in actual practice, consider the first few terms of doubling the Fibonacci sequence f = 2, 2, 4, ... One can find a polynomial that reproduces these values, by first computing a difference table and then substituting the differences that correspond to x₀ = 1 (the leading entry of each row of the table) into the formula as follows,

x = 1: f = 2,  Δf = 0,  Δ²f = 2
x = 2: f = 2,  Δf = 2
x = 3: f = 4

f(x) = 2 + 0 · (x − 1)/1! + 2 · (x − 1)(x − 2)/2! = x² − 3x + 4.
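The same computation as a short Python sketch (the function name newton_poly is mine; the difference-table construction follows the text):

```python
def newton_poly(values, x0=1, h=1):
    """Polynomial through values at x0, x0 + h, ... via Newton's forward formula."""
    table = [list(values)]            # row k will hold the k-th forward differences
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([b - a for a, b in zip(prev, prev[1:])])
    diffs = [row[0] for row in table]  # the leading differences of each row

    def p(x):
        total, term = 0.0, 1.0
        for k, d in enumerate(diffs):
            if k > 0:
                term *= (x - (x0 + (k - 1) * h)) / (k * h)  # falling factorial / k!
            total += d * term
        return total
    return p

p = newton_poly([2, 2, 4], x0=1)
print([p(x) for x in (1, 2, 3)])  # -> [2.0, 2.0, 4.0], i.e. p(x) = x^2 - 3x + 4
```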
The finite difference of higher orders can be defined in a recursive manner as Δ_hⁿ ≡ Δ_h(Δ_hⁿ⁻¹). Another equivalent definition is Δ_hⁿ = [T_h − I]ⁿ, where T_h is the shift operator with step h, defined by T_h[f](x) = f(x + h), and I is the identity operator.
The difference operator Δ_h is a linear operator; as such it satisfies Δ_h[αf + βg](x) = α Δ_h[f](x) + β Δ_h[g](x).
It also satisfies a special Leibniz rule indicated above,

Δ_h(f(x) g(x)) = (Δ_h f(x)) g(x + h) + f(x) (Δ_h g(x)).

Similar statements hold for the backward and central differences.
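The rule can be verified numerically in a couple of lines (illustrative Python):

```python
h, x = 0.5, 2.0
f = lambda t: t ** 2
g = lambda t: 3 * t + 1
D = lambda u, t: u(t + h) - u(t)  # forward difference with step h

lhs = D(lambda t: f(t) * g(t), x)          # difference of the product
rhs = D(f, x) * g(x + h) + f(x) * D(g, x)  # Leibniz rule; note the shift g(x + h)
print(lhs, rhs)  # both print 25.125: the identity is exact, not approximate
```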
Formally applying the Taylor series with respect to h yields the formula

Δ_h = hD + (h²/2!) D² + (h³/3!) D³ + ⋯ = e^{hD} − I,

where D denotes the continuum derivative operator, mapping f to its derivative f′. The expansion is valid when both sides act on analytic functions, for sufficiently small h. Thus, T_h = e^{hD}, and formally inverting the exponential yields

hD = ln(1 + Δ_h) = Δ_h − Δ_h²/2 + Δ_h³/3 − ⋯.
This formula holds in the sense that both operators give the same result when applied to a polynomial.
Even for analytic functions, the series on the right is not guaranteed to converge; it may be an asymptotic series. However, it can be used to obtain more accurate approximations for the derivative. For instance, retaining the first two terms of the series yields the second-order approximation to f ?(x) mentioned at the end of the section Higher-order differences.
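For instance, truncating hD = ln(1 + Δ_h) after n terms gives a forward-difference formula for f′(x) accurate to order hⁿ. A sketch under the same illustrative assumptions as before:

```python
import numpy as np
from math import comb

def deriv_via_log_series(f, x, h, n_terms):
    """Approximate f'(x) by truncating hD = ln(1 + Delta_h) after n_terms terms."""
    def delta_power(k):
        # Delta_h^k [f](x) via the binomial expansion of (T_h - I)^k
        return sum((-1) ** (k - i) * comb(k, i) * f(x + i * h) for i in range(k + 1))
    return sum((-1) ** (k + 1) / k * delta_power(k) for k in range(1, n_terms + 1)) / h

f, x = np.sin, 1.0
for n in (1, 2, 3):
    err = abs(deriv_via_log_series(f, x, 0.01, n) - np.cos(x))
    print(f"{n} term(s): error {err:.1e}")  # error improves roughly like h^n
```

Keeping two terms reproduces the O(h²) combination from the section Higher-order differences.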
The analogous formulas for the backward and central difference operators are

hD = −ln(1 − ∇_h)  and  hD = 2 arsinh(δ_h / 2).
The calculus of finite differences is related to the umbral calculus of combinatorics. This remarkably systematic correspondence is due to the identity of the commutators of the umbral quantities to their continuum analogs (h → 0 limits),

[Δ_h/h, x T_h⁻¹] = [D, x] = I.
A large number of formal differential relations of standard calculus involving functions f(x) thus map systematically to umbral finite-difference analogs involving f(x T_h⁻¹).
For instance, the umbral analog of a monomial xⁿ is a generalization of the above falling factorial (Pochhammer k-symbol),

(x T_h⁻¹)ⁿ = x(x − h)(x − 2h) ⋯ (x − (n − 1)h),
hence the above Newton interpolation formula (by matching coefficients in the expansion of an arbitrary function f (x) in such symbols), and so on.
For example, the umbral sine is

sin(x T_h⁻¹) = x − x(x − h)(x − 2h)/3! + x(x − h)(x − 2h)(x − 3h)(x − 4h)/5! − ⋯
As in the continuum limit, the eigenfunction of Δ_h/h also happens to be an exponential,

(Δ_h/h) (1 + λh)^{x/h} = λ (1 + λh)^{x/h},
and hence Fourier sums of continuum functions are readily mapped to umbral Fourier sums faithfully, i.e., involving the same Fourier coefficients multiplying these umbral basis exponentials. This umbral exponential thus amounts to the exponential generating function of the Pochhammer symbols.
A generalized finite difference is usually defined as

Δ_h^μ[f](x) = Σ_{k=0}^{N} μ_k f(x + kh),

where μ = (μ_0, ..., μ_N) is its coefficient vector. An infinite difference is a further generalization, where the finite sum above is replaced by an infinite series. Another way of generalization is making the coefficients μ_k depend on the point x: μ_k = μ_k(x), thus considering weighted finite differences. Also one may make the step h depend on the point x: h = h(x). Such generalizations are useful for constructing different moduli of continuity.
The generalized difference can be seen as an element of the polynomial ring R[T_h]. It leads to difference algebras.
As a convolution operator: via the formalism of incidence algebras, difference operators and other Möbius inversions can be represented by convolution with a function on the poset, called the Möbius function μ; for the difference operator, μ is the sequence (1, −1, 0, 0, 0, ...).
Multivariate finite differences
Finite differences can be considered in more than one variable. They are analogous to partial derivatives in several variables.
Some partial derivative approximations are:

f_x(x, y) ≈ (f(x + h, y) − f(x − h, y)) / (2h)
f_y(x, y) ≈ (f(x, y + k) − f(x, y − k)) / (2k)
f_xx(x, y) ≈ (f(x + h, y) − 2 f(x, y) + f(x − h, y)) / h²
f_yy(x, y) ≈ (f(x, y + k) − 2 f(x, y) + f(x, y − k)) / k²
f_xy(x, y) ≈ (f(x + h, y + k) − f(x + h, y − k) − f(x − h, y + k) + f(x − h, y − k)) / (4hk)
Alternatively, for applications in which the computation of f is the most costly step, and both first and second derivatives must be computed, a more efficient formula for the last case is

f_xy(x, y) ≈ (f(x + h, y + k) − f(x + h, y) − f(x, y + k) + 2 f(x, y) − f(x − h, y) − f(x, y − k) + f(x − h, y − k)) / (2hk),
since the only values to compute that are not already needed for the previous four equations are f (x + h, y + k) and f (x - h, y - k).
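A compact sketch of these stencils in Python (the test function and step sizes are illustrative):

```python
import numpy as np

def partials(f, x, y, h=1e-3, k=1e-3):
    """Finite difference approximations to f_x, f_y, f_xx, f_yy, f_xy."""
    fx  = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy  = (f(x, y + k) - f(x, y - k)) / (2 * k)
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + k) - 2 * f(x, y) + f(x, y - k)) / k**2
    # Mixed derivative via the economical formula: besides the samples above,
    # only f(x + h, y + k) and f(x - h, y - k) need to be evaluated.
    fxy = (f(x + h, y + k) - f(x + h, y) - f(x, y + k) + 2 * f(x, y)
           - f(x - h, y) - f(x, y - k) + f(x - h, y - k)) / (2 * h * k)
    return fx, fy, fxx, fyy, fxy

f = lambda x, y: np.sin(x) * np.exp(y)
print(partials(f, 0.7, 0.3))  # compare with (cos·exp, sin·exp, -sin·exp, sin·exp, cos·exp)
```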