Linear Form

In linear algebra, a linear form (also known as a linear functional, a one-form, or a covector) is a linear map from a vector space to its field of scalars. If vectors are represented as column vectors (as is the common convention), then linear functionals are represented as row vectors, and their action on vectors is given by the matrix product with the row vector on the left and the column vector on the right. In general, if V is a vector space over a field k, then a linear functional f is a function from V to k that is linear:

${\displaystyle f({\vec {v}}+{\vec {w}})=f({\vec {v}})+f({\vec {w}})}$ for all ${\displaystyle {\vec {v}},{\vec {w}}\in V}$
${\displaystyle f(a{\vec {v}})=af({\vec {v}})}$ for all ${\displaystyle {\vec {v}}\in V,a\in k.}$

The set of all linear functionals from V to k, denoted by Homk(V, k), forms a vector space over k with the operations of addition and scalar multiplication defined pointwise. This space is called the dual space of V, or sometimes the algebraic dual space, to distinguish it from the continuous dual space. It is often written V*, V′, V# or V∨ when the field k is understood.

## Examples

The "constant zero function," mapping every vector to zero, is trivially a linear functional. Every other linear functional (such as the ones below) is surjective (i.e. its range is all of k).

### Linear functionals in Rn

Suppose that vectors in the real coordinate space Rn are represented as column vectors

${\displaystyle {\vec {x}}={\begin{bmatrix}x_{1}\\\vdots \\x_{n}\end{bmatrix}}.}$

For each row vector [a1 ... an] there is a linear functional f defined by

${\displaystyle f({\vec {x}})=a_{1}x_{1}+\cdots +a_{n}x_{n},}$

and each linear functional can be expressed in this form.

This can be interpreted as either the matrix product or the dot product of the row vector [a1 ... an] and the column vector ${\displaystyle {\vec {x}}}$:

${\displaystyle f({\vec {x}})=\left[a_{1}\dots a_{n}\right]{\begin{bmatrix}x_{1}\\\vdots \\x_{n}\end{bmatrix}}.}$
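As a minimal numerical sketch of the row-vector picture (the row vector [2, −1, 5] and the test vectors below are illustrative choices, not from the text), the two linearity axioms can be checked directly:

```python
# A linear functional on R^3 given by the (illustrative) row vector [2, -1, 5]:
# f(x) = 2*x1 - x2 + 5*x3.

a = [2.0, -1.0, 5.0]

def f(x):
    """Apply the functional: the row-vector/column-vector matrix product."""
    return sum(ai * xi for ai, xi in zip(a, x))

v = [1.0, 0.0, 2.0]
w = [3.0, 4.0, -1.0]

# Additivity: f(v + w) == f(v) + f(w)
vw = [vi + wi for vi, wi in zip(v, w)]
assert abs(f(vw) - (f(v) + f(w))) < 1e-12
# Homogeneity: f(c*v) == c*f(v)
c = 7.0
assert abs(f([c * vi for vi in v]) - c * f(v)) < 1e-12
print(f(v))   # 2*1 - 0 + 5*2 = 12.0
```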

### (Definite) Integration

Linear functionals first appeared in functional analysis, the study of vector spaces of functions. A typical example of a linear functional is integration: the linear transformation defined by the Riemann integral

${\displaystyle I(f)=\int _{a}^{b}f(x)\,dx}$

is a linear functional from the vector space C[a, b] of continuous functions on the interval [a, b] to the real numbers. The linearity of I follows from the standard facts about the integral:

${\displaystyle {\begin{aligned}I(f+g)&=\int _{a}^{b}[f(x)+g(x)]\,dx=\int _{a}^{b}f(x)\,dx+\int _{a}^{b}g(x)\,dx=I(f)+I(g)\\I(\alpha f)&=\int _{a}^{b}\alpha f(x)\,dx=\alpha \int _{a}^{b}f(x)\,dx=\alpha I(f).\end{aligned}}}$
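A small numerical sketch of this: approximate I by a midpoint Riemann sum (the interval [0, π], the step count, and the test functions sin and cos are arbitrary illustrative choices). For any fixed step size the sum is itself a linear functional in f, so the two identities hold up to floating-point rounding:

```python
import math

def I(f, a=0.0, b=math.pi, n=1000):
    """Midpoint Riemann-sum approximation to the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

f = math.sin
g = math.cos
alpha = 3.0

# Additivity: I(f + g) == I(f) + I(g)
assert abs(I(lambda x: f(x) + g(x)) - (I(f) + I(g))) < 1e-9
# Homogeneity: I(alpha * f) == alpha * I(f)
assert abs(I(lambda x: alpha * f(x)) - alpha * I(f)) < 1e-9
```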

### Evaluation

Let Pn denote the vector space of real-valued polynomial functions of degree at most n defined on an interval [a, b].  If c ∈ [a, b], then let evc be the evaluation functional

${\displaystyle \operatorname {ev} _{c}f=f(c).}$

The mapping f ↦ f(c) is linear since

${\displaystyle {\begin{aligned}(f+g)(c)&=f(c)+g(c)\\(\alpha f)(c)&=\alpha f(c).\end{aligned}}}$

If x0, ..., xn are distinct points in [a, b], then the evaluation functionals evxi form a basis of the dual space of Pn.  (Lax (1996) proves this last fact using Lagrange interpolation.)

### Non-example

A function f having the equation of a line f(x) = a + rx with a ≠ 0 (e.g. f(x) = 1 + 2x) is not a linear functional on R, since it is not linear.[nb 1] It is, however, affine-linear.

## Visualization

Geometric interpretation of a 1-form α as a stack of hyperplanes of constant value, each corresponding to the set of vectors that α maps to a given scalar value (shown next to the plane), together with the "sense" of increase. The zero plane passes through the origin.

In finite dimensions, a linear functional can be visualized in terms of its level sets, the sets of vectors which map to a given value.  In three dimensions, the level sets of a linear functional are a family of mutually parallel planes; in higher dimensions, they are parallel hyperplanes.  This method of visualizing linear functionals is sometimes introduced in general relativity texts, such as Gravitation by Misner, Thorne & Wheeler (1973).

## Applications

If x0, ..., xn are n + 1 distinct points in [a, b], then the linear functionals evxi : f ↦ f(xi) defined above form a basis of the dual space of Pn, the space of polynomials of degree at most n. The integration functional I is also a linear functional on Pn, and so can be expressed as a linear combination of these basis elements. In symbols, there are coefficients a0, ..., an for which

${\displaystyle I(f)=a_{0}f(x_{0})+a_{1}f(x_{1})+\dots +a_{n}f(x_{n})}$

for all f ∈ Pn. This forms the foundation of the theory of numerical quadrature.[1]
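As an illustration (the interval [0, 1] and the nodes 0, 1/2, 1 are hypothetical choices, not from the text), the coefficients ai can be computed by requiring the quadrature to be exact on the monomial basis 1, x, ..., xn, which is a Vandermonde linear system. With three equally spaced nodes this recovers Simpson's rule:

```python
# Recover quadrature weights a_i on [0, 1] for the nodes 0, 1/2, 1 by forcing
# exactness on 1, x, x^2.  Exact arithmetic via fractions; the solver is a
# minimal hand-rolled Gaussian elimination, not a library routine.
from fractions import Fraction as F

def solve(A, b):
    """Solve the linear system A x = b by Gauss-Jordan elimination."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = next(r for r in range(i, n) if M[r][i] != 0)   # pivot row
        M[i], M[p] = M[p], M[i]
        M[i] = [m / M[i][i] for m in M[i]]                  # normalize pivot
        for r in range(n):
            if r != i and M[r][i] != 0:
                M[r] = [mr - M[r][i] * mi for mr, mi in zip(M[r], M[i])]
    return [row[-1] for row in M]

nodes = [F(0), F(1, 2), F(1)]
# Row k demands sum_i a_i * x_i**k == integral of x^k over [0, 1] = 1/(k+1).
A = [[x ** k for x in nodes] for k in range(3)]
b = [F(1, k + 1) for k in range(3)]
weights = solve(A, b)
print(weights)   # [Fraction(1, 6), Fraction(2, 3), Fraction(1, 6)] -- Simpson's rule
```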

### In quantum mechanics

Linear functionals are particularly important in quantum mechanics.  Quantum mechanical systems are represented by Hilbert spaces, which are anti-isomorphic to their own dual spaces.  A state of a quantum mechanical system can be identified with a linear functional.  For more information see bra-ket notation.

### Distributions

In the theory of generalized functions, certain kinds of generalized functions called distributions can be realized as linear functionals on spaces of test functions.

## Dual vectors and bilinear forms

Linear functionals (1-forms) α, β and their sum σ, and vectors u, v, w, in 3d Euclidean space. The number of (1-form) hyperplanes intersected by a vector equals the inner product.[2]

Every non-degenerate bilinear form on a finite-dimensional vector space V induces an isomorphism V → V* : v ↦ v* such that

${\displaystyle v^{*}(w):=\langle v,w\rangle \quad \forall w\in V,}$

where the bilinear form on V is denoted ⟨v, w⟩ (for instance, in Euclidean space ⟨v, w⟩ = v ⋅ w is the dot product of v and w).

The inverse isomorphism is V* → V : v* ↦ v, where v is the unique element of V such that

${\displaystyle \langle v,w\rangle =v^{*}(w)\quad \forall w\in V.}$

The vector v* ∈ V* defined above is said to be the dual vector of v ∈ V.

In an infinite-dimensional Hilbert space, analogous results hold by the Riesz representation theorem: every continuous linear functional arises in this way from a unique vector, giving a mapping V → V* onto the continuous dual space V*.

## Relationship to bases

### Basis of the dual space

Let the vector space V have a basis ${\displaystyle {\vec {e}}_{1},{\vec {e}}_{2},\dots ,{\vec {e}}_{n}}$, not necessarily orthogonal.  Then the dual space V* has a basis ${\displaystyle {\tilde {\omega }}^{1},{\tilde {\omega }}^{2},\dots ,{\tilde {\omega }}^{n}}$ called the dual basis defined by the special property that

${\displaystyle {\tilde {\omega }}^{i}({\vec {e}}_{j})=\left\{{\begin{matrix}1&\mathrm {if} \ i=j\\0&\mathrm {if} \ i\not =j.\end{matrix}}\right.}$

Or, more succinctly,

${\displaystyle {\tilde {\omega }}^{i}({\vec {e}}_{j})=\delta _{ij}}$

where δ is the Kronecker delta.  Here the superscripts of the basis functionals are not exponents but are instead contravariant indices.

A linear functional ${\displaystyle {\tilde {u}}}$ belonging to the dual space ${\displaystyle {\tilde {V}}}$ can be expressed as a linear combination of basis functionals, with coefficients ("components") ui,

${\displaystyle {\tilde {u}}=\sum _{i=1}^{n}u_{i}\,{\tilde {\omega }}^{i}.}$

Then, applying the functional ${\displaystyle {\tilde {u}}}$ to a basis vector ej yields

${\displaystyle {\tilde {u}}({\vec {e}}_{j})=\sum _{i=1}^{n}\left(u_{i}\,{\tilde {\omega }}^{i}\right)\left({\vec {e}}_{j}\right)=\sum _{i}u_{i}\left[{\tilde {\omega }}^{i}\left({\vec {e}}_{j}\right)\right]}$

due to linearity of scalar multiples of functionals and pointwise linearity of sums of functionals.  Then

${\displaystyle {\begin{aligned}{\tilde {u}}({\vec {e}}_{j})&=\sum _{i}u_{i}\left[{\tilde {\omega }}^{i}\left({\vec {e}}_{j}\right)\right]=\sum _{i}u_{i}{\delta ^{i}}_{j}\\&=u_{j}.\end{aligned}}}$

So each component of a linear functional can be extracted by applying the functional to the corresponding basis vector.
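A small sketch of this extraction in R2, using the illustrative (non-orthogonal) basis e1 = (1, 1), e2 = (0, 1): the dual-basis covectors are the rows of the inverse of the matrix whose columns are the ej, and applying a functional to a basis vector recovers its component.

```python
# Dual basis of the (illustrative) basis e1 = (1, 1), e2 = (0, 1) of R^2,
# via a hand-coded 2x2 matrix inverse.

e1, e2 = (1.0, 1.0), (0.0, 1.0)
# Matrix E with columns e1, e2:
a, b = e1[0], e2[0]
c, d = e1[1], e2[1]
det = a * d - b * c
# Rows of E^{-1} are the dual covectors omega^1, omega^2:
omega = [(d / det, -b / det), (-c / det, a / det)]

def apply(w, v):
    """Apply a covector (row vector) w to a vector v."""
    return w[0] * v[0] + w[1] * v[1]

# Defining property: omega^i(e_j) = Kronecker delta.
for i, w in enumerate(omega):
    for j, e in enumerate((e1, e2)):
        assert abs(apply(w, e) - (1.0 if i == j else 0.0)) < 1e-12

# Component extraction: applying u = 3*omega^1 + 5*omega^2 to e_j recovers u_j.
u = tuple(3.0 * omega[0][k] + 5.0 * omega[1][k] for k in range(2))
print(apply(u, e1), apply(u, e2))   # 3.0 5.0
```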

### The dual basis and inner product

When the space V carries an inner product, it is possible to write an explicit formula for the dual basis of a given basis.  Let V have the (not necessarily orthogonal) basis ${\displaystyle {\vec {e}}_{1},\dots ,{\vec {e}}_{n}}$.  In three dimensions (n = 3), the dual basis can be written explicitly:

${\displaystyle {\tilde {\omega }}^{i}({\vec {v}})={1 \over 2}\,\left\langle {\sum _{j=1}^{3}\sum _{k=1}^{3}\varepsilon ^{ijk}\,({\vec {e}}_{j}\times {\vec {e}}_{k}) \over {\vec {e}}_{1}\cdot {\vec {e}}_{2}\times {\vec {e}}_{3}},{\vec {v}}\right\rangle ,}$

for i = 1, 2, 3, where ε is the Levi-Civita symbol and ${\displaystyle \langle \cdot ,\cdot \rangle }$ the inner product (or dot product) on V.
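Because of the Levi-Civita sum, the formula reduces to ω1 ∼ e2 × e3, ω2 ∼ e3 × e1, ω3 ∼ e1 × e2, each divided by the scalar triple product e1 ⋅ (e2 × e3). A sketch with an illustrative non-orthogonal basis:

```python
# Dual basis in R^3 via cross products, checked against the defining
# property omega^i(e_j) = delta_ij.  The basis vectors are illustrative.

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

e = [(1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (1.0, 1.0, 1.0)]   # non-orthogonal basis
vol = dot(e[0], cross(e[1], e[2]))                         # e1 . (e2 x e3)
omega = [tuple(x / vol for x in cross(e[(i + 1) % 3], e[(i + 2) % 3]))
         for i in range(3)]

# Defining property omega^i(e_j) = delta_ij:
for i in range(3):
    for j in range(3):
        assert abs(dot(omega[i], e[j]) - (1.0 if i == j else 0.0)) < 1e-12
```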

In higher dimensions, this generalizes as follows

${\displaystyle {\tilde {\omega }}^{i}({\vec {v}})=\left\langle {\frac {{\underset {1\leq i_{2}<i_{3}<\cdots <i_{n}\leq n}{\sum }}\varepsilon ^{ii_{2}\dots i_{n}}\,\left(\star ({\vec {e}}_{i_{2}}\wedge \cdots \wedge {\vec {e}}_{i_{n}})\right)}{\star ({\vec {e}}_{1}\wedge \cdots \wedge {\vec {e}}_{n})}},{\vec {v}}\right\rangle ,}$

where ${\displaystyle \star }$ is the Hodge star operator.

## Change of field

Any vector space X over C is also a vector space over R, endowed with a complex structure; that is, there exists a real vector subspace XR such that we can (formally) write X = XR ⊕ XRi as R-vector spaces. Every C-linear functional on X is an R-linear operator, but it is not an R-linear functional on X, because its range (namely, C) is 2-dimensional over R. (Conversely, an R-linear functional has range too small to be a C-linear functional as well.)

However, every C-linear functional uniquely determines an R-linear functional on XR by restriction. More surprisingly, this result can be reversed: every R-linear functional g on X induces a canonical C-linear functional Lg ∈ X#, such that the real part of Lg is g: define

Lg(x) := g(x) − i g(ix)     for all x ∈ X.

The map g ↦ Lg is R-linear (i.e. Lg+h = Lg + Lh and Lrg = rLg for all r ∈ R and g, h ∈ XR#). Similarly, the inverse of the surjection Hom(X, ℂ) → Hom(X, R) defined by f ↦ Im f is the map I ↦ (x ↦ I(ix) + i I(x)).
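The formula Lg(x) = g(x) − i g(ix) can be sanity-checked in the simplest case X = C, where every C-linear functional is multiplication by a fixed complex number c (c here is an illustrative choice). Taking g = Re f, the construction recovers f itself:

```python
# Verify L_g(x) = g(x) - i*g(i*x) on X = C, where f(x) = c*x is C-linear.

c = 2.0 - 3.0j

def f(x):          # a C-linear functional on X = C
    return c * x

def g(x):          # its real part: an R-linear functional
    return f(x).real

def L(g):
    """The induced C-linear functional with real part g."""
    return lambda x: g(x) - 1j * g(1j * x)

# L(Re f) should reproduce f on a few sample points.
for x in (1.0 + 0.0j, 0.5 - 2.0j, -3.0 + 1.0j):
    assert abs(L(g)(x) - f(x)) < 1e-12
```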

This relationship was discovered by Henry Löwig in 1934 (although it is usually credited to F. Murray),[3] and can be generalized to arbitrary finite extensions of a field in the natural way.

## In infinite dimensions

Below, all vector spaces are over either the real numbers R or the complex numbers C.

If V is a topological vector space, the space of continuous linear functionals -- the continuous dual -- is often simply called the dual space. If V is a Banach space, then so is its (continuous) dual. To distinguish the ordinary dual space from the continuous dual space, the former is sometimes called the algebraic dual space. In finite dimensions, every linear functional is continuous, so the continuous dual is the same as the algebraic dual, but in infinite dimensions the continuous dual is a proper subspace of the algebraic dual.

A linear functional f on a (not necessarily locally convex) topological vector space X is continuous if and only if there exists a continuous seminorm p on X such that |f(x)| ≤ p(x) for all x ∈ X.[4]

### Characterizing closed subspaces

Continuous linear functionals have nice properties for analysis: a linear functional is continuous if and only if its kernel is closed,[5] and a non-trivial continuous linear functional is an open map, even if the (topological) vector space is not complete.[6]

#### Hyperplanes and maximal subspaces

A vector subspace M of X is called maximal if M ≠ X but there is no vector subspace N satisfying M ⊊ N ⊊ X. M is maximal if and only if it is the kernel of some non-trivial linear functional on X (i.e. M = ker f for some non-trivial linear functional f on X). A hyperplane in X is a translate of a maximal vector subspace. By linearity, a subset H of X is a hyperplane if and only if there exists some non-trivial linear functional f on X such that H = { x ∈ X : f(x) = 1 }.[3]

#### Relationships between multiple linear functionals

Any two linear functionals with the same kernel are proportional (i.e. scalar multiples of each other). This fact can be generalized to the following theorem.

Theorem[7][8] — If f, g1, ..., gn are linear functionals on X, then the following are equivalent:

1. f can be written as a linear combination of g1, ..., gn (i.e. there exist scalars s1, ..., sn such that f = s1g1 + ⋯ + sngn);
2. Ker g1 ∩ ⋯ ∩ Ker gn ⊆ Ker f;
3. there exists a real number r such that |f(x)| ≤ r maxi |gi(x)| for all x ∈ X.

If f is a non-trivial linear functional on X with kernel N, x ∈ X satisfies f(x) = 1, and U is a balanced subset of X, then N ∩ (x + U) = ∅ if and only if |f(u)| < 1 for all u ∈ U.[6]

### Hahn-Banach theorem

Any (algebraic) linear functional on a vector subspace can be extended to the whole space; for example, the evaluation functionals described above can be extended to the vector space of polynomials on all of R. However, this extension cannot always be done while keeping the linear functional continuous. The Hahn-Banach family of theorems gives conditions under which this extension can be done. For example,

Hahn-Banach dominated extension theorem[9] (Rudin 1991, Th. 3.2) — If p : X → R is a sublinear function, and f : M → R is a linear functional on a linear subspace M ⊆ X which is dominated by p on M (that is, f(m) ≤ p(m) for all m ∈ M), then there exists a linear extension F : X → R of f to the whole space X that is dominated by p, i.e., there exists a linear functional F such that

F(m) = f(m)     for all m ∈ M,
F(x) ≤ p(x)     for all x ∈ X.

### Equicontinuity of families of linear functionals

Let X be a topological vector space (TVS) with continuous dual space X′.

For any subset H of X′, the following are equivalent:[10]

1. H is equicontinuous;
2. H is contained in the polar of some neighborhood of 0 in X;
3. the (pre)polar of H is a neighborhood of 0 in X.

If H is an equicontinuous subset of X′ then the following sets are also equicontinuous: the weak-* closure, the balanced hull, the convex hull, and the convex balanced hull.[10] Moreover, Alaoglu's theorem implies that the weak-* closure of an equicontinuous subset of X′ is weak-* compact (and thus that every equicontinuous subset is weak-* relatively compact).[11][10]

## Notes

1. ^ For instance, f(1 + 1) = a + 2r ≠ 2a + 2r = f(1) + f(1).

## References

1. ^ Lax 1996
2. ^ J.A. Wheeler; C. Misner; K.S. Thorne (1973). Gravitation. W.H. Freeman & Co. p. 57. ISBN 0-7167-0344-0.
3. ^ a b Narici & Beckenstein 2011, pp. 10-11.
4. ^ Narici & Beckenstein 2011, p. 126.
5. ^ Rudin 1991, Theorem 1.18
6. ^ a b Narici & Beckenstein 2011, p. 128.
7. ^ Rudin 1991, pp. 63-64.
8. ^ Narici & Beckenstein 2011, pp. 1-18.
9. ^ Narici & Beckenstein 2011, pp. 177-220.
10. ^ a b c Narici & Beckenstein 2011, pp. 225-273.
11. ^ Schaefer & Wolff 1999, Corollary 4.3.