This post describes the concept of the exterior algebra of a vector space. One of the primary applications of exterior algebra occurs in the theory of differential forms.
Alternating Tensors
Here we will consider (concrete, covariant) $k$-tensors on a real vector space $V$, i.e. multilinear maps
$$\alpha : \underbrace{V \times \cdots \times V}_{k \text{ copies}} \to \mathbb{R}.$$
A $k$-tensor is alternating if its value changes sign whenever any two of its arguments are interchanged:
$$\alpha(v_1, \ldots, v_i, \ldots, v_j, \ldots, v_k) = -\alpha(v_1, \ldots, v_j, \ldots, v_i, \ldots, v_k).$$
First, we want to show that there are several equivalent ways to characterize alternating tensors.
We define the index set consisting of $k$ indices to be the set
$$I_k = \{1, 2, \ldots, k\}.$$
A permutation of an index set is an automorphism of $I_k$, i.e. a map $\sigma : I_k \to I_k$ such that there exists an inverse map $\sigma^{-1} : I_k \to I_k$ with $\sigma \circ \sigma^{-1} = \sigma^{-1} \circ \sigma = \mathrm{id}_{I_k}$. The set of all such permutations comprises a group $S_k$ called the permutation group on $k$ elements.
The transposition (interchange) of two indices $i$ and $j$ is the permutation $\tau_{ij}$ defined as follows:
$$\tau_{ij}(l) = \begin{cases} j & l = i, \\ i & l = j, \\ l & \text{otherwise}. \end{cases}$$
Note that $\tau_{ij} = \tau_{ji}$ and $\tau_{ij} \circ \tau_{ij} = \mathrm{id}_{I_k}$.
We want to show that every permutation is the composition of a finite sequence of transpositions. We proceed by induction on the size $k$ of the index set $I_k$. For $k = 1$, the only permutation is the identity, which is the composition of the empty (hence finite) sequence of interchanges. Next, suppose that, for some natural number $k$, every permutation on $I_k$ is the composition of a finite sequence of interchanges, and consider an automorphism $\sigma : I_{k+1} \to I_{k+1}$. Let $\tau = \tau_{\sigma(k+1),\,k+1}$ (taking $\tau = \mathrm{id}$ if $\sigma(k+1) = k+1$). Note that $\tau \circ \sigma$ is again an automorphism of $I_{k+1}$, since it has inverse $\sigma^{-1} \circ \tau$. Now, $(\tau \circ \sigma)(k+1) = k+1$, i.e. $k+1$ is a fixed point, which implies that the restriction of $\tau \circ \sigma$ to $I_k$ is an automorphism of $I_k$, since automorphisms are injective and thus the only index that maps to $k+1$ is $k+1$ itself. By inductive hypothesis, this restriction is the composition of a finite sequence of interchanges, and thus so is $\tau \circ \sigma$, since $k+1$ is a fixed point and hence it performs no additional interchanges. Finally, $\sigma = \tau \circ (\tau \circ \sigma)$, and hence $\sigma$ is the composition of a finite sequence of interchanges.
A permutation is said to be even if it is the composition of an even number of transpositions, and odd if it is the composition of an odd number of transpositions.
The sign of a permutation is the following function:
$$\operatorname{sgn}(\sigma) = \begin{cases} +1 & \sigma \text{ is even}, \\ -1 & \sigma \text{ is odd}. \end{cases}$$
(The sign is well-defined: no permutation is both even and odd.)
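To make the bookkeeping concrete, here is a minimal Python sketch (the helper names are our own, not from any library) that computes the sign by counting inversions, which agrees with the parity of any decomposition into transpositions; we will reuse it in later snippets:

```python
def sign(perm):
    """Sign of a permutation given as a tuple such as (1, 0, 2),
    where perm[i] is the image of i (0-based for convenience).
    Counting inversions yields the same parity as any decomposition
    of the permutation into transpositions.
    """
    inversions = sum(
        1
        for i in range(len(perm))
        for j in range(i + 1, len(perm))
        if perm[i] > perm[j]
    )
    return -1 if inversions % 2 else 1

assert sign((0, 1, 2)) == 1   # identity: even
assert sign((1, 0, 2)) == -1  # a single transposition: odd
assert sign((1, 2, 0)) == 1   # a 3-cycle: two transpositions, even
```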
We can apply a permutation $\sigma$ to a $k$-tensor $\alpha$ to obtain a new $k$-tensor $\alpha^\sigma$ as follows:
$$\alpha^\sigma(v_1, \ldots, v_k) = \alpha(v_{\sigma(1)}, \ldots, v_{\sigma(k)}).$$
We can thus define an alternating $k$-tensor as one that satisfies
$$\alpha^\tau = -\alpha$$
for all transpositions $\tau$.
We want to show that alternating $k$-tensors are equivalently characterized as those tensors such that any permutation of their arguments causes the value to be multiplied by the sign of the permutation, i.e.
$$\alpha^\sigma = \operatorname{sgn}(\sigma)\,\alpha \quad \text{for all } \sigma \in S_k.$$
Since every permutation is the composition of a finite sequence of transpositions, we will proceed by induction on the length $m$ of the sequence of transpositions. If $m = 0$, i.e. when no transposition is performed and hence the permutation is the trivial permutation (the identity map $\mathrm{id}$), then $\alpha^{\mathrm{id}} = \alpha$ and $\operatorname{sgn}(\mathrm{id}) = 1$, and so $\alpha^{\mathrm{id}} = \operatorname{sgn}(\mathrm{id})\,\alpha$. Now, suppose that the hypothesis holds for every permutation that is a composition of $m$ transpositions, and consider a permutation $\sigma = \tau \circ \sigma'$, where $\sigma'$ is a composition of $m$ transpositions and $\tau$ is a transposition. Using the identity $(\alpha^{\sigma'})^{\tau} = \alpha^{\tau \circ \sigma'}$, which follows directly from the definition, it then follows that
$$\alpha^\sigma = (\alpha^{\sigma'})^{\tau} = \operatorname{sgn}(\sigma')\,\alpha^{\tau} = -\operatorname{sgn}(\sigma')\,\alpha = \operatorname{sgn}(\sigma)\,\alpha.$$
Finally note that the converse is immediate: if $\alpha^\sigma = \operatorname{sgn}(\sigma)\,\alpha$ for every permutation $\sigma$, then, for every transposition $\tau$,
$$\alpha^\tau = \operatorname{sgn}(\tau)\,\alpha = -\alpha,$$
which is the very definition of an alternating $k$-tensor.
Recall that, for any $k$-tensor $\alpha$ on a vector space $V$, given a basis $(E_1, \ldots, E_n)$ for $V$, the components of the tensor relative to this basis are computed as
$$\alpha_{i_1 \cdots i_k} = \alpha(E_{i_1}, \ldots, E_{i_k}).$$
Thus, whenever any two indices of the components of an alternating tensor are interchanged, the component changes sign, since this corresponds to the interchange of two of the basis vectors:
$$\alpha_{i_1 \cdots i_b \cdots i_a \cdots i_k} = -\alpha_{i_1 \cdots i_a \cdots i_b \cdots i_k}.$$
This provides yet another characterization of alternating tensors.
Next, note that, if $\alpha$ is an alternating $k$-tensor and two of its arguments are equal, say $v_i = v_j = v$, then by interchanging the equal arguments we obtain
$$\alpha(v_1, \ldots, v, \ldots, v, \ldots, v_k) = -\alpha(v_1, \ldots, v, \ldots, v, \ldots, v_k),$$
which implies that $\alpha(v_1, \ldots, v, \ldots, v, \ldots, v_k) = 0$.
Likewise, if $\alpha(v_1, \ldots, v_k) = 0$ whenever the arguments are linearly dependent, then, since any sequence of vectors with a repeated vector is linearly dependent, it follows that $\alpha$ yields $0$ whenever two of its arguments are equal.
Now, if $\alpha$ yields $0$ whenever two of its arguments are equal, then
$$0 = \alpha(\ldots, v + w, \ldots, v + w, \ldots) = \alpha(\ldots, v, \ldots, w, \ldots) + \alpha(\ldots, w, \ldots, v, \ldots),$$
since the cross-free terms $\alpha(\ldots, v, \ldots, v, \ldots)$ and $\alpha(\ldots, w, \ldots, w, \ldots)$ vanish by hypothesis. Thus, $\alpha(\ldots, v, \ldots, w, \ldots) = -\alpha(\ldots, w, \ldots, v, \ldots)$, and $\alpha$ is alternating.
Finally, suppose that $\alpha$ yields $0$ whenever two of its arguments are equal. If $(v_1, \ldots, v_k)$ is a linearly dependent sequence of vectors, then one can be written as a linear combination of the others. Without loss of generality, we may assume that $v_k$ may be written as a linear combination of the other vectors (since, by the above, reordering the arguments only changes the sign of the value, and in particular does not affect whether it is $0$). This means that
$$v_k = \sum_{i=1}^{k-1} c^i\, v_i.$$
Then
$$\alpha(v_1, \ldots, v_{k-1}, v_k) = \alpha\Bigl(v_1, \ldots, v_{k-1}, \sum_{i=1}^{k-1} c^i\, v_i\Bigr) = \sum_{i=1}^{k-1} c^i\, \alpha(v_1, \ldots, v_{k-1}, v_i).$$
In each of the summands, there is a repeated argument (namely, $v_i$). Thus, each summand is equal to $0$ by hypothesis, and hence $\alpha$ yields $0$ whenever its arguments are linearly dependent.
Altogether, this demonstrates that each of the following conditions is equivalent to the others:
- $\alpha$ is alternating
- $\alpha$ changes sign according to the sign of any permutation applied to its arguments
- $\alpha$ yields $0$ whenever its arguments are linearly dependent
- $\alpha$ yields $0$ whenever any two of its arguments are equal
The Alternation Operation
The vector space of all alternating covariant $k$-tensors on a vector space $V$ is denoted $\Lambda^k(V^*)$. We can define an operation called alternation, which derives a corresponding alternating tensor from an arbitrary tensor $\alpha$, as follows:
$$\operatorname{Alt}(\alpha) = \frac{1}{k!} \sum_{\sigma \in S_k} \operatorname{sgn}(\sigma)\; \alpha^\sigma.$$
The permutation group on $k$ elements contains exactly $k!$ permutations, i.e. $|S_k| = k!$. The alternation operation thus computes the sign-adjusted average of the tensor applied at every permutation of its arguments.
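As an illustrative sketch (not a library API), the alternation operation can be implemented directly from this formula, reusing the `sign` helper from the earlier snippet; vectors are plain tuples and tensors are plain Python functions:

```python
from itertools import permutations
from math import factorial

def alt(tensor, k):
    """Return Alt(tensor): the sign-adjusted average of the k-tensor
    over all permutations of its k arguments."""
    def alternating(*vectors):
        total = 0.0
        for perm in permutations(range(k)):
            permuted = [vectors[perm[i]] for i in range(k)]
            total += sign(perm) * tensor(*permuted)
        return total / factorial(k)
    return alternating
```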
We want to confirm that $\operatorname{Alt}(\alpha)$ is indeed alternating.
First, we will demonstrate a small technical lemma. Given any fixed permutation $\eta \in S_k$, for each permutation $\sigma \in S_k$, we can find a permutation $\pi$ such that $\sigma = \eta \circ \pi$, namely, the permutation $\pi = \eta^{-1} \circ \sigma$. Likewise, the composition of two permutations is again a permutation. Together, these facts establish that $\pi \mapsto \eta \circ \pi$ is a bijection of $S_k$, and hence the following equality for any function $f$ on $S_k$:
$$\sum_{\sigma \in S_k} f(\sigma) = \sum_{\pi \in S_k} f(\eta \circ \pi).$$
Also note that if $\sigma = \eta \circ \pi$, then $\operatorname{sgn}(\sigma) = \operatorname{sgn}(\eta)\operatorname{sgn}(\pi)$. It follows that $\operatorname{sgn}(\pi) = \operatorname{sgn}(\eta)\operatorname{sgn}(\sigma)$, since $\operatorname{sgn}(\eta)^2 = 1$.
We then compute, for any permutation $\eta$, using the identity $(\alpha^\sigma)^\eta = \alpha^{\eta \circ \sigma}$ and reindexing the sum by $\pi = \eta \circ \sigma$ (so that $\operatorname{sgn}(\sigma) = \operatorname{sgn}(\eta)\operatorname{sgn}(\pi)$):
$$\bigl(\operatorname{Alt}(\alpha)\bigr)^\eta = \frac{1}{k!} \sum_{\sigma \in S_k} \operatorname{sgn}(\sigma)\, \alpha^{\eta \circ \sigma} = \frac{1}{k!} \sum_{\pi \in S_k} \operatorname{sgn}(\eta)\,\operatorname{sgn}(\pi)\, \alpha^{\pi} = \operatorname{sgn}(\eta)\operatorname{Alt}(\alpha).$$
This proves that $\operatorname{Alt}(\alpha)$ is alternating, since, for any permutation $\eta$, $\bigl(\operatorname{Alt}(\alpha)\bigr)^\eta = \operatorname{sgn}(\eta)\operatorname{Alt}(\alpha)$.
Now, if $\alpha = \operatorname{Alt}(\alpha)$, then, since $\operatorname{Alt}(\alpha)$ is alternating, it follows that $\alpha$ is alternating. Conversely, if $\alpha$ is alternating, then, for every permutation $\sigma$, $\alpha^\sigma = \operatorname{sgn}(\sigma)\,\alpha$, and hence
$$\operatorname{Alt}(\alpha) = \frac{1}{k!} \sum_{\sigma \in S_k} \operatorname{sgn}(\sigma)\,\alpha^\sigma = \frac{1}{k!} \sum_{\sigma \in S_k} \operatorname{sgn}(\sigma)^2\,\alpha = \frac{k!}{k!}\,\alpha = \alpha.$$
Thus, $\operatorname{Alt}(\alpha) = \alpha$ if and only if $\alpha$ is alternating. This provides another characterization of alternating tensors: they are the fixed points of the alternation operation.
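Continuing the sketch, both facts can be checked numerically on a small example (the tensor `b` below is arbitrary, chosen only for illustration):

```python
# An arbitrary, non-alternating 2-tensor on R^2.
b = lambda v, w: v[0] * w[0] + 2.0 * v[0] * w[1]

alt_b = alt(b, 2)
v, w = (1.0, 2.0), (3.0, 5.0)

# Alt(b) is alternating: swapping the arguments flips the sign.
assert abs(alt_b(v, w) + alt_b(w, v)) < 1e-12

# Alt(b) is a fixed point of alternation: Alt(Alt(b)) = Alt(b).
assert abs(alt(alt_b, 2)(v, w) - alt_b(v, w)) < 1e-12
```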
Elementary Alternating Tensors
Consider an important example of the alternation operation. Using the vector space $\mathbb{R}^n$ and the standard dual basis $(e^1, \ldots, e^n)$, we can define the following $n$-tensor:
$$\alpha = e^1 \otimes \cdots \otimes e^n, \qquad \alpha(v_1, \ldots, v_n) = v^1_1\, v^2_2 \cdots v^n_n.$$
The determinant of a sequence of column vectors $v_1, \ldots, v_n$ comprising a matrix with elements $v^i_j$ (i.e. the $i$-th component of the $j$-th vector) is then, up to a factor of $n!$, the alternation of this tensor:
$$\det(v_1, \ldots, v_n) = n!\operatorname{Alt}(\alpha)(v_1, \ldots, v_n) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, v^1_{\sigma(1)} \cdots v^n_{\sigma(n)}.$$
The ultimate expression is precisely the Leibniz formula for $\det$.
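For comparison, the Leibniz formula transcribes directly into the running sketch (here `matrix[i][j]` holds the $i$-th component of the $j$-th column vector, and `sign` is the helper defined earlier):

```python
from itertools import permutations
from math import prod

def det_leibniz(matrix):
    """Determinant as a signed sum over all permutations (Leibniz)."""
    n = len(matrix)
    return sum(
        sign(perm) * prod(matrix[perm[j]][j] for j in range(n))
        for perm in permutations(range(n))
    )

assert det_leibniz([[2.0, 1.0], [0.0, 3.0]]) == 6.0
```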
More generally, for any covectors $\omega^1, \ldots, \omega^n$, we obtain the following:
$$n!\operatorname{Alt}(\omega^1 \otimes \cdots \otimes \omega^n)(v_1, \ldots, v_n) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, \omega^1(v_{\sigma(1)}) \cdots \omega^n(v_{\sigma(n)}) = \det\bigl(\omega^i(v_j)\bigr).$$
We will generalize this correspondence to arbitrary vector spaces, obtaining a generalization of the determinant. First, we introduce some notation.
For each positive integer $k$, a multi-index of length $k$ is an ordered tuple $I = (i_1, \ldots, i_k)$ consisting of $k$ indices (positive integers). We can apply a permutation $\sigma \in S_k$ to $I$ to obtain a new multi-index $I_\sigma$ of length $k$:
$$I_\sigma = (i_{\sigma(1)}, \ldots, i_{\sigma(k)}).$$
For any $n$-dimensional vector space $V$ with basis $(E_1, \ldots, E_n)$ and dual basis $(\varepsilon^1, \ldots, \varepsilon^n)$ for $V^*$, given any multi-index $I = (i_1, \ldots, i_k)$ of length $k$ where $1 \le k \le n$ and $1 \le i_a \le n$, we can define a $k$-tensor $\varepsilon^I$ (also written $\varepsilon^{i_1 \cdots i_k}$) as follows:
$$\varepsilon^I(v_1, \ldots, v_k) = \det \begin{pmatrix} \varepsilon^{i_1}(v_1) & \cdots & \varepsilon^{i_1}(v_k) \\ \vdots & & \vdots \\ \varepsilon^{i_k}(v_1) & \cdots & \varepsilon^{i_k}(v_k) \end{pmatrix}.$$
As we previously established, this is equivalent to the following definition:
$$\varepsilon^I = k!\operatorname{Alt}(\varepsilon^{i_1} \otimes \cdots \otimes \varepsilon^{i_k}) = \sum_{\sigma \in S_k} \operatorname{sgn}(\sigma)\, (\varepsilon^{i_1} \otimes \cdots \otimes \varepsilon^{i_k})^\sigma.$$
This generalizes the determinant operation in several ways: first, it applies to any real vector space $V$ and not just $\mathbb{R}^n$. Second, it applies to the components of vectors relative to any basis. Third, it applies to any multi-index, not just the contiguous multi-index $(1, 2, \ldots, n)$. Finally, it also permits multi-indices whose length $k$ is less than the dimension $n$ of the vector space.
Recall that if a matrix $A'$ is obtained from $A$ by interchanging two rows, then $\det A' = -\det A$, and thus it follows that permuting the rows of $A$ by $\sigma$ multiplies the determinant by $\operatorname{sgn}(\sigma)$. Likewise, recall that $\operatorname{sgn}(\sigma \circ \tau) = \operatorname{sgn}(\sigma)\operatorname{sgn}(\tau)$.
If $J = I_\sigma$ for some multi-indices $I$ and $J$ of length $k$ and permutation $\sigma \in S_k$, then:
$$\varepsilon^J(v_1, \ldots, v_k) = \det\bigl(\varepsilon^{i_{\sigma(a)}}(v_b)\bigr) = \operatorname{sgn}(\sigma)\,\det\bigl(\varepsilon^{i_a}(v_b)\bigr) = \operatorname{sgn}(\sigma)\,\varepsilon^I(v_1, \ldots, v_k).$$
Thus, $\varepsilon^{I_\sigma} = \operatorname{sgn}(\sigma)\,\varepsilon^I$.
It then follows that, if $I$ has any repeated indices (say, $i_a = i_b$ for $a \neq b$), then $I_\tau = I$ for the transposition $\tau = \tau_{ab}$, and hence $\varepsilon^I = \operatorname{sgn}(\tau)\,\varepsilon^I = -\varepsilon^I$, so $\varepsilon^I = 0$.
Given a pair of multi-indices $I$ and $J$, both of length $k$, and a vector space $V$ with basis $(E_1, \ldots, E_n)$, consider the application of $\varepsilon^I$ to the basis vectors $E_{j_1}, \ldots, E_{j_k}$:
$$\varepsilon^I(E_{j_1}, \ldots, E_{j_k}) = \det\bigl(\varepsilon^{i_a}(E_{j_b})\bigr) = \det\bigl(\delta^{i_a}_{j_b}\bigr).$$
Note that, if $J = I$ and $I$ contains no repeated index, then, since $\varepsilon^i(E_j) = \delta^i_j$ by definition, this becomes the determinant of the identity matrix, which is $1$.
If there are any repeated indices in $J$, then, since $\varepsilon^I$ is alternating, $\varepsilon^I(E_{j_1}, \ldots, E_{j_k}) = 0$.
If neither $I$ nor $J$ contains any repeated index and $J = I_\sigma$ for some permutation $\sigma \in S_k$, then
$$\varepsilon^I(E_{j_1}, \ldots, E_{j_k}) = \varepsilon^I(E_{i_{\sigma(1)}}, \ldots, E_{i_{\sigma(k)}}) = \operatorname{sgn}(\sigma)\,\varepsilon^I(E_{i_1}, \ldots, E_{i_k}) = \operatorname{sgn}(\sigma).$$
Note that, if $J$ is not a permutation of $I$, then some index $i_a$ does not appear in $J$; regardless of the permutation $\sigma \in S_k$, every term of the Leibniz expansion of $\det\bigl(\delta^{i_a}_{j_b}\bigr)$ then contains at least one factor $\delta^{i_a}_{j_{\sigma(a)}}$ with $i_a \neq j_{\sigma(a)}$, and thus equal to $0$, which means that $\varepsilon^I(E_{j_1}, \ldots, E_{j_k}) = 0$.
Thus, we have established that
$$\varepsilon^I(E_{j_1}, \ldots, E_{j_k}) = \begin{cases} \operatorname{sgn}(\sigma) & \text{if neither } I \text{ nor } J \text{ contains a repeated index and } J = I_\sigma, \\ 0 & \text{if } J \text{ contains a repeated index or } J \text{ is not a permutation of } I. \end{cases}$$
Thus, as we will show below, $\varepsilon^I$ behaves like an alternating version of the Kronecker delta.
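In the running sketch, an elementary alternating tensor is just the determinant of a submatrix of components, and the case analysis above can be checked directly (multi-indices are 0-based here, and `e` is the standard basis of $\mathbb{R}^3$; `det_leibniz` is the helper from the earlier snippet):

```python
def epsilon(I):
    """Elementary alternating k-tensor for the multi-index I: the
    determinant of the matrix whose rows are the I-components of
    the argument vectors."""
    def form(*vectors):
        return det_leibniz([[v[i] for v in vectors] for i in I])
    return form

e = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]

assert epsilon((0, 2))(e[0], e[2]) == 1.0   # J = I
assert epsilon((0, 2))(e[2], e[0]) == -1.0  # J is I after one transposition
assert epsilon((0, 2))(e[0], e[1]) == 0.0   # J is not a permutation of I
```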
A multi-index $I = (i_1, \ldots, i_k)$ is increasing if $i_1 < i_2 < \cdots < i_k$.
Note that, given an arbitrary multi-index $J$ of length $k$, there can be at most one increasing multi-index $I$ of length $k$ such that $J$ is a permutation of $I$.
We denote the set of all increasing multi-indices of length $k$ with indices drawn from $\{1, \ldots, n\}$ as $\mathcal{I}_{k,n}$. Given any vector space $V$, basis $(E_1, \ldots, E_n)$ for $V$, alternating tensor $\alpha \in \Lambda^k(V^*)$, and any multi-index $J = (j_1, \ldots, j_k)$ whatsoever, we define a term $\alpha_J$ as follows:
$$\alpha_J = \alpha(E_{j_1}, \ldots, E_{j_k}).$$
Now, given any multi-index $J$ of length $k$, if $J$ contains any repeated index, then
$$\alpha_J = \alpha(E_{j_1}, \ldots, E_{j_k}) = 0.$$
If $J$ does not contain any repeated index, then, if $J = I_\sigma$ for the unique increasing multi-index $I$ and some permutation $\sigma \in S_k$, it follows that
$$\alpha_J = \alpha(E_{i_{\sigma(1)}}, \ldots, E_{i_{\sigma(k)}}) = \operatorname{sgn}(\sigma)\,\alpha(E_{i_1}, \ldots, E_{i_k}) = \operatorname{sgn}(\sigma)\,\alpha_I.$$
The penultimate equation is due to the fact that $\alpha$ is alternating.
Thus, the quantity $\varepsilon^I(E_{j_1}, \ldots, E_{j_k})$ is formally similar to the Kronecker delta, and the following notation is often used:
$$\delta^I_J = \varepsilon^I(E_{j_1}, \ldots, E_{j_k}) = \det\bigl(\delta^{i_a}_{j_b}\bigr).$$
The similarity arises because, as we just established,
$$\delta^I_J = \begin{cases} \operatorname{sgn}(\sigma) & \text{if } J = I_\sigma \text{ and neither contains a repeated index}, \\ 0 & \text{otherwise}, \end{cases}$$
so, for increasing multi-indices $I$ and $J$, $\delta^I_J = 1$ if $I = J$ and $\delta^I_J = 0$ otherwise.
Combining this with the fact that there is a unique increasing permutation of any $J$ without repeated indices, we obtain the following, valid for every multi-index $J$:
$$\alpha_J = \sum_{I \in \mathcal{I}_{k,n}} \delta^I_J\; \alpha_I = \sum_{I \in \mathcal{I}_{k,n}} \alpha_I\; \varepsilon^I(E_{j_1}, \ldots, E_{j_k}).$$
Since $\alpha$ is multilinear, it is sufficient to consider its action on basis vectors. Thus, we have shown that
$$\alpha = \sum_{I \in \mathcal{I}_{k,n}} \alpha_I\; \varepsilon^I.$$
Thus, $\alpha$ is a linear combination of elementary alternating tensors, and the set
$$\{\varepsilon^I : I \in \mathcal{I}_{k,n}\}$$
comprises a spanning set for $\Lambda^k(V^*)$.
Now, if
$$\sum_{I \in \mathcal{I}_{k,n}} c_I\; \varepsilon^I = 0,$$
then, for any increasing multi-index $J$, applying both sides to the basis vectors $(E_{j_1}, \ldots, E_{j_k})$, we obtain
$$\sum_{I \in \mathcal{I}_{k,n}} c_I\; \delta^I_J = c_J = 0.$$
Thus, since every coefficient is of the form $c_J$ for some increasing multi-index $J$, it follows that every coefficient is $0$. Thus, the set is linearly independent, and is a basis for $\Lambda^k(V^*)$; in particular, $\dim \Lambda^k(V^*) = \binom{n}{k}$.
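In code, increasing multi-indices are exactly what `itertools.combinations` produces, so the dimension count is immediate (again a small illustrative sketch):

```python
from itertools import combinations
from math import comb

n, k = 4, 2
increasing = list(combinations(range(n), k))  # all increasing multi-indices
assert len(increasing) == comb(n, k)          # dim of Lambda^2 for n = 4 is 6
```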
Summation Convention
We will sometimes be explicit about writing summations over $\mathcal{I}_{k,n}$, as in
$$\sum_{I \in \mathcal{I}_{k,n}} c_I\; \varepsilon^I.$$
However, often, we will use the usual summation convention (that an "upper" and "lower" index in the same expression indicates an implicit summation), and simply write
$$c_I\; \varepsilon^I$$
to indicate a sum over increasing multi-indices.
The Determinant
For an $n$-dimensional vector space $V$, $\Lambda^n(V^*)$ has a basis which contains only one element, since there is a unique increasing multi-index of length $n$ drawn from $\{1, \ldots, n\}$, namely $(1, 2, \ldots, n)$. Thus, every element $\omega \in \Lambda^n(V^*)$ is of the form
$$\omega = c\; \varepsilon^{1 \cdots n}$$
for some $c \in \mathbb{R}$. Recall that, given any linear map $T : V \to V$ and basis $(E_1, \ldots, E_n)$ for $V$ with corresponding dual basis $(\varepsilon^1, \ldots, \varepsilon^n)$ for $V^*$, we can define component functions relative to this basis as $T^i_j = \varepsilon^i(T E_j)$, and the map can be represented as a matrix as follows:
$$(T^i_j) = \begin{pmatrix} T^1_1 & \cdots & T^1_n \\ \vdots & & \vdots \\ T^n_1 & \cdots & T^n_n \end{pmatrix}.$$
Thus, the elements of the matrix are the components of the images of the basis vectors:
$$T E_j = T^i_j\, E_i.$$
We then compute
$$\varepsilon^{1 \cdots n}(T E_1, \ldots, T E_n) = \det\bigl(\varepsilon^i(T E_j)\bigr) = \det\bigl(T^i_j\bigr).$$
Likewise, since the $i$-th component of $E_j$ is $\delta^i_j$, we compute
$$\varepsilon^{1 \cdots n}(E_1, \ldots, E_n) = \det\bigl(\delta^i_j\bigr) = 1.$$
Recall that, by definition, the determinant of a linear map $T$ is the determinant of a matrix representation of $T$ with respect to any basis. Since all involved maps are multilinear, it is sufficient to consider only the basis vectors when determining equality. Thus, putting the previous equations together, we have determined that, for every $\omega \in \Lambda^n(V^*)$,
$$\omega(T v_1, \ldots, T v_n) = (\det T)\; \omega(v_1, \ldots, v_n).$$
In fact, this can be stipulated as the very definition of the determinant of a linear map: $\det T$ is the unique real number satisfying this equation. This definition, while more abstract, is basis-independent.
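The running sketch makes this characterization easy to observe numerically: applying a top-degree form to the images of vectors under a linear map scales the result by the determinant of the map's matrix (the `apply` helper below is our own):

```python
def apply(matrix, v):
    """Matrix-vector product; matrix[i][j] is the (i, j) entry."""
    n = len(matrix)
    return tuple(sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n))

T = [[2.0, 1.0],
     [0.0, 3.0]]            # det T = 6
omega = epsilon((0, 1))     # a top-degree form on R^2
v, w = (1.0, 0.0), (0.0, 1.0)

assert abs(omega(apply(T, v), apply(T, w)) - 6.0 * omega(v, w)) < 1e-12
```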
The Exterior Product
Given a vector space $V$, the exterior product or wedge product of a $k$-covector $\omega$ and an $l$-covector $\eta$ is the $(k+l)$-covector defined as follows:
$$\omega \wedge \eta = \frac{(k+l)!}{k!\, l!} \operatorname{Alt}(\omega \otimes \eta).$$
The coefficient is motivated by two goals. First, as we discovered previously,
$$\varepsilon^I = k!\operatorname{Alt}(\varepsilon^{i_1} \otimes \cdots \otimes \varepsilon^{i_k}).$$
In this case, the coefficient of the iterated wedge product of $k$ $1$-covectors becomes $k!$ (each factorial in the denominator is $1! = 1$), which is the coefficient we used to define the elementary alternating tensors, and it cancels out the $1/k!$ inside the alternation, so that
$$\varepsilon^{i_1} \wedge \cdots \wedge \varepsilon^{i_k}(v_1, \ldots, v_k) = \det\bigl(\varepsilon^{i_a}(v_j)\bigr).$$
For this reason, this definition is called the determinant convention.
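The definition translates directly into the running sketch, reusing `alt` and `factorial` from the alternation snippet; the coefficient below is exactly the determinant-convention factor:

```python
def wedge(omega, k, eta, l):
    """Wedge product of a k-covector and an l-covector:
    (k+l)!/(k! l!) * Alt(omega tensor eta)."""
    def tensor_product(*vectors):
        return omega(*vectors[:k]) * eta(*vectors[k:])
    coeff = factorial(k + l) / (factorial(k) * factorial(l))
    alternated = alt(tensor_product, k + l)
    return lambda *vectors: coeff * alternated(*vectors)

# For two 1-covectors, the wedge product recovers a 2x2 determinant.
e1 = lambda x: x[0]
e2 = lambda x: x[1]
v, u = (1.0, 2.0), (3.0, 5.0)
assert abs(wedge(e1, 1, e2, 1)(v, u) - (v[0] * u[1] - v[1] * u[0])) < 1e-12
```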
Furthermore, we will discover that
$$\varepsilon^I \wedge \varepsilon^J = \varepsilon^{IJ},$$
where $IJ = (i_1, \ldots, i_k, j_1, \ldots, j_l)$ is the concatenation of $I$ and $J$.
Since both $\varepsilon^I \wedge \varepsilon^J$ and $\varepsilon^{IJ}$ are multilinear, it suffices to demonstrate that they are equal when applied to an arbitrary sequence of basis vectors, i.e. we want to show that, for every multi-index $P = (p_1, \ldots, p_{k+l})$,
$$\varepsilon^I \wedge \varepsilon^J(E_{p_1}, \ldots, E_{p_{k+l}}) = \varepsilon^{IJ}(E_{p_1}, \ldots, E_{p_{k+l}}).$$
If the multi-index $P$ contains a repeated index, then, since each side is alternating, each side equals $0$.
If $P$ contains any index that appears in neither $I$ nor $J$, then $\varepsilon^{IJ}(E_{p_1}, \ldots, E_{p_{k+l}}) = 0$, since $P$ is not a permutation of $IJ$. If we expand $\varepsilon^I \wedge \varepsilon^J$, we obtain
$$\varepsilon^I \wedge \varepsilon^J(E_{p_1}, \ldots, E_{p_{k+l}}) = \frac{1}{k!\, l!} \sum_{\sigma \in S_{k+l}} \operatorname{sgn}(\sigma)\, \varepsilon^I(E_{p_{\sigma(1)}}, \ldots, E_{p_{\sigma(k)}})\, \varepsilon^J(E_{p_{\sigma(k+1)}}, \ldots, E_{p_{\sigma(k+l)}}),$$
and, in each summand, either $(p_{\sigma(1)}, \ldots, p_{\sigma(k)})$ is not a permutation of $I$ or $(p_{\sigma(k+1)}, \ldots, p_{\sigma(k+l)})$ is not a permutation of $J$, so each summand is equal to $0$ also.
Now, if $P = IJ$ and $P$ contains no repeated indices, then the only terms in the above expanded sum that will be nonzero are those in which $(p_{\sigma(1)}, \ldots, p_{\sigma(k)}) = I_\tau$ for some permutation $\tau \in S_k$ and $(p_{\sigma(k+1)}, \ldots, p_{\sigma(k+l)}) = J_{\tau'}$ for some permutation $\tau' \in S_l$, i.e. those in which $\sigma$ permutes the first $k$ positions and the last $l$ positions separately, where
$$\tau(a) = \sigma(a) \quad \text{for } 1 \le a \le k$$
and
$$\tau'(b) = \sigma(k + b) - k \quad \text{for } 1 \le b \le l.$$
It follows that $\varepsilon^I(E_{p_{\sigma(1)}}, \ldots, E_{p_{\sigma(k)}}) = \operatorname{sgn}(\tau)$ and $\varepsilon^J(E_{p_{\sigma(k+1)}}, \ldots, E_{p_{\sigma(k+l)}}) = \operatorname{sgn}(\tau')$, and $\operatorname{sgn}(\sigma) = \operatorname{sgn}(\tau)\operatorname{sgn}(\tau')$ (since each respective pair performs the same number of transpositions), and thus each nonzero term equals $\operatorname{sgn}(\tau)^2\operatorname{sgn}(\tau')^2 = 1$. We can then rewrite the previous sum, with one term per pair $(\tau, \tau')$, and expand as follows:
$$\varepsilon^I \wedge \varepsilon^J(E_{p_1}, \ldots, E_{p_{k+l}}) = \frac{1}{k!\, l!} \sum_{\tau \in S_k} \sum_{\tau' \in S_l} 1 = \frac{k!\, l!}{k!\, l!} = 1.$$
Since also $\varepsilon^{IJ}(E_{p_1}, \ldots, E_{p_{k+l}}) = 1$ when $P = IJ$, both sides are again equal.
Finally, if $P = (IJ)_\rho$ for some permutation $\rho \in S_{k+l}$, then both sides of the equation will be multiplied by $\operatorname{sgn}(\rho)$ (since both sides are alternating), and so the equation will be preserved.
Properties of the Wedge Product
The exterior product satisfies several important properties. In practice, it is often sufficient to use these properties alone when working with exterior forms.
Bilinearity
For all $a, a' \in \mathbb{R}$, $\omega, \omega' \in \Lambda^k(V^*)$, and $\eta \in \Lambda^l(V^*)$,
$$(a\,\omega + a'\,\omega') \wedge \eta = a\,(\omega \wedge \eta) + a'\,(\omega' \wedge \eta),$$
and similarly in the second argument.
This is a consequence of the fact that the tensor product is bilinear and the alternation operation is linear.
Associativity
The wedge product is associative, namely
$$\omega \wedge (\eta \wedge \xi) = (\omega \wedge \eta) \wedge \xi.$$
Given any basis for $V$, we can express every exterior form in terms of the elementary alternating tensors:
$$\omega = \omega_I\; \varepsilon^I, \qquad \eta = \eta_J\; \varepsilon^J, \qquad \xi = \xi_K\; \varepsilon^K,$$
where we use the summation convention and $I$, $J$, $K$ range over increasing multi-indices. We then compute, using bilinearity and the fact that $\varepsilon^I \wedge \varepsilon^J = \varepsilon^{IJ}$:
$$\omega \wedge (\eta \wedge \xi) = \omega_I\, \eta_J\, \xi_K\; \varepsilon^I \wedge (\varepsilon^J \wedge \varepsilon^K) = \omega_I\, \eta_J\, \xi_K\; \varepsilon^{IJK} = \omega_I\, \eta_J\, \xi_K\; (\varepsilon^I \wedge \varepsilon^J) \wedge \varepsilon^K = (\omega \wedge \eta) \wedge \xi.$$
Because the wedge product is associative, parentheses are usually omitted.
Anticommutativity
If $\omega \in \Lambda^k(V^*)$ and $\eta \in \Lambda^l(V^*)$, then
$$\omega \wedge \eta = (-1)^{kl}\; \eta \wedge \omega.$$
First, note that it is possible to permute a multi-index $IJ$ into $JI$ via a permutation that proceeds as follows:
$$IJ = (i_1, \ldots, i_k, j_1, \ldots, j_l) \longmapsto (j_1, \ldots, j_l, i_1, \ldots, i_k) = JI.$$
In other words, each of the $k$ indices in $I$, beginning with the last and ending with the first, is moved, via a sequence of $l$ adjacent transpositions, to the end of the multi-index, resulting in $kl$ transpositions total. Thus, if we denote the composite permutation as $\rho$, then $\operatorname{sgn}(\rho) = (-1)^{kl}$.
We compute
$$\omega \wedge \eta = \omega_I\, \eta_J\; \varepsilon^{IJ} = \omega_I\, \eta_J\, \operatorname{sgn}(\rho)\; \varepsilon^{JI} = (-1)^{kl}\, \eta_J\, \omega_I\; \varepsilon^{JI} = (-1)^{kl}\; \eta \wedge \omega.$$
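For two 1-covectors ($k = l = 1$, so $(-1)^{kl} = -1$), the sign flip is easy to observe with the `wedge` sketch above:

```python
w12 = wedge(e1, 1, e2, 1)
w21 = wedge(e2, 1, e1, 1)
assert abs(w12(v, u) + w21(v, u)) < 1e-12  # e1 ^ e2 = -(e2 ^ e1)
```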
Elementary k-Covectors
Given any basis $(E_1, \ldots, E_n)$ for $V$ with dual basis $(\varepsilon^1, \ldots, \varepsilon^n)$ and any multi-index $I = (i_1, \ldots, i_k)$, then
$$\varepsilon^{i_1} \wedge \cdots \wedge \varepsilon^{i_k} = \varepsilon^I.$$
This can be demonstrated via induction on $k$, starting with $k = 1$, in which case the equation reduces to the identity $\varepsilon^{i_1} = \varepsilon^{(i_1)}$. If the hypothesis holds for multi-indices of length $k$ for some $k \ge 1$, then
$$\varepsilon^{i_1} \wedge \cdots \wedge \varepsilon^{i_k} \wedge \varepsilon^{i_{k+1}} = \varepsilon^{(i_1, \ldots, i_k)} \wedge \varepsilon^{(i_{k+1})} = \varepsilon^{(i_1, \ldots, i_k, i_{k+1})}.$$
Generalized Determinant
For any covectors $\omega^1, \ldots, \omega^k \in V^*$ and vectors $v_1, \ldots, v_k \in V$,
$$\omega^1 \wedge \cdots \wedge \omega^k(v_1, \ldots, v_k) = \det\bigl(\omega^i(v_j)\bigr).$$
Note that
$$\omega^1 \wedge \cdots \wedge \omega^k = k!\operatorname{Alt}(\omega^1 \otimes \cdots \otimes \omega^k) = \sum_{\sigma \in S_k} \operatorname{sgn}(\sigma)\, (\omega^1 \otimes \cdots \otimes \omega^k)^\sigma,$$
and applying this to $(v_1, \ldots, v_k)$ yields exactly the Leibniz formula for $\det\bigl(\omega^i(v_j)\bigr)$.
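The running sketch can confirm this on a concrete example: a triple wedge of 1-covectors applied to three vectors agrees with the 3×3 determinant of their pairings (the covectors and vectors below are arbitrary choices):

```python
covs = [lambda x: x[0] + x[1],
        lambda x: x[1] - x[2],
        lambda x: x[2]]
vecs = [(1.0, 0.0, 2.0), (0.0, 1.0, 1.0), (1.0, 1.0, 0.0)]

lhs = wedge(wedge(covs[0], 1, covs[1], 1), 2, covs[2], 1)(*vecs)
rhs = det_leibniz([[c(v) for v in vecs] for c in covs])
assert abs(lhs - rhs) < 1e-9
```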
Uniqueness of the Wedge Product
The wedge product is the unique operation satisfying the properties enumerated in the previous sections, together with the identity $\varepsilon^{i_1} \wedge \cdots \wedge \varepsilon^{i_k} = \varepsilon^I$. Suppose that there is another product satisfying the same properties, which we will denote as $\mathbin{\widetilde{\wedge}}$. Then, since
$$\omega = \omega_I\; \varepsilon^I = \omega_I\; \varepsilon^{i_1} \mathbin{\widetilde{\wedge}} \cdots \mathbin{\widetilde{\wedge}} \varepsilon^{i_k}$$
and
$$\eta = \eta_J\; \varepsilon^J = \eta_J\; \varepsilon^{j_1} \mathbin{\widetilde{\wedge}} \cdots \mathbin{\widetilde{\wedge}} \varepsilon^{j_l},$$
it follows that
$$\omega \mathbin{\widetilde{\wedge}} \eta = \omega_I\, \eta_J\; \varepsilon^{i_1} \mathbin{\widetilde{\wedge}} \cdots \mathbin{\widetilde{\wedge}} \varepsilon^{i_k} \mathbin{\widetilde{\wedge}} \varepsilon^{j_1} \mathbin{\widetilde{\wedge}} \cdots \mathbin{\widetilde{\wedge}} \varepsilon^{j_l} = \omega_I\, \eta_J\; \varepsilon^{IJ}.$$
Then, we compute
$$\omega \wedge \eta = \omega_I\, \eta_J\; \varepsilon^I \wedge \varepsilon^J = \omega_I\, \eta_J\; \varepsilon^{IJ} = \omega \mathbin{\widetilde{\wedge}} \eta.$$
Vector-Valued Exterior Forms
Given a second vector space $W$, there is an isomorphism $\Lambda^k(V^*) \otimes W \cong \operatorname{Alt}^k(V; W)$, where $\operatorname{Alt}^k(V; W)$ denotes the space of alternating multilinear maps $V \times \cdots \times V \to W$, witnessed by the following linear map:
$$\Phi(\omega \otimes w)(v_1, \ldots, v_k) = \omega(v_1, \ldots, v_k)\; w.$$
This map is only explicitly defined for "pure" tensors $\omega \otimes w$, but, since such tensors span the tensor product space $\Lambda^k(V^*) \otimes W$, all tensors in $\Lambda^k(V^*) \otimes W$ are linear combinations of such "pure" tensors, and thus the above definition is sufficient (extending linearly).
Given a basis $(E_1, \ldots, E_n)$ for $V$ with dual basis $(\varepsilon^1, \ldots, \varepsilon^n)$ for $V^*$, and a basis $(F_1, \ldots, F_m)$ for $W$ with dual basis $(\varphi^1, \ldots, \varphi^m)$ for $W^*$, the inverse map is
$$\Phi^{-1}(A) = \sum_{I \in \mathcal{I}_{k,n}} \sum_{a=1}^{m} A^a_I\; \varepsilon^I \otimes F_a,$$
where
$$A^a_I = \varphi^a\bigl(A(E_{i_1}, \ldots, E_{i_k})\bigr).$$
Since these terms are multilinear, it is sufficient to verify their application to an arbitrary sequence of basis vectors $(E_{j_1}, \ldots, E_{j_k})$ for an increasing multi-index $J$:
$$\Phi(\Phi^{-1}(A))(E_{j_1}, \ldots, E_{j_k}) = \sum_{I \in \mathcal{I}_{k,n}} \sum_{a=1}^{m} A^a_I\; \varepsilon^I(E_{j_1}, \ldots, E_{j_k})\, F_a = \sum_{a=1}^{m} A^a_J\, F_a = A(E_{j_1}, \ldots, E_{j_k}).$$
Conversely, we compute
$$\Phi^{-1}(\Phi(\omega \otimes w)) = \sum_{I \in \mathcal{I}_{k,n}} \sum_{a=1}^{m} \omega_I\, \varphi^a(w)\; \varepsilon^I \otimes F_a = \Bigl(\sum_{I \in \mathcal{I}_{k,n}} \omega_I\; \varepsilon^I\Bigr) \otimes \Bigl(\sum_{a=1}^{m} \varphi^a(w)\, F_a\Bigr) = \omega \otimes w.$$
Note that, although a basis was used to exhibit an inverse function, the isomorphism itself is canonical (basis-independent).
Recall that, given a basis $(E_1, \ldots, E_n)$ for $V$ with dual basis $(\varepsilon^1, \ldots, \varepsilon^n)$ for $V^*$ and a basis $(F_1, \ldots, F_m)$ for $W$, the tensor product space $\Lambda^k(V^*) \otimes W$ has basis
$$\{\varepsilon^I \otimes F_a : I \in \mathcal{I}_{k,n},\; 1 \le a \le m\},$$
and each tensor $\xi \in \Lambda^k(V^*) \otimes W$ can be written as
$$\xi = \sum_{I \in \mathcal{I}_{k,n}} \sum_{a=1}^{m} \xi^a_I\; \varepsilon^I \otimes F_a,$$
where the coefficients $\xi^a_I \in \mathbb{R}$ are uniquely determined.
It then follows that
$$\xi = \sum_{a=1}^{m} \Bigl(\sum_{I \in \mathcal{I}_{k,n}} \xi^a_I\; \varepsilon^I\Bigr) \otimes F_a,$$
and, thus, if we define functions
$$\xi^a = \sum_{I \in \mathcal{I}_{k,n}} \xi^a_I\; \varepsilon^I \in \Lambda^k(V^*),$$
then
$$\xi = \sum_{a=1}^{m} \xi^a \otimes F_a.$$
Thus, every element of $\Lambda^k(V^*) \otimes W$ can be written in this form, which is analogous to a linear combination of the basis vectors $F_a$ with alternating tensors as "coefficients". Using the isomorphism, we can define a multilinear map $A = \Phi(\xi)$ as follows:
$$A(v_1, \ldots, v_k) = \sum_{a=1}^{m} \xi^a(v_1, \ldots, v_k)\, F_a.$$
Thus, since the map $A$ can be written in terms of unique component functions $A^a$ satisfying $A = \sum_a A^a\, F_a$, this implies that the $\xi^a$ as defined above coincide with the component functions. We can also prove this equivalence by defining the component functions $A^a = \varphi^a \circ A$ first and then working in the opposite direction:
$$\Phi^{-1}(A) = \sum_{a=1}^{m} \Bigl(\sum_{I \in \mathcal{I}_{k,n}} A^a_I\; \varepsilon^I\Bigr) \otimes F_a = \sum_{a=1}^{m} A^a \otimes F_a.$$
Then, since the coefficients relative to a basis are unique, it follows that $\xi^a = A^a$, and the function $\xi^a$ is the same as the coordinate function $A^a = \varphi^a \circ A$, which can be represented as $\sum_{I \in \mathcal{I}_{k,n}} A^a_I\; \varepsilon^I$.
Thus, whether we start with the tensor components $\xi^a_I$ or the component functions $A^a$, the functions $\xi^a$ so defined coincide.
We may now state the definition of a vector-valued exterior form. A $W$-valued exterior form is equivalently any of the following:
- An element $\xi \in \Lambda^k(V^*) \otimes W$; each such element is expressible as $\xi = \sum_{a=1}^{m} \xi^a \otimes F_a$ for $\xi^a \in \Lambda^k(V^*)$.
- An alternating multilinear map $A \in \operatorname{Alt}^k(V; W)$; each such map is expressible as $A = \sum_{a=1}^{m} A^a\, F_a$ for $A^a \in \Lambda^k(V^*)$.
Thus, a vector-valued exterior form is just a collection of scalar-valued exterior forms, arranged into a "linear combination" with a given selection of basis vectors.
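As a small illustration in the running sketch, an $\mathbb{R}^2$-valued 2-form on $\mathbb{R}^3$ can be represented as a tuple of scalar forms (this representation is our own choice), reusing `epsilon` and the basis `e` from the earlier snippet; applying it means applying each component form:

```python
# One scalar 2-form per basis vector of W = R^2.
vector_form = (epsilon((0, 1)), epsilon((1, 2)))

def apply_vector_form(form, *vectors):
    """Apply each scalar component form; the result is the tuple of
    components of the output vector relative to the chosen basis of W."""
    return tuple(component(*vectors) for component in form)

assert apply_vector_form(vector_form, e[0], e[1]) == (1.0, 0.0)
```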
The Exterior Algebra
We will only briefly mention the related algebraic constructions.
The space $\Lambda^k(V^*)$ is called the $k$-th exterior power of $V^*$. We have only given a concrete description of this space in terms of alternating, multilinear maps. It is possible to define the $k$-th exterior power of an arbitrary vector space, using the same techniques used to define abstract tensor product spaces. We will not describe the abstract construction in this post. Each of the exterior powers can be joined together into a vector space $\Lambda(V^*)$, called the exterior algebra of $V^*$:
$$\Lambda(V^*) = \bigoplus_{k=0}^{n} \Lambda^k(V^*), \qquad \Lambda^0(V^*) = \mathbb{R}.$$
This space is also an associative algebra, with the wedge product as its multiplication. This algebra is a graded (by the degree $k$), anticommutative algebra.
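Summing the dimensions of the exterior powers recovers the total dimension $2^n$ of the exterior algebra, which the sketch confirms (reusing `comb` from the earlier snippet):

```python
n = 4
assert sum(comb(n, k) for k in range(n + 1)) == 2 ** n  # dim of Lambda(V*)
```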