Exterior Algebra

[Figure: an artistic depiction of the exterior algebra, generated by Google Gemini from the prompt "generate an artistic depiction of the exterior algebra of a vector space".]

This post describes the concept of the exterior algebra of a vector space. One of the primary applications of exterior algebra occurs in the theory of differential forms.

Alternating Tensors

Here we will consider (concrete, covariant) $k$-tensors, i.e. multilinear maps $\alpha \colon \underbrace{V \times \cdots \times V}_{k \text{ copies}} \to \mathbb{R}$.

A $k$-tensor is alternating if its value changes sign whenever any two of its arguments are interchanged:

$$\alpha(v_1, \ldots, v_i, \ldots, v_j, \ldots, v_k) = -\alpha(v_1, \ldots, v_j, \ldots, v_i, \ldots, v_k).$$

First, we want to show that there are several equivalent ways to characterize alternating tensors.

We define the index set consisting of $k$ indices to be the set

$$I_k = \{1, \ldots, k\}.$$

A permutation of an index set $I_k$ is an automorphism of $I_k$, i.e. a map $\sigma \colon I_k \to I_k$ such that there exists an inverse map $\sigma^{-1} \colon I_k \to I_k$ with $\sigma \circ \sigma^{-1} = \sigma^{-1} \circ \sigma = \mathrm{Id}_{I_k}$. The set of all such permutations comprises a group $S_k$ called the permutation group on $k$ elements.

The transposition (interchange) of two indices $1 \le i \le k$ and $1 \le j \le k$ is the permutation $\tau_{i,j} \in S_k$ defined as follows:

$$\tau_{i,j}(x) = \begin{cases} j, & \text{if } x = i \\ i, & \text{if } x = j \\ x, & \text{otherwise.} \end{cases}$$

Note that $\tau_{i,j} = \tau_{j,i}$ and $\tau_{i,j} \circ \tau_{i,j} = \mathrm{Id}_{I_k}$.

We want to show that every permutation is the composition of a finite sequence of transpositions. We proceed by induction on the size of the index set $I_k$. For $I_0 = \varnothing$, the only permutation is the empty map, which is the composition of the empty sequence of interchanges. Next, suppose that, for some natural number $k$, every permutation of $I_k$ is the composition of a finite sequence of interchanges, and consider an automorphism $\sigma \colon I_{k+1} \to I_{k+1}$. Note that $\tau_{k+1,\sigma(k+1)} \circ \sigma$ is again an automorphism of $I_{k+1}$, since it has inverse $\sigma^{-1} \circ \tau_{k+1,\sigma(k+1)}$. Now, $(\tau_{k+1,\sigma(k+1)} \circ \sigma)(k+1) = k+1$, i.e. $k+1$ is a fixed point, which implies that the restriction of $\tau_{k+1,\sigma(k+1)} \circ \sigma$ to $I_k$ is an automorphism of $I_k$: automorphisms are injective, and thus the only index that maps to $k+1$ is $k+1$ itself. By the inductive hypothesis, this restriction is the composition of a finite sequence of interchanges, and thus so is $\tau_{k+1,\sigma(k+1)} \circ \sigma$, since $k+1$ is a fixed point and hence no additional interchanges are needed. Finally, $\sigma = \tau_{k+1,\sigma(k+1)} \circ \tau_{k+1,\sigma(k+1)} \circ \sigma$, and hence $\sigma$ is the composition of a finite sequence of interchanges.

A permutation is said to be even if it is the composition of an even number of transpositions, and odd if it is the composition of an odd number of transpositions. (That this parity is well defined, i.e. that no permutation is both even and odd, is a standard fact that we take for granted here.)

The sign of a permutation is the following function:

$$\mathrm{sgn}(\sigma) = \begin{cases} +1, & \text{if } \sigma \text{ is even} \\ -1, & \text{if } \sigma \text{ is odd.} \end{cases}$$
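To make this concrete, here is a minimal Python sketch (the function names are my own) that decomposes a permutation into transpositions by the same idea as the inductive proof, fixing the largest element first, and derives the sign from the count:

```python
def to_transpositions(perm):
    """Decompose a permutation of {0, ..., k-1} (as a list with perm[x] = sigma(x))
    into transpositions, mirroring the inductive proof: repeatedly swap the
    largest not-yet-fixed element into its place."""
    perm, swaps = list(perm), []
    for top in range(len(perm) - 1, 0, -1):
        i = perm.index(top)
        if i != top:
            perm[i], perm[top] = perm[top], perm[i]  # transpose positions i and top
            swaps.append((i, top))
    return swaps

def sign(perm):
    """+1 for an even permutation, -1 for an odd one."""
    return -1 if len(to_transpositions(perm)) % 2 else 1

assert sign([0, 1, 2]) == 1    # identity: empty composition, even
assert sign([1, 0, 2]) == -1   # a single transposition, odd
assert sign([1, 2, 0]) == 1    # a 3-cycle: two transpositions, even
```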

We can apply a permutation $\sigma \in S_k$ to a $k$-tensor $\alpha$ to obtain a new $k$-tensor $\sigma\alpha$ as follows:

$$\sigma\alpha(v_1, \ldots, v_k) = \alpha(v_{\sigma(1)}, \ldots, v_{\sigma(k)}).$$

We can thus define an alternating $k$-tensor as one that satisfies

$$\alpha = -\tau_{i,j}\,\alpha$$

for all transpositions $\tau_{i,j}$.

We want to show that alternating $k$-tensors are equivalently characterized as those tensors such that any permutation of their arguments causes the value to be multiplied by the sign of the permutation, i.e.

$$\alpha = (\mathrm{sgn}(\sigma))(\sigma\alpha)$$

for every $\sigma \in S_k$.

Since every permutation is the composition of a finite sequence of transpositions, we will proceed by induction on the length $n$ of the sequence of transpositions. If $n = 0$, i.e. when no transposition is performed and hence the permutation is the trivial permutation (the identity map $\mathrm{Id}_{I_k}$), then $\mathrm{sgn}(\mathrm{Id}_{I_k}) = 1$ and $\alpha = \sigma\alpha$, and so $\alpha = (\mathrm{sgn}(\sigma))(\sigma\alpha)$. Now, suppose that the hypothesis holds for every permutation $\sigma' \in S_k$ of length $n$, and consider a permutation $\sigma \in S_k$ of length $n+1$. Then $\sigma = \tau_{i,j} \circ \sigma'$ for some permutation $\sigma' \in S_k$ of length $n$ and transposition $\tau_{i,j}$ of indices $i$ and $j$. It then follows that

$$\begin{aligned}
\alpha(v_{\sigma(1)}, \ldots, v_{\sigma(k)}) &= \alpha(v_{\tau_{i,j}(\sigma'(1))}, \ldots, v_{\tau_{i,j}(\sigma'(k))}) && \text{by definition} \\
&= -\alpha(v_{\sigma'(1)}, \ldots, v_{\sigma'(k)}) && \text{since } \alpha \text{ is alternating} \\
&= -\mathrm{sgn}(\sigma')\,\alpha(v_1, \ldots, v_k) && \text{by hypothesis} \\
&= \mathrm{sgn}(\sigma)\,\alpha(v_1, \ldots, v_k) && \text{since } \mathrm{sgn}(\sigma) = -\mathrm{sgn}(\sigma').
\end{aligned}$$

Hence $\sigma\alpha = \mathrm{sgn}(\sigma)\,\alpha$, and so $\alpha = (\mathrm{sgn}(\sigma))(\sigma\alpha)$.

Finally, note that the converse is immediate: if $\alpha = (\mathrm{sgn}(\sigma))(\sigma\alpha)$ for every permutation $\sigma$, then

$$\alpha = (\mathrm{sgn}(\tau_{i,j}))(\tau_{i,j}\,\alpha) = -\tau_{i,j}\,\alpha,$$

which is the very definition of an alternating $k$-tensor.

Recall that, for any $k$-tensor $\alpha$ on a vector space $V$, given a basis $(E_i)$ for $V$, the components of the tensor relative to this basis are computed as

$$\alpha_{i_1, \ldots, i_k} = \alpha(E_{i_1}, \ldots, E_{i_k}).$$

Thus, when $\alpha$ is alternating, interchanging any two indices of a component causes the component to change sign, since this corresponds to interchanging two of the basis vector arguments. This provides yet another characterization of alternating tensors.

Next, note that, if $\alpha$ is an alternating $k$-tensor and two of its arguments are equal, then by interchanging the equal arguments we obtain

$$\alpha(v_1, \ldots, w, \ldots, w, \ldots, v_k) = -\alpha(v_1, \ldots, w, \ldots, w, \ldots, v_k),$$

which implies that $\alpha(v_1, \ldots, w, \ldots, w, \ldots, v_k) = 0$.

Likewise, if $\alpha(v_1, \ldots, v_k) = 0$ whenever the arguments are linearly dependent, then, since any sequence of vectors with a repeated vector is linearly dependent, it follows that $\alpha(v_1, \ldots, w, \ldots, w, \ldots, v_k) = 0$.

Now, if $\alpha$ yields $0$ whenever two of its arguments are equal, then

$$\begin{aligned}
0 &= \alpha(v_1, \ldots, v_i + v_j, \ldots, v_i + v_j, \ldots, v_k) \\
&= \alpha(v_1, \ldots, v_i, \ldots, v_i, \ldots, v_k) + \alpha(v_1, \ldots, v_i, \ldots, v_j, \ldots, v_k) \\
&\qquad + \alpha(v_1, \ldots, v_j, \ldots, v_i, \ldots, v_k) + \alpha(v_1, \ldots, v_j, \ldots, v_j, \ldots, v_k) \\
&= \alpha(v_1, \ldots, v_i, \ldots, v_j, \ldots, v_k) + \alpha(v_1, \ldots, v_j, \ldots, v_i, \ldots, v_k).
\end{aligned}$$

Thus, interchanging two arguments changes the sign, i.e. $\alpha$ is alternating.

Finally, suppose that $\alpha$ yields $0$ whenever two of its arguments are equal, and suppose that $(v_1, \ldots, v_k)$ is a linearly dependent sequence of vectors; then one of the vectors can be written as a linear combination of the others. Without loss of generality, we may assume that $v_k$ may be written as a linear combination of the other vectors (since the sequence may be reordered without changing whether it is linearly dependent). This means that

$$v_k = \sum_{i=1}^{k-1} a_i v_i.$$

Then

$$\alpha(v_1, \ldots, v_k) = \alpha\!\left(v_1, \ldots, v_{k-1}, \sum_{i=1}^{k-1} a_i v_i\right) = \sum_{i=1}^{k-1} a_i\, \alpha(v_1, \ldots, v_{k-1}, v_i), \quad \text{since } \alpha \text{ is multilinear.}$$

In each of the summands, there is a repeated argument (namely, the argument $v_i$). Thus, each summand is equal to $0$ by hypothesis, and hence $\alpha(v_1, \ldots, v_k) = 0$, i.e. $\alpha$ yields $0$ whenever its arguments are linearly dependent.

Altogether, this demonstrates that each of the following conditions is equivalent to the others:

  • $\alpha$ is alternating
  • applying any permutation to the arguments of $\alpha$ multiplies its value by the sign of the permutation
  • $\alpha$ yields $0$ whenever its arguments are linearly dependent
  • $\alpha$ yields $0$ whenever any two of its arguments are equal

The Alternation Operation

The vector space of all alternating covariant $k$-tensors on a vector space $V$ is denoted $\Lambda^k(V^*)$. We can define an operation $\mathrm{Alt} \colon T^k(V^*) \to \Lambda^k(V^*)$, called alternation, which derives a corresponding alternating tensor from an arbitrary tensor as follows:

$$\mathrm{Alt}(\alpha) = \frac{1}{k!} \sum_{\sigma \in S_k} (\mathrm{sgn}\,\sigma)(\sigma\alpha).$$

The permutation group $S_k$ on $k$ elements contains exactly $k!$ permutations, i.e. $|S_k| = k!$. The alternation operation thus computes the sign-adjusted average of the tensor applied at every permutation of its arguments.
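Here is a minimal Python sketch of the alternation operator (the names `perm_sign` and `alt` are my own), representing a $k$-tensor as a function of $k$ vectors:

```python
from itertools import permutations
from math import factorial

def perm_sign(p):
    """Sign of a permutation of {0, ..., k-1}: parity of its inversion count."""
    inv = sum(1 for a in range(len(p)) for b in range(a + 1, len(p)) if p[a] > p[b])
    return -1 if inv % 2 else 1

def alt(alpha, k):
    """Alt(alpha): the sign-weighted average of alpha over all
    permutations of its k arguments."""
    def result(*vs):
        return sum(
            perm_sign(s) * alpha(*(vs[i] for i in s))
            for s in permutations(range(k))
        ) / factorial(k)
    return result

# Example: alternating a non-alternating 2-tensor on R^2.
beta = lambda v, w: v[0] * w[1]   # e^1 (x) e^2, not alternating
omega = alt(beta, 2)
v, w = (1.0, 2.0), (3.0, 4.0)
assert abs(omega(v, w) + omega(w, v)) < 1e-12           # Alt(beta) is alternating
assert abs(alt(omega, 2)(v, w) - omega(v, w)) < 1e-12   # fixed point of Alt
```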

We want to confirm that $\mathrm{Alt}(\alpha)$ is indeed alternating.

First, we will demonstrate a small technical lemma. Given any fixed permutation $\tau \in S_k$, for each permutation $\eta \in S_k$ we can find a permutation $\sigma \in S_k$ such that $\eta = \sigma\tau$, namely the permutation $\sigma = \eta\tau^{-1}$. Likewise, the composition of two permutations is again a permutation. Together, these facts establish the following equality:

$$\{\sigma\tau \mid \sigma \in S_k\} = S_k.$$

Also note that if $\eta = \sigma\tau$, then $\mathrm{sgn}\,\eta = (\mathrm{sgn}\,\sigma)(\mathrm{sgn}\,\tau)$. It follows that $\mathrm{sgn}\,\sigma = (\mathrm{sgn}\,\tau)(\mathrm{sgn}\,\eta)$.

We then compute, for any permutation $\tau \in S_k$,

$$\begin{aligned}
(\mathrm{Alt}\,\alpha)(v_{\tau(1)}, \ldots, v_{\tau(k)}) &= \frac{1}{k!} \sum_{\sigma \in S_k} (\mathrm{sgn}\,\sigma)\,(\sigma\alpha)(v_{\tau(1)}, \ldots, v_{\tau(k)}) \\
&= \frac{1}{k!} \sum_{\sigma \in S_k} (\mathrm{sgn}\,\sigma)\,\alpha(v_{\sigma\tau(1)}, \ldots, v_{\sigma\tau(k)}) \\
&= \frac{1}{k!} \sum_{\eta \in S_k} (\mathrm{sgn}\,\tau)(\mathrm{sgn}\,\eta)\,\alpha(v_{\eta(1)}, \ldots, v_{\eta(k)}) \\
&= (\mathrm{sgn}\,\tau)\,(\mathrm{Alt}\,\alpha)(v_1, \ldots, v_k).
\end{aligned}$$

This proves that $\mathrm{Alt}\,\alpha$ is alternating, since, for any permutation $\tau$,

$$\mathrm{Alt}\,\alpha = (\mathrm{sgn}\,\tau)(\tau\,\mathrm{Alt}\,\alpha).$$

Now, if $\mathrm{Alt}\,\alpha = \alpha$, then, since $\mathrm{Alt}\,\alpha$ is alternating, it follows that $\alpha$ is alternating. Conversely, if $\alpha$ is alternating, then, for every permutation $\sigma \in S_k$, $\alpha = (\mathrm{sgn}\,\sigma)(\sigma\alpha)$, and hence

$$\mathrm{Alt}\,\alpha = \frac{1}{k!} \sum_{\sigma \in S_k} (\mathrm{sgn}\,\sigma)(\sigma\alpha) = \frac{1}{k!} \sum_{\sigma \in S_k} \alpha = \alpha.$$

Thus, $\mathrm{Alt}\,\alpha = \alpha$ if and only if $\alpha$ is alternating. This provides another characterization of alternating tensors: they are precisely the fixed points of the alternation operation.

Elementary Alternating Tensors

Consider an important example of the alternation operation. Using the vector space $\mathbb{R}^k$ and the standard dual basis $(e^i)$, we can define the following $k$-tensor:

$$k!\, e^1 \otimes \cdots \otimes e^k.$$

The determinant $\det(v_1, \ldots, v_k)$ of a sequence of column vectors comprising a matrix $A = [v_1 \cdots v_k]$ with entries $a_{i,j} = v_j^i$ (i.e. the $i$-th component of the $j$-th vector) is then the alternation of this tensor:

$$\begin{aligned}
\det(v_1, \ldots, v_k) &= \mathrm{Alt}(k!\, e^1 \otimes \cdots \otimes e^k)(v_1, \ldots, v_k) \\
&= \frac{1}{k!} \sum_{\sigma \in S_k} (\mathrm{sgn}\,\sigma)\,(k!\, e^1 \otimes \cdots \otimes e^k)(v_{\sigma(1)}, \ldots, v_{\sigma(k)}) \\
&= \sum_{\sigma \in S_k} (\mathrm{sgn}\,\sigma)\,(e^1 \otimes \cdots \otimes e^k)(v_{\sigma(1)}, \ldots, v_{\sigma(k)}) \\
&= \sum_{\sigma \in S_k} (\mathrm{sgn}\,\sigma)\, e^1(v_{\sigma(1)}) \cdots e^k(v_{\sigma(k)}) \\
&= \sum_{\sigma \in S_k} (\mathrm{sgn}\,\sigma)\, v_{\sigma(1)}^1 \cdots v_{\sigma(k)}^k \\
&= \sum_{\sigma \in S_k} (\mathrm{sgn}\,\sigma) \prod_{i=1}^k a_{i,\sigma(i)}.
\end{aligned}$$

The final expression is precisely the Leibniz formula for $\det(A)$.
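The Leibniz formula translates directly into code; a sketch (the helper names are my own, and this is practical only for small matrices, since the sum has $k!$ terms):

```python
from itertools import permutations

def perm_sign(p):
    """Sign of a permutation: parity of its inversion count."""
    inv = sum(1 for a in range(len(p)) for b in range(a + 1, len(p)) if p[a] > p[b])
    return -1 if inv % 2 else 1

def leibniz_det(a):
    """det(a) = sum over sigma in S_k of sgn(sigma) * prod_i a[i][sigma(i)]."""
    k = len(a)
    total = 0
    for sigma in permutations(range(k)):
        term = perm_sign(sigma)
        for i in range(k):
            term *= a[i][sigma[i]]
        total += term
    return total

assert leibniz_det([[1, 2], [3, 4]]) == 1 * 4 - 2 * 3
assert leibniz_det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]) == 24
```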

More generally, we obtain the following:

$$\begin{aligned}
\mathrm{Alt}(k!\, \omega^1 \otimes \cdots \otimes \omega^k)(v_1, \ldots, v_k) &= \frac{1}{k!} \sum_{\sigma \in S_k} (\mathrm{sgn}\,\sigma)\,(k!\, \omega^1 \otimes \cdots \otimes \omega^k)(v_{\sigma(1)}, \ldots, v_{\sigma(k)}) \\
&= \sum_{\sigma \in S_k} (\mathrm{sgn}\,\sigma)\,(\omega^1 \otimes \cdots \otimes \omega^k)(v_{\sigma(1)}, \ldots, v_{\sigma(k)}) \\
&= \sum_{\sigma \in S_k} (\mathrm{sgn}\,\sigma)\, \omega^1(v_{\sigma(1)}) \cdots \omega^k(v_{\sigma(k)}) \\
&= \sum_{\sigma \in S_k} (\mathrm{sgn}\,\sigma) \prod_{i=1}^k \omega^i(v_{\sigma(i)}) \\
&= \det\big(\omega^i(v_j)\big).
\end{aligned}$$

We will generalize this correspondence to arbitrary vector spaces, obtaining a generalization of the determinant. First, we introduce some notation.

For each positive integer $k$, a multi-index of length $k$ is an ordered tuple consisting of $k$ indices (positive integers):

$$I = (i_1, \ldots, i_k).$$

We can apply a permutation $\sigma \in S_k$ to $I$ to obtain a new multi-index of length $k$ by permuting its entries:

$$\sigma I = (i_{\sigma(1)}, \ldots, i_{\sigma(k)}).$$

For any $n$-dimensional vector space $V$ with dual basis $(\varepsilon^i)$ for $V^*$, given any multi-index $I = (i_1, \ldots, i_k)$ of length $k$, where $1 \le k \le n$, we can define a $k$-tensor $\varepsilon^I$ (also written $\varepsilon^{i_1 \cdots i_k}$) as follows:

$$\varepsilon^I = k!\, \mathrm{Alt}(\varepsilon^{i_1} \otimes \cdots \otimes \varepsilon^{i_k}).$$

As we previously established, this is equivalent to the following definition:

$$\varepsilon^I(v_1, \ldots, v_k) = \det \begin{bmatrix} \varepsilon^{i_1}(v_1) & \cdots & \varepsilon^{i_1}(v_k) \\ \vdots & \ddots & \vdots \\ \varepsilon^{i_k}(v_1) & \cdots & \varepsilon^{i_k}(v_k) \end{bmatrix} = \det \begin{bmatrix} v_1^{i_1} & \cdots & v_k^{i_1} \\ \vdots & \ddots & \vdots \\ v_1^{i_k} & \cdots & v_k^{i_k} \end{bmatrix}.$$

This generalizes the determinant operation in several ways: first, it applies to any real vector space, not just $\mathbb{R}^n$. Second, it applies to the components of vectors relative to any basis. Third, it applies to any multi-index, not just the contiguous multi-index $(1, \ldots, k)$. Finally, it also permits multi-indices whose length is less than the dimension of the vector space.
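Here is a sketch of the elementary alternating tensors acting on coordinate vectors, with 0-based indices (the helpers are repeated from the previous sketch so this runs standalone):

```python
from itertools import permutations

def perm_sign(p):
    inv = sum(1 for a in range(len(p)) for b in range(a + 1, len(p)) if p[a] > p[b])
    return -1 if inv % 2 else 1

def det(a):
    """Determinant via the Leibniz formula."""
    k = len(a)
    total = 0
    for s in permutations(range(k)):
        term = perm_sign(s)
        for i in range(k):
            term *= a[i][s[i]]
        total += term
    return total

def epsilon(I):
    """epsilon^I(v_1, ..., v_k) = det of the matrix with entries v_j[i_m]."""
    return lambda *vs: det([[v[i] for v in vs] for i in I])

# epsilon^{(0,1)} on R^3 reads off the determinant of the first two coordinates.
e01 = epsilon((0, 1))
assert e01((1, 0, 0), (0, 1, 0)) == 1            # basis vectors: identity matrix
assert e01((1, 2, 0), (3, 4, 0)) == 1 * 4 - 2 * 3
assert epsilon((0, 0))((1, 2), (3, 4)) == 0      # repeated index: the zero tensor
```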

Recall that if $\eta = \sigma\tau$, then $\mathrm{sgn}\,\eta = (\mathrm{sgn}\,\sigma)(\mathrm{sgn}\,\tau)$, and thus $\mathrm{sgn}\,\sigma = (\mathrm{sgn}\,\tau)(\mathrm{sgn}\,\eta)$. Likewise, recall that $\{\sigma\tau \mid \sigma \in S_k\} = S_k$.

If $I = \tau J$ for some multi-indices $I$ and $J$ of length $k$ and permutation $\tau \in S_k$, then:

$$\begin{aligned}
\varepsilon^I = \varepsilon^{\tau J} &= k!\, \mathrm{Alt}(\varepsilon^{j_{\tau(1)}} \otimes \cdots \otimes \varepsilon^{j_{\tau(k)}}) \\
&= \sum_{\sigma \in S_k} (\mathrm{sgn}\,\sigma)\, \varepsilon^{j_{\sigma\tau(1)}} \otimes \cdots \otimes \varepsilon^{j_{\sigma\tau(k)}} \\
&= \sum_{\eta \in S_k} (\mathrm{sgn}\,\tau)(\mathrm{sgn}\,\eta)\, \varepsilon^{j_{\eta(1)}} \otimes \cdots \otimes \varepsilon^{j_{\eta(k)}} \\
&= (\mathrm{sgn}\,\tau) \sum_{\eta \in S_k} (\mathrm{sgn}\,\eta)\, \varepsilon^{j_{\eta(1)}} \otimes \cdots \otimes \varepsilon^{j_{\eta(k)}} \\
&= (\mathrm{sgn}\,\tau)\, k!\, \mathrm{Alt}(\varepsilon^{j_1} \otimes \cdots \otimes \varepsilon^{j_k}) \\
&= (\mathrm{sgn}\,\tau)\, \varepsilon^J.
\end{aligned}$$

Thus, $\varepsilon^{\tau J} = (\mathrm{sgn}\,\tau)\, \varepsilon^J$.

It then follows that, if $I$ has any repeated indices (say, in positions $i$ and $j$), then $\varepsilon^I = \varepsilon^{\tau_{i,j} I} = (\mathrm{sgn}\,\tau_{i,j})\, \varepsilon^I = -\varepsilon^I$ for the transposition $\tau_{i,j}$, and hence $\varepsilon^I = 0$.

Given a pair of multi-indices $I = (i_1, \ldots, i_k)$ and $J = (j_1, \ldots, j_k)$, both of length $k$, and a vector space $V$ with basis $(E_i)$, consider the application of $\varepsilon^I$ to the basis vectors $(E_{j_1}, \ldots, E_{j_k})$:

$$\varepsilon^I(E_{j_1}, \ldots, E_{j_k}).$$

Note that $\varepsilon^I(E_{j_1}, \ldots, E_{j_k}) = 1$ if $I = J$: since $\varepsilon^i(E_i) = 1$ by definition, the matrix in the determinant formula becomes the identity matrix, whose determinant is $1$.

If there are any repeated indices in $J$, then, since $\varepsilon^I$ is alternating, $\varepsilon^I(E_{j_1}, \ldots, E_{j_k}) = 0$.

If neither $I$ nor $J$ contains any repeated index and $J = \sigma I$ for some permutation $\sigma \in S_k$, then, since $\mathrm{sgn}\,\sigma^{-1} = \mathrm{sgn}\,\sigma$,

$$\varepsilon^I(E_{j_1}, \ldots, E_{j_k}) = \varepsilon^{\sigma^{-1} J}(E_{j_1}, \ldots, E_{j_k}) = (\mathrm{sgn}\,\sigma)\, \varepsilon^J(E_{j_1}, \ldots, E_{j_k}) = \mathrm{sgn}\,\sigma.$$

Note that, if $J$ is not a permutation of $I$, then, for every permutation $\sigma$, there is at least one factor in which $i_n \ne j_{\sigma(n)}$ and thus $\varepsilon^{i_n}(E_{j_{\sigma(n)}}) = 0$; hence every term of the sum vanishes, which means that $\varepsilon^I(E_{j_1}, \ldots, E_{j_k}) = 0$.

Thus, we have established that

$$\varepsilon^I(E_{j_1}, \ldots, E_{j_k}) = \begin{cases} \mathrm{sgn}\,\sigma & \text{if neither } I \text{ nor } J \text{ contains a repeated index and } J = \sigma I \text{ for some } \sigma \in S_k, \\ 0 & \text{if } I \text{ or } J \text{ contains a repeated index, or } J \text{ is not a permutation of } I. \end{cases}$$

Thus, as we will show below, $\varepsilon^I(E_{j_1}, \ldots, E_{j_k})$ behaves like an alternating version of the Kronecker delta.

A multi-index $I = (i_1, \ldots, i_k)$ is increasing if $i_1 < \cdots < i_k$.

Note that, given an arbitrary multi-index $J$ of length $k$, there can be at most one increasing multi-index $I$ of length $k$ such that $J$ is a permutation of $I$.

We denote the set of all increasing multi-indices of length $k$ by $\mathcal{I}_k$ (not to be confused with the index set $I_k$). Given any vector space $V$, basis $(E_i)$ for $V$, $\alpha \in \Lambda^k(V^*)$, and any multi-index $I$ whatsoever, we define the component $\alpha_I$ as follows:

$$\alpha_I = \alpha(E_{i_1}, \ldots, E_{i_k}).$$

Now, given any multi-index $J$ of length $k$, if $J$ contains any repeated index, then

$$\alpha_I\, \varepsilon^I(E_{j_1}, \ldots, E_{j_k}) = \alpha_I \cdot 0 = 0 = \alpha_J = \alpha(E_{j_1}, \ldots, E_{j_k}).$$

If $J$ does not contain any repeated index, then, if $J = \sigma I$ for some $\sigma \in S_k$, it follows that

$$\alpha_I\, \varepsilon^I(E_{j_1}, \ldots, E_{j_k}) = \alpha_I\, (\mathrm{sgn}\,\sigma) = \alpha_J = \alpha(E_{j_1}, \ldots, E_{j_k}).$$

The penultimate equality holds because $\alpha$ is alternating: permuting the basis vector arguments by $\sigma$ multiplies the value by $\mathrm{sgn}\,\sigma$, so $\alpha_J = \alpha_{\sigma I} = (\mathrm{sgn}\,\sigma)\,\alpha_I$.

Thus, $\varepsilon^I(E_{j_1}, \ldots, E_{j_k})$ is formally similar to the Kronecker delta, and the following notation is often used:

$$\delta^I_J = \varepsilon^I(E_{j_1}, \ldots, E_{j_k}).$$

The similarity arises because, as we just established,

$$\alpha_I\, \delta^I_J = \begin{cases} \alpha_J & \text{if } J \text{ is a permutation of } I \\ 0 & \text{otherwise.} \end{cases}$$

Combining this with the fact that, for each $J$ without repeated indices, there is a unique increasing multi-index of which $J$ is a permutation, we obtain the following:

$$\sum_{I \in \mathcal{I}_k} \alpha_I\, \varepsilon^I(E_{j_1}, \ldots, E_{j_k}) = \sum_{I \in \mathcal{I}_k} \alpha_I\, \delta^I_J = \alpha_J = \alpha(E_{j_1}, \ldots, E_{j_k}).$$

Since both sides are multilinear and agree on every tuple of basis vectors, they agree everywhere. Thus, we have shown that

$$\alpha = \sum_{I \in \mathcal{I}_k} \alpha_I\, \varepsilon^I.$$

Thus, $\alpha$ is a linear combination of such terms, and the set

$$\mathcal{E}_k = \{\varepsilon^I : I \in \mathcal{I}_k\}$$

comprises a spanning set for $\Lambda^k(V^*)$.

Now, if

$$\sum_{I \in \mathcal{I}_k} \alpha_I\, \varepsilon^I = 0,$$

then, for any increasing multi-index $J$, applying both sides to the basis vectors $(E_{j_1}, \ldots, E_{j_k})$, we obtain

$$0 = \sum_{I \in \mathcal{I}_k} \alpha_I\, \varepsilon^I(E_{j_1}, \ldots, E_{j_k}) = \sum_{I \in \mathcal{I}_k} \alpha_I\, \delta^I_J = \alpha_J.$$

Thus, since every coefficient is of the form $\alpha_J$ for some increasing multi-index $J$, it follows that every coefficient is $0$. Thus, the set $\mathcal{E}_k$ is linearly independent, and hence a basis for $\Lambda^k(V^*)$.
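The basis expansion can be spot-checked numerically; a sketch, assuming the sample form and vectors below (helpers repeated so the snippet runs standalone):

```python
from itertools import combinations, permutations
from math import isclose

def perm_sign(p):
    inv = sum(1 for a in range(len(p)) for b in range(a + 1, len(p)) if p[a] > p[b])
    return -1 if inv % 2 else 1

def det(a):
    k = len(a)
    total = 0
    for s in permutations(range(k)):
        term = perm_sign(s)
        for i in range(k):
            term *= a[i][s[i]]
        total += term
    return total

def epsilon(I):
    return lambda *vs: det([[v[i] for v in vs] for i in I])

n, k = 3, 2
E = [tuple(float(i == j) for j in range(n)) for i in range(n)]  # standard basis

# A sample alternating 2-tensor on R^3.
alpha = lambda v, w: 2 * epsilon((0, 1))(v, w) - epsilon((1, 2))(v, w)

# Components alpha_I over increasing multi-indices I.
comps = {I: alpha(*(E[i] for i in I)) for I in combinations(range(n), k)}

# alpha = sum_I alpha_I epsilon^I, checked on sample vectors.
v, w = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
assert isclose(sum(c * epsilon(I)(v, w) for I, c in comps.items()), alpha(v, w))
```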

Summation Convention

We will sometimes be explicit about writing summations over $\mathcal{I}_k$, as in

$$\sum_{I \in \mathcal{I}_k} \alpha_I\, \varepsilon^I.$$

Often, however, we will use the usual summation convention (a repeated "upper" and "lower" index in the same expression indicates an implicit summation) and simply write

$$\alpha_I\, \varepsilon^I$$

to indicate a sum over increasing multi-indices.

The Determinant

For an $n$-dimensional vector space $V$, $\Lambda^n(V^*)$ has basis $\mathcal{E}_n$, which contains only one element, since there is a unique increasing multi-index of length $n$, namely $(1, \ldots, n)$. Thus, every element $\omega \in \Lambda^n(V^*)$ is of the form

$$\omega = c\, \varepsilon^{1 \cdots n}$$

for some $c \in \mathbb{R}$. Recall that, given any linear map $A \colon V \to V$ and basis $(E_i)$ for $V$ with corresponding dual basis $(\varepsilon^i)$ for $V^*$, we can define component functions $A^i \colon V \to \mathbb{R}$ relative to this basis via $A(v) = A^i(v)\, E_i$, and the map $A$ can be represented as a matrix as follows:

$$A = [A E_1 \cdots A E_n] = \begin{bmatrix} A^1(E_1) & \cdots & A^1(E_n) \\ \vdots & \ddots & \vdots \\ A^n(E_1) & \cdots & A^n(E_n) \end{bmatrix}.$$

Thus, the entries of the matrix are

$$A_{i,j} = A^i(E_j).$$

We then compute

$$(\det A)\, \omega(E_1, \ldots, E_n) = (\det A)\, c\, \varepsilon^{1 \cdots n}(E_1, \ldots, E_n) = c \det A.$$

Likewise, since the $i$-th component of $A E_j$ is $\varepsilon^i(A E_j) = A^i(E_j) = A_{i,j}$, we compute

$$\omega(A E_1, \ldots, A E_n) = c\, \varepsilon^{1 \cdots n}(A E_1, \ldots, A E_n) = c \det\big(\varepsilon^i(A E_j)\big) = c \det(A_{i,j}) = c \det A.$$

Recall that, by definition, the determinant of a linear map $A$ is the determinant of a matrix representation of $A$ with respect to any basis. Since all of the maps involved are multilinear, it is sufficient to consider only basis vectors when establishing equality. Thus, putting the previous equations together, we have determined that

$$\omega(A v_1, \ldots, A v_n) = (\det A)\, \omega(v_1, \ldots, v_n).$$

In fact, this can be stipulated as the very definition of the determinant of a linear map: it is the unique real number satisfying this equation (for every nonzero $\omega \in \Lambda^n(V^*)$). This definition, while more abstract, is basis-independent.
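This characterization is easy to test numerically; a sketch on $V = \mathbb{R}^3$ with $\omega = c\,\varepsilon^{123}$ (the constant $c$, the random inputs, and the tolerance are arbitrary choices of mine):

```python
import random
from itertools import permutations

def perm_sign(p):
    inv = sum(1 for a in range(len(p)) for b in range(a + 1, len(p)) if p[a] > p[b])
    return -1 if inv % 2 else 1

def det(a):
    k = len(a)
    total = 0
    for s in permutations(range(k)):
        term = perm_sign(s)
        for i in range(k):
            term *= a[i][s[i]]
        total += term
    return total

n, c = 3, 2.5
omega = lambda *vs: c * det([[v[i] for v in vs] for i in range(n)])  # c * eps^{1...n}

def apply_matrix(A, v):
    return tuple(sum(A[i][j] * v[j] for j in range(n)) for i in range(n))

random.seed(0)
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
vs = [tuple(random.uniform(-1, 1) for _ in range(n)) for _ in range(n)]

# omega(A v_1, ..., A v_n) = det(A) * omega(v_1, ..., v_n)
lhs = omega(*(apply_matrix(A, v) for v in vs))
rhs = det(A) * omega(*vs)
assert abs(lhs - rhs) < 1e-9
```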

The Exterior Product

Given a vector space $V$, the exterior product or wedge product of a $k$-covector $\omega \in \Lambda^k(V^*)$ and an $l$-covector $\eta \in \Lambda^l(V^*)$ is the $(k+l)$-covector defined as follows:

$$\omega \wedge \eta = \frac{(k+l)!}{k!\,l!}\, \mathrm{Alt}(\omega \otimes \eta).$$
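A minimal Python sketch of this definition (the names `alt`, `tensor`, and `wedge` are my own), with numeric spot-checks of the determinant convention and of the anticommutativity property derived below:

```python
from itertools import permutations
from math import factorial

def perm_sign(p):
    inv = sum(1 for a in range(len(p)) for b in range(a + 1, len(p)) if p[a] > p[b])
    return -1 if inv % 2 else 1

def alt(alpha, k):
    """Alt(alpha) for a k-tensor alpha given as a function of k vectors."""
    def result(*vs):
        return sum(
            perm_sign(s) * alpha(*(vs[i] for i in s))
            for s in permutations(range(k))
        ) / factorial(k)
    return result

def tensor(omega, k, eta, l):
    """(omega tensor eta)(v_1, ..., v_{k+l})."""
    return lambda *vs: omega(*vs[:k]) * eta(*vs[k:])

def wedge(omega, k, eta, l):
    """omega ^ eta = ((k+l)! / (k! l!)) Alt(omega tensor eta)."""
    coeff = factorial(k + l) / (factorial(k) * factorial(l))
    inner = alt(tensor(omega, k, eta, l), k + l)
    return lambda *vs: coeff * inner(*vs)

# Spot-checks for two 1-covectors on R^2:
e1 = lambda v: v[0]
e2 = lambda v: v[1]
w12 = wedge(e1, 1, e2, 1)
w21 = wedge(e2, 1, e1, 1)
v, u = (1.0, 2.0), (3.0, 5.0)
assert abs(w12(v, u) + w21(v, u)) < 1e-12                 # omega ^ eta = -(eta ^ omega)
assert abs(w12(v, u) - (v[0]*u[1] - v[1]*u[0])) < 1e-12   # determinant convention
```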

The coefficient $\frac{(k+l)!}{k!\,l!}$ is motivated by two goals. First, as we discovered previously,

$$\mathrm{Alt}(\omega^1 \otimes \cdots \otimes \omega^k)(v_1, \ldots, v_k) = \frac{1}{k!} \det\big(\omega^i(v_j)\big).$$

When the wedge product of $k$ covectors is formed one factor at a time, the coefficients multiply out to $k!$ (the same coefficient we used to define the elementary alternating tensors), which cancels the $\frac{1}{k!}$, so that

$$\omega^1 \wedge \cdots \wedge \omega^k(v_1, \ldots, v_k) = \det\big(\omega^i(v_j)\big).$$

For this reason, this definition is called the determinant convention.

Furthermore, we will discover that

$$\varepsilon^I \wedge \varepsilon^J = \varepsilon^{IJ},$$

where $IJ = (i_1, \ldots, i_k, j_1, \ldots, j_l)$ is the concatenation of $I = (i_1, \ldots, i_k)$ and $J = (j_1, \ldots, j_l)$.

Since both $\varepsilon^I \wedge \varepsilon^J$ and $\varepsilon^{IJ}$ are multilinear, it suffices to demonstrate that they are equal when applied to an arbitrary sequence of basis vectors $(E_{p_1}, \ldots, E_{p_{k+l}})$, i.e. we want to show that

$$\varepsilon^I \wedge \varepsilon^J(E_{p_1}, \ldots, E_{p_{k+l}}) = \varepsilon^{IJ}(E_{p_1}, \ldots, E_{p_{k+l}}).$$

If the multi-index $P = (p_1, \ldots, p_{k+l})$ contains a repeated index, then, since each side is alternating, each side equals $0$.

If $P$ contains any index that at least one of $I$ or $J$ does not contain, then $\varepsilon^{IJ}(E_{p_1}, \ldots, E_{p_{k+l}}) = \delta^{IJ}_P = 0$, since $P$ is not a permutation of $IJ$. If we expand $\varepsilon^I \wedge \varepsilon^J(E_{p_1}, \ldots, E_{p_{k+l}})$, we obtain

$$\varepsilon^I \wedge \varepsilon^J(E_{p_1}, \ldots, E_{p_{k+l}}) = \frac{(k+l)!}{k!\,l!}\, \mathrm{Alt}(\varepsilon^I \otimes \varepsilon^J)(E_{p_1}, \ldots, E_{p_{k+l}}) = \frac{1}{k!\,l!} \sum_{\sigma \in S_{k+l}} (\mathrm{sgn}\,\sigma)\, \varepsilon^I(E_{p_{\sigma(1)}}, \ldots, E_{p_{\sigma(k)}})\, \varepsilon^J(E_{p_{\sigma(k+1)}}, \ldots, E_{p_{\sigma(k+l)}}),$$

and, for each $\sigma$, either $(p_{\sigma(1)}, \ldots, p_{\sigma(k)})$ is not a permutation of $I$ or $(p_{\sigma(k+1)}, \ldots, p_{\sigma(k+l)})$ is not a permutation of $J$, so the term $\varepsilon^I(E_{p_{\sigma(1)}}, \ldots, E_{p_{\sigma(k)}})\, \varepsilon^J(E_{p_{\sigma(k+1)}}, \ldots, E_{p_{\sigma(k+l)}})$ is equal to $0$ as well.

Now, if $P = IJ$ and $P$ contains no repeated indices, then the only terms in the above expanded sum that will be nonzero are those such that $(p_{\sigma(1)}, \ldots, p_{\sigma(k)}) = \tau I$ for some permutation $\tau \in S_k$ and $(p_{\sigma(k+1)}, \ldots, p_{\sigma(k+l)}) = \eta J$ for some permutation $\eta \in S_l$, with $\sigma = \tau^+ \circ \eta^+$, where

$$\tau^+(x) = \begin{cases} \tau(x) & \text{if } 1 \le x \le k \\ x & \text{if } k+1 \le x \le k+l, \end{cases}$$

and

$$\eta^+(x) = \begin{cases} x & \text{if } 1 \le x \le k \\ k + \eta(x - k) & \text{if } k+1 \le x \le k+l. \end{cases}$$

It follows that $\mathrm{sgn}(\tau^+) = \mathrm{sgn}(\tau)$ and $\mathrm{sgn}(\eta^+) = \mathrm{sgn}(\eta)$ (since each extended permutation is composed of the same transpositions as the original), and thus $\mathrm{sgn}(\sigma) = (\mathrm{sgn}\,\tau)(\mathrm{sgn}\,\eta)$. We can then rewrite the sum over such $\sigma$ and expand as follows:

$$\begin{aligned}
\varepsilon^I \wedge \varepsilon^J(E_{p_1}, \ldots, E_{p_{k+l}}) &= \frac{1}{k!\,l!} \sum_{\tau \in S_k,\ \eta \in S_l} (\mathrm{sgn}\,\tau)(\mathrm{sgn}\,\eta)\, \varepsilon^I(E_{p_{\tau(1)}}, \ldots, E_{p_{\tau(k)}})\, \varepsilon^J(E_{p_{k+\eta(1)}}, \ldots, E_{p_{k+\eta(l)}}) \\
&= \left( \frac{1}{k!} \sum_{\tau \in S_k} (\mathrm{sgn}\,\tau)\, \varepsilon^I(E_{p_{\tau(1)}}, \ldots, E_{p_{\tau(k)}}) \right) \left( \frac{1}{l!} \sum_{\eta \in S_l} (\mathrm{sgn}\,\eta)\, \varepsilon^J(E_{p_{k+\eta(1)}}, \ldots, E_{p_{k+\eta(l)}}) \right) \\
&= (\mathrm{Alt}\,\varepsilon^I)(E_{p_1}, \ldots, E_{p_k})\, (\mathrm{Alt}\,\varepsilon^J)(E_{p_{k+1}}, \ldots, E_{p_{k+l}}) \\
&= \varepsilon^I(E_{p_1}, \ldots, E_{p_k})\, \varepsilon^J(E_{p_{k+1}}, \ldots, E_{p_{k+l}}) \\
&= 1.
\end{aligned}$$

Since also $\varepsilon^{IJ}(E_{p_1}, \ldots, E_{p_{k+l}}) = 1$, both sides are again equal.

Finally, if $P = \sigma(IJ)$ for some permutation $\sigma$, then both sides of the equation are multiplied by $\mathrm{sgn}(\sigma)$, and so the equation is preserved.

Properties of the Wedge Product

The exterior product satisfies several important properties. In practice, it is often sufficient to use these properties alone when working with exterior forms.

Bilinearity

For all $a \in \mathbb{R}$,

  • $(a\omega) \wedge \eta = a(\omega \wedge \eta)$
  • $\omega \wedge (a\eta) = a(\omega \wedge \eta)$
  • $(\omega + \omega') \wedge \eta = (\omega \wedge \eta) + (\omega' \wedge \eta)$
  • $\omega \wedge (\eta + \eta') = (\omega \wedge \eta) + (\omega \wedge \eta')$

This is a consequence of the fact that the tensor product is bilinear and the alternation operation is linear.

Associativity

The wedge product is associative, namely

$$(\omega \wedge \eta) \wedge \xi = \omega \wedge (\eta \wedge \xi).$$

Given any basis $(\varepsilon^i)$ for $V^*$, we can express every exterior form $\omega \in \Lambda^k(V^*)$ as

$$\omega = \omega_I\, \varepsilon^I,$$

where we use the summation convention and $I$ ranges over $\mathcal{I}_k$. We then compute

$$\begin{aligned}
(\omega \wedge \eta) \wedge \xi &= (\omega_I \varepsilon^I \wedge \eta_J \varepsilon^J) \wedge \xi_K \varepsilon^K \\
&= \omega_I \eta_J \xi_K \big((\varepsilon^I \wedge \varepsilon^J) \wedge \varepsilon^K\big) \\
&= \omega_I \eta_J \xi_K \big(\varepsilon^{IJ} \wedge \varepsilon^K\big) \\
&= \omega_I \eta_J \xi_K \big(\varepsilon^{IJK}\big) \\
&= \omega_I \eta_J \xi_K \big(\varepsilon^I \wedge \varepsilon^{JK}\big) \\
&= \omega_I \eta_J \xi_K \big(\varepsilon^I \wedge (\varepsilon^J \wedge \varepsilon^K)\big) \\
&= \omega_I \varepsilon^I \wedge (\eta_J \varepsilon^J \wedge \xi_K \varepsilon^K) \\
&= \omega \wedge (\eta \wedge \xi).
\end{aligned}$$

Because the wedge product is associative, parentheses are usually omitted.

Anticommutativity

If $\omega \in \Lambda^k(V^*)$ and $\eta \in \Lambda^l(V^*)$, then

$$\omega \wedge \eta = (-1)^{kl}\, \eta \wedge \omega.$$

First, note that it is possible to permute the multi-index $IJ$ into $JI$ via a permutation $\tau$ that proceeds as follows:

$$(i_1, \ldots, i_k, j_1, \ldots, j_l) \to (i_1, \ldots, i_{k-1}, j_1, \ldots, j_l, i_k) \to \cdots \to (j_1, \ldots, j_l, i_1, \ldots, i_k).$$

In other words, each of the $k$ indices in $I$, beginning with the last and ending with the first, is moved, via a sequence of $l$ adjacent transpositions, to the end of the multi-index, resulting in $kl$ transpositions in total. Thus, if we denote the composite permutation by $\tau$, then $\mathrm{sgn}(\tau) = (-1)^{kl}$.

We compute

$$\begin{aligned}
\omega \wedge \eta &= \omega_I \varepsilon^I \wedge \eta_J \varepsilon^J \\
&= \omega_I \eta_J\, (\varepsilon^I \wedge \varepsilon^J) \\
&= \omega_I \eta_J\, \varepsilon^{IJ} \\
&= \omega_I \eta_J\, (\mathrm{sgn}\,\tau)\, \varepsilon^{JI} \\
&= \omega_I \eta_J\, (\mathrm{sgn}\,\tau)\, (\varepsilon^J \wedge \varepsilon^I) \\
&= (\mathrm{sgn}\,\tau)\, (\eta_J \varepsilon^J \wedge \omega_I \varepsilon^I) \\
&= (\mathrm{sgn}\,\tau)\, (\eta \wedge \omega) \\
&= (-1)^{kl}\, \eta \wedge \omega.
\end{aligned}$$

Elementary k-Covectors

Given any basis $(\varepsilon^i)$ for $V^*$ and any multi-index $I = (i_1, \ldots, i_k)$,

$$\varepsilon^{i_1} \wedge \cdots \wedge \varepsilon^{i_k} = \varepsilon^I.$$

This can be demonstrated via induction on $k$, starting with $k = 1$, in which case the equation reduces to the identity $\varepsilon^i = \varepsilon^i$. If the hypothesis holds for $k = n$ for some $n \ge 1$, then, writing $I_n = (i_1, \ldots, i_n)$ and $I_{n+1} = (i_1, \ldots, i_{n+1})$,

$$(\varepsilon^{i_1} \wedge \cdots \wedge \varepsilon^{i_n}) \wedge \varepsilon^{i_{n+1}} = \varepsilon^{I_n} \wedge \varepsilon^{i_{n+1}} = \varepsilon^{I_{n+1}}.$$

Generalized Determinant

For any covectors $\omega^1, \ldots, \omega^k$ and vectors $v_1, \ldots, v_k$,

$$\omega^1 \wedge \cdots \wedge \omega^k(v_1, \ldots, v_k) = \det\big(\omega^i(v_j)\big).$$

Note that

$$\omega^1 \wedge \cdots \wedge \omega^k(v_1, \ldots, v_k) = k!\, \mathrm{Alt}(\omega^1 \otimes \cdots \otimes \omega^k)(v_1, \ldots, v_k) = \det\big(\omega^i(v_j)\big).$$

Uniqueness of the Wedge Product

The wedge product is the unique operation satisfying the properties enumerated in the previous section. Suppose that there is another product satisfying the same properties, which we will denote by $\overline{\wedge}$. Then, since (by the generalized determinant property)

$$\varepsilon^{i_1} \mathbin{\overline{\wedge}} \cdots \mathbin{\overline{\wedge}} \varepsilon^{i_k} = \varepsilon^I$$

and

$$\varepsilon^{j_1} \mathbin{\overline{\wedge}} \cdots \mathbin{\overline{\wedge}} \varepsilon^{j_l} = \varepsilon^J,$$

it follows that

$$(\varepsilon^{i_1} \mathbin{\overline{\wedge}} \cdots \mathbin{\overline{\wedge}} \varepsilon^{i_k}) \mathbin{\overline{\wedge}} (\varepsilon^{j_1} \mathbin{\overline{\wedge}} \cdots \mathbin{\overline{\wedge}} \varepsilon^{j_l}) = \varepsilon^{IJ}.$$

Then, we compute

$$\omega \mathbin{\overline{\wedge}} \eta = \omega_I \varepsilon^I \mathbin{\overline{\wedge}} \eta_J \varepsilon^J = \omega_I \eta_J\, (\varepsilon^I \mathbin{\overline{\wedge}} \varepsilon^J) = \omega_I \eta_J\, \varepsilon^{IJ} = \omega_I \eta_J\, (\varepsilon^I \wedge \varepsilon^J) = \omega_I \varepsilon^I \wedge \eta_J \varepsilon^J = \omega \wedge \eta.$$

Vector-Valued Exterior Forms

There is an isomorphism $\Lambda^k(V^*) \otimes W \cong \mathrm{Alt}^k(V, W)$ (the space of alternating multilinear maps from $V^k$ to $W$), witnessed by the following linear map:

$$\varphi(\omega \otimes w)(v_1, \ldots, v_k) = \omega(v_1, \ldots, v_k)\, w.$$

This map is only explicitly defined for "pure" (decomposable) tensors $\omega \otimes w$, but, since such tensors span the tensor product space $\Lambda^k(V^*) \otimes W$, every tensor in $\Lambda^k(V^*) \otimes W$ is a linear combination of pure tensors, and thus the above definition determines $\varphi$ by linearity.

Given a basis $(E_1, \ldots, E_n)$ for $V$ with dual basis $(\varepsilon^1, \ldots, \varepsilon^n)$ for $V^*$, the inverse map is

$$\varphi^{-1}(\alpha) = \sum_{I \in \mathcal{I}_k} \varepsilon^I \otimes \alpha_I,$$

where

$$\alpha_I = \alpha(E_{i_1}, \ldots, E_{i_k}).$$

Since these maps are multilinear, it is sufficient to verify their application to an arbitrary sequence of basis vectors $(E_{j_1}, \ldots, E_{j_k})$ for a multi-index $J = (j_1, \ldots, j_k)$:

$$\begin{aligned}
\varphi(\varphi^{-1}(\alpha))(E_{j_1}, \ldots, E_{j_k}) &= \varphi\left(\sum_{I \in \mathcal{I}_k} \varepsilon^I \otimes \alpha_I\right)(E_{j_1}, \ldots, E_{j_k}) \\
&= \left[\sum_{I \in \mathcal{I}_k} \varphi(\varepsilon^I \otimes \alpha_I)\right](E_{j_1}, \ldots, E_{j_k}) \\
&= \sum_{I \in \mathcal{I}_k} \left[\varphi(\varepsilon^I \otimes \alpha_I)(E_{j_1}, \ldots, E_{j_k})\right] \\
&= \sum_{I \in \mathcal{I}_k} \varepsilon^I(E_{j_1}, \ldots, E_{j_k})\, \alpha_I \\
&= \sum_{I \in \mathcal{I}_k} \delta^I_J\, \alpha_I \\
&= \alpha_J = \alpha(E_{j_1}, \ldots, E_{j_k}).
\end{aligned}$$

Conversely, we compute

$$\begin{aligned}
\varphi^{-1}(\varphi(\omega \otimes w)) &= \sum_{I \in \mathcal{I}_k} \varepsilon^I \otimes \big(\varphi(\omega \otimes w)(E_{i_1}, \ldots, E_{i_k})\big) \\
&= \sum_{I \in \mathcal{I}_k} \varepsilon^I \otimes \big(\omega(E_{i_1}, \ldots, E_{i_k})\, w\big) \\
&= \sum_{I \in \mathcal{I}_k} \varepsilon^I \otimes (\omega_I\, w) \\
&= \sum_{I \in \mathcal{I}_k} (\omega_I\, \varepsilon^I \otimes w) \\
&= \left(\sum_{I \in \mathcal{I}_k} \omega_I\, \varepsilon^I\right) \otimes w \\
&= \omega \otimes w.
\end{aligned}$$

Note that, although a basis was used to exhibit an inverse function, the isomorphism $\varphi$ itself is canonical (basis-independent).

Recall that, given a basis $(E_i^V)$ for $V$ with dual basis $(\varepsilon_V^i)$ for $V^*$, and a basis $(E_i^W)$ for $W$, the tensor product space $\Lambda^k(V^*) \otimes W$ has basis

$$\{\varepsilon_V^I \otimes E_i^W \mid I \in \mathcal{I}_k,\ 1 \le i \le \dim(W)\},$$

and each tensor $\alpha \in \Lambda^k(V^*) \otimes W$ can be written as

$$\alpha = \alpha_I^i\, (\varepsilon_V^I \otimes E_i^W),$$

where the coefficients $\alpha_I^i \in \mathbb{R}$ are the components of $\alpha$ relative to this basis (concretely, $\alpha_I^i = \varepsilon_W^i\big(\varphi(\alpha)(E_{i_1}^V, \ldots, E_{i_k}^V)\big)$).

It then follows that

$$\alpha = \alpha_I^i\, (\varepsilon_V^I \otimes E_i^W) = (\alpha_I^i\, \varepsilon_V^I) \otimes E_i^W,$$

and, thus, if we define the exterior forms

$$\alpha^i = \alpha_I^i\, \varepsilon_V^I,$$

then

$$\alpha = \alpha^i \otimes E_i^W.$$

Thus, every element of $\Lambda^k(V^*) \otimes W$ can be written in this form, which is analogous to a linear combination. Using the isomorphism, we can define a multilinear map $\alpha^i E_i^W$ as follows:

$$\alpha^i E_i^W(v_1, \ldots, v_k) = \varphi(\alpha^i \otimes E_i^W)(v_1, \ldots, v_k) = \alpha^i(v_1, \ldots, v_k)\, E_i^W.$$

Thus, since the map can be written in terms of unique component functions as $\alpha = \alpha^i E_i^W$, the $\alpha^i$ as defined above coincide with the component functions. We can also prove this equivalence by defining the $\alpha^i$ to be the component functions and then working in the opposite direction:

$$\begin{aligned}
\varphi^{-1}(\alpha) &= \varphi^{-1}(\alpha^i E_i^W) \\
&= \sum_{I \in \mathcal{I}_k} \varepsilon_V^I \otimes (\alpha^i E_i^W)_I \\
&= \sum_{I \in \mathcal{I}_k} \varepsilon_V^I \otimes \big(\alpha^i E_i^W(E_{i_1}^V, \ldots, E_{i_k}^V)\big) \\
&= \sum_{I \in \mathcal{I}_k} \varepsilon_V^I \otimes \big(\alpha^i(E_{i_1}^V, \ldots, E_{i_k}^V)\, E_i^W\big) \\
&= \sum_{I \in \mathcal{I}_k} \varepsilon_V^I \otimes (\alpha_I^i\, E_i^W) \\
&= \alpha_I^i\, (\varepsilon_V^I \otimes E_i^W).
\end{aligned}$$

Then, since the coefficients $\alpha_I^i$ are unique, it follows that $\alpha^i(E_{i_1}^V, \ldots, E_{i_k}^V) = \alpha_I^i$, and hence each component function $\alpha^i$ is the same as the exterior form $\alpha_I^i\, \varepsilon_V^I$ defined previously.

Thus, whether we start with the tensor components or with the component functions, the resulting functions $\alpha^i$ coincide.

We may now state the definition of a vector-valued exterior form. A vector-valued exterior form is, equivalently, either of the following:

  • An element of $\Lambda^k(V^*) \otimes W$; each such element is expressible as $\alpha^i \otimes E_i^W$ for $\alpha^i \in \Lambda^k(V^*)$.
  • An element of $\mathrm{Alt}^k(V, W)$; each such element is expressible as $\alpha^i E_i^W$ for $\alpha^i \in \Lambda^k(V^*)$.

Thus, a vector-valued exterior form is just a collection of $\dim(W)$ scalar-valued exterior forms, arranged into a "linear combination" with a given selection of basis vectors.
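Concretely, for $W = \mathbb{R}^m$ a vector-valued form can be represented as a tuple of scalar forms; a brief sketch (the name `vector_valued` and the sample forms are my own):

```python
def vector_valued(component_forms):
    """Given scalar alternating k-forms (alpha^1, ..., alpha^m), build the
    R^m-valued form alpha = alpha^i E_i^W: it sends (v_1, ..., v_k) to the
    vector of component values, mirroring the isomorphism phi."""
    return lambda *vs: tuple(f(*vs) for f in component_forms)

# Example: an R^2-valued 2-form on R^3 built from two scalar 2-forms.
a1 = lambda v, w: v[0]*w[1] - v[1]*w[0]   # epsilon^{12}
a2 = lambda v, w: v[0]*w[2] - v[2]*w[0]   # epsilon^{13}
alpha = vector_valued((a1, a2))
assert alpha((1, 0, 0), (0, 1, 0)) == (1, 0)
assert alpha((1, 0, 0), (0, 0, 1)) == (0, 1)
```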

The Exterior Algebra

We will only briefly mention the related algebraic constructions.

The space $\Lambda^k(V^*)$ is called the $k$-th exterior power of $V^*$. We have only given a concrete description of this space in terms of alternating, multilinear maps. It is possible to define the $k$-th exterior power of an arbitrary vector space using the same techniques used to define abstract tensor product spaces; we will not describe the abstract construction in this post. The exterior powers can be joined together into a vector space $\Lambda(V^*)$, called the exterior algebra of $V^*$:

$$\Lambda(V^*) = \bigoplus_{k=0}^{n} \Lambda^k(V^*).$$

Equipped with the wedge product, this space is also an associative algebra; moreover, it is a graded, anticommutative algebra.
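Since the increasing multi-indices of length $k$ drawn from $\{1, \ldots, n\}$ number $\binom{n}{k}$, the $k$-th exterior power has dimension $\binom{n}{k}$, and the exterior algebra therefore has total dimension $2^n$; a quick check (the choice $n = 4$ is arbitrary):

```python
from math import comb

n = 4  # example: dim V = 4
dims = [comb(n, k) for k in range(n + 1)]  # dim of each exterior power
assert sum(dims) == 2 ** n                 # dim of the exterior algebra
print(dims)  # [1, 4, 6, 4, 1]
```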