Exterior Algebra


Exterior Algebra
An artistic depiction of the exterior algebra, generated by Google Gemini using the prompt "generate an artistic depiction of the exterior algebra of a vector space".

This post describes the concept of the exterior algebra of a vector space. One of the primary applications of exterior algebra occurs in the theory of differential forms.

Alternating Tensors

Here we will consider (concrete, covariant) \(k\)-tensors on a vector space \(V\), i.e. multilinear maps \(\alpha : V \times \dots \times V \rightarrow \mathbb{R}\) with \(k\) factors of \(V\).

A \(k\)-tensor is alternating if its value changes sign whenever any two of its arguments are interchanged:

\[\alpha(v_1,\dots,v_i,\dots,v_j,\dots,v_k) = -\alpha(v_1,\dots,v_j,\dots,v_i,\dots,v_k).\]

First, we want to show that there are several equivalent ways to characterize alternating tensors.

We define the index set consisting of \(k\) indices to be the set

\[I_k = \{ 1, \dots, k\}.\]

A permutation of an index set \(I_k\) is a bijection of \(I_k\) with itself, i.e. a map \(\sigma : I_k \rightarrow I_k\) for which there exists an inverse map \(\sigma^{-1} : I_k \rightarrow I_k\) such that \(\sigma \circ \sigma^{-1} = \sigma^{-1} \circ \sigma = \mathrm{Id}_{I_k}\). The set of all such permutations forms a group \(S_k\) called the symmetric group on \(k\) elements.

The transposition (interchange) of two indices \(1 \leq i \leq k\) and \(1 \leq j \leq k\) is the permutation \(\tau_{i,j} \in S_k\) defined as follows:

\[\tau_{i,j}(x) = \begin{cases}j, & \text{if}~ x = i\\i, & \text{if}~ x = j\\ x, & \text{otherwise}.\end{cases}\]

Note that \(\tau_{i,j} = \tau_{j,i}\) and \(\tau_{i,j} \circ \tau_{i,j} = \mathrm{Id}_{I_k}\).

We want to show that every permutation is the composition of a finite sequence of transpositions. We proceed by induction on the size of the index set \(I_k\). For \(I_0 = \emptyset\), the only permutation is the empty map, which is the composition of the empty (length-zero) sequence of transpositions. Next, suppose that, for some natural number \(k\), every permutation on \(I_k\) is the composition of a finite sequence of transpositions, and consider an automorphism \(\sigma : I_{k+1} \rightarrow I_{k+1}\). Note that \(\tau_{k+1,\sigma(k+1)} \circ \sigma\) is again an automorphism of \(I_{k+1}\), since it has inverse \(\sigma^{-1} \circ \tau_{k+1,\sigma(k+1)}\). Now, \(\left(\tau_{k+1,\sigma(k+1)} \circ \sigma\right)(k+1) = k+1\), i.e. \(k+1\) is a fixed point, which implies that its restriction to \(I_k\) is an automorphism of \(I_k\): automorphisms are injective, so the only index that maps to \(k+1\) is \(k+1\) itself. By the inductive hypothesis, this restriction is the composition of a finite sequence of transpositions, and thus so is \(\tau_{k+1,\sigma(k+1)} \circ \sigma\), since \(k+1\) is a fixed point and hence no additional transpositions are needed. Finally, \(\sigma = \tau_{k+1,\sigma(k+1)} \circ \tau_{k+1,\sigma(k+1)} \circ \sigma\), and hence \(\sigma\) is the composition of a finite sequence of transpositions.

A permutation is said to be even if it is the composition of an even number of transpositions, and odd if it is the composition of an odd number of transpositions.

The sign of the permutation is the following function:

\[\mathrm{sgn}(\sigma) = \begin{cases}+1, & \text{if \(\sigma\) is even}\\-1, & \text{if \(\sigma\) is odd}.\end{cases}\]
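The sign of a permutation can be computed by counting inversions, since each inversion can be removed by one adjacent transposition. A minimal sketch in Python (the tuple encoding with 0-based images is our implementation choice, not notation from the text):

```python
from itertools import combinations

def sgn(perm):
    """Sign of a permutation, given as a tuple of 0-based images.

    The parity of the number of inversions (pairs appearing out of
    order) equals the parity of any decomposition into transpositions.
    """
    inversions = sum(1 for i, j in combinations(range(len(perm)), 2)
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

print(sgn((0, 1, 2)))  # identity: even, sign +1
print(sgn((1, 0, 2)))  # one transposition: odd, sign -1
print(sgn((1, 2, 0)))  # a 3-cycle = two transpositions: even, sign +1
```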

We can apply a permutation \(\sigma \in S_k\) to a \(k\)-tensor \(\alpha\) to obtain a new \(k\)-tensor \(\sigma * \alpha\) as follows:

\[(\sigma * \alpha)(v_1, \dots, v_k) = \alpha(v_{\sigma(1)},\dots,v_{\sigma(k)}).\]

We can thus define an alternating \(k\)-tensor as one that satisfies

\[\alpha = -\tau_{i,j} * \alpha\]

for all transpositions \(\tau_{i,j}\).

We want to show that alternating \(k\)-tensors are equivalently characterized as those tensors such that any permutation of their arguments causes the value to be multiplied by the sign of the permutation, i.e.

\[\alpha = (\mathrm{sgn}(\sigma))(\sigma * \alpha).\]

Since every permutation is the composition of a finite sequence of transpositions, we will proceed by induction on the length \(n\) of the sequence of transpositions. If \(n=0\), i.e. when no transposition is performed and hence the permutation is the trivial permutation (the identity map \(\mathrm{Id}_{I_k}\)), then \(\mathrm{sgn}(\mathrm{Id}_{I_k}) = 1\) and \(\alpha = \sigma * \alpha\) and so \(\alpha = (\mathrm{sgn}(\sigma))(\sigma * \alpha)\). Now, suppose that the hypothesis holds for every permutation \(\sigma \in S_k\) of length \(n\), and consider a permutation \(\sigma' \in S_k\) of length \(n+1\). Then \(\sigma' = \tau_{i,j} \circ \sigma\) for some permutation \(\sigma \in S_k\) of length \(n\) and transposition \(\tau_{i,j}\) of indices \(i\) and \(j\). It then follows that

\begin{align}\alpha\left(v_{\sigma'(1)},\dots,v_{\sigma'(k)}\right) &= \alpha\left(v_{\tau_{i,j}(\sigma(1))},\dots,v_{\tau_{i,j}(\sigma(k))}\right) & \text{By Definition} \\&= -\alpha\left(v_{\sigma(1)},\dots,v_{\sigma(k)}\right) & \text{Since \(\alpha\) is alternating} \\&= -\mathrm{sgn}(\sigma)\alpha(v_1,\dots,v_k) & \text{By Hypothesis} \\&= \mathrm{sgn}(\sigma')\alpha(v_1,\dots,v_k) & \text{Since \(\mathrm{sgn}(\sigma) = -\mathrm{sgn}(\sigma')\)}\end{align}

Finally note that the converse is immediate: if \(\alpha = (\mathrm{sgn}(\sigma))(\sigma * \alpha)\), then

\[\alpha = (\mathrm{sgn}(\tau_{i,j}))(\tau_{i,j} * \alpha) = -\tau_{i,j} * \alpha,\]

which is the very definition of an alternating \(k\)-tensor.

Recall that, for any \(k\)-tensor \(\alpha\) on a vector space \(V\), given a basis \((E_i)\) for \(V\), the components of the tensor relative to this basis are computed as

\[\alpha_{i_1,\dots,i_k} = \alpha\left(E_{i_1},\dots,E_{i_k}\right).\]

Thus, whenever any two indices of the components are interchanged, the component changes sign, since this corresponds to the interchange of two of the basis vectors. This provides yet another characterization of alternating tensors.

Next, note that, if \(\alpha\) is an alternating \(k\)-tensor and two of its arguments are equal, then by interchanging the equal arguments we obtain

\[\alpha(v_1,\dots,w,\dots,w,\dots,v_k) = -\alpha(v_1,\dots,w,\dots,w,\dots,v_k)\]

which implies that \(\alpha(v_1,\dots,w,\dots,w,\dots,v_k) = 0\).

Likewise, if, whenever the arguments are linearly dependent, \(\alpha(v_1,\dots,v_k) = 0\), then, since any sequence of vectors with a repeated vector is linearly dependent, it follows that \(\alpha(v_1,\dots,w,\dots,w,\dots,v_k) = 0\).

Now, if \(\alpha\) yields \(0\) whenever two of its arguments are equal, then

\begin{align}0 &= \alpha(v_1,\dots,v_i+v_j,\dots,v_i+v_j,\dots,v_k) \\&= \alpha(v_1,\dots,v_i,\dots,v_i,\dots,v_k) \\&+ \alpha(v_1,\dots,v_i,\dots,v_j,\dots,v_k) \\&+ \alpha(v_1,\dots,v_j,\dots,v_i,\dots,v_k) \\&+ \alpha(v_1,\dots,v_j,\dots,v_j,\dots,v_k) \\&= \alpha(v_1,\dots,v_i,\dots,v_j,\dots,v_k) \\&+ \alpha(v_1,\dots,v_j,\dots,v_i,\dots,v_k).\end{align}

Thus, \(\alpha\) is alternating.

Finally, suppose that \(\alpha\) yields \(0\) whenever two of its arguments are equal. If \((v_1,\dots,v_k)\) is a linearly dependent sequence of vectors, then one can be written as a linear combination of the others. Without loss of generality, we may assume that \(v_k\) can be written as a linear combination of the other vectors (since the sequence may be reordered without changing the linear combination). This means that

\[v_k = \sum_{i=1}^{k-1}a^iv_i.\]

Then

\begin{align}\alpha(v_1,\dots,v_k) &= \alpha\left(v_1,\dots,v_{k-1},\sum_{i=1}^{k-1}a^iv_i\right) \\&= \sum_{i=1}^{k-1}a^i\alpha(v_1,\dots,v_{k-1},v_i) & \text{Since \(\alpha\) is multilinear}.\end{align}

In each of the summands, there is a repeated argument (namely, the argument \(v_i\)). Thus, each summand is equal to \(0\) by hypothesis, and hence \(\alpha(v_1,\dots,v_k) = 0\).

Altogether, this demonstrates that each of the following conditions is equivalent to the others:

  • \(\alpha\) is alternating
  • \(\alpha\) changes sign according to the sign of any permutation applied to it
  • \(\alpha\) yields \(0\) whenever its arguments are linearly dependent
  • \(\alpha\) yields \(0\) whenever any two of its arguments are equal
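These equivalences can be checked numerically on a concrete example. The \(2 \times 2\) determinant, viewed as a \(2\)-tensor on \(\mathbb{R}^2\), is alternating, so it should vanish on repeated and on linearly dependent arguments (a sketch; the function name is ours):

```python
def det2(v, w):
    """The 2x2 determinant as an alternating 2-tensor on R^2."""
    return v[0] * w[1] - v[1] * w[0]

v = (2.0, 3.0)
print(det2(v, v))            # repeated argument: 0
print(det2(v, (4.0, 6.0)))   # linearly dependent argument (2v): 0
print(det2(v, (1.0, 0.0)))   # independent arguments: nonzero
print(det2((1.0, 0.0), v))   # swapping the arguments flips the sign
```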

The Alternation Operation

The vector space of all alternating covariant \(k\)-tensors on a vector space \(V\) is denoted \(\Lambda^k(V^*)\). We can define an operation \(\mathrm{Alt} : T^k(V^*) \rightarrow \Lambda^k(V^*)\) called alternation which derives a corresponding alternating tensor from an arbitrary tensor as follows:

\[\mathrm{Alt}(\alpha) = \frac{1}{k!}\sum_{\sigma \in S_k}(\mathrm{sgn}(\sigma))(\sigma * \alpha).\]

The permutation group \(S_k\) on \(k\) elements contains exactly \(k!\) permutations, i.e. \(\lvert S_k \rvert = k!\). The alternation operation computes the sign-adjusted average of the tensor applied at every permutation of its arguments.
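The alternation operation can be sketched directly from this formula, treating a \(k\)-tensor as a Python function of \(k\) vectors (the helper names are illustrative):

```python
import math
from itertools import permutations

def alt(k, tensor):
    """Alternation: the sign-weighted average of a k-tensor over all
    permutations of its arguments."""
    def alternated(*vs):
        total = 0.0
        for perm in permutations(range(k)):
            # sign of the permutation via its inversion count
            sign = (-1) ** sum(1 for i in range(k) for j in range(i + 1, k)
                               if perm[i] > perm[j])
            total += sign * tensor(*(vs[p] for p in perm))
        return total / math.factorial(k)
    return alternated

# A non-alternating 2-tensor on R^2.
alpha = lambda v, w: v[0] * w[1]
a = alt(2, alpha)
v, w = (1.0, 2.0), (3.0, 4.0)
print(a(v, w), a(w, v))  # swapping the arguments flips the sign
```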

We want to confirm that \(\mathrm{Alt}(\alpha)\) is indeed alternating.

First, we will demonstrate a small technical lemma. Given any fixed permutation \(\tau \in S_k\), for each permutation \(\eta \in S_k\), we can find a permutation \(\sigma \in S_k\) such that \(\eta = \sigma \circ \tau\), namely, the permutation \(\sigma = \eta \circ \tau^{-1}\). Likewise, the composition of two permutations is again a permutation. Together, these facts establish the following equality:

\[\{ \sigma \circ \tau \mid \sigma \in S_k\} = S_k.\]

Also note that if \(\eta = \sigma \circ \tau\), then \(\mathrm{sgn}~\eta = (\mathrm{sgn}~\sigma)(\mathrm{sgn}~\tau)\). It follows that \(\mathrm{sgn}~\sigma = (\mathrm{sgn}~\tau)(\mathrm{sgn}~\eta)\).

We then compute for any permutation \(\tau \in S_k\)

\begin{align}(\mathrm{Alt}~\alpha)\left(v_{\tau(1)},\dots,v_{\tau(k)}\right) &= \frac{1}{k!}\sum_{\sigma \in S_k}(\mathrm{sgn}~\sigma)(\sigma * \alpha)\left(v_{\tau(1)},\dots,v_{\tau(k)}\right)\\&= \frac{1}{k!}\sum_{\sigma \in S_k}(\mathrm{sgn}~\sigma)\alpha\left(v_{\sigma\tau(1)},\dots,v_{\sigma\tau(k)}\right) \\&= \frac{1}{k!}\sum_{\eta \in S_k}(\mathrm{sgn}~\tau)(\mathrm{sgn}~\eta)\alpha\left(v_{\eta(1)},\dots,v_{\eta(k)}\right) \\&= (\mathrm{sgn}~\tau)(\mathrm{Alt}~\alpha)\left(v_1,\dots,v_k\right).\end{align}

This proves that \(\mathrm{Alt}~\alpha\) is alternating, since, for any permutation \(\tau\),

\[\mathrm{Alt}~\alpha = (\mathrm{sgn}~\tau)(\tau * \mathrm{Alt}~\alpha).\]

Now, if \(\mathrm{Alt}~\alpha = \alpha\), then, since \(\mathrm{Alt}~\alpha\) is alternating, it follows that \(\alpha\) is alternating. Conversely, if \(\alpha\) is alternating, then, for every permutation \(\sigma \in S_k\), \(\alpha = (\mathrm{sgn}~\sigma)(\sigma * \alpha)\), and hence

\begin{align}\mathrm{Alt}~\alpha &= \frac{1}{k!}\sum_{\sigma \in S_k}(\mathrm{sgn}~\sigma)(\sigma * \alpha) \\&= \frac{1}{k!}\sum_{\sigma \in S_k}\alpha \\&= \alpha.\end{align}

Thus, \(\mathrm{Alt}~\alpha = \alpha\) if and only if \(\alpha\) is alternating. This provides another characterization of alternating tensors: they are fixed points of the alternation operation.
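The fixed-point characterization is easy to check numerically: an already-alternating tensor, such as the \(2 \times 2\) determinant, passes through \(\mathrm{Alt}\) unchanged. A sketch that re-implements the alternation formula (names are ours):

```python
import math
from itertools import permutations

def alt(k, tensor):
    """The alternation operation on a k-tensor (a function of k vectors)."""
    def alternated(*vs):
        total = 0.0
        for perm in permutations(range(k)):
            sign = (-1) ** sum(1 for i in range(k) for j in range(i + 1, k)
                               if perm[i] > perm[j])
            total += sign * tensor(*(vs[p] for p in perm))
        return total / math.factorial(k)
    return alternated

det2 = lambda v, w: v[0] * w[1] - v[1] * w[0]  # already alternating
fixed = alt(2, det2)
v, w = (2.0, -1.0), (0.5, 3.0)
print(det2(v, w), fixed(v, w))  # identical: det2 is a fixed point of Alt
```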

Elementary Alternating Tensors

Consider an important example of the alternation operation. Using the vector space \(\mathbb{R}^k\) and the standard dual basis \((e^i)\), we can define the following \(k\)-tensor:

\[k!e^1 \otimes \dots \otimes e^k.\]

The determinant \(\mathrm{det}(v_1,\dots,v_k)\) of a sequence of column vectors comprising a matrix \(A = [v_1 \dots v_k]\) with elements \(a_{i,j} = v_j^i\) (i.e. the \(i\)-th component of the \(j\)-th vector) is then the alternation of this tensor:

\begin{align}\mathrm{det}(v_1,\dots,v_k) &= \mathrm{Alt}~(k!e^1 \otimes \dots \otimes e^k)(v_1,\dots,v_k) \\&= \frac{1}{k!}\sum_{\sigma \in S_k}(\mathrm{sgn}~\sigma)(k!e^1 \otimes \dots \otimes e^k)(v_{\sigma(1)},\dots,v_{\sigma(k)}) \\&= \sum_{\sigma \in S_k}(\mathrm{sgn}~\sigma)(e^1 \otimes \dots \otimes e^k)(v_{\sigma(1)},\dots,v_{\sigma(k)}) \\&= \sum_{\sigma \in S_k}(\mathrm{sgn}~\sigma)(e^1(v_{\sigma(1)}) \dots e^k(v_{\sigma(k)})) \\&= \sum_{\sigma \in S_k}(\mathrm{sgn}~\sigma)(v_{\sigma(1)}^1 \dots v_{\sigma(k)}^k) \\&= \sum_{\sigma \in S_k}(\mathrm{sgn}~\sigma)\prod_{i=1}^k a_{i,\sigma(i)}.\end{align}

The ultimate expression is precisely the Leibniz formula for \(\mathrm{det}(A)\).
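The Leibniz formula can be evaluated directly by summing over \(S_k\); comparing it against the rule of Sarrus on a \(3 \times 3\) example confirms the computation (a sketch, with illustrative names):

```python
from itertools import permutations

def leibniz_det(a):
    """Determinant via the Leibniz formula: a signed sum over all
    permutations of the column indices."""
    n = len(a)
    total = 0
    for perm in permutations(range(n)):
        sign = (-1) ** sum(1 for i in range(n) for j in range(i + 1, n)
                           if perm[i] > perm[j])
        prod = 1
        for i in range(n):
            prod *= a[i][perm[i]]   # one factor a_{i, sigma(i)} per row
        total += sign * prod
    return total

A = [[2, 0, 1],
     [1, 3, -1],
     [0, 5, 4]]
# Rule of Sarrus for comparison on the 3x3 case.
sarrus = (A[0][0]*A[1][1]*A[2][2] + A[0][1]*A[1][2]*A[2][0] + A[0][2]*A[1][0]*A[2][1]
          - A[0][2]*A[1][1]*A[2][0] - A[0][0]*A[1][2]*A[2][1] - A[0][1]*A[1][0]*A[2][2])
print(leibniz_det(A), sarrus)  # both 39
```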

More generally, we obtain the following:

\begin{align}\mathrm{Alt}~(k!\omega^1 \otimes \dots \otimes \omega^k)(v_1,\dots,v_k) &= \frac{1}{k!}\sum_{\sigma \in S_k}(\mathrm{sgn}~\sigma)(k!\omega^1 \otimes \dots \otimes \omega^k)(v_{\sigma(1)},\dots,v_{\sigma(k)}) \\&= \sum_{\sigma \in S_k}(\mathrm{sgn}~\sigma)(\omega^1 \otimes \dots \otimes \omega^k)(v_{\sigma(1)},\dots,v_{\sigma(k)}) \\&= \sum_{\sigma \in S_k}(\mathrm{sgn}~\sigma)(\omega^1(v_{\sigma(1)}) \dots \omega^k(v_{\sigma(k)})) \\&= \sum_{\sigma \in S_k}(\mathrm{sgn}~\sigma)\prod_{i=1}^k \omega^i(v_{\sigma(i)}) \\&= \mathrm{det}(\omega^i(v_j)).\end{align}

We will generalize this correspondence to arbitrary vector spaces, obtaining a generalization of the determinant. First, we introduce some notation.

For each positive integer \(k\), a multi-index of length \(k\) is an ordered tuple consisting of \(k\) indices (positive integers):

\[I = (i_1,\dots,i_k).\]

We can apply a permutation \(\sigma \in S_k\) to \(I\) to obtain a new multi-index of length \(k\):

\[\sigma * I = (\sigma(i_1),\dots,\sigma(i_k)).\]

For any \(n\)-dimensional vector space \(V\) with dual basis \((\varepsilon^i)\) for \(V^*\), given any multi-index \(I = (i_1,\dots,i_k)\) of length \(k\) where \(1 \leq k \leq n\), we can define a \(k\)-tensor \(\varepsilon^I\) (also written \(\varepsilon^{i_1 \dots i_k}\)) as follows:

\[\varepsilon^I = k!\mathrm{Alt}\left(\varepsilon^{i_1} \otimes \dots \otimes \varepsilon^{i_k}\right).\]

As we previously established, this is equivalent to the following definition:

\[\varepsilon^I(v_1,\dots,v_k) = \mathrm{det}\begin{bmatrix}\varepsilon^{i_1}(v_1) & \dots & \varepsilon^{i_1}(v_k) \\ \vdots & \ddots & \vdots \\ \varepsilon^{i_k}(v_1) & \dots & \varepsilon^{i_k}(v_k)\end{bmatrix} = \mathrm{det}\begin{bmatrix}v^{i_1}_1 & \dots & v^{i_1}_k \\ \vdots & \ddots & \vdots \\ v^{i_k}_1 & \dots & v^{i_k}_k\end{bmatrix}.\]

This generalizes the determinant operation in several ways: first, it applies to any real vector space and not just \(\mathbb{R}^n\). Second, it applies to the components of vectors relative to any basis. Third, it applies to any multi-index, not just the contiguous multi-index \((1,\dots,k)\). Finally, it also permits multi-indices whose length is less than the dimension of the vector space.
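Concretely, \(\varepsilon^I(v_1,\dots,v_k)\) is the determinant of the \(k \times k\) matrix obtained by selecting the components of the \(v_j\) named by \(I\). A sketch (1-based multi-indices as in the text; the helper names are ours):

```python
from itertools import permutations

def det(rows):
    """Determinant of a square matrix via the Leibniz formula."""
    n = len(rows)
    total = 0
    for p in permutations(range(n)):
        sign = (-1) ** sum(1 for i in range(n) for j in range(i + 1, n)
                           if p[i] > p[j])
        prod = 1
        for i in range(n):
            prod *= rows[i][p[i]]
        total += sign * prod
    return total

def epsilon(I, vectors):
    """epsilon^I(v_1, ..., v_k): determinant of the matrix whose
    (r, s) entry is component i_r of vector v_s (indices 1-based)."""
    return det([[v[i - 1] for v in vectors] for i in I])

# In R^3, epsilon^{(1,3)} sees only components 1 and 3 of its arguments.
v1, v2 = (1, 0, 2), (0, 1, 3)
print(epsilon((1, 3), (v1, v2)))  # det [[1, 0], [2, 3]] = 3
```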

Recall that if \(\eta = \sigma \circ \tau\), then \(\mathrm{sgn}~\eta = (\mathrm{sgn}~\sigma)(\mathrm{sgn}~\tau)\) and thus it follows that \(\mathrm{sgn}~\sigma = (\mathrm{sgn}~\tau)(\mathrm{sgn}~\eta)\). Likewise, recall that \(\{\sigma \circ \tau \mid \sigma \in S_k\} = S_k\).

If \(I = \tau * J\) for some multi-indices \(I\) and \(J\) of length \(k\) and permutation \(\tau \in S_k\), then:

\begin{align}\varepsilon^{I} &= \varepsilon^{\tau * J} \\&= k!\mathrm{Alt}\left(\varepsilon^{\tau(j_1)} \otimes \dots \otimes \varepsilon^{\tau(j_k)}\right) \\&= \sum_{\sigma \in S_k}(\mathrm{sgn}~\sigma)\varepsilon^{\sigma\tau(j_1)} \otimes \dots \otimes \varepsilon^{\sigma\tau(j_k)} \\&= \sum_{\eta \in S_k}(\mathrm{sgn}~\tau)(\mathrm{sgn}~\eta)\varepsilon^{\eta(j_1)} \otimes \dots \otimes \varepsilon^{\eta(j_k)} \\&= (\mathrm{sgn}~\tau)\sum_{\eta \in S_k}(\mathrm{sgn}~\eta)\varepsilon^{\eta(j_1)} \otimes \dots \otimes \varepsilon^{\eta(j_k)} \\&= (\mathrm{sgn}~\tau)k!\mathrm{Alt}\left(\varepsilon^{\eta(j_1)} \otimes \dots \otimes \varepsilon^{\eta(j_k)}\right) \\&= (\mathrm{sgn}~\tau)\varepsilon^J.\end{align}

Thus, \(\varepsilon^{\tau * J} = (\mathrm{sgn}~\tau)\varepsilon^J\).

It then follows that, if \(I\) has any repeated indices (say, at positions \(i\) and \(j\)), then \(\varepsilon^I = \varepsilon^{\tau_{i,j} * I} = (\mathrm{sgn}~\tau_{i,j})\varepsilon^I = -\varepsilon^I\) for the transposition \(\tau_{i,j}\), and hence \(\varepsilon^I = 0\).

Given a pair of multi-indices \(I = (i_1,\dots,i_k)\) and \(J = (j_1,\dots,j_k)\) both of length \(k\) and a vector space \(V\) with basis \((E_{j_1},\dots,E_{j_k})\), consider the application of \(\varepsilon^I\) to these basis vectors:

\[\varepsilon^I\left(E_{j_1},\dots,E_{j_k}\right).\]

Note that \(\varepsilon^I\left(E_{j_1},\dots,E_{j_k}\right) = 1\) if \(I = J\), since \(\varepsilon^i(E_i) = 1\) by definition, so the matrix becomes the identity matrix, whose determinant is \(1\).

If there are any repeated indices in \(J\), then, since \(\varepsilon^I\) is alternating, \(\varepsilon^I\left(E_{j_1},\dots,E_{j_k}\right) = 0\).

If neither \(I\) nor \(J\) contains any repeated index and \(I = \sigma * J\) for some permutation \(\sigma \in S_k\), then

\begin{align}\varepsilon^I\left(E_{j_1},\dots,E_{j_k}\right) &= \varepsilon^{\sigma * J}\left(E_{j_1},\dots,E_{j_k}\right) \\&= (\mathrm{sgn}~\sigma)\varepsilon^J\left(E_{j_1},\dots,E_{j_k}\right) \\&= (\mathrm{sgn}~\sigma).\end{align}

Note that, if \(J\) is not a permutation of \(I\) (and neither contains a repeated index), then some index \(i_r\) of \(I\) does not appear in \(J\), so the entire row \(\left(\varepsilon^{i_r}(E_{j_1}),\dots,\varepsilon^{i_r}(E_{j_k})\right)\) of the matrix is zero, which means that \(\varepsilon^I\left(E_{j_1},\dots,E_{j_k}\right) = 0\).

Thus, we have established that

\[\varepsilon^I\left(E_{j_1},\dots,E_{j_k}\right) = \begin{cases}\mathrm{sgn}~\sigma & \text{if neither \(I\) nor \(J\) contains any repeated index and \(J = \sigma * I\) for some \(\sigma \in S_k\)},\\0 & \text{if either \(I\) or \(J\) contains any repeated index or \(J\) is not a permutation of \(I\)}.\end{cases}\]

Thus, as we will show below, \(\varepsilon^I\left(E_{j_1},\dots,E_{j_k}\right) \) is similar to an alternating version of the Kronecker delta.

A multi-index \(I = (i_1,\dots,i_k)\) is increasing if \(i_1 \lt \dots \lt i_k\).

Note that, given an arbitrary multi-index \(J\) of length \(k\), there can be at most one increasing multi-index \(I\) of length \(k\) such that \(J\) is a permutation of \(I\).

We denote the set of all increasing multi-indices of length \(k\) as \(\mathcal{I}_k\). Given any vector space \(V\), basis \((E_i)\) for \(V\), \(\alpha \in \Lambda^k(V^*)\), and any multi-index \(I\) whatsoever, we define the component \(\alpha_I\) as follows:

\[\alpha_I = \alpha\left(E_{i_1},\dots,E_{i_k}\right).\]

Now, given any multi-index \(J\) of length \(k\), if \(J\) contains any repeated index, then

\begin{align}\alpha_I\varepsilon^I(E_{j_1},\dots,E_{j_k}) &= \alpha_I \cdot 0 \\&= 0 \\&= \alpha_J \\&= \alpha\left(E_{j_1},\dots,E_{j_k}\right).\end{align}

If \(J\) does not contain any repeated index, then if \(J = \sigma * I\) for some \(\sigma \in S_k\), it follows that

\begin{align}\alpha_I\varepsilon^I(E_{j_1},\dots,E_{j_k}) &= \alpha_I(\mathrm{sgn}~\sigma) \\&= \alpha_J \\&= \alpha\left(E_{j_1},\dots,E_{j_k}\right).\end{align}

The penultimate equation is due to the fact that \(\alpha\) is alternating, so that \(\alpha_J = (\mathrm{sgn}~\sigma)\alpha_I\).

Thus, \(\varepsilon^I(E_{j_1},\dots,E_{j_k})\) is formally similar to the Kronecker delta, and the following notation is often used:

\[\delta^I_J = \varepsilon^I\left(E_{j_1},\dots,E_{j_k}\right).\]

The similarity arises because, as we just established,

\[\alpha_I\delta^I_J = \begin{cases}\alpha_J & \text{if \(J\) is a permutation of \(I\)}\\ 0 & \text{otherwise}.\end{cases}\]
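Since \(\delta^I_J\) is just \(\varepsilon^I\) evaluated on basis vectors, it can be computed the same way. A sketch of the three cases (the helper names are ours):

```python
from itertools import permutations

def det(rows):
    """Determinant via the Leibniz formula."""
    n = len(rows)
    total = 0
    for p in permutations(range(n)):
        sign = (-1) ** sum(1 for i in range(n) for j in range(i + 1, n)
                           if p[i] > p[j])
        prod = 1
        for i in range(n):
            prod *= rows[i][p[i]]
        total += sign * prod
    return total

def delta(I, J, n):
    """delta^I_J = epsilon^I(E_{j_1}, ..., E_{j_k}) in R^n (1-based)."""
    basis_vectors = [tuple(1 if r == j - 1 else 0 for r in range(n)) for j in J]
    return det([[v[i - 1] for v in basis_vectors] for i in I])

print(delta((1, 3), (1, 3), 3))  # J = I: +1
print(delta((1, 3), (3, 1), 3))  # J an odd permutation of I: -1
print(delta((1, 3), (1, 2), 3))  # J not a permutation of I: 0
```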

Combining this with the fact that, for a given \(J\), there is at most one increasing multi-index \(I\) of which \(J\) is a permutation, we obtain the following:

\begin{align}\sum_{I \in \mathcal{I}_k}\alpha_I \varepsilon^I(E_{j_1},\dots,E_{j_k}) &= \sum_{I \in \mathcal{I}_k}\alpha_I \delta^I_J \\&= \alpha_J \\&= \alpha\left(E_{j_1},\dots,E_{j_k}\right).\end{align}

Since \(\alpha\) is multilinear, it is sufficient to consider its action on basis vectors. Thus, we have shown that

\[\alpha = \sum_{I \in \mathcal{I}_k}\alpha_I\varepsilon^I.\]

Thus, \(\alpha\) is a linear combination of such terms, and the set

\[\mathcal{E}_k = \left\{\varepsilon^I : I \in \mathcal{I}_k \right\}\]

comprises a spanning set for \(\Lambda^k(V^*)\).

Now, if

\[\sum_{I \in \mathcal{I}_k}\alpha_I\varepsilon^I = 0,\]

then, for any increasing multi-index \(J\), applying both sides to the basis vectors, we obtain

\begin{align}0 &= \sum_{I \in \mathcal{I}_k}\alpha_I\varepsilon^I\left(E_{j_1},\dots,E_{j_k}\right) \\&= \sum_{I \in \mathcal{I}_k}\alpha_I \delta^I_J \\&= \alpha_J.\end{align}

Thus, since every coefficient is of the form \(\alpha_J\) for some increasing multi-index \(J\), it follows that every coefficient is \(0\). Thus, the set \(\mathcal{E}_k\) is a basis for \(\Lambda^k(V^*)\).
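The expansion \(\alpha = \sum_{I \in \mathcal{I}_k}\alpha_I\varepsilon^I\) can be verified numerically for an alternating \(2\)-tensor on \(\mathbb{R}^3\) (a sketch; the particular tensor and names are illustrative):

```python
from itertools import combinations, permutations

def det(rows):
    """Determinant via the Leibniz formula."""
    n = len(rows)
    total = 0
    for p in permutations(range(n)):
        sign = (-1) ** sum(1 for i in range(n) for j in range(i + 1, n)
                           if p[i] > p[j])
        prod = 1
        for i in range(n):
            prod *= rows[i][p[i]]
        total += sign * prod
    return total

def epsilon(I, vectors):
    """epsilon^I applied to vectors: determinant of the selected components."""
    return det([[v[i - 1] for v in vectors] for i in I])

# An alternating 2-tensor on R^3, written out by hand.
def alpha(v, w):
    return (2 * (v[0] * w[1] - v[1] * w[0])
            - (v[0] * w[2] - v[2] * w[0])
            + 5 * (v[1] * w[2] - v[2] * w[1]))

E = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]  # standard basis
v, w = (1.0, 2.0, -1.0), (0.0, 3.0, 4.0)

# Sum the components alpha_I over the increasing multi-indices I = (i, j).
expansion = sum(alpha(E[i - 1], E[j - 1]) * epsilon((i, j), (v, w))
                for i, j in combinations(range(1, 4), 2))
print(alpha(v, w), expansion)  # equal
```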

Summation Convention

We will sometimes be explicit about writing summations over \(\mathcal{I}_k\), as in

\[\sum_{I \in \mathcal{I}_k}\alpha_I\varepsilon^I.\]

However, often, we will use the usual summation convention (that an "upper" and "lower" index in the same expression indicates an implicit summation), and simply write

\[\alpha_I\varepsilon^I\]

to indicate a sum over increasing multi-indices.

The Determinant

For an \(n\)-dimensional vector space \(V\), \(\Lambda^n(V^*)\) has basis \(\mathcal{E}_n\) which contains only one element since there is a unique increasing multi-index of length \(n\), namely \((1,\dots,n)\). Thus, every element \(\omega \in \Lambda^n(V^*)\) is of the form

\[\omega = c\varepsilon^{1 \dots n}\]

for some \(c \in \mathbb{R}\). Recall that, given any linear map \(A : V \rightarrow V\) and basis \((E_i)\) for \(V\) with corresponding dual basis \((\varepsilon^i)\) for \(V^*\), we can define component functions \(A^i : V \rightarrow \mathbb{R}\) relative to this basis as \(A(v) = A^i(v)E_i\), and the map \(A\) can be represented as a matrix as follows:

\[A = \begin{bmatrix}AE_1 & \dots & AE_n\end{bmatrix} = \begin{bmatrix}A^1(E_1) & \dots & A^1(E_n)\\ \vdots & \ddots & \vdots\\ A^n(E_1) & \dots & A^n(E_n)\end{bmatrix}.\]

Thus, the elements of the matrix are

\[A_{i,j} = A^i(E_j).\]

We then compute

\begin{align}(\mathrm{det}~A)\omega\left(E_1,\dots,E_n\right) &= (\mathrm{det}~A)c\varepsilon^{1 \dots n}\left(E_1,\dots,E_n\right) \\&= c~ \mathrm{det}~A.\end{align}

Likewise, since the \(i\)-th component of \(AE_j\) is \(\varepsilon^i(AE_j) = A^i(E_j) = A_{i,j}\), we compute

\begin{align}\omega\left(AE_1,\dots,AE_n\right) &= c\varepsilon^{1 \dots n}\left(AE_1,\dots,AE_n\right) \\&= c~\mathrm{det}\left(\varepsilon^i(AE_j)\right) \\&= c~\mathrm{det}\left(A_{i,j}\right).\end{align}

Recall that, by definition, the determinant of a linear map \(A\) is the determinant of the matrix representation of \(A\) with respect to any basis. Since all involved maps are multilinear, it is sufficient to consider only the basis vectors when determining equality. Thus, putting the previous equations together, we have determined that

\[\omega\left(Av_1,\dots,Av_n\right) = (\mathrm{det}~A)\omega(v_1,\dots,v_n).\]

In fact, this can be stipulated as the very definition of the determinant of a linear map: it is the unique real number satisfying this equation. This definition, while more abstract, is basis-independent.
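This basis-independent characterization is easy to test numerically in dimension \(2\), where \(\omega\) can be taken to be the \(2 \times 2\) determinant itself (a sketch with illustrative names):

```python
def omega(v, w):
    """A (nonzero) element of Lambda^2 on R^2: the 2x2 determinant."""
    return v[0] * w[1] - v[1] * w[0]

def apply_map(A, v):
    """Apply a linear map, given as a 2x2 matrix, to a vector."""
    return tuple(sum(A[i][j] * v[j] for j in range(2)) for i in range(2))

A = [[1.0, 2.0],
     [3.0, 4.0]]
det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # -2

v, w = (1.0, -1.0), (2.0, 5.0)
# omega(Av, Aw) agrees with (det A) * omega(v, w).
print(omega(apply_map(A, v), apply_map(A, w)), det_A * omega(v, w))
```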

The Exterior Product

Given a vector space \(V\), the exterior product or wedge product of a \(k\)-covector \(\omega \in \Lambda^k(V^*)\) and an \(l\)-covector \(\eta \in \Lambda^l(V^*)\) is the \((k+l)\)-covector defined as follows:

\[\omega \wedge \eta = \frac{(k+l)!}{k!l!}\mathrm{Alt}(\omega \otimes \eta).\]

The coefficient \(\frac{(k+l)!}{k!l!}\) is motivated by two goals. First, as we discovered previously,

\[\mathrm{Alt}(\omega^1 \otimes \dots \otimes \omega^k)(v_1,\dots,v_k) = \frac{1}{k!}\mathrm{det}\left(\omega^i(v_j)\right).\]

In this case, the coefficient becomes \(k!\), which we used to define the elementary alternating tensors, and it cancels out, so that

\[\omega^1 \wedge \dots \wedge \omega^k(v_1,\dots,v_k) = \mathrm{det}\left(\omega^i(v_j)\right).\]

For this reason, this definition is called the determinant convention.
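The defining formula can be implemented directly; for two \(1\)-covectors the determinant convention makes \(\omega \wedge \eta\,(v, w)\) equal to \(\omega(v)\eta(w) - \omega(w)\eta(v)\). A sketch (the helper names are ours):

```python
import math
from itertools import permutations

def wedge(k, l, omega, eta):
    """Wedge product via (k+l)!/(k!l!) * Alt(omega tensor eta);
    the (k+l)! cancels against the 1/(k+l)! inside Alt, leaving
    1/(k!l!) times a signed sum over S_{k+l}."""
    def product(*vs):
        total = 0.0
        for p in permutations(range(k + l)):
            sign = (-1) ** sum(1 for i in range(k + l) for j in range(i + 1, k + l)
                               if p[i] > p[j])
            total += (sign
                      * omega(*(vs[p[i]] for i in range(k)))
                      * eta(*(vs[p[i]] for i in range(k, k + l))))
        return total / (math.factorial(k) * math.factorial(l))
    return product

# Two 1-covectors on R^2: the coordinate projections.
omega1 = lambda v: v[0]
eta1 = lambda v: v[1]
w = wedge(1, 1, omega1, eta1)
v, u = (1.0, 2.0), (3.0, 4.0)
print(w(v, u))  # omega1(v)*eta1(u) - omega1(u)*eta1(v) = 4 - 6 = -2
```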

Furthermore, we will discover that

\[\varepsilon^I \wedge \varepsilon^J = \varepsilon^{IJ},\]

where \(IJ = (i_1,\dots,i_k,j_1,\dots,j_l)\) is the concatenation of \(I = (i_1,\dots,i_k)\) and \(J = (j_1,\dots,j_l)\).

Since both \(\varepsilon^I \wedge \varepsilon^J\) and \(\varepsilon^{IJ}\) are multilinear, it suffices to demonstrate that they are equal when applied to an arbitrary sequence of basis vectors \(\left(E_{p_1},\dots,E_{p_{k+l}}\right)\), i.e. we want to show that

\[\varepsilon^I \wedge \varepsilon^J\left(E_{p_1},\dots,E_{p_{k+l}}\right) = \varepsilon^{IJ}\left(E_{p_1},\dots,E_{p_{k+l}}\right).\]

If the multi-index \(P = (p_1,\dots,p_{k+l})\) contains a repeated index, then, since each side is alternating, each side equals \(0\).

If \(P\) is not a permutation of \(IJ\), then \(\varepsilon^{IJ}\left(E_{p_1},\dots,E_{p_{k+l}}\right) = \delta^{IJ}_P = 0\). If we expand \(\varepsilon^I \wedge \varepsilon^J\left(E_{p_1},\dots,E_{p_{k+l}}\right)\), we obtain

\begin{align}\varepsilon^I \wedge \varepsilon^J\left(E_{p_1},\dots,E_{p_{k+l}}\right) &= \frac{(k+l)!}{k!l!}\mathrm{Alt}(\varepsilon^I \otimes \varepsilon^J)\left(E_{p_1},\dots,E_{p_{k+l}}\right) \\&= \frac{1}{k!l!}\sum_{\sigma \in S_{k+l}}(\mathrm{sgn}~\sigma)\varepsilon^I\left(E_{p_{\sigma(1)}},\dots,E_{p_{\sigma(k)}}\right)\varepsilon^J\left(E_{p_{\sigma(k+1)}},\dots,E_{p_{\sigma(k+l)}}\right),\end{align}

and either \((p_{\sigma(1)},\dots,p_{\sigma(k)})\) is not a permutation of \(I\) or \((p_{\sigma(k+1)},\dots,p_{\sigma(k+l)})\) is not a permutation of \(J\), so the term \(\varepsilon^I\left(E_{p_{\sigma(1)}},\dots,E_{p_{\sigma(k)}}\right)\varepsilon^J\left(E_{p_{\sigma(k+1)}},\dots,E_{p_{\sigma(k+l)}}\right)\) is equal to \(0\) also.

Now, if \(P = IJ\) and \(P\) contains no repeated indices, then the only terms in the above expanded sum that will be nonzero are those such that \((p_{\sigma(1)},\dots,p_{\sigma(k)}) = \tau * I\) for some permutation \(\tau \in S_k\) and \((p_{\sigma(k+1)},\dots,p_{\sigma(k+l)}) = \eta * J\) for some permutation \(\eta \in S_l\), in which case \(\sigma = \tau^+ \circ \eta^+\), where

\[\tau^+(x) = \begin{cases} \tau(x) & \text{if \(1 \leq x \leq k\)} \\ x & \text{if \(k+1 \leq x \leq k+l\),}\end{cases}\]

and

\[\eta^+(x) = \begin{cases} x & \text{if \(1 \leq x \leq k\)} \\ k+\eta(x-k) & \text{if \(k+1 \leq x \leq k+l\).}\end{cases}\]

It follows that \(\mathrm{sgn}(\tau^+) = \mathrm{sgn}(\tau)\) and \(\mathrm{sgn}(\eta^+) = \mathrm{sgn}(\eta)\) (since each respective pair performs the same number of transpositions) and thus \(\mathrm{sgn}(\sigma) = (\mathrm{sgn}~\tau)(\mathrm{sgn}~\eta)\). We can then rewrite the final term of the previous sum and expand as follows:

\begin{align}\varepsilon^I \wedge \varepsilon^J\left(E_{p_1},\dots,E_{p_{k+l}}\right) &= \frac{1}{k!l!}\sum_{\tau \in S_k, \eta \in S_l}(\mathrm{sgn}~\tau)(\mathrm{sgn}~\eta)\varepsilon^I\left(E_{p_{\tau(1)}},\dots,E_{p_{\tau(k)}}\right)\varepsilon^J\left(E_{p_{k+\eta(1)}},\dots,E_{p_{k+\eta(l)}}\right) \\&= \left(\frac{1}{k!}\sum_{\tau \in S_k}(\mathrm{sgn}~\tau)\varepsilon^I\left(E_{p_{\tau(1)}},\dots,E_{p_{\tau(k)}}\right)\right) \cdot \left(\frac{1}{l!}\sum_{\eta \in S_l}(\mathrm{sgn}~\eta)\varepsilon^J\left(E_{p_{k+\eta(1)}},\dots,E_{p_{k+\eta(l)}}\right)\right) \\&= \left(\mathrm{Alt}~\varepsilon^I\right)\left(E_{p_1},\dots,E_{p_k}\right) \left(\mathrm{Alt}~\varepsilon^J\right)\left(E_{p_{k+1}},\dots,E_{p_{k+l}}\right) \\&= \varepsilon^I\left(E_{p_1},\dots,E_{p_k}\right) \varepsilon^J\left(E_{p_{k+1}},\dots,E_{p_{k+l}}\right) \\&= 1.\end{align}

Since also \(\varepsilon^{IJ}(E_{p_1},\dots,E_{p_{k+l}}) = 1\), both sides are again equal.

Finally, if \(P = \sigma * IJ\) for some permutation \(\sigma\), then both sides of the equation will be adjusted by \(\mathrm{sgn}(\sigma)\), and so the equation will be preserved.

Properties of the Wedge Product

The exterior product satisfies several important properties. In practice, it is often sufficient to use these properties alone when working with exterior forms.

Bilinearity

For all \(a \in \mathbb{R}\),

  • \((a\omega) \wedge \eta = a(\omega \wedge \eta)\)
  • \(\omega \wedge (a\eta) = a(\omega \wedge \eta)\)
  • \((\omega + \omega') \wedge \eta = (\omega \wedge \eta) + (\omega' \wedge \eta)\)
  • \(\omega \wedge (\eta + \eta') = (\omega \wedge \eta) + (\omega \wedge \eta')\)

This is a consequence of the fact that both the tensor product and the alternation operation are multilinear.

Associativity

The wedge product is associative, namely

\[(\omega \wedge \eta) \wedge \xi = \omega \wedge (\eta \wedge \xi).\]

Given any basis \((\varepsilon^i)\) for \(V^*\), we can express every exterior form \(\omega \in \Lambda^k(V^*)\) as

\[\omega = \omega_I \varepsilon^I,\]

where we use the summation convention and \(I\) ranges over \(\mathcal{I}_k\). We then compute

\begin{align}(\omega \wedge \eta) \wedge \xi &= (\omega_I \varepsilon^I \wedge \eta_J \varepsilon^J) \wedge \xi_K \varepsilon^K \\&= \omega_I \eta_J \xi_K((\varepsilon^I \wedge \varepsilon^J) \wedge \varepsilon^K) \\&= \omega_I \eta_J \xi_K((\varepsilon^{IJ}) \wedge \varepsilon^K) \\&= \omega_I \eta_J \xi_K(\varepsilon^{IJK}) \\&= \omega_I \eta_J \xi_K(\varepsilon^{I} \wedge \varepsilon^{JK}) \\&= \omega_I \eta_J \xi_K(\varepsilon^{I} \wedge (\varepsilon^{J} \wedge \varepsilon^{K})) \\&= (\omega_I \varepsilon^{I} \wedge (\eta_J\varepsilon^{J} \wedge \xi_K \varepsilon^{K})) \\&= \omega \wedge (\eta \wedge \xi).\end{align}

Because the wedge product is associative, parentheses are usually omitted.
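Associativity can be spot-checked numerically: on \(\mathbb{R}^3\), both groupings of \(\varepsilon^1 \wedge \varepsilon^2 \wedge \varepsilon^3\) should give the \(3 \times 3\) determinant of the three argument vectors. A sketch re-implementing the wedge formula (names are ours):

```python
import math
from itertools import permutations

def wedge(k, l, omega, eta):
    """(k+l)!/(k!l!) * Alt(omega tensor eta), written as a signed sum."""
    def product(*vs):
        total = 0.0
        for p in permutations(range(k + l)):
            sign = (-1) ** sum(1 for i in range(k + l) for j in range(i + 1, k + l)
                               if p[i] > p[j])
            total += (sign
                      * omega(*(vs[p[i]] for i in range(k)))
                      * eta(*(vs[p[i]] for i in range(k, k + l))))
        return total / (math.factorial(k) * math.factorial(l))
    return product

e1, e2, e3 = (lambda v: v[0]), (lambda v: v[1]), (lambda v: v[2])

left = wedge(2, 1, wedge(1, 1, e1, e2), e3)   # (e1 ^ e2) ^ e3
right = wedge(1, 2, e1, wedge(1, 1, e2, e3))  # e1 ^ (e2 ^ e3)

v, u, w = (1.0, 0.0, 2.0), (0.0, 1.0, 0.0), (3.0, 0.0, 1.0)
print(left(v, u, w), right(v, u, w))  # both give det[v u w] = -5
```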

Anticommutativity

If \(\omega \in \Lambda^k(V^*)\) and \(\eta \in \Lambda^l(V^*)\), then

\[\omega \wedge \eta = (-1)^{kl}\eta \wedge \omega.\]

First, note that it is possible to permute a multi-index \(IJ\) into \(JI\) via a permutation \(\tau\) that proceeds as follows:

\begin{align}i_1,\dots,i_k,j_1,\dots,j_l &\rightarrow i_1,\dots,j_1,i_k,\dots,j_l \\&\rightarrow i_1,\dots,j_1,j_2,i_k,\dots,j_l \\&\rightarrow\dots \\&\rightarrow i_1,\dots,j_1,\dots,j_l,i_k \\&\rightarrow\dots \\&\rightarrow j_1,\dots,j_l,i_1,\dots,i_k.\end{align}

In other words, each of the \(k\) indices in \(I\), beginning with the last and ending with the first, is moved, via a sequence of \(l\) adjacent transpositions, to the end of the multi-index, resulting in \(kl\) transpositions total. Thus, if we denote the composite permutation as \(\tau\), then \(\mathrm{sgn}(\tau) = (-1)^{kl}\).

We compute

\begin{align}\omega \wedge \eta &= \omega_I\varepsilon^I \wedge \eta_J\varepsilon^J \\&= \omega_I\eta_J(\varepsilon^I \wedge \varepsilon^J) \\&= \omega_I\eta_J(\varepsilon^{IJ}) \\&= \omega_I\eta_J((\mathrm{sgn}~\tau)\varepsilon^{JI}) \\&= \omega_I\eta_J((\mathrm{sgn}~\tau)\varepsilon^J \wedge \varepsilon^I) \\&= (\mathrm{sgn}~\tau)(\eta_J\varepsilon^J \wedge \omega_I\varepsilon^I) \\&= (\mathrm{sgn}~\tau)(\eta \wedge \omega) \\&= (-1)^{kl}\eta \wedge \omega.\end{align}
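The sign \((-1)^{kl}\) can also be checked numerically: for two \(1\)-covectors it is \(-1\), while for a \(1\)-covector against a \(2\)-covector it is \(+1\). A sketch re-implementing the wedge formula (names are ours):

```python
import math
from itertools import permutations

def wedge(k, l, omega, eta):
    """(k+l)!/(k!l!) * Alt(omega tensor eta), written as a signed sum."""
    def product(*vs):
        total = 0.0
        for p in permutations(range(k + l)):
            sign = (-1) ** sum(1 for i in range(k + l) for j in range(i + 1, k + l)
                               if p[i] > p[j])
            total += (sign
                      * omega(*(vs[p[i]] for i in range(k)))
                      * eta(*(vs[p[i]] for i in range(k, k + l))))
        return total / (math.factorial(k) * math.factorial(l))
    return product

e1, e2, e3 = (lambda v: v[0]), (lambda v: v[1]), (lambda v: v[2])
v, u, w = (1.0, 2.0, 0.0), (4.0, 5.0, 1.0), (0.0, 1.0, 3.0)

# k = l = 1: the sign is (-1)^{1*1} = -1.
a, b = wedge(1, 1, e1, e2), wedge(1, 1, e2, e1)
print(a(v, u), b(v, u))  # opposite signs

# k = 1, l = 2: the sign is (-1)^{1*2} = +1.
eta2 = wedge(1, 1, e2, e3)
c, d = wedge(1, 2, e1, eta2), wedge(2, 1, eta2, e1)
print(c(v, u, w), d(v, u, w))  # equal
```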

Elementary k-Covectors

Given any basis \((\varepsilon^i)\) for \(V^*\) and multi-index \(I = (i_1,\dots,i_k)\), then

\[\varepsilon^{i_1} \wedge \dots \wedge \varepsilon^{i_k} = \varepsilon^I.\]

This can be demonstrated via induction on \(k\), starting with \(k=1\), in which case the equation reduces to the identity \(\varepsilon^{i_1} = \varepsilon^{i_1}\). If the hypothesis holds for \(k=n\) for some \(n \geq 1\), then, writing \(I_n = (i_1,\dots,i_n)\),

\begin{align}(\varepsilon^{i_1} \wedge \dots \wedge \varepsilon^{i_n}) \wedge \varepsilon^{i_{n+1}} &= \varepsilon^{I_n} \wedge \varepsilon^{i_{n+1}} \\&= \varepsilon^{I_{n+1}}.\end{align}

Generalized Determinant

For any covectors \(\omega^1,\dots,\omega^k\) and vectors \(v_1,\dots,v_k\),

\[\omega^1 \wedge \dots \wedge \omega^k(v_1,\dots,v_k) = \mathrm{det}\left(\omega^i(v_j)\right).\]

Note that

\begin{align}\omega^1 \wedge \dots \wedge \omega^k(v_1,\dots,v_k) &= k!\mathrm{Alt}(\omega^1 \otimes \dots \otimes \omega^k)(v_1,\dots,v_k) \\&= \mathrm{det}\left(\omega^i(v_j)\right).\end{align}

Uniqueness of the Wedge Product

The wedge product is the unique operation satisfying the properties enumerated in the previous section. Suppose that there is another product satisfying the same properties, which we will denote as \(\bar{\wedge}\). Then, since

\[\varepsilon^{i_1} \bar{\wedge} \dots \bar{\wedge} \varepsilon^{i_k} = \varepsilon^I,\]

and

\[\varepsilon^{j_1} \bar{\wedge} \dots \bar{\wedge} \varepsilon^{j_l} = \varepsilon^J,\]

it follows that

\[(\varepsilon^{i_1} \bar{\wedge} \dots \bar{\wedge} \varepsilon^{i_k}) \bar{\wedge} (\varepsilon^{j_1} \bar{\wedge} \dots \bar{\wedge} \varepsilon^{j_l}) = \varepsilon^{IJ}.\]

Then, we compute

\begin{align}\omega \bar{\wedge} \eta &= \omega_I\varepsilon^I \bar{\wedge} \eta_J\varepsilon^J \\&= \omega_I\eta_J\varepsilon^I \bar{\wedge} \varepsilon^J \\&= \omega_I\eta_J\varepsilon^{IJ} \\&= \omega_I\eta_J\varepsilon^I \wedge \varepsilon^J \\&= \omega_I\varepsilon^I \wedge \eta_J\varepsilon^J \\&= \omega \wedge \eta.\end{align}

Vector-Valued Exterior Forms

There is an isomorphism \(\Lambda^k(V^*) \otimes W \cong \mathrm{Alt}^k(V, W)\) witnessed by the following linear map:

\[\varphi(\omega \otimes w)(v_1,\dots,v_k) = \omega(v_1,\dots,v_k) \cdot w.\]

This map is explicitly defined only on "pure" tensors \(\omega \otimes w\), but, since such tensors span the tensor product space \(\Lambda^k(V^*) \otimes W\), every tensor in \(\Lambda^k(V^*) \otimes W\) is a linear combination of pure tensors, and thus the map extends uniquely by linearity; the above definition is therefore sufficient.

Given a basis \((E_1,\dots,E_n)\) for \(V\) with dual basis \((\varepsilon^1,\dots,\varepsilon^n)\) for \(V^*\), the inverse map is

\[\varphi^{-1}(\alpha) = \sum_{I \in \mathcal{I}_k}\varepsilon ^I \otimes \alpha_I,\]

where

\[\alpha_I = \alpha\left(E_{i_1},\dots,E_{i_k}\right).\]

Since both sides are multilinear, it suffices to verify the equality on an arbitrary sequence of basis vectors \((E_{j_1},\dots,E_{j_k})\) for a multi-index \(J = (j_1,\dots,j_k)\):

\begin{align}\varphi(\varphi^{-1}(\alpha))\left(E_{j_1},\dots,E_{j_k}\right) &= \varphi\left(\sum_{I \in \mathcal{I}_k}\varepsilon^I \otimes \alpha_I\right)\left(E_{j_1},\dots,E_{j_k}\right) \\&= \left[\sum_{I \in \mathcal{I}_k}\varphi\left(\varepsilon^I \otimes \alpha_I\right)\right]\left(E_{j_1},\dots,E_{j_k}\right) \\&= \sum_{I \in \mathcal{I}_k}\left[\varphi\left(\varepsilon^I \otimes \alpha_I\right)\left(E_{j_1},\dots,E_{j_k}\right)\right] \\&= \sum_{I \in \mathcal{I}_k}\varepsilon^I\left(E_{j_1},\dots,E_{j_k}\right) \cdot \alpha_I \\&= \sum_{I \in \mathcal{I}_k}\delta^I_J\cdot \alpha_I \\&= \alpha_J \\&= \alpha\left(E_{j_1},\dots,E_{j_k}\right) .\end{align}

Conversely, we compute

\begin{align}\varphi^{-1}(\varphi(\omega \otimes w)) &= \sum_{I \in \mathcal{I}_k}\varepsilon^I \otimes \left(\varphi(\omega \otimes w)\left(E_{i_1},\dots,E_{i_k}\right)\right) \\&= \sum_{I \in \mathcal{I}_k}\varepsilon^I \otimes \left(\omega\left(E_{i_1},\dots,E_{i_k}\right) \cdot w\right) \\&= \sum_{I \in \mathcal{I}_k}\varepsilon^I \otimes \left(\omega_I \cdot w\right) \\&= \sum_{I \in \mathcal{I}_k}\left(\omega_I\varepsilon^I \otimes w\right) \\&= \left(\sum_{I \in \mathcal{I}_k}\omega_I\varepsilon^I\right) \otimes w \\&= \omega \otimes w.\end{align}

Note that, although a basis was used to exhibit an inverse function, the isomorphism \(\varphi\) itself is canonical (basis-independent).
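In finite dimensions, the isomorphism and its inverse can be sketched concretely in coordinates. The following Python sketch (my own illustration; it represents a form by its components \(\alpha_I \in W\) over increasing multi-indices, with \(\varepsilon^I\) evaluated as a determinant of selected coordinates) implements \(\varphi\) and \(\varphi^{-1}\) and verifies the round trip:

```python
import itertools
import numpy as np

n, m, k = 4, 3, 2  # dim V, dim W, degree k (arbitrary choices)
increasing = list(itertools.combinations(range(n), k))  # multi-indices I

rng = np.random.default_rng(1)
# A W-valued k-form given by components alpha_I in W, one per multi-index.
alpha = {I: rng.standard_normal(m) for I in increasing}

def phi(components):
    # phi: sum_I eps^I (x) alpha_I  |->  (v_1,...,v_k) -> sum_I eps^I(v)*alpha_I,
    # where eps^I(v_1,...,v_k) = det of the columns of the coordinate matrix
    # selected by I.
    def alt_map(*vs):
        M = np.stack(vs)  # rows are the coordinates of v_1,...,v_k
        return sum(np.linalg.det(M[:, list(I)]) * w
                   for I, w in components.items())
    return alt_map

def phi_inv(alt_map):
    # Recover alpha_I by evaluating on the basis vectors E_{i_1},...,E_{i_k}.
    E = np.eye(n)
    return {I: alt_map(*(E[i] for i in I)) for I in increasing}

recovered = phi_inv(phi(alpha))
assert all(np.allclose(alpha[I], recovered[I]) for I in increasing)
```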

Recall that, given a basis \((E^V_i)\) for \(V\) with dual basis \((\varepsilon_V^i)\) for \(V^*\) and a basis \((E^W_i)\) for \(W\), the tensor product space \(\Lambda^k(V^*) \otimes W\) has basis

\[\{\varepsilon_V^I \otimes E^W_i \mid I \in \mathcal{I}_k, 1 \leq i \leq \mathrm{dim}(W)\},\]

and each tensor \(\alpha \in \Lambda^k(V^*) \otimes W\) can be written as

\[\alpha = \alpha^i_I \cdot \left(\varepsilon_V^I \otimes E^W_i\right),\]

where

\[\alpha^i_I = \alpha\left(\varepsilon_V^I, E^W_i\right).\]

It then follows that

\[\alpha = \alpha^i_I \cdot \left(\varepsilon_V^I \otimes E^W_i\right) = \left(\alpha^i_I \cdot \varepsilon_V^I\right) \otimes E^W_i,\]

and, thus, if we define functions

\[\alpha^i = \alpha^i_I \cdot \varepsilon_V^I,\]

then

\[\alpha = \alpha^i \otimes E^W_i.\]

Thus, every element of \(\Lambda^k(V^*) \otimes W\) can be written in this form, which is analogous to a linear combination. Using the isomorphism, we can define a multilinear map \(\alpha^iE^W_i\) as follows:

\[\alpha^iE^W_i(v_1,\dots,v_k) = \varphi\left(\alpha^i \otimes E^W_i\right)(v_1,\dots,v_k) = \alpha^i(v_1,\dots,v_k) \cdot E^W_i.\]

Thus, since \(\alpha\) can be written in terms of unique component functions as \(\alpha = \alpha^iE^W_i\), the functions \(\alpha^i\) defined above coincide with the component functions. We can also prove this equivalence by taking \(\alpha^i\) to be the component functions and working in the opposite direction:

\begin{align}\varphi^{-1}\left(\alpha\right) &= \varphi^{-1}\left(\alpha^iE^W_i\right) \\&= \varepsilon^I \otimes \left(\alpha^iE^W_i\right)_I \\&= \varepsilon^I \otimes \left(\alpha^iE^W_i\left(E_{i_1},\dots,E_{i_k}\right)\right)\\&= \varepsilon^I \otimes \left(\alpha^i\left(E_{i_1},\dots,E_{i_k}\right)E^W_i\right) \\&= \varepsilon^I \otimes \left(\alpha^i_IE^W_i\right) \\&= \alpha^i_I\left(\varepsilon^I \otimes E^W_i\right).\end{align}

Then, since the coefficients \(\alpha^i_I\) are unique, it follows that \(\alpha^i\left(E^V_{i_1},\dots,E^V_{i_k}\right) = \alpha^i_I\), and hence the function \(\alpha^i = \alpha^i_I \varepsilon_V^I\) coincides with the component function \(\alpha^i\), which can be represented as \(\alpha^i\left(E^V_{i_1},\dots,E^V_{i_k}\right)\varepsilon_V^I\).

Thus, whether we start with the tensor components or with the component functions, the resulting functions \(\alpha^i\) coincide.

We may now state the definition of a vector-valued exterior form. A vector-valued exterior form is equivalently any of the following:

  • An element of \(\Lambda^k(V^*) \otimes W\); each such element is expressible as \(\alpha^i \otimes E^W_i\) for \(\alpha^i \in \Lambda^k(V^*) \).
  • An element of \(\mathrm{Alt}^k(V, W)\); each such element is expressible as \(\alpha^iE^W_i\) for \(\alpha^i \in \Lambda^k(V^*) \).

Thus, a vector-valued exterior form is just a collection of \(n\) scalar-valued exterior forms, arranged into a "linear combination" with a given selection of basis vectors.

The Exterior Algebra

We will only briefly mention the related algebraic constructions.

The space \(\Lambda^k(V^*)\) is called the \(\mathbf{k}\)-th exterior power of \(V^{*}\). We have only given a concrete description of this space, in terms of alternating multilinear maps. It is possible to define the \(k\)-th exterior power of an arbitrary vector space using the same techniques used to define abstract tensor product spaces; we will not describe the abstract construction in this post. The exterior powers can be combined into a single vector space \(\Lambda(V^*)\), called the exterior algebra of \(V^*\):

\[\Lambda(V^*) = \bigoplus_{k=0}^n \Lambda^k(V^*).\]
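Since \(\mathrm{dim}\,\Lambda^k(V^*) = \binom{n}{k}\), where \(n = \mathrm{dim}\,V\), the exterior algebra has total dimension \(2^n\). A one-line check (with \(n\) chosen arbitrarily for illustration):

```python
from math import comb

n = 5  # dim V, chosen arbitrarily for illustration
dims = [comb(n, k) for k in range(n + 1)]  # dim of the k-th exterior power
assert sum(dims) == 2 ** n  # total dimension of the exterior algebra
```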

Equipped with the wedge product, this space is moreover an associative algebra; indeed, it is a graded, anticommutative algebra.