The stokes package
To cite the stokes package in publications please use Hankin (2022c). Ordinary differential calculus may be formalized and generalized to arbitrary-dimensional oriented manifolds using the exterior calculus. Here I show how the stokes package furnishes functionality for working with the exterior calculus, and provide numerical verification of a number of theorems. Notation follows that of Spivak (1965) and Hubbard and Hubbard (2015).
Recall that a \(k\)-tensor is a multilinear map \(S\colon V^k\longrightarrow\mathbb{R}\), where \(V=\mathbb{R}^n\) is considered as a vector space; Spivak denotes the space of multilinear maps as \(\mathcal{J}^k(V)\). Formally, multilinearity means
\[ S{\left(v_1,\ldots,av_i,\ldots,v_k\right)} = a\cdot S{\left(v_1,\ldots,v_i,\ldots,v_k\right)} \]
and
\[ S{\left(v_1,\ldots,v_i+{v_i}',\ldots,v_k\right)}=S{\left(v_1,\ldots,v_i,\ldots,v_k\right)}+ S{\left(v_1,\ldots,{v_i}',\ldots,v_k\right)}, \]
where \(v_i\in V\). If \(S\in\mathcal{J}^k(V)\) and \(T\in\mathcal{J}^l(V)\), then we may define \(S\otimes T\in\mathcal{J}^{k+l}(V)\) as
\[ S\otimes T{\left(v_1,\ldots,v_k,v_{k+1},\ldots,v_{k+l}\right)}= S{\left(v_1,\ldots,v_k\right)}\cdot T{\left(v_{k+1},\ldots,v_{k+l}\right)}. \]
Spivak observes that \(\mathcal{J}^k(V)\) is spanned by the \(n^k\) products of the form
\[ \phi_{i_1}\otimes\phi_{i_2}\otimes\cdots\otimes\phi_{i_k}\qquad 1\leq i_1,i_2,\ldots,i_k\leq n \]
where \(v_1,\ldots,v_n\) is a basis for \(V\) and \(\phi_i{\left(v_j\right)}=\delta_{ij}\); we can therefore write
\[ S=\sum_{1\leq i_1,\ldots,i_k\leq n} a_{i_1\ldots i_k} \phi_{i_1}\otimes\cdots\otimes\phi_{i_k}. \]
The space spanned by such products has a natural representation in R
as an array of dimensions \(n\times\cdots\times n=n^k\). If
A
is such an array, then the element
A[i_1,i_2,...,i_k]
is the coefficient of \(\phi_{i_1}\otimes\ldots\otimes\phi_{i_k}\).
However, it is more efficient and conceptually cleaner to consider a
sparse array, as implemented by the spray
package.
We will consider the case \(n=5,k=4\),
so we have multilinear maps from \(\left(\mathbb{R}^5\right)^4\) to \(\mathbb{R}\). Below, we will test algebraic
identities in R using the idiom furnished by the stokes package. For our
example we will define \(S=1.5\phi_5\otimes\phi_1\otimes\phi_1\otimes\phi_1+2.5\phi_1\otimes\phi_1\otimes\phi_2\otimes\phi_3+3.5\phi_1\otimes\phi_3\otimes\phi_4\otimes\phi_2\)
using a matrix with three rows, one per term, and whose rows correspond
to each term’s tensor products of the \(\phi\)’s. We first have to load the
stokes
package:
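The call itself is not shown in this rendering; loading the package is simply:
library("stokes")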
Then the idiom is straightforward:
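The defining chunk is not shown here; a reconstruction consistent with the output below, using the as.ktensor() constructor, would be:
n <- 5
k <- 4
M <- matrix(c(5,1,1,1, 1,1,2,3, 1,3,4,2), 3, 4, byrow = TRUE)
M
S <- as.ktensor(M, c(1.5, 2.5, 3.5))
S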
## [,1] [,2] [,3] [,4]
## [1,] 5 1 1 1
## [2,] 1 1 2 3
## [3,] 1 3 4 2
## A linear map from V^4 to R with V=R^5:
## val
## 1 3 4 2 = 3.5
## 1 1 2 3 = 2.5
## 5 1 1 1 = 1.5
Observe that, if stored as an array of size \(n^k\), \(S\) would have \(5^4=625\) elements, all but three of which are zero. The print method confirms that \(S\) is a 4-tensor, mapping \(V^4\) to \(\mathbb{R}\) with \(V=\mathbb{R}^5\), and that it equals the \(S\) defined above.
Note that in some implementations the row order of object S will differ from that of M; this is because the underlying C++ implementation uses the STL map class. See the disordR package (Hankin 2022a); the issue is discussed in more detail in the mvp package (Hankin 2022b).
First, we will define \(E\) to be a random point in \(V^k\) in terms of a matrix:
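The generating chunk is not shown; it would be along the following lines (the random values match the output below only if the original seed is used):
E <- matrix(rnorm(n*k), n, k)
E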
## [,1] [,2] [,3] [,4]
## [1,] 1.2629543 -1.539950042 0.7635935 -0.4115108
## [2,] -0.3262334 -0.928567035 -0.7990092 0.2522234
## [3,] 1.3297993 -0.294720447 -1.1476570 -0.8919211
## [4,] 1.2724293 -0.005767173 -0.2894616 0.4356833
## [5,] 0.4146414 2.404653389 -0.2992151 -1.2375384
Recall that \(n=5\), \(k=4\), so \(E\in\left(\mathbb{R}^5\right)^4\). We can evaluate \(S\) at \(E\) as follows:
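Coercing S to a function and applying it to E:
as.function(S)(E)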
## [1] -3.068997
Tensors have a natural vector space structure; they may be added and subtracted, and multiplied by a scalar, just as in any other vector space. Below, we define a new tensor \(S_1\) and work with \(2S-3S_1\):
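The definition of S1 is not shown; one definition consistent with the output below is:
S1 <- as.ktensor(rbind(c(2,1,1,1), c(1,2,1,1), c(1,1,2,1), c(1,1,1,2)), 1:4)
2*S - 3*S1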
## A linear map from V^4 to R with V=R^5:
## val
## 1 3 4 2 = 7
## 1 1 2 3 = 5
## 5 1 1 1 = 3
## 1 1 1 2 = -12
## 1 1 2 1 = -9
## 1 2 1 1 = -6
## 2 1 1 1 = -3
We may verify that evaluation is compatible with these vector space operations, using package idiom:
LHS <- as.function(2*S-3*S1)(E)
RHS <- 2*as.function(S)(E) -3*as.function(S1)(E)
c(lhs=LHS,rhs=RHS,diff=LHS-RHS)
## lhs rhs diff
## 2.374816e+00 2.374816e+00 -4.440892e-16
(that is, identical up to numerical precision).
Testing multilinearity is straightforward in the package. To do this,
we need to define three matrices E1,E2,E3
corresponding to
points in \(\left(\mathbb{R}^5\right)^4\) which are
identical except for one column. In E3, this column is a linear combination of the corresponding columns of E1 and E2:
E1 <- E
E2 <- E
E3 <- E
x1 <- rnorm(n)
x2 <- rnorm(n)
r1 <- rnorm(1)
r2 <- rnorm(1)
E1[,2] <- x1
E2[,2] <- x2
E3[,2] <- r1*x1 + r2*x2
Then we can verify the multilinearity of \(S\) by coercing to a function which is
applied to E1, E2, E3
:
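One idiom consistent with the output below (and with the use of f further down) coerces S to a function and compares the two sides directly:
f <- as.function(S)
LHS <- r1*f(E1) + r2*f(E2)
RHS <- f(E3)
c(lhs=LHS, rhs=RHS, diff=LHS-RHS)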
## lhs rhs diff
## -0.5640577 -0.5640577 0.0000000
(that is, identical up to numerical precision). Note that multilinearity is not the same as linearity over \(\mathbb{R}^{n\times k}\):
E1 <- matrix(rnorm(n*k),n,k)
E2 <- matrix(rnorm(n*k),n,k)
LHS <- f(r1*E1+r2*E2)
RHS <- r1*f(E1)+r2*f(E2)
c(lhs=LHS,rhs=RHS,diff=LHS-RHS)
## lhs rhs diff
## 0.1731245 0.3074186 -0.1342941
Given a \(k\)-tensor \(S\) and an \(l\)-tensor \(T\), we can form the tensor product \(S\otimes T\), defined as
\[ S\otimes T{\left(v_1,\ldots,v_k,v_{k+1},\ldots, v_{k+l}\right)}= S{\left(v_1,\ldots v_k\right)} \cdot T{\left(v_{k+1},\ldots v_{k+l}\right)} \]
We will calculate the tensor product of two tensors
S1,S2
defined as follows:
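Definitions consistent with the output below would be:
S1 <- as.ktensor(cbind(1:3, 2:4), 1:3)
S2 <- as.ktensor(matrix(c(1,3,5, 2,4,6), 2, 3, byrow = TRUE), c(1,1))
S1
S2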
## A linear map from V^2 to R with V=R^4:
## val
## 3 4 = 3
## 2 3 = 2
## 1 2 = 1
## A linear map from V^3 to R with V=R^6:
## val
## 2 4 6 = 1
## 1 3 5 = 1
The R idiom for \(S1\otimes S2\)
would be tensorprod()
, or %X%
:
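Using the function form:
tensorprod(S1, S2)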
## A linear map from V^5 to R with V=R^6:
## val
## 1 2 1 3 5 = 1
## 3 4 1 3 5 = 3
## 1 2 2 4 6 = 1
## 2 3 2 4 6 = 2
## 2 3 1 3 5 = 2
## 3 4 2 4 6 = 3
Then, for example:
E <- matrix(rnorm(30),6,5)
LHS <- as.function(tensorprod(S1,S2))(E)
RHS <- as.function(S1)(E[,1:2]) * as.function(S2)(E[,3:5])
c(lhs=LHS,rhs=RHS,diff=LHS-RHS)
## lhs rhs diff
## -1.048329 -1.048329 0.000000
(that is, identical up to numerical precision).
An alternating form is a multilinear map \(T\) satisfying
\[ T{\left(v_1,\ldots,v_i,\ldots,v_j,\ldots,v_k\right)}= -T{\left(v_1,\ldots,v_j,\ldots,v_i,\ldots,v_k\right)} \]
(or, equivalently, \(T{\left(v_1,\ldots,v_i,\ldots,v_i,\ldots,v_k\right)}= 0\)). We write \(\Lambda^k(V)\) for the space of all alternating multilinear maps from \(V^k\) to \(\mathbb{R}\). Spivak gives \(\operatorname{Alt}\colon\mathcal{J}^k(V)\longrightarrow\Lambda^k(V)\) defined by
\[\operatorname{Alt}(T)\left(v_1,\ldots,v_k\right)= \frac{1}{k!}\sum_{\sigma\in S_k}\operatorname{sgn}(\sigma)\cdot T{\left(v_{\sigma(1)},\ldots,v_{\sigma(k)}\right)} \]
where the sum ranges over all permutations of \(\left[k\right]=\left\{1,2,\ldots,k\right\}\) and \(\operatorname{sgn}(\sigma)\in\{-1,+1\}\) is the sign of the permutation. If \(T\in\mathcal{J}^k(V)\) and \(\omega\in\Lambda^k(V)\), it is straightforward to prove that \(\operatorname{Alt}(T)\in\Lambda^k(V)\), \(\operatorname{Alt}\left(\operatorname{Alt}\left(T\right)\right)=\operatorname{Alt}\left(T\right)\), and \(\operatorname{Alt}\left(\omega\right)=\omega\).
In the stokes package, this is effected by the Alt()
function:
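Reconstructing the chunk that produced the output below, we print S1 and then its alternating part:
S1
Alt(S1)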
## A linear map from V^2 to R with V=R^4:
## val
## 3 4 = 3
## 2 3 = 2
## 1 2 = 1
## A linear map from V^2 to R with V=R^4:
## val
## 4 3 = -1.5
## 3 2 = -1.0
## 2 3 = 1.0
## 3 4 = 1.5
## 2 1 = -0.5
## 1 2 = 0.5
Verifying that S1
is in fact alternating is
straightforward:
E <- matrix(rnorm(8),4,2)
Erev <- E[,2:1]
as.function(Alt(S1))(E) + as.function(Alt(S1))(Erev) # should be zero
## [1] 0
However, we can see that this form for alternating tensors (here
called \(k\)-forms) is inefficient and
highly redundant: in this example there is a 1 2
term and a
2 1
term (the coefficients are equal and opposite). In this
example we have \(k=2\) but in general
there would be potentially \(k!\)
essentially repeated terms which collectively require only a single
coefficient. The package provides kform
objects which are
inherently alternating using a more efficient representation; they are
described using wedge products which are discussed next.
This section follows the exposition of Hubbard and Hubbard, who introduce the exterior calculus starting with a discussion of elementary forms, which are alternating forms with a particularly simple structure. An example of an elementary form would be \(\mathrm{d}x_1\wedge\mathrm{d}x_3\) [treated as an indivisible entity], which is an alternating multilinear map from \(\mathbb{R}^n\times\mathbb{R}^n\) to \(\mathbb{R}\) with
\[ \left( \mathrm{d}x_1\wedge\mathrm{d}x_3 \right)\left( \begin{pmatrix}a_1\\a_2\\a_3\\ \vdots\\ a_n\end{pmatrix}, \begin{pmatrix}b_1\\b_2\\b_3\\ \vdots\\ b_n\end{pmatrix} \right)=\mathrm{det} \begin{pmatrix} a_1 & b_1 \\ a_3 & b_3\end{pmatrix} =a_1b_3-a_3b_1 \]
That this is alternating follows from the properties of the determinant. In general of course, \(\mathrm{d}x_i\wedge\mathrm{d}x_j\left( \begin{pmatrix}a_1\\ \vdots\\ a_n\end{pmatrix}, \begin{pmatrix}b_1\\ \vdots\\ b_n\end{pmatrix} \right)=\mathrm{det} \begin{pmatrix} a_i & b_i \\ a_j & b_j\end{pmatrix}\). Because such objects are linear, it is possible to consider sums of elementary forms, such as \(\mathrm{d}x_1\wedge\mathrm{d}x_2 + 3 \mathrm{d}x_2\wedge\mathrm{d}x_3\) with
\[ \left( \mathrm{d}x_1\wedge\mathrm{d}x_2 + 3\mathrm{d}x_2\wedge\mathrm{d}x_3 \right)\left( \begin{pmatrix}a_1\\a_2\\ \vdots\\ a_n\end{pmatrix}, \begin{pmatrix}b_1\\b_2\\ \vdots\\ b_n\end{pmatrix} \right)=\mathrm{det} \begin{pmatrix} a_1 & b_1\\ a_2 & b_2\end{pmatrix} +3\mathrm{det} \begin{pmatrix} a_2 & b_2\\ a_3 & b_3\end{pmatrix} \]
or even \(K=\mathrm{d}x_4\wedge\mathrm{d}x_2\wedge\mathrm{d}x_3 +5\mathrm{d}x_1\wedge\mathrm{d}x_4\wedge\mathrm{d}x_2\), which would be a linear map from \(\left(\mathbb{R}^n\right)^3\) to \(\mathbb{R}\) with
\[ \left( \mathrm{d}x_4\wedge\mathrm{d}x_2\wedge\mathrm{d}x_3 +5\mathrm{d}x_1\wedge\mathrm{d}x_4\wedge\mathrm{d}x_2 \right)\left( \begin{pmatrix}a_1\\a_2\\ \vdots\\ a_n\end{pmatrix}, \begin{pmatrix}b_1\\b_2\\ \vdots\\ b_n\end{pmatrix}, \begin{pmatrix}c_1\\c_2\\ \vdots\\ c_n\end{pmatrix} \right)=\mathrm{det} \begin{pmatrix} a_4 & b_4 & c_4\\ a_2 & b_2 & c_2\\ a_3 & b_3 & c_3 \end{pmatrix} +5\mathrm{det} \begin{pmatrix} a_1 & b_1 & c_1\\ a_4 & b_4 & c_4\\ a_2 & b_2 & c_2 \end{pmatrix}. \]
Defining \(K\) has ready R idiom in which we define a matrix whose rows correspond to the differentials in each term:
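A reconstruction consistent with the output below:
M <- matrix(c(4,2,3, 1,4,2), 2, 3, byrow = TRUE)
M
K <- as.kform(M, c(1,5))
K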
## [,1] [,2] [,3]
## [1,] 4 2 3
## [2,] 1 4 2
## An alternating linear map from V^3 to R with V=R^4:
## val
## 1 2 4 = -5
## 2 3 4 = 1
Function as.kform()
takes each row of M
and
places the elements in increasing order; the coefficient will change
sign if the permutation is odd. Note that the order of the rows in
K
is immaterial and indeed in some implementations will
appear in a different order: the stokes package uses the
spray
package, which in turn utilises the STL map class of
C++.
In the previous section we defined objects such as “\(\mathrm{d}x_1\wedge\mathrm{d}x_6\)” as a single entity. Here I define the elementary form \(\mathrm{d}x_i\) formally and in the next section discuss the wedge product \(\wedge\). The elementary form \(\mathrm{d}x_i\) is simply a map from \(\mathbb{R}^n\) to \(\mathbb{R}\) with \(\mathrm{d}x_i{\left(x_1,x_2,\ldots,x_n\right)}=x_i\). Observe that \(\mathrm{d}x_i\) is an alternating form, even though we cannot swap arguments (because there is only one). Package idiom for creating an elementary form appears somewhat cryptic at first sight, but is consistent (it is easier to understand package idiom for creating more complicated alternating forms, as in the next section). Suppose we wish to work with \(\mathrm{d}x_3\):
dx3 <- as.kform(matrix(3,1,1),1)
options(kform_symbolic_print = NULL) # revert to default print method
dx3
## An alternating linear map from V^1 to R with V=R^3:
## val
## 3 = 1
Interpretation of the output above is not obvious (it is easier to understand the output from more complicated alternating forms, as in the next section), but for the moment observe that \(\mathrm{d}x_3\) is indeed an alternating form, mapping \(\mathbb{R}^n\) to \(\mathbb{R}\) with \(\mathrm{d}x_3{\left(x_1,x_2,\ldots,x_n\right)}=x_3\). Thus, for example:
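A minimal sketch, assuming an evaluation point whose third component is 16 (chosen to be consistent with the output below):
p <- matrix(c(14, 15, 16), 3, 1)   # hypothetical evaluation point
as.function(dx3)(p)                # picks out the third element
p[3, 1]                            # direct indexing agrees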
## [1] 16
## [1] 16
and we see that \(\mathrm{d}x_3\) picks out the third element of a vector. These are linear in the sense that we may add and subtract these elementary forms:
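One hypothetical combination consistent with the output below (the original chunk is not shown), with dx1 and dx2 defined analogously to dx3:
dx1 <- as.kform(matrix(1,1,1), 1)
dx2 <- as.kform(matrix(2,1,1), 1)
as.function(dx1 + dx2 - dx3)(p)    # 14 + 15 - 16 = 13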
## [1] 13
The wedge product maps two alternating forms to another alternating form; given \(\omega\in\Lambda^k(V)\) and \(\eta\in\Lambda^l(V)\), Spivak defines the wedge product \(\omega\wedge\eta\in\Lambda^{k+l}(V)\) as
\[ \omega\wedge\eta={k+l\choose k\quad l}\operatorname{Alt}(\omega\otimes\eta) \]
and this is implemented in the package by function
wedge()
, or, more idiomatically, ^
:
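K1 and K2 are defined in a chunk not shown here; definitions consistent with the output below are:
K1 <- as.kform(matrix(c(1,4,6, 3,4,5), 2, 3, byrow = TRUE), c(7,-2))
K2 <- as.kform(cbind(1:5, 3:7), 1:5)
K1
K2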
## An alternating linear map from V^3 to R with V=R^6:
## val
## 1 4 6 = 7
## 3 4 5 = -2
## An alternating linear map from V^2 to R with V=R^7:
## val
## 5 7 = 5
## 4 6 = 4
## 3 5 = 3
## 2 4 = 2
## 1 3 = 1
In symbolic notation, K1
is equal to \(7\mathrm{d}x_1\wedge\mathrm{d}x_4\wedge\mathrm{d}x_6
-2\mathrm{d}x_3\wedge\mathrm{d}x_4\wedge\mathrm{d}x_5\), and
K2
is \(\mathrm{d}x_1\wedge\mathrm{d}x_3+
2\mathrm{d}x_2\wedge\mathrm{d}x_4+ 3\mathrm{d}x_3\wedge\mathrm{d}x_5+
4\mathrm{d}x_4\wedge\mathrm{d}x_6+ 5\mathrm{d}x_5\wedge
\mathrm{d}x_7\). Package idiom for wedge products is
straightforward:
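That is:
K1 ^ K2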
## An alternating linear map from V^5 to R with V=R^7:
## val
## 1 3 4 5 6 = -21
## 1 4 5 6 7 = -35
(we might write the product as \(-35\mathrm{d}x_1\wedge\mathrm{d}x_4\wedge\mathrm{d}x_5\wedge \mathrm{d}x_6\wedge\mathrm{d}x_7 -21\mathrm{d}x_1\wedge\mathrm{d}x_3\wedge\mathrm{d}x_4\wedge\mathrm{d}x_5\wedge\mathrm{d}x_6\)). See how the wedge product eliminates rows with repeated entries, gathers permuted rows together (respecting the sign of the permutation), and expresses the result in terms of elementary forms. The product is a linear combination of two elementary forms; note that only two coefficients out of a possible \({7\choose 5}=21\) are nonzero. Note again that the order of the rows in the product is arbitrary.
The wedge product has formal properties such as distributivity but by far the most interesting one is associativity, which I will demonstrate below:
F1 <- as.kform(matrix(c(3,4,5, 4,6,1,3,2,1),3,3,byrow=TRUE))
F2 <- as.kform(cbind(1:6,3:8),1:6)
F3 <- kform_general(1:8,2)
(F1 ^ F2) ^ F3
## An alternating linear map from V^7 to R with V=R^8:
## val
## 1 2 4 5 6 7 8 = -5
## 2 3 4 5 6 7 8 = 6
## 1 2 3 5 6 7 8 = 11
## 1 2 3 4 5 6 8 = 1
## 1 2 3 4 5 7 8 = -5
## 1 2 3 4 6 7 8 = 2
## 1 2 3 4 5 6 7 = 1
## 1 3 4 5 6 7 8 = -2
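And for the other grouping:
F1 ^ (F2 ^ F3)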
## An alternating linear map from V^7 to R with V=R^8:
## val
## 2 3 4 5 6 7 8 = 6
## 1 2 4 5 6 7 8 = -5
## 1 2 3 4 5 7 8 = -5
## 1 2 3 4 5 6 8 = 1
## 1 2 3 4 6 7 8 = 2
## 1 2 3 4 5 6 7 = 1
## 1 3 4 5 6 7 8 = -2
## 1 2 3 5 6 7 8 = 11
Note carefully in the above that the terms in
(F1 ^ F2) ^ F3
and F1 ^ (F2 ^ F3)
appear in a
different order. They are nevertheless algebraically identical, as we
may demonstrate by calculating their difference:
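Their difference:
(F1 ^ F2) ^ F3 - F1 ^ (F2 ^ F3)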
## The zero alternating linear map from V^7 to R with V=R^n:
## empty sparse array with 7 columns
Spivak observes that \(\Lambda^k(V)\) is spanned by the \(n\choose k\) wedge products of the form
\[ \mathrm{d}x_{i_1}\wedge\mathrm{d}x_{i_2}\wedge\cdots\wedge\mathrm{d}x_{i_k}\qquad 1\leq i_1<i_2<\cdots <i_k\leq n \]
where these products are the elementary forms (compare \(\mathcal{J}^k(V)\), which is spanned by the \(n^k\) tensor products \(\phi_{i_1}\otimes\cdots\otimes\phi_{i_k}\)). Thus every element of the space \(\Lambda^k(V)\) is a linear combination of elementary forms, as illustrated in the package by function
kform_general()
. Consider the following idiom:
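A call consistent with the output below (the exact assignment of the coefficients 1:6 to the elementary forms is determined internally by the package):
Krel <- kform_general(1:4, 2, 1:6)
Krel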
## An alternating linear map from V^2 to R with V=R^4:
## val
## 3 4 = 6
## 2 4 = 5
## 1 4 = 4
## 2 3 = 3
## 1 3 = 2
## 1 2 = 1
Object Krel
is a two-form, specifically a map from \(\left(\mathbb{R}^4\right)^2\) to \(\mathbb{R}\). Observe that
Krel
has \({4\choose
2}=6\) components, which do not appear in any particular order.
Addition of such \(k\)-forms is
straightforward in R idiom but algebraically nontrivial:
K1 <- as.kform(matrix(1:4,2,2),c(1,109))
K2 <- as.kform(matrix(c(1,3,7,8,2,4),ncol=2,byrow=TRUE),c(-1,5,4))
K1
## An alternating linear map from V^2 to R with V=R^4:
## val
## 2 4 = 109
## 1 3 = 1
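Similarly, K2:
K2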
## An alternating linear map from V^2 to R with V=R^8:
## val
## 2 4 = 4
## 7 8 = 5
## 1 3 = -1
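Their sum is then:
K1 + K2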
## An alternating linear map from V^2 to R with V=R^8:
## val
## 2 4 = 113
## 7 8 = 5
In the above, note how the \(\mathrm{d}x_2\wedge\mathrm{d}x_4\) terms
combine [to give 2 4 = 113
] and the \(\mathrm{d}x_1\wedge\mathrm{d}x_3\) term
vanishes by cancellation.
Although the spray form used above is probably the most direct and natural representation of differential forms in numerical work, sometimes we need a more algebraic print method.
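Consider, for example, a 2-tensor U; a definition consistent with the output below (the original chunk is not shown) is:
U <- as.ktensor(cbind(1:4, 2:5), 1:4)
U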
## A linear map from V^2 to R with V=R^5:
## val
## 4 5 = 4
## 3 4 = 3
## 2 3 = 2
## 1 2 = 1
We can represent U more algebraically using the
as.symbolic()
function:
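Thus:
as.symbolic(U)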
## [1] +4 d*e +3 c*d +2 b*c + a*b
In the above, U
is a multilinear map from \(\left(\mathbb{R}^5\right)^2\) to \(\mathbb{R}\). Symbolically, a
represents the map that takes \((a,b,c,d,e)\) to \(a\), b
the map that takes
\((a,b,c,d,e)\) to b
, and
so on. The asterisk *
represents the tensor product \(\otimes\). Alternating forms work similarly
but \(k\)-forms have different
defaults:
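A sketch consistent with the output below; the arguments d and symbols, and the choice of the letters w to z, are assumptions made here to reproduce the printed labels:
K <- kform_general(1:3, 2, 1:3)
K
as.symbolic(K, d = "d", symbols = letters[23:26])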
## An alternating linear map from V^2 to R with V=R^3:
## val
## 2 3 = 3
## 1 3 = 2
## 1 2 = 1
## [1] +3 dx^dy +2 dw^dy + dw^dx
Note that the wedge product \(\wedge\), although implemented in package
idiom as ^
or %^%
, appears in the symbolic
representation as an ASCII caret, ^
.
We can alter the default print method with the
kform_symbolic_print
option, which uses
as.symbolic()
:
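A sketch; the option value "d" used here is an assumption, chosen to give the dx1, dx2, ... style shown below:
options(kform_symbolic_print = "d")
K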
## An alternating linear map from V^2 to R with V=R^3:
## +3 dx2^dx3 +2 dx1^dx3 + dx1^dx2
This print option works nicely with the d()
function for
elementary forms:
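One combination of elementary forms consistent with the output below (the original chunk is not shown) would be:
(d(1) + d(5)) ^ (d(3) - 5*d(2)) ^ d(7)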
## An alternating linear map from V^3 to R with V=R^7:
## - dx3^dx5^dx7 + dx1^dx3^dx7 +5 dx2^dx5^dx7 -5 dx1^dx2^dx7
Given a \(k\)-form \(\phi\colon V^k\longrightarrow\mathbb{R}\) and a vector \(\mathbf{v}\in V\), the contraction \(\phi_\mathbf{v}\) of \(\phi\) and \(\mathbf{v}\) is a \((k-1)\)-form with
\[ \phi_\mathbf{v}{\left(\mathbf{v}^1,\ldots,\mathbf{v}^{k-1}\right)} = \phi{\left(\mathbf{v},\mathbf{v}^1,\ldots,\mathbf{v}^{k-1}\right)} \]
if \(k>1\); we specify \(\phi_\mathbf{v}=\phi(\mathbf{v})\) if \(k=1\). Verification is straightforward:
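The 3-form o used below is generated in a chunk not shown here; it can be reconstructed directly from its printed representation, first reverting to the default print method:
options(kform_symbolic_print = NULL)   # assumed reset to the default print method
o <- as.kform(rbind(c(2,4,6), c(4,6,7), c(5,6,7), c(1,3,7),
                    c(1,5,7), c(1,4,6), c(2,3,7), c(1,2,4)),
              c(-6, 5, 4, 7, -12, 8, -2, 1))
o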
## An alternating linear map from V^3 to R with V=R^7:
## val
## 2 4 6 = -6
## 4 6 7 = 5
## 5 6 7 = 4
## 1 3 7 = 7
## 1 5 7 = -12
## 1 4 6 = 8
## 2 3 7 = -2
## 1 2 4 = 1
V <- matrix(runif(21),ncol=3)
LHS <- as.function(o)(V)
RHS <- as.function(contract(o,V[,1]))(V[,-1])
c(LHS=LHS,RHS=RHS,diff=LHS-RHS)
## LHS RHS diff
## 4.512547e-01 4.512547e-01 -5.551115e-17
It is possible to iterate the contraction process; if we pass a
matrix \(V\) to contract()
then this is interpreted as repeated contraction with the columns of
\(V\):
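One reading consistent with the output below: contract with the first two columns, then evaluate the resulting 1-form on the third:
as.function(contract(o, V[,1:2]))(V[,3,drop=FALSE])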
## [1] 0.4512547
If we pass three columns to contract()
the result is a
\(0\)-form:
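That is, passing the whole of V:
contract(o, V)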
## [1] 0.4512547
In the above, the result is coerced to a scalar; in order to work
with a formal \(0\)-form (which is
represented in the package as a spray
with a zero-column
index matrix) we can use the lose=FALSE
argument:
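Thus:
contract(o, V, lose = FALSE)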
## An alternating linear map from V^0 to R with V=R^0:
## val
## = 0.4512547
Suppose we are given a two-form \(\omega=\sum_{i<j}a_{ij}dx_i\wedge dx_j\) and relationships \(dx_i=\sum_rM_{ir}dy_r\); then we have
\[ \omega = \sum_{i<j} a_{ij}\left(\sum_rM_{ir}dy_r\right)\wedge\left(\sum_rM_{jr}dy_r\right). \]
The general situation would be a \(k\)-form where we would have
\[ \omega=\sum_{i_1<\cdots<i_k}a_{i_1\ldots i_k}dx_{i_1}\wedge\cdots\wedge dx_{i_k} \]
giving
\[\omega = \sum_{i_1<\cdots <i_k}\left[ a_{i_1\ldots i_k}\left(\sum_rM_{i_1r}dy_r\right)\wedge\cdots\wedge\left(\sum_rM_{i_kr}dy_r\right)\right]. \]
So \(\omega\) was given in terms of \(dx_1,\ldots,dx_n\) and we have expressed it in terms of \(dy_1,\ldots,dy_n\). For example, if
\[ \omega= dx_1\wedge dx_2 + 5dx_1\wedge dx_3\]
and
\[ \left( \begin{array}{l} dx_1\\ dx_2\\ dx_3 \end{array} \right)= \left( \begin{array}{ccc} 1 & 4 & 7\\ 2 & 5 & 8\\ 3 & 6 & 9\\ \end{array} \right) \left( \begin{array}{l} dy_1\\ dy_2\\ dy_3 \end{array} \right) \]
then
\[ \begin{array}{ccl} \omega &=& \left(1dy_1+4dy_2+7dy_3\right)\wedge \left(2dy_1+5dy_2+8dy_3\right)+ 5\left(1dy_1+4dy_2+7dy_3\right)\wedge \left(3dy_1+6dy_2+9dy_3\right) \\ &=& 2dy_1\wedge dy_1+5dy_1\wedge dy_2+\cdots+ 5\cdot 7\cdot 6\,dy_3\wedge dy_2+ 5\cdot 7\cdot 9\,dy_3\wedge dy_3 \\ &=& -33dy_1\wedge dy_2-66dy_1\wedge dy_3-33dy_2\wedge dy_3 \end{array} \]
The pullback() function does all this:
options(kform_symbolic_print = "dx") # uses dx etc in print method
pullback(dx^dy+5*dx^dz, matrix(1:9,3,3))
## An alternating linear map from V^2 to R with V=R^3:
## -33 dx^dy -66 dx^dz -33 dy^dz
However, it is slow and I am not 100% sure that there isn’t a much
more efficient way to do such a transformation. There are a few tests in
tests/testthat
. Here I show that transformations may be
inverted using matrix inverses:
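We start with a simple 3-form, reconstructed here from the output below:
options(kform_symbolic_print = NULL)   # assumed reset to the default print method
o <- as.kform(matrix(c(2,4,5), 1, 3), 2)
o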
## An alternating linear map from V^3 to R with V=R^5:
## val
## 2 4 5 = 2
Then we will transform according to matrix M
and then
transform according to the matrix inverse; the functionality works
nicely with magrittr pipes:
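A sketch, assuming a random 5-by-5 matrix M (its values are not shown, so the small residuals below will differ):
library("magrittr")                # assumed attached for the pipe
M <- matrix(rnorm(25), 5, 5)       # hypothetical random transformation matrix
o %>% pullback(M) %>% pullback(solve(M))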
## An alternating linear map from V^3 to R with V=R^5:
## val
## 3 4 5 = 0
## 1 2 3 = 0
## 2 3 5 = 0
## 2 3 4 = 0
## 2 4 5 = 2
## 1 2 5 = 0
## 1 3 4 = 0
## 1 2 4 = 0
## 1 3 5 = 0
## 1 4 5 = 0
Above we see many rows with values small enough for the print method
to print an exact zero, but not sufficiently small to be eliminated by
the spray
internals. We can remove the small entries with
zap()
:
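That is:
o %>% pullback(M) %>% pullback(solve(M)) %>% zap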
## val
## 2 4 5 = 2
See how the result is equal to the original \(k\)-form \(2dy_2\wedge dy_4\wedge dy_5\).
Given a \(k\)-form \(\omega\), Spivak defines the differential of \(\omega\) to be a \((k+1)\)-form \(\mathrm{d}\omega\) as follows. If
\[ \omega = \sum_{ i_1 < i_2 <\cdots<i_k} \omega_{i_1i_2\ldots i_k} \mathrm{d}x^{i_1}\wedge \mathrm{d}x^{i_2}\wedge\cdots\wedge\mathrm{d}x^{i_k} \]
then
\[ \mathrm{d}\omega = \sum_{ i_1 < i_2 <\cdots<i_k} \sum_{\alpha=1}^n D_\alpha\left(\omega_{i_1i_2\ldots i_k}\right) \cdot \mathrm{d}x^{\alpha}\wedge\mathrm{d}x^{i_1}\wedge \mathrm{d}x^{i_2}\wedge\cdots\wedge\mathrm{d}x^{i_k} \]
where \(D_if(a)=\lim_{h\longrightarrow 0}\frac{f(a^1,\ldots,a^i+h,\ldots,a^n)-f(a^1,\ldots,a^i,\ldots,a^n)}{h}\) is the ordinary \(i^\mathrm{th}\) partial derivative (Spivak, p25). Hubbard and Hubbard take a conceptually distinct approach and define the exterior derivative \(d\phi\) (they use a bold font, \(\mathbf{d}\phi\)) of the \(k\)-form \(\phi\) as the \((k+1)\)-form given by
\[ {d}\phi \left({v}_1,\ldots,{v}_{k+1}\right) = \lim_{h\longrightarrow 0}\frac{1}{h^{k+1}}\int_{\partial P_{x}\left(h{v}_1,\ldots,h{v}_{k+1}\right)}\phi \]
which, by their own account, is a rather opaque mathematical idiom. However, the definition makes sense and it is consistent with Spivak’s definition above. The definition allows one to express the fundamental theorem of calculus in an arbitrary number of dimensions without modification.
It can be shown that
\[ \mathrm{d}{\left(f\,\mathrm{d}x_{i_1}\wedge\cdots\wedge\mathrm{d}x_{i_k}\right)}= \mathrm{d}f\wedge\mathrm{d}x_{i_1}\wedge\cdots\wedge\mathrm{d}x_{i_k} \]
where \(f\colon\mathbb{R}^n\longrightarrow\mathbb{R}\)
is a scalar function of position. The package provides
grad()
which, when given a vector \(x_1,\ldots,x_n\) returns the one-form
\[ \sum_{i=1}^n x_idx_i \]
This is useful because \(\mathrm{d}f=\sum_{j=1}^n\left(D_j f\right)\,\mathrm{d}x_j\). Thus
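For example, a call consistent with the output below:
grad(c(0.4, 0.1, -3.2, 1.5))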
## An alternating linear map from V^1 to R with V=R^4:
## val
## 4 = 1.5
## 3 = -3.2
## 2 = 0.1
## 1 = 0.4
We will use the grad()
function to verify that, in \(\mathbb{R}^n\), a certain \((n-1)\)-form has vanishing exterior derivative.
Motivated by the fact that
\[ F_3=\frac{1}{\left(x^2+y^2+z^2\right)^{3/2}} \begin{pmatrix}x\\y\\z\end{pmatrix} \]
is a divergenceless velocity field in \(\mathbb{R}^3\), H&H go on to define [page 548, equation 6.7.16]
\[ \omega_{n}=\frac{1}{\left(x_1^2+\ldots +x_n^2\right)^{n/2}}\sum_{i=1}^{n}(-1)^{i-1} x_i\mathrm{d}x_1\wedge\cdots\wedge\widehat{\mathrm{d}x_i}\wedge\cdots\wedge\mathrm{d}x_n \]
(where a hat indicates the absence of a term), and show analytically that \(\mathrm{d}\omega=0\). Here I show this using R idiom. The first thing is to define a function that implements the hat:
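The original chunk is not shown; a minimal sketch (the name hat() is an assumption), reusing the diag() idiom that appears later in phi(), is:
hat <- function(n){ as.kform(t(apply(diag(n) < 1, 2, which))) }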
So, for example:
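Using the hypothetical hat() above:
hat(5)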
## An alternating linear map from V^4 to R with V=R^5:
## val
## 1 2 3 4 = 1
## 1 2 3 5 = 1
## 1 2 4 5 = 1
## 1 3 4 5 = 1
## 2 3 4 5 = 1
Then we can use the grad()
function to calculate \(\mathrm{d}\omega\), using the quotient law
to express the derivatives analytically:
df <- function(x){
  n <- length(x)
  S <- sum(x^2)
  grad(rep(c(1,-1), length=n) * (S^(n/2) - n*x^2*S^(n/2-1)) / S^n)
}
Thus
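Evaluating at (1,2,3,4,5), consistent with the output below:
df(1:5)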
## An alternating linear map from V^1 to R with V=R^5:
## val
## 5 = -5.67e-05
## 4 = 2.03e-05
## 3 = 8.10e-06
## 2 = -2.84e-05
## 1 = 4.05e-05
Now we can use the wedge product of the two parts to show that the exterior derivative is zero:
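A sketch using the hypothetical hat() helper above; the coefficient vanishes analytically, and the tiny numerical residue is displayed as zero:
df(1:9) ^ hat(9)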
## An alternating linear map from V^9 to R with V=R^9:
## val
## 1 2 3 4 5 6 7 8 9 = 0
We can use the package to verify the celebrated fact that, for any
\(k\)-form \(\phi\), \(\mathrm{d}\left(\mathrm{d}\phi\right)=0\).
The first step is to define scalar functions
f1(), f2(), f3()
, all \(0\)-forms:
f1 <- function(w,x,y,z){x + y^3 + x*y*w*z}
f2 <- function(w,x,y,z){w^2*x*y*z + sin(w) + w+z}
f3 <- function(w,x,y,z){w*x*y*z + sin(x) + cos(w)}
Now we need to define elementary \(1\)-forms:
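Definitions consistent with the coefficients appearing below, so that dw, dx, dy, dz correspond to the coordinates w, x, y, z of \(\mathbb{R}^4\):
dw <- d(1)
dx <- d(2)
dy <- d(3)
dz <- d(4)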
I will demonstrate the theorem by defining a \(2\)-form which is the sum of three elementary two-forms, evaluated at a particular point in \(\mathbb{R}^4\):
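A direct (if less slick) definition, with each elementary 2-form weighted by the corresponding scalar function evaluated at (1,2,3,4):
phi <- f1(1,2,3,4)*dw^dx + f2(1,2,3,4)*dw^dy + f3(1,2,3,4)*dy^dz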
We could use slightly slicker R idiom by defining elementary forms
e1,e2,e3
and then defining phi
to be a linear
sum, weighted with \(0\)-forms given by
the (scalar) functions f1,f2,f3
:
e1 <- dw ^ dx
e2 <- dw ^ dy
e3 <- dy ^ dz
phi <-
(
+f1(1,2,3,4) ^ e1
+f2(1,2,3,4) ^ e2
+f3(1,2,3,4) ^ e3
)
phi
## An alternating linear map from V^2 to R with V=R^4:
## val
## 1 3 = 29.84147
## 1 2 = 53.00000
## 3 4 = 25.44960
Now to evaluate first derivatives of f1()
etc at point
\((1,2,3,4)\), using
Deriv()
from the Deriv package:
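A sketch of the chunk, assuming the Deriv package is attached (Deriv(f) returns a function giving the named gradient vector):
library("Deriv")
Df1 <- Deriv(f1)(1,2,3,4)
Df2 <- Deriv(f2)(1,2,3,4)
Df3 <- Deriv(f3)(1,2,3,4)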
So Df1
etc are numeric vectors of length 4, for
example:
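Printing the first of these:
Df1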
## w x y z
## 24 13 35 6
To calculate dphi
, or \(\mathrm{d}\phi\), we can use function
grad()
:
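One way to assemble this, consistent with the output below, is to wedge the gradient 1-forms with the corresponding elementary 2-forms:
dphi <- grad(Df1) ^ e1 + grad(Df2) ^ e2 + grad(Df3) ^ e3
dphi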
## An alternating linear map from V^3 to R with V=R^4:
## val
## 1 3 4 = 30.15853
## 1 2 3 = 23.00000
## 1 2 4 = 6.00000
## 2 3 4 = 11.58385
Now work on the differential of the differential. First evaluate the Hessians (4x4 numeric matrices) at the same point:
Hf1 <- matrix(Deriv(f1,nderiv=2)(1,2,3,4),4,4)
Hf2 <- matrix(Deriv(f2,nderiv=2)(1,2,3,4),4,4)
Hf3 <- matrix(Deriv(f3,nderiv=2)(1,2,3,4),4,4)
For example
## w x y z
## w 0 12 8 6
## x 12 0 4 3
## y 8 4 18 2
## z 6 3 2 0
(note the matrix is symmetric; also note carefully the nonzero diagonal term). But \(dd\phi\) is clearly zero as the Hessians are symmetrical:
ij <- expand.grid(seq_len(nrow(Hf1)),seq_len(ncol(Hf1)))
ddphi <- # should be zero
(
+as.kform(ij,c(Hf1))
+as.kform(ij,c(Hf2))
+as.kform(ij,c(Hf3))
)
ddphi
## The zero alternating linear map from V^2 to R with V=R^n:
## empty sparse array with 2 columns
as expected.
In its most general form, Stokes’s theorem states
\[ \int_{\partial X}\phi=\int_X\mathrm{d}\phi \]
where \(X\subset\mathbb{R}^n\) is a compact oriented \((k+1)\)-dimensional manifold with boundary \(\partial X\) and \(\phi\) is a \(k\)-form defined on a neighborhood of \(X\).
We will verify Stokes's theorem, following 6.9.5 of Hubbard and Hubbard, in which
\[ \phi= \left(x_1-x_2^2+x_3^3-\cdots\pm x_n^n\right) \left( \sum_{i=1}^n \mathrm{d}x_1\wedge\cdots\wedge\widehat{\mathrm{d}x_i}\wedge\cdots\wedge\mathrm{d}x_n \right) \]
(a hat indicates that a term is absent), and we wish to evaluate \(\int_{\partial C_a}\phi\) where \(C_a\) is the cube \(0\leq x_j\leq a, 1\leq j\leq n\). Stokes tells us that this is equal to \(\int_{C_a}\mathrm{d}\phi\), which is given by
\[ d\phi = \left( 1+2x_2+\cdots + nx_n^{n-1}\right) \mathrm{d}x_1\wedge\cdots\wedge\mathrm{d}x_n \]
and so the volume integral is just
\[ \sum_{j=1}^n \int_{x_1=0}^a \int_{x_2=0}^a \cdots \int_{x_n=0}^a jx_j^{j-1}\, dx_1\, dx_2\ldots dx_n= a^{n-1}\left(a+a^2+\cdots+a^n\right). \]
Stokes's theorem in this example, being straightforward to verify analytically, is not amenable to interesting numerical verification, but the package does allow slick creation of \(\phi\):
phi <- function(x){
  n <- length(x)
  sum(x^seq_len(n)*rep_len(c(1,-1),n)) * as.kform(t(apply(diag(n)<1,2,which)))
}
phi(1:9)
## An alternating linear map from V^8 to R with V=R^9:
## val
## 2 3 4 5 6 7 8 9 = 371423053
## 1 2 3 4 5 7 8 9 = 371423053
## 1 3 4 5 6 7 8 9 = 371423053
## 1 2 3 4 6 7 8 9 = 371423053
## 1 2 3 4 5 6 7 9 = 371423053
## 1 2 4 5 6 7 8 9 = 371423053
## 1 2 3 5 6 7 8 9 = 371423053
## 1 2 3 4 5 6 8 9 = 371423053
## 1 2 3 4 5 6 7 8 = 371423053
Recall that phi is a function that maps \(\mathbb{R}^9\) to 8-forms. Here we choose \(\left(1,2,\ldots,9\right)\in\mathbb{R}^9\), and phi(1:9) as shown above is the resulting 8-form. Thus, if we write \(\phi_{1:9}\) for phi(1:9), we have \(\phi_{1:9}\colon\left(\mathbb{R}^9\right)^8\longrightarrow\mathbb{R}\), with package idiom as follows:
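The evaluation point used originally is not shown; with a random 9-by-8 matrix the idiom would be as follows (so the value will differ from the one printed below):
E <- matrix(rnorm(72), 9, 8)   # hypothetical evaluation point
as.function(phi(1:9))(E)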
## [1] -26620528
Further, \(\mathrm{d}\phi\) is given by
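A hypothetical implementation of dphi (the original chunk is not shown), following the analytic expression for \(\mathrm{d}\phi\) given earlier; its value at 1:9 matches the output below:
dphi <- function(x){
  n <- length(x)
  sum(seq_len(n) * x^(seq_len(n)-1)) * as.kform(matrix(seq_len(n), 1, n))
}
dphi(1:9)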
## An alternating linear map from V^9 to R with V=R^9:
## val
## 1 2 3 4 5 6 7 8 9 = 405071317
(observe that dphi(1:9)
is a 9-form, with \(\mathrm{d}\phi_{1:9}\colon\left(\mathbb{R}^9\right)^9\longrightarrow\mathbb{R}\)).
Now consider Spivak’s theorem 4.6 (page 82), which in this context
states that a 9-form is proportional to the determinant of the \(9\times 9\) matrix formed from its
arguments, with constant of proportionality equal to the form evaluated
on the identity matrix \(I_9\)
[formally and more generally, if \(v_1,\ldots,v_n\) is a basis for \(V\), \(\omega\in\Lambda^n(V)\) and \(w_i=\sum a_{ij}v_j\) then \(\omega\left(w_1,\ldots,w_n\right) =
\det\left(a_{ij}\right)\cdot\omega\left(v_1,\ldots v_n\right)\)].
Numerically:
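A sketch of the check (the evaluation matrix originally used is not shown, so the common value will differ from the one printed below):
E <- matrix(rnorm(81), 9, 9)   # hypothetical evaluation matrix
as.function(dphi(1:9))(E)
det(E) * as.function(dphi(1:9))(diag(9))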
## [1] -9850953
## [1] -9850953
Hankin, Robin K. S. 2022a. "Disordered Vectors in R: Introducing the disordR Package." arXiv. https://doi.org/10.48550/ARXIV.2210.03856.
———. 2022b. "Fast Multivariate Polynomials in R: The mvp Package." arXiv. https://doi.org/10.48550/ARXIV.2210.15991.