It is fairly easy to use OpenMx to compare numerical and analytic function derivatives. This vignette shows how to do it. The main tools that we are going to use are two custom compute plans, called aPlan and nPlan.
library(OpenMx)

aPlan <- mxComputeSequence(list(  # analytic
  mxComputeOnce('fitfunction', c('gradient')),
  mxComputeReportDeriv()))

nPlan <- mxComputeSequence(list(  # numerical
  mxComputeNumericDeriv(analytic = FALSE, hessian = FALSE, checkGradient = FALSE),
  mxComputeReportDeriv()))
Now that we have the plans ready, we can use them to debug a fitfunction. Here’s a fitfunction from the test suite that is somewhat contrived, but can serve our pedagogical needs.
mat1 <- mxMatrix("Full", rnorm(1), free=TRUE, nrow=1, ncol=1, labels="m1", name="mat1")
obj  <- mxAlgebra(-.5 * (log(2*pi) + log(2) + (mat1[1,1])^2/2), name = "obj")
grad <- mxAlgebra(-(mat1[1,1]) + 2, name = "grad", dimnames=list("m1", NULL))
mv1  <- mxModel("mv1", mat1, obj, grad,
                mxFitFunctionAlgebra("obj", gradient="grad"))
Since we are not very good at calculus, the gradient function contains some errors.
nu <- mxRun(mxModel(mv1, nPlan), silent = TRUE)
an <- mxRun(mxModel(mv1, aPlan), silent = TRUE)

cbind(numerical=nu$output$gradient, analytic=an$output$gradient)
##     numerical analytic
## m1 -0.3987463 1.202507
The optimizer is not run, so we get the results immediately, even for large, complex models. The fitfunction also does not need to be (approximately) convex; any function will do.
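For instance, the same two plans apply unchanged to a non-convex objective. Here is a minimal sketch; this model, with the made-up names theta, sobj, sgrad, and wavy, is purely illustrative and not part of the test suite.

# Illustrative only: a sine objective is nowhere convex, but the
# derivative comparison works the same way.
theta <- mxMatrix("Full", 0.3, free=TRUE, nrow=1, ncol=1, labels="t1", name="theta")
sobj  <- mxAlgebra(sin(theta[1,1]), name = "sobj")                               # objective
sgrad <- mxAlgebra(cos(theta[1,1]), name = "sgrad", dimnames=list("t1", NULL))   # analytic gradient
wavy  <- mxModel("wavy", theta, sobj, sgrad,
                 mxFitFunctionAlgebra("sobj", gradient="sgrad"))

nu <- mxRun(mxModel(wavy, nPlan), silent = TRUE)
an <- mxRun(mxModel(wavy, aPlan), silent = TRUE)
cbind(numerical=nu$output$gradient, analytic=an$output$gradient)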
The numerical approximation can be fairly different from the analytic gradient even when there are no errors. We can try another point in the parameter space.
p1  <- runif(1, -10, 10)
mv1 <- omxSetParameters(mv1, labels = 'm1', values = p1)
nu <- mxRun(mxModel(mv1, nPlan), silent = TRUE)
an <- mxRun(mxModel(mv1, aPlan), silent = TRUE)

cbind(numerical=nu$output$gradient, analytic=an$output$gradient)
##    numerical analytic
## m1  1.141662 4.283324
The results do not correspond very closely, so we look for math errors. Indeed, the gradient function is wrong: differentiating the objective -.5 * (log(2*pi) + log(2) + m1^2/2) with respect to m1 gives -m1/2, not -m1 + 2. We replace it with the correct gradient,
grad <- mxAlgebra(-(mat1[1,1])/2, name = "grad", dimnames=list("m1", NULL))
mv2  <- mxModel(mv1, grad)
Let’s check the correspondence now.
nu <- mxRun(mxModel(mv2, nPlan), silent = TRUE)
an <- mxRun(mxModel(mv2, aPlan), silent = TRUE)

cbind(numerical=nu$output$gradient, analytic=an$output$gradient)
##    numerical analytic
## m1  1.141662 1.141662
Wow, looks much better! Still, it is prudent to check at a few more points.
mv2 <- omxSetParameters(mv2, labels = 'm1', values = rnorm(1))

nu <- mxRun(mxModel(mv2, nPlan), silent = TRUE)
an <- mxRun(mxModel(mv2, aPlan), silent = TRUE)

cbind(numerical=nu$output$gradient, analytic=an$output$gradient)
##    numerical  analytic
## m1 0.1449633 0.1449633
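If more spot checks are wanted, the two runs can be wrapped in a short loop. The sketch below is not part of the vignette; the five random points and the use of the largest absolute difference as a summary are arbitrary illustrative choices.

# Illustrative only: compare gradients at several random points and
# report the largest absolute discrepancy.
set.seed(1)
diffs <- sapply(1:5, function(i) {
  m  <- omxSetParameters(mv2, labels = 'm1', values = runif(1, -10, 10))
  nu <- mxRun(mxModel(m, nPlan), silent = TRUE)
  an <- mxRun(mxModel(m, aPlan), silent = TRUE)
  max(abs(nu$output$gradient - an$output$gradient))
})
max(diffs)  # largest numerical-vs-analytic discrepancy over the five points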