We start by loading the dataset included with the mcmc package. We will use the logit data set to obtain a posterior distribution of the model parameters using the MCMC function.
library(fmcmc)
data(logit, package = "mcmc")

# Fit the logistic regression by maximum likelihood, keeping the model
# matrix (x = TRUE) so we can reuse it when building the log-posterior
out <- glm(y ~ x1 + x2 + x3 + x4, data = logit, family = binomial, x = TRUE)
beta.init <- as.numeric(coefficients(out))
To use the Metropolis-Hastings MCMC algorithm, the function should be (in principle) the log unnormalized posterior. The following block of code, extracted from the mcmc package vignette "MCMC Package Example," creates the function that we will be using:
lupost_factory <- function(x, y) function(beta) {
  eta  <- as.numeric(x %*% beta)
  # Numerically stable log-probabilities for y == 1 (logp) and y == 0 (logq)
  logp <- ifelse(eta < 0, eta - log1p(exp(eta)), - log1p(exp(- eta)))
  logq <- ifelse(eta < 0, - log1p(exp(eta)), - eta - log1p(exp(- eta)))
  logl <- sum(logp[y == 1]) + sum(logq[y == 0])
  # Log-likelihood plus independent normal(0, sd = 2) priors on the coefficients
  return(logl - sum(beta^2) / 8)
}

lupost <- lupost_factory(out$x, out$y)
Let's give it a first try. In this case, we will use the beta estimates from the fitted GLM as the starting point for the algorithm, and we will ask it to sample 1,000 points from the posterior distribution (nsteps = 1e3).
# to get reproducible results
set.seed(42)
out <- MCMC(
  initial = beta.init,
  fun     = lupost,
  nsteps  = 1e3
)
Since the resulting object is of class mcmc (from the coda R package), we can use all the functions included in coda for model diagnostics:
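For instance, we can draw trace plots of the chains. A minimal sketch (the original document displays the resulting figure rather than the code):

library(coda)
traceplot(out)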
This chain has very poor mixing, so let's try again using a smaller scale for the normal proposal kernel, moving it from 1 (the default value) to 0.2:
# to get reproducible results
set.seed(42)
out <- MCMC(
  initial = beta.init,
  fun     = lupost,
  nsteps  = 1e3,
  kernel  = kernel_normal(scale = .2)
)
The kernel_normal function (the default kernel in MCMC) returns an object of class fmcmc_kernel. In principle, it consists of a list of two functions that are used by the MCMC routine: proposal, the proposal kernel function, and logratio, the function that returns the log of the Metropolis-Hastings ratio. We will talk more about fmcmc_kernel objects later. Now, let's look at the first three variables of our model:
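A sketch of such an inspection using coda's plot method (an assumption; the original document shows only the resulting figure):

plot(out[, 1:3])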
Better. Now, ideally, we should only be using observations from the stationary distribution. Let's give it another try, checking for convergence every 200 steps using the convergence_geweke checker:
# to get reproducible results
set.seed(42)
out <- MCMC(
  initial      = beta.init,
  fun          = lupost,
  nsteps       = 1e4,
  kernel       = kernel_normal(scale = .2),
  conv_checker = convergence_geweke(200)
)
## Convergence has been reached with 200 steps. avg Geweke's Z: 1.1854. (200 final count of samples).
A bit better. As announced by MCMC, the convergence_geweke checker suggests that the chain reached a stationary state. With this in hand, we can now rerun the algorithm starting from the last couple of steps of the chain, this time without convergence monitoring, as it is no longer necessary. We will increase the number of steps (sample size), run two chains in parallel, and add some thinning to reduce autocorrelation:
# Now, we change the seed, so we get a different stream of
# pseudo-random numbers
set.seed(112)
out_final <- MCMC(
  initial   = out,    # Automagically takes the last 2 points
  fun       = lupost,
  nsteps    = 5e4,    # Increasing the sample size
  kernel    = kernel_normal(scale = .2),
  thin      = 10,
  nchains   = 2L,     # Running two chains
  multicore = TRUE    # in parallel.
)
Notice that, instead of specifying the starting points for each chain, we passed the out object to initial. By default, if initial is of class mcmc, MCMC will take the last nchains points of the chain as the starting points for the new sequence. If initial is of class mcmc.list, the number of chains in initial must match the nchains parameter. We now see that the output posterior distribution appears to be stationary:
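The summary below was presumably produced with coda's summary method on the first three parameters (an assumption based on the printed output):

summary(out_final[, 1:3])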
##
## Iterations = 10:50000
## Thinning interval = 10
## Number of chains = 2
## Sample size per chain = 5000
##
## 1. Empirical mean and standard deviation for each variable,
## plus standard error of the mean:
##
## Mean SD Naive SE Time-series SE
## par1 0.6575 0.3005 0.003005 0.004844
## par2 0.7998 0.3696 0.003696 0.006680
## par3 1.1719 0.3629 0.003629 0.006884
##
## 2. Quantiles for each variable:
##
## 2.5% 25% 50% 75% 97.5%
## par1 0.0924 0.4509 0.6445 0.8535 1.275
## par2 0.1084 0.5457 0.7846 1.0470 1.560
## par3 0.5181 0.9193 1.1533 1.4079 1.939
fmcmc_kernel objects are environments that are passed to the MCMC function. While the MCMC function only returns an object of class mcmc (as defined in the coda package), users can exploit the fact that the kernel objects are environments to reuse or inspect them once the MCMC function returns. For example, fmcmc_kernel objects can be useful with adaptive kernels, as users can review the covariance structure or other components.
To illustrate this, let's redo the MCMC chain of the previous example, but using an adaptive kernel instead; in particular, Haario et al.'s (2001) adaptive Metropolis. The MCMC function will update the kernel at every step (freq = 1), and the adaptation will start from step 500 (warmup = 500). We can see that some of its components haven't been initialized, or have a default starting value, before the call to the MCMC function:
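Presumably, the khaario kernel used below was created along these lines (a sketch using fmcmc's kernel_adapt, which implements this adaptive Metropolis):

# Build the adaptive Metropolis kernel: update at every step,
# start adapting at step 500
khaario <- kernel_adapt(freq = 1, warmup = 500)

# Number of absolute iterations so far
khaario$abs_iter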
## [1] 0
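# The variance-covariance matrix (not yet initialized)
khaario$Sigma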
## NULL
Let’s see how it works:
set.seed(12)
out_haario_1 <- MCMC(
  initial   = out,
  fun       = lupost,
  nsteps    = 1000,    # We will only run the chain for 1,000 steps
  kernel    = khaario, # We passed the predefined kernel
  thin      = 1,       # No thinning here
  nchains   = 1L,      # A single chain
  multicore = FALSE    # Running in serial
)
Let’s inspect the output and mark when the adaptation starts:
traceplot(out_haario_1[,1], main = "Traceplot of the first parameter")
abline(v = 500, col = "red", lwd = 2, lty=2)
If we look at the khaario kernel, the fmcmc_kernel object, we can see that things have changed since the first time we ran it:
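# Inspecting the kernel again: total number of iterations so far
khaario$abs_iter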
## [1] 999
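# The adapted variance-covariance matrix
khaario$Sigma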
## [,1] [,2] [,3] [,4] [,5]
## [1,] 0.018860119 -3.332717e-03 -0.008727212 -0.0325079868 6.073619e-03
## [2,] -0.003332717 1.329341e-02 0.014426084 -0.0004692044 -4.190311e-05
## [3,] -0.008727212 1.442608e-02 0.024491648 0.0098554654 -1.308236e-03
## [4,] -0.032507987 -4.692044e-04 0.009855465 0.1258067668 -1.195835e-02
## [5,] 0.006073619 -4.190311e-05 -0.001308236 -0.0119583520 1.953713e-02
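# Keep a copy of the covariance matrix to compare after the next run
# (Sigma_first is a hypothetical name; the original code for this step
# is not shown here)
Sigma_first <- khaario$Sigma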
If we re-run the chain, using the last step of the first run as the starting point, we can also keep using the same kernel object:
out_haario_2 <- MCMC(
  initial   = out_haario_1,
  fun       = lupost,
  nsteps    = 2000,    # We will run the chain for 2,000 steps now
  kernel    = khaario, # Same as before, same kernel.
  thin      = 1,
  nchains   = 1L,
  multicore = FALSE,
  seed      = 555      # We can also specify the seed in the MCMC function
)
Let's see again how everything looks:
traceplot(out_haario_2[,1], main = "Traceplot of the first parameter")
abline(v = 500, col = "red", lwd = 2, lty=2)
As shown in the plot, since the warmup period has already passed for the kernel object, the adaptation process now happens at every step, so we don't see a big break at step 500 as before. Let's look at the counts and the covariance matrix and compare them with the previous ones:
# Number of iterations (absolute count, which accumulates across runs)
# This will be roughly 1000 (first run) + 2000 (second run) steps
khaario$abs_iter
## [1] 2998
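# The covariance matrix after the second run
khaario$Sigma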
## [,1] [,2] [,3] [,4] [,5]
## [1,] 0.07830456 0.00649061 0.035844799 -0.01054921 0.023544929
## [2,] 0.00649061 0.10794138 -0.016443314 -0.02806675 -0.020701377
## [3,] 0.03584480 -0.01644331 0.108915397 0.03737444 0.001673125
## [4,] -0.01054921 -0.02806675 0.037374444 0.19827548 -0.023571157
## [5,] 0.02354493 -0.02070138 0.001673125 -0.02357116 0.111933531
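# Difference from the first run's covariance matrix (negative entries mean
# the variances/covariances grew); uses the hypothetical Sigma_first copy
Sigma_first - khaario$Sigma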
## [,1] [,2] [,3] [,4] [,5]
## [1,] -0.059444446 -0.009823328 -0.044572011 -0.02195878 -0.017471310
## [2,] -0.009823328 -0.094647970 0.030869398 0.02759754 0.020659473
## [3,] -0.044572011 0.030869398 -0.084423749 -0.02751898 -0.002981361
## [4,] -0.021958782 0.027597543 -0.027518978 -0.07246872 0.011612805
## [5,] -0.017471310 0.020659473 -0.002981361 0.01161280 -0.092396405
Things have changed since the last time we used the kernel, as expected. Kernel objects in the fmcmc package can also be used with multiple chains and in parallel. The MCMC function is smart enough to create independent copies of fmcmc_kernel objects when running multiple chains, and to keep the original kernel objects up to date even when using multiple cores to run MCMC. For more technical details on how fmcmc_kernel objects work, see the manual (?fmcmc_kernel) or the vignette "User-defined kernels" included in the package: vignette("user-defined-kernels", package = "fmcmc").
In some situations, you may want to access the computed unnormalized log-posterior probabilities, the states proposed by the kernel, or other components of the process. In those cases, the functions with the prefix get_* can help you.
Starting with version 0.5-0, we replaced the family of functions last_* with get_*; a redesign of this "memory" component that gives users access to data generated during the Markov process. After each run of the MCMC function, information regarding the last execution is stored in the environment MCMC_OUTPUT.
If you want to look at the log-posterior of the last call and the proposed states, you can do the following:
# Pretty figure showing proposed and accepted states
plot(
  get_draws()[, 1:2], pch = 20, col = "gray",
  main = "Haario's second run"
)
points(out_haario_2[, 1:2], col = "red", pch = 20)
legend(
  "topleft", legend = c("proposed", "accepted"),
  col = c("gray", "red"),
  pch = 20,
  bty = "n"
)
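The log unnormalized posterior values of the last run can be retrieved similarly; a sketch assuming the get_logpost() accessor from the same get_* family:

# Unnormalized log-posterior at each step of the last run
head(get_logpost())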
The MCMC_OUTPUT environment also contains the arguments passed to MCMC():
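Presumably these were printed with the corresponding get_* accessors (an assumption; each call is shown before its output below):

# The initial point of the last run
get_initial()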
## Markov Chain Monte Carlo (MCMC) output:
## Start = 1000
## End = 1000
## Thinning interval = 1
## par1 par2 par3 par4 par5
## 1000 0.03743564 0.8992076 0.726668 0.3899513 0.4633549
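# The objective function (the log unnormalized posterior)
get_fun()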
## function(beta) {
## eta <- as.numeric(x %*% beta)
## logp <- ifelse(eta < 0, eta - log1p(exp(eta)), - log1p(exp(- eta)))
## logq <- ifelse(eta < 0, - log1p(exp(eta)), - eta - log1p(exp(- eta)))
## logl <- sum(logp[y == 1]) + sum(logq[y == 0])
## return(logl - sum(beta^2) / 8)
## }
## <bytecode: 0x5649153b13c0>
## <environment: 0x56491458fe48>
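# The kernel object used in the last run
get_kernel()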
##
## An environment of class fmcmc_kernel:
##
## Ik : num [1:5, 1:5] 1e-04 0e+00 0e+00 0e+00 0e+00 0e+00 1e-04 0e+00 0e+00 0e+00 ...
## Mean_t_prev : num [1, 1:5] 0.654 0.829 1.103 0.299 0.75
## Sd : num 1.15
## Sigma : num [1:5, 1:5] 0.0783 0.00649 0.03584 -0.01055 0.02354 ...
## abs_iter : int 2998
## bw : int 0
## eps : num 1e-04
## fixed : logi [1:5] FALSE FALSE FALSE FALSE FALSE
## freq : num 1
## k : int 5
## lb : num [1:5] -1.8e+308 -1.8e+308 -1.8e+308 -1.8e+308 -1.8e+308
## logratio : function (env)
## mu : num [1:5] 0 0 0 0 0
## proposal : function (env)
## ub : num [1:5] 1.8e+308 1.8e+308 1.8e+308 1.8e+308 1.8e+308
## until : num Inf
## warmup : num 500
## which. : int [1:5] 1 2 3 4 5