The mvnfast R package provides computationally efficient tools related to the multivariate normal and Student's t distributions. The tools are generally faster than those provided by other packages, thanks to the use of C++ code through the Rcpp and RcppArmadillo packages, and to parallelization through the OpenMP API. The most important functions are:

rmvn(): simulates multivariate normal random vectors.
rmvt(): simulates multivariate Student's t random vectors.
dmvn(): evaluates the probability density function of a multivariate normal distribution.
dmvt(): evaluates the probability density function of a multivariate Student's t distribution.
maha(): evaluates squared Mahalanobis distances.

In the following sections we benchmark each function against equivalent functions provided by other packages, while in the final section we provide an example application.
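Before moving to the benchmarks, here is a minimal usage sketch; the mean vector and covariance matrix below are illustrative values, not taken from the benchmarks:

library("mvnfast")

# Minimal sketch: draw a few bivariate normal vectors and evaluate their
# log-density (mu and sigma are illustrative values).
mu    <- c(0, 1)
sigma <- matrix(c(1, 0.5, 0.5, 2), 2, 2)
x     <- rmvn(5, mu = mu, sigma = sigma)   # 5 draws, one per row
dmvn(x, mu = mu, sigma = sigma, log = TRUE)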
Simulating multivariate normal random variables is an essential step in many Monte Carlo algorithms (such as MCMC or particle filters), hence this operation has to be as fast as possible. Here we compare the rmvn function with the equivalent functions rmvnorm (from the mvtnorm package) and mvrnorm (from the MASS package). In particular, we simulate \(10^4\) twenty-dimensional random vectors:
# microbenchmark does not work on all platforms, hence we need this small wrapper
microwrapper <- function(..., times = 100L){
  ok <- "microbenchmark" %in% rownames(installed.packages())
  if( ok ){
    library("microbenchmark")
    microbenchmark(list = match.call(expand.dots = FALSE)$..., times = times)
  } else {
    message("microbenchmark package is not installed")
    return( invisible(NULL) )
  }
}
library("mvtnorm")
library("mvnfast")
library("MASS")
# We might also need to turn off BLAS parallelism
library("RhpcBLASctl")
blas_set_num_threads(1)
N <- 10000
d <- 20
# Creating mean and covariance matrix
mu <- 1:d
tmp <- matrix(rnorm(d^2), d, d)
mcov <- tcrossprod(tmp, tmp)
microwrapper(rmvn(N, mu, mcov, ncores = 2),
             rmvn(N, mu, mcov),
             rmvnorm(N, mu, mcov),
             mvrnorm(N, mu, mcov))
## Unit: milliseconds
##                           expr       min        lq      mean    median        uq      max neval cld
##  rmvn(N, mu, mcov, ncores = 2)  6.233332  7.623007  9.877648  9.230621 12.105558 17.42471   100  b
##              rmvn(N, mu, mcov)  4.876387  5.113679  5.891418  5.199694  5.786156 11.00759   100  a
##           rmvnorm(N, mu, mcov) 15.013945 15.678496 17.307010 16.082895 18.230402 26.61021   100  c
##           mvrnorm(N, mu, mcov) 14.167729 14.631768 16.659310 14.905134 17.054756 45.16447   100  c
In this example rmvn cuts the computational time, relative to the alternatives, even when a single core is used. This gain is attributable to several factors: the use of C++ code and of efficient numerical algorithms to simulate the random variables. Parallelizing the computation on two cores gives a further appreciable speed-up. To be fair, it is necessary to point out that rmvnorm and mvrnorm perform many more safety checks on the user's input than rmvn does. This is true also for the functions described in the next sections.
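Speed aside, the draws produced by rmvn have the right distribution; a quick sanity check, which is our addition and not part of the original benchmark:

# Sanity check (our addition): the sample moments of the rmvn draws
# should be close to the target mean vector and covariance matrix.
Y <- rmvn(N, mu, mcov)
max(abs(colMeans(Y) - mu))    # should be close to 0
max(abs(cov(Y) - mcov))       # should be small relative to the scale of mcov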
Notice that this function does not use one of the Random Number Generators (RNGs) provided by R, but one of the parallel cryptographic RNGs described in Salmon et al. (2011). It is important to point out that this RNG can safely be used in parallel, without risk of collisions between parallel sequences of random numbers, as detailed in the above reference.
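The reference above describes counter-based generators: a deterministic function maps a (key, counter) pair to a random number, so parallel streams cannot collide as long as each core works on a disjoint counter range. Below is a conceptual sketch of that partitioning; it is our addition and not mvnfast's actual internal code:

# Conceptual sketch (our addition, NOT mvnfast's internals): with a
# counter-based RNG, draw i is f(key, i), so giving each core a disjoint
# counter range guarantees non-overlapping streams with no coordination.
n_cores  <- 2
n_draws  <- 8
counters <- split(0:(n_cores * n_draws - 1), rep(1:n_cores, each = n_draws))
counters  # core 1 uses counters 0..7, core 2 uses counters 8..15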
We get similar performance gains when we simulate multivariate Student's t random variables:
# Here we have a conflict between namespaces
microwrapper(mvnfast::rmvt(N, mu, mcov, df = 3, ncores = 2),
             mvnfast::rmvt(N, mu, mcov, df = 3),
             mvtnorm::rmvt(N, delta = mu, sigma = mcov, df = 3))
## Unit: milliseconds
##                                                 expr       min        lq     mean    median        uq       max neval cld
##      mvnfast::rmvt(N, mu, mcov, df = 3, ncores = 2)  7.252853  9.351978 14.30661 13.715540 16.383253 134.71269   100  b
##                  mvnfast::rmvt(N, mu, mcov, df = 3)  6.740425  6.982296  7.95352  7.102894  8.030807  15.90466   100  a
##  mvtnorm::rmvt(N, delta = mu, sigma = mcov, df = 3) 17.338469 18.210757 20.75432 20.638911 22.107275  32.54279   100  c
When d and N are large, and rmvn or rmvt are called several times with the same arguments, it makes sense to create the matrix where the simulated random variables will be stored upfront. This can be done as follows:
A <- matrix(nrow = N, ncol = d)
class(A) <- "numeric" # This is important. We need the elements of A to be of class "numeric".
rmvn(N, mu, mcov, A = A)
Notice that here rmvn returns NULL, not the simulated random vectors! These can be found in the matrix provided by the user:
A[1:2, 1:5]
## [,1] [,2] [,3] [,4] [,5]
## [1,] 1.994972 -7.880826 7.527710 10.446127 -1.820121
## [2,] -6.380186 -1.795572 1.990479 6.097603 -5.958737
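This pattern pays off when the simulation is repeated many times with the same N and d, because A is allocated only once; a sketch of such a loop, which is our addition:

# Sketch (our addition): re-using the pre-allocated A across many calls,
# e.g. inside a Monte Carlo loop, avoids allocating an N x d matrix at
# every iteration.
for (ii in 1:100) {
  rmvn(N, mu, mcov, A = A)   # fills A in place, returns NULL
  # ... use the draws stored in A here, e.g. colMeans(A) ...
}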
Pre-creating the matrix of random variables saves some more time:
microwrapper(rmvn(N, mu, mcov, ncores = 2, A = A),
             rmvn(N, mu, mcov, ncores = 2),
             times = 200)
## Unit: milliseconds
##                                   expr      min        lq     mean   median       uq      max neval cld
##  rmvn(N, mu, mcov, ncores = 2, A = A) 6.734420  9.141578 10.32735 10.11410 11.08513 20.18090   200  a
##         rmvn(N, mu, mcov, ncores = 2) 7.290707 10.670795 12.07606 11.79768 13.21262 20.92484   200  b
Don't look at the median time here: the mean is more relevant, because it is much more affected by the occasional memory re-allocations.
Here we compare the dmvn function, which evaluates the multivariate normal density, with the equivalent function dmvnorm (from the mvtnorm package). In particular, we evaluate the log-density of \(10^4\) twenty-dimensional random vectors:
# Generating random vectors
N <- 10000
d <- 20
mu <- 1:d
tmp <- matrix(rnorm(d^2), d, d)
mcov <- tcrossprod(tmp, tmp)
X <- rmvn(N, mu, mcov)
microwrapper(dmvn(X, mu, mcov, ncores = 2, log = T),
             dmvn(X, mu, mcov, log = T),
             dmvnorm(X, mu, mcov, log = T), times = 500)
## Unit: milliseconds
##                                     expr      min       lq     mean   median       uq        max neval cld
##  dmvn(X, mu, mcov, ncores = 2, log = T) 1.372849 1.517788 2.189682 1.682904 2.664766   6.160537   500  a
##              dmvn(X, mu, mcov, log = T) 2.313683 2.495948 2.644838 2.571540 2.680989   5.887577   500  a
##           dmvnorm(X, mu, mcov, log = T) 2.554647 2.801858 3.873072 2.892224 3.192614 124.843569   500  b
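The two implementations also agree numerically; a quick check, which is our addition:

# Sanity check (our addition): dmvn and dmvnorm should agree up to
# floating-point error.
max(abs(dmvn(X, mu, mcov, log = TRUE) - dmvnorm(X, mu, mcov, log = TRUE)))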
Again, we get some speed-up using C++ code and some more from the parallelization. We get similar results if we use a multivariate Student's t density:
# We have a namespace conflict
microwrapper(mvnfast::dmvt(X, mu, mcov, df = 4, ncores = 2, log = T),
             mvnfast::dmvt(X, mu, mcov, df = 4, log = T),
             mvtnorm::dmvt(X, delta = mu, sigma = mcov, df = 4, log = T), times = 500)
## Unit: milliseconds
##                                                          expr      min       lq     mean   median       uq        max neval cld
##      mvnfast::dmvt(X, mu, mcov, df = 4, ncores = 2, log = T) 4.445519 5.651258 7.111869 6.483089 8.198872  56.706988   500  c
##                  mvnfast::dmvt(X, mu, mcov, df = 4, log = T) 2.514367 2.798483 3.703680 3.069122 4.179680   8.958852   500  a
##  mvtnorm::dmvt(X, delta = mu, sigma = mcov, df = 4, log = T) 2.780615 3.114415 5.244806 3.790590 5.974023 169.866791   500  b
Finally, we compare the maha function, which evaluates squared Mahalanobis distances, with the equivalent function mahalanobis (from the stats package). Also in this case we use \(10^4\) twenty-dimensional random vectors:
# Generating random vectors
N <- 10000
d <- 20
mu <- 1:d
tmp <- matrix(rnorm(d^2), d, d)
mcov <- tcrossprod(tmp, tmp)
X <- rmvn(N, mu, mcov)
microwrapper(maha(X, mu, mcov, ncores = 2),
             maha(X, mu, mcov),
             mahalanobis(X, mu, mcov))
## Unit: milliseconds
##                           expr      min       lq     mean   median       uq      max neval cld
##  maha(X, mu, mcov, ncores = 2) 1.283833 1.420886 2.127962 1.696832 2.672341 6.650898   100  a
##              maha(X, mu, mcov) 2.180412 2.313310 2.402774 2.384743 2.460586 3.272970   100  a
##       mahalanobis(X, mu, mcov) 4.044417 4.238780 5.036430 4.333448 6.342097 9.357521   100  b
The acceleration is similar to that obtained in the previous sections.
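The two functions also return the same distances; a quick check, which is our addition:

# Sanity check (our addition): maha returns the same squared Mahalanobis
# distances as stats::mahalanobis, up to floating-point error.
max(abs(maha(X, mu, mcov) - mahalanobis(X, mu, mcov)))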
As an example application of the dmvn function, we implemented the mean-shift mode seeking algorithm. This procedure can be used to find the mode, or the maxima, of a kernel density estimate, and it can be used to set up clustering algorithms. Here we simulate \(10^4\) two-dimensional random vectors from a mixture of two normal distributions:
set.seed(5135)
N <- 10000
d <- 2
mu1 <- c(0, 0); mu2 <- c(2, 3)
Cov1 <- matrix(c(1, 0, 0, 2), 2, 2)
Cov2 <- matrix(c(1, -0.9, -0.9, 1), 2, 2)
bin <- rbinom(N, 1, 0.5)
X <- bin * rmvn(N, mu1, Cov1) + (!bin) * rmvn(N, mu2, Cov2)
Finally, we plot the resulting probability density and, starting from 10 initial points, we use mean-shift to converge to the nearest mode:
# Plotting
np <- 100
xvals <- seq(min(X[ , 1]), max(X[ , 1]), length.out = np)
yvals <- seq(min(X[ , 2]), max(X[ , 2]), length.out = np)
theGrid <- expand.grid(xvals, yvals)
theGrid <- as.matrix(theGrid)
dens <- dmixn(theGrid,
              mu = rbind(mu1, mu2),
              sigma = list(Cov1, Cov2),
              w = rep(1, 2)/2)
plot(X[ , 1], X[ , 2], pch = '.', lwd = 0.01, col = 3)
contour(x = xvals, y = yvals, z = matrix(dens, np, np),
        levels = c(0.002, 0.01, 0.02, 0.04, 0.08, 0.15), add = TRUE, lwd = 2)
# Mean-shift
library(plyr)
inits <- matrix(c(-2, 2, 0, 3, 4, 3, 2, 5, 2, -3, 2, 2, 0, 2, 3, 0, 0, -4, -2, 6),
                10, 2, byrow = TRUE)
traj <- alply(inits,
              1,
              function(input)
                ms(X = X,
                   init = input,
                   H = 0.05 * cov(X),
                   ncores = 2,
                   store = TRUE)$traj
              )
invisible( lapply(traj,
                  function(input){
                    lines(input[ , 1], input[ , 2], col = 2, lwd = 1.5)
                    points(tail(input[ , 1]), tail(input[ , 2]))
                  }))
As we can see from the plot, each initial point leads to one of two points, each of which is very close to a true mode of the mixture. Notice that the bandwidth for the kernel density estimator was chosen by trial-and-error, and less arbitrary choices are certainly possible in real applications.
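For instance, a plug-in bandwidth selector could replace the hand-tuned choice above; the sketch below uses ks::Hpi from the ks package, which is our suggestion and not part of the original example:

# Sketch (our addition): a plug-in bandwidth matrix from the ks package,
# instead of the hand-tuned H = 0.05 * cov(X) used above.
if (requireNamespace("ks", quietly = TRUE)) {
  Hpi <- ks::Hpi(X)                      # plug-in bandwidth matrix
  out <- ms(X = X, init = c(0, 0), H = Hpi, store = TRUE)
  print(tail(out$traj, 1))               # final position reached from (0, 0)
}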
Dirk Eddelbuettel and Romain Francois (2011). Rcpp: Seamless R and C++ Integration. Journal of Statistical Software, 40(8), 1-18. URL https://www.jstatsoft.org/v40/i08/.
Dirk Eddelbuettel (2013). Seamless R and C++ Integration with Rcpp. Springer, New York. ISBN 978-1-4614-6867-7.
Dirk Eddelbuettel and Conrad Sanderson (2014). RcppArmadillo: Accelerating R with high-performance C++ linear algebra. Computational Statistics and Data Analysis, 71, 1054-1063. URL https://dx.doi.org/10.1016/j.csda.2013.02.005.
John K. Salmon, Mark A. Moraes, Ron O. Dror, and David E. Shaw (2011). Parallel Random Numbers: As Easy as 1, 2, 3. D. E. Shaw Research, New York, NY 10036, USA.