The package bvartools
implements functions for Bayesian
inference of linear vector autoregressive (VAR) models. It separates a
typical BVAR analysis workflow into multiple steps:
In each step, researchers can fine-tune a model according to their specific requirements or simply use the default framework for commonly used models and priors. Since version 0.1.0 the package comes with posterior simulation functions that do not require researchers to implement any further simulation algorithms. For Bayesian inference of stationary VAR models the package covers:
For Bayesian inference of cointegrated VAR models the package implements the algorithm of Koop, León-González and Strachan (2010) [KLS] – which places identification restrictions on the cointegration space – in the following variants:
For Bayesian inference of dynamic factor models the package implements the algorithm used in the textbook of Chan, Koop, Poirier and Tobias (2019).
This introduction to bvartools
provides the code to set
up and estimate a basic Bayesian VAR (BVAR) model.1 The first part covers
a basic workflow, where the standard posterior simulation algorithm of
the package is employed for Bayesian inference. The second part presents
a workflow for a posterior algorithm as it could be implemented by a
researcher.
For both illustrations the data set E1 from Lütkepohl (2006) is used. It contains data on West German fixed investment, disposable income and consumption expenditures in billions of DM from 1960Q1 to 1982Q4. As in the textbook, only the first 73 observations of the log-differenced series are used.
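One way to prepare the data is sketched below. It assumes that the series ships with bvartools as the dataset e1 and that, following the package's own examples, the log-differences are scaled by 100 and the sample is cut off at the end of 1978.

# Minimal sketch of the data preparation (assumptions: the dataset is called
# e1, the series are scaled by 100 and the sample ends in 1978Q4)
library(bvartools)
data("e1")
e1 <- diff(log(e1)) * 100          # log-differenced series in percent
e1 <- window(e1, end = c(1978, 4)) # keep the subsample used in the textbook
plot(e1)                           # quick visual check of the three series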
bvartools with built-in algorithms

The gen_var
function produces an object, which contains
information on the specification of the VAR model that should be
estimated. The following code specifies a VAR(2) model with an intercept
term. The number of iterations and burn-in draws is already specified at
this stage.
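A call along the following lines sets this up; the numbers of iterations and burn-in draws are illustrative choices, and the argument names are assumed to match the gen_var interface.

# Sketch: VAR(2) with an intercept; 5000 retained draws after 1000 burn-in
# draws are an illustrative choice, not a recommendation
model <- gen_var(e1, p = 2, deterministic = "const",
                 iterations = 5000, burnin = 1000)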
Note that the function is also capable of generating more than one
model. For example, specifying p = 0:2
would result in
three models.
Function add_priors
produces priors for the specified
model(s) in object model
and augments the object
accordingly.
model_with_priors <- add_priors(model,
                                coef = list(v_i = 0, v_i_det = 0),   # zero prior precisions, i.e. flat priors on the coefficients
                                sigma = list(df = 1, scale = .0001)) # diffuse prior for the error variance-covariance matrix
If researchers want to fine-tune individual prior specifications,
this can be done by directly accessing the respective elements in object
model_with_priors
.
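For example, the prior elements can be inspected and, if desired, overwritten directly. The element paths below follow the structure of the object as it is accessed in the user-written sampler further down; the alternative value is purely illustrative.

# Inspect the prior specification produced by add_priors
str(model_with_priors$priors)
# A single element could be overwritten directly, e.g. the prior degrees of
# freedom of the error variance-covariance matrix (not done here, since the
# remainder of this introduction keeps the values set above):
# model_with_priors$priors$sigma$df <- 3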
Function draw_posterior
can be used to produce posterior
draws for a model.
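A minimal call is sketched below; it produces the object bvar_est that the summary output further down refers to.

# Reset the random number generator for reproducibility and run the
# built-in posterior simulation on the prepared model object
set.seed(1234567)
bvar_est <- draw_posterior(model_with_priors)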
If researchers prefer to use their own posterior algorithms, this can
be done by specifying argument FUN
with a function that
uses object model_with_priors
as its input. Its output is
an object of class bvar
(see below).
If multiple models should be estimated, the function allows the user to make use of parallel computing by specifying the argument mc.cores.
Posterior draws can be visually inspected by using the
plot
function. By default, it produces a series of
histograms of all estimated coefficients.
Alternatively, trace plots of the post-burn-in draws can be drawn by adding the argument type = "trace".
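Both calls are sketched below, assuming the estimation object bvar_est from above.

plot(bvar_est)                  # histograms of all estimated coefficients (default)
plot(bvar_est, type = "trace")  # trace plots of the post-burn-in draws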
Summary statistics can be obtained in the usual way using the
summary
method.
summary(bvar_est)
#>
#> Bayesian VAR model with p = 2
#>
#> Model:
#>
#> y ~ invest.01 + income.01 + cons.01 + invest.02 + income.02 + cons.02 + const
#>
#> Variable: invest
#>
#> Mean SD Naive SD Time-series SD 2.5% 50% 97.5%
#> invest.01 -0.3177 0.1272 0.001799 0.001799 -0.5629 -0.3184 -0.06951
#> income.01 0.1404 0.5646 0.007984 0.007984 -0.9677 0.1422 1.25489
#> cons.01 0.9490 0.6782 0.009592 0.009592 -0.3808 0.9567 2.28535
#> invest.02 -0.1584 0.1287 0.001819 0.001777 -0.4092 -0.1582 0.09420
#> income.02 0.1173 0.5442 0.007697 0.007697 -0.9530 0.1210 1.18820
#> cons.02 0.9281 0.6697 0.009471 0.009471 -0.3749 0.9262 2.22541
#> const -1.6408 1.7548 0.024816 0.025436 -5.1170 -1.6347 1.77959
#>
#> Variable: income
#>
#> Mean SD Naive SD Time-series SD 2.5% 50% 97.5%
#> invest.01 0.04430 0.03277 0.0004634 0.0004634 -0.01955 0.044591 0.1080
#> income.01 -0.15217 0.14146 0.0020005 0.0020037 -0.43223 -0.152854 0.1256
#> cons.01 0.28550 0.17169 0.0024281 0.0023176 -0.05666 0.287489 0.6272
#> invest.02 0.05023 0.03199 0.0004524 0.0004524 -0.01071 0.050025 0.1138
#> income.02 0.01978 0.13871 0.0019617 0.0020338 -0.25406 0.019158 0.2917
#> cons.02 -0.01107 0.17448 0.0024675 0.0024675 -0.36029 -0.008363 0.3280
#> const 1.58037 0.45022 0.0063671 0.0063671 0.69080 1.579001 2.4719
#>
#> Variable: cons
#>
#> Mean SD Naive SD Time-series SD 2.5% 50%
#> invest.01 -0.002187 0.02657 0.0003758 0.0003672 -0.05390 -0.002386
#> income.01 0.223511 0.11334 0.0016028 0.0016028 -0.00156 0.224038
#> cons.01 -0.266278 0.13947 0.0019724 0.0019724 -0.54689 -0.265700
#> invest.02 0.033904 0.02621 0.0003706 0.0003706 -0.01637 0.033556
#> income.02 0.354345 0.11156 0.0015777 0.0015777 0.13873 0.352795
#> cons.02 -0.020969 0.13952 0.0019731 0.0019731 -0.29306 -0.022391
#> const 1.296171 0.35992 0.0050901 0.0050901 0.58839 1.298678
#> 97.5%
#> invest.01 0.050089
#> income.01 0.443615
#> cons.01 0.005841
#> invest.02 0.084808
#> income.02 0.573854
#> cons.02 0.255691
#> const 2.006986
#>
#> Variance-covariance matrix:
#>
#> Mean SD Naive SD Time-series SD 2.5% 50% 97.5%
#> invest_invest 22.2345 4.0323 0.057025 0.062905 15.6751 21.7825 31.127
#> invest_income 0.7579 0.7379 0.010435 0.011956 -0.6325 0.7303 2.313
#> invest_cons 1.3091 0.6129 0.008668 0.009580 0.2022 1.2734 2.644
#> income_invest 0.7579 0.7379 0.010435 0.011956 -0.6325 0.7303 2.313
#> income_income 1.4426 0.2656 0.003756 0.004131 1.0158 1.4095 2.046
#> income_cons 0.6423 0.1712 0.002421 0.002639 0.3600 0.6225 1.030
#> cons_invest 1.3091 0.6129 0.008668 0.009580 0.2022 1.2734 2.644
#> cons_income 0.6423 0.1712 0.002421 0.002639 0.3600 0.6225 1.030
#> cons_cons 0.9308 0.1685 0.002384 0.002509 0.6628 0.9087 1.311
As expected for an algorithm with uninformative priors, the posterior means are fairly close to the results of the frequentist estimator, which can be obtained in the following way:
# Obtain data for LS estimator
y <- t(model$data$Y)
z <- t(model$data$Z)
# Calculate LS estimates
A_freq <- tcrossprod(y, z) %*% solve(tcrossprod(z))
# Round estimates and print
round(A_freq, 3)
#> invest.01 income.01 cons.01 invest.02 income.02 cons.02 const
#> invest -0.320 0.146 0.961 -0.161 0.115 0.934 -1.672
#> income 0.044 -0.153 0.289 0.050 0.019 -0.010 1.577
#> cons -0.002 0.225 -0.264 0.034 0.355 -0.022 1.293
The MCMC series in object bvar_est can be thinned.
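A sketch, assuming the thinning helper is thin_posterior() and that every 10th draw is kept; the exact function name and arguments should be checked against the package documentation.

# Keep only every 10th post-burn-in draw to reduce autocorrelation and memory use
bvar_est <- thin_posterior(bvar_est, thin = 10)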
Forecasts with credible bands can be obtained with the function
predict
. If the model contains deterministic terms, new
values can be provided in the argument new_d
. If no values
are provided, the function sets them to zero. The number of rows of
new_d
must equal the value of the argument
n.ahead
.
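A possible call for a 10-step-ahead forecast is sketched below; with an intercept as the only deterministic term, new_d is assumed to be a vector (or one-column matrix) of ones of length n.ahead.

# new_d supplies the future values of the deterministic term (the constant)
bvar_pred <- predict(bvar_est, n.ahead = 10, new_d = rep(1, 10))
plot(bvar_pred) # forecasts with credible bands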
bvartools
supports commonly used impulse response
functions. See https://www.r-econometrics.com/timeseries/irf/
for an introduction.
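A typical call might look as follows; the choice of impulse and response variables and the horizon are illustrative, and the argument names impulse, response and n.ahead are assumed to match the irf interface.

# Impulse response of consumption to an income shock over 8 periods
IR <- irf(bvar_est, impulse = "income", response = "cons", n.ahead = 8)
plot(IR)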
bvartools
also supports forecast error variance
decomposition (FEVD) and generalised forecast error variance
decomposition.
It is also possible to calculate FEVDs based on generalised impulse responses (GIR). Note that these do not automatically add up to unity. However, this can be changed by adding
normalise_gir = TRUE
to the function’s arguments.
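Both variants are sketched below; the arguments response and type = "gir" are assumptions about the fevd interface, while normalise_gir = TRUE is the argument mentioned above.

# Forecast error variance decomposition of consumption
bvar_fevd <- fevd(bvar_est, response = "cons")
plot(bvar_fevd)

# GIR-based FEVD, rescaled so that the contributions add up to unity
bvar_fevd_gir <- fevd(bvar_est, response = "cons", type = "gir",
                      normalise_gir = TRUE)
plot(bvar_fevd_gir)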
bvartools with user-written algorithms

bvartools
was created to assist researchers in building
and evaluating their own posterior simulation algorithms for linear BVAR
models. Functions gen_var
and add_priors
simply help to quickly obtain the relevant data matrices for posterior
simulation. Estimation can be done using algorithms that are usually implemented by the researchers themselves. But once posterior draws are obtained, bvartools can assist throughout the following steps of the analysis. In this context the main contributions of the
package are:
- Functions bvar and bvec collect the output of a Gibbs sampler in standardised objects, which can be used for subsequent steps in an analysis.
- Functions predict, irf and fevd for forecasting, impulse response analysis and forecast error variance decomposition, respectively, use the output of bvar and, hence, researchers do not have to implement them themselves and can save time.
- Computationally intensive functions make use of the RcppArmadillo package of Eddelbuettel and Sanderson (2014).2 This decreases calculation time and makes the code less complex and, thus, less prone to mistakes.

If researchers are willing to rely on the model generation and evaluation functions of bvartools, the only remaining step is to illustrate how user-specific algorithms can be combined with the functional framework of the package. This is shown in the remainder of this introduction.
These steps are exactly the same as described above. Thus, the following Gibbs sampler starts from the object model_with_priors generated above.
# Reset random number generator for reproducibility
set.seed(1234567)
# Get data matrices
y <- t(model_with_priors$data$Y)
x <- t(model_with_priors$data$Z)
tt <- ncol(y) # Number of observations
k <- nrow(y) # Number of endogenous variables
m <- k * nrow(x) # Number of estimated coefficients
# Priors for coefficients
a_mu_prior <- model_with_priors$priors$coefficients$mu # Prior means
a_v_i_prior <- model_with_priors$priors$coefficients$v_i # Prior precisions
# Priors for error variance-covariance matrix
u_sigma_df_prior <- model_with_priors$priors$sigma$df # Prior degrees of freedom
u_sigma_scale_prior <- model_with_priors$priors$sigma$scale # Prior covariance matrix
u_sigma_df_post <- tt + u_sigma_df_prior # Posterior degrees of freedom
# Initial values for variance-covariance matrix
u_sigma <- diag(.00001, k)
u_sigma_i <- solve(u_sigma)
# Number of iterations of the Gibbs sampler
iterations <- model_with_priors$model$iterations
# Number of burn-in draws
burnin <- model_with_priors$model$burnin
# Total number of draws
draws <- iterations + burnin
# Storage for posterior draws
draws_a <- matrix(NA, m, iterations)
draws_sigma <- matrix(NA, k^2, iterations)
# Start Gibbs sampler
for (draw in 1:draws) {
  # Draw conditional mean parameters
  a <- post_normal(y, x, u_sigma_i, a_mu_prior, a_v_i_prior)
  # Draw variance-covariance matrix
  u <- y - matrix(a, k) %*% x # Obtain residuals
  u_sigma_scale_post <- solve(u_sigma_scale_prior + tcrossprod(u))
  u_sigma_i <- matrix(rWishart(1, u_sigma_df_post, u_sigma_scale_post)[,, 1], k)
  # Store draws
  if (draw > burnin) {
    draws_a[, draw - burnin] <- a
    draws_sigma[, draw - burnin] <- solve(u_sigma_i) # Invert Sigma_i to obtain Sigma
  }
}
bvar objects

The bvar
function can be used to collect relevant output
of the Gibbs sampler into a standardised object, which can be used by
functions such as predict
to obtain forecasts or
irf
for impulse response analysis.
bvar_est_two <- bvar(y = model_with_priors$data$Y, # matrix of endogenous variables
                     x = model_with_priors$data$Z, # matrix of regressors
                     A = draws_a[1:18, ],          # draws of the VAR coefficients (k * k * p = 18 rows)
                     C = draws_a[19:21, ],         # draws of the intercept terms
                     Sigma = draws_sigma)          # draws of the error variance-covariance matrix
Since function bvar returns an object of the same class as the output of draw_posterior, the calculation of summary statistics, forecasts, impulse responses and forecast error variance decompositions is performed as described above.
summary(bvar_est_two)
#>
#> Bayesian VAR model with p = 2
#>
#> Model:
#>
#> y ~ invest.01 + income.01 + cons.01 + invest.02 + income.02 + cons.02 + const
#>
#> Variable: invest
#>
#> Mean SD Naive SD Time-series SD 2.5% 50% 97.5%
#> invest.01 -0.3171 0.1287 0.001820 0.001781 -0.5727 -0.3161 -0.06312
#> income.01 0.1465 0.5596 0.007914 0.007914 -0.9499 0.1503 1.24997
#> cons.01 0.9576 0.6908 0.009770 0.009770 -0.3801 0.9519 2.30291
#> invest.02 -0.1599 0.1263 0.001787 0.001787 -0.4050 -0.1609 0.08937
#> income.02 0.1128 0.5458 0.007719 0.007719 -0.9591 0.1133 1.18511
#> cons.02 0.9202 0.6874 0.009721 0.009721 -0.4190 0.9364 2.28972
#> const -1.6451 1.7712 0.025049 0.025049 -5.1131 -1.6536 1.82543
#>
#> Variable: income
#>
#> Mean SD Naive SD Time-series SD 2.5% 50% 97.5%
#> invest.01 0.04373 0.03269 0.0004623 0.0004623 -0.02191 0.04371 0.1069
#> income.01 -0.15152 0.14307 0.0020233 0.0019753 -0.43908 -0.14968 0.1253
#> cons.01 0.28606 0.17319 0.0024492 0.0024492 -0.04684 0.28727 0.6285
#> invest.02 0.05001 0.03239 0.0004580 0.0004580 -0.01478 0.04986 0.1129
#> income.02 0.02069 0.13831 0.0019559 0.0019559 -0.24879 0.01900 0.2964
#> cons.02 -0.01443 0.17287 0.0024448 0.0024448 -0.34814 -0.01354 0.3203
#> const 1.58537 0.45301 0.0064065 0.0064065 0.70491 1.58560 2.4734
#>
#> Variable: cons
#>
#> Mean SD Naive SD Time-series SD 2.5% 50%
#> invest.01 -0.003013 0.02659 0.0003761 0.0003761 -0.056220 -0.002736
#> income.01 0.223180 0.11491 0.0016250 0.0016250 0.001509 0.224370
#> cons.01 -0.262224 0.13898 0.0019655 0.0020287 -0.538874 -0.257616
#> invest.02 0.034337 0.02588 0.0003660 0.0003660 -0.018436 0.034896
#> income.02 0.355342 0.11217 0.0015863 0.0015863 0.133435 0.357047
#> cons.02 -0.025449 0.13903 0.0019662 0.0019662 -0.297178 -0.025842
#> const 1.297327 0.36338 0.0051389 0.0051389 0.607255 1.287345
#> 97.5%
#> invest.01 0.049540
#> income.01 0.448329
#> cons.01 0.002043
#> invest.02 0.084839
#> income.02 0.572082
#> cons.02 0.254113
#> const 2.013921
#>
#> Variance-covariance matrix:
#>
#> Mean SD Naive SD Time-series SD 2.5% 50% 97.5%
#> invest_invest 22.3994 4.0777 0.057668 0.065232 15.9104 21.8679 31.819
#> invest_income 0.7388 0.7260 0.010267 0.011516 -0.6131 0.7156 2.268
#> invest_cons 1.2950 0.5977 0.008452 0.009363 0.2147 1.2703 2.534
#> income_invest 0.7388 0.7260 0.010267 0.011516 -0.6131 0.7156 2.268
#> income_income 1.4395 0.2677 0.003786 0.004205 1.0158 1.4042 2.057
#> income_cons 0.6422 0.1690 0.002391 0.002679 0.3537 0.6262 1.017
#> cons_invest 1.2950 0.5977 0.008452 0.009363 0.2147 1.2703 2.534
#> cons_income 0.6422 0.1690 0.002391 0.002679 0.3537 0.6262 1.017
#> cons_cons 0.9352 0.1678 0.002373 0.002649 0.6597 0.9183 1.318
Chan, J., Koop, G., Poirier, D. J., & Tobias, J. L. (2019). Bayesian econometric methods (2nd ed.). Cambridge: Cambridge University Press.
Eddelbuettel, D., & Sanderson C. (2014). RcppArmadillo: Accelerating R with high-performance C++ linear algebra. Computational Statistics and Data Analysis, 71, 1054-1063. https://doi.org/10.1016/j.csda.2013.02.005
George, E. I., Sun, D., & Ni, S. (2008). Bayesian stochastic search for VAR model restrictions. Journal of Econometrics, 142(1), 553-580. https://doi.org/10.1016/j.jeconom.2007.08.017
Kim, S., Shephard, N., & Chib, S. (1998). Stochastic volatility: Likelihood inference and comparison with ARCH models. Review of Economic Studies, 65(3), 361-396.
Koop, G., León-González, R., & Strachan, R. W. (2010). Efficient posterior simulation for cointegrated models with priors on the cointegration space. Econometric Reviews, 29(2), 224-242. https://doi.org/10.1080/07474930903382208
Koop, G., León-González, R., & Strachan, R. W. (2011). Bayesian inference in a time varying cointegration model. Journal of Econometrics, 165(2), 210-220. https://doi.org/10.1016/j.jeconom.2011.07.007
Koop, G., Pesaran, M. H., & Potter, S. M. (1996). Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1), 119-147. https://doi.org/10.1016/0304-4076(95)01753-4
Korobilis, D. (2013). VAR forecasting using Bayesian variable selection. Journal of Applied Econometrics, 28(2), 204-230. https://doi.org/10.1002/jae.1271
Lütkepohl, H. (2006). New introduction to multiple time series analysis (2nd ed.). Berlin: Springer.
Pesaran, H. H., & Shin, Y. (1998). Generalized impulse response analysis in linear multivariate models. Economics Letters, 58(1), 17-29. https://doi.org/10.1016/S0165-1765(97)00214-0
Sanderson, C., & Curtin, R. (2016). Armadillo: a template-based C++ library for linear algebra. Journal of Open Source Software, 1(2), 26. https://doi.org/10.21105/joss.00026
1. Further examples about the use of the bvartools package are available at https://www.r-econometrics.com/timeseriesintro/.
2. RcppArmadillo is the Rcpp bridge to the open source ‘Armadillo’ library of Sanderson and Curtin (2016).