The wARMASVp package provides closed-form estimation, simulation, hypothesis testing, filtering, and forecasting for higher-order stochastic volatility SV(p) models. It supports Gaussian, Student-t, and Generalized Error Distribution (GED) innovations, with optional leverage effects.
The stochastic volatility model of order \(p\) is:
\[y_t = \sigma_y \exp(w_t / 2)\, z_t\] \[w_t = \phi_1 w_{t-1} + \cdots + \phi_p w_{t-p} + \sigma_v v_t\]
where \(z_t\) is an i.i.d. innovation (Gaussian, Student-t, or GED) and \(v_t \sim N(0,1)\) drives the log-volatility.
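The data-generating process above can be sketched directly in base R. This is an illustrative SV(1) simulator written from the two equations, not the package's `sim_svp()` (which handles the general SV(p) case and the other error distributions); the function name and `burn` argument are my own.

```r
# Minimal base-R sketch of the SV(1) process:
#   y_t = sigy * exp(w_t / 2) * z_t,   w_t = phi * w_{t-1} + sigv * v_t
sim_sv1 <- function(n, phi = 0.95, sigy = 1, sigv = 0.3, burn = 200) {
  m <- n + burn
  w <- numeric(m)
  v <- rnorm(m)                       # v_t ~ N(0, 1) drives log-volatility
  for (t in 2:m) w[t] <- phi * w[t - 1] + sigv * v[t]
  z <- rnorm(m)                       # Gaussian innovations z_t
  y <- sigy * exp(w / 2) * z          # observation equation
  list(y = tail(y, n), w = tail(w, n))   # drop the burn-in
}

set.seed(1)
sim0 <- sim_sv1(1000)
```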
library(wARMASVp)
set.seed(123)
# Simulate
sim <- sim_svp(2000, phi = 0.95, sigy = 1, sigv = 0.3)
y <- sim$y
# Estimate
fit <- svp(y, p = 1, J = 10)
summary(fit)
#>
#> SV(1) Model - W-ARMA-SV Estimation
#> --------------------------------------------------
#> Sample size: 2000
#> Winsorizing parameter J: 10
#> --------------------------------------------------
#> Parameter estimates:
#>
#> Parameter Estimate
#> phi_1 0.884123
#> sigma_y 1.023106
#> sigma_v 0.284464

y2 <- sim_svp(2000, phi = c(0.20, 0.63), sigy = 1, sigv = 0.5)$y
fit2 <- svp(y2, p = 2, J = 10)
summary(fit2)
#>
#> SV(2) Model - W-ARMA-SV Estimation
#> --------------------------------------------------
#> Sample size: 2000
#> Winsorizing parameter J: 10
#> --------------------------------------------------
#> Parameter estimates:
#>
#> Parameter Estimate
#> phi_1 0.674179
#> phi_2 0.164600
#> sigma_y 1.027829
#> sigma_v 0.394262

yt <- sim_svp(2000, phi = 0.90, sigy = 1, sigv = 0.3,
              errorType = "Student-t", nu = 5)$y
fit_t <- svp(yt, p = 1, errorType = "Student-t")
summary(fit_t)
#>
#> SV(1) Model with Student-t Errors
#> --------------------------------------------------
#> Sample size: 2000
#> Winsorizing parameter J: 10
#> --------------------------------------------------
#> Parameter estimates:
#>
#> Parameter Estimate
#> phi_1 0.776965
#> sigma_y 1.092502
#> sigma_v 0.533378
#> nu 9.808555

yg <- sim_svp(2000, phi = 0.90, sigy = 1, sigv = 0.3,
              errorType = "GED", nu = 1.5)$y
fit_ged <- svp(yg, p = 1, errorType = "GED")
summary(fit_ged)
#>
#> SV(1) Model with GED Errors
#> --------------------------------------------------
#> Sample size: 2000
#> Winsorizing parameter J: 10
#> --------------------------------------------------
#> Parameter estimates:
#>
#> Parameter Estimate
#> phi_1 0.817401
#> sigma_y 0.981632
#> sigma_v 0.289408
#> nu 1.324258

When return and volatility shocks are correlated (\(\rho \neq 0\)), use the leverage option:
sim_lev <- sim_svp(2000, phi = 0.95, sigy = 1, sigv = 0.3,
                   leverage = TRUE, rho = -0.5)
fit_lev <- svp(sim_lev$y, p = 1, leverage = TRUE)
summary(fit_lev)
#>
#> SVL(1) Model - W-ARMA-SV Estimation
#> --------------------------------------------------
#> Sample size: 2000
#> Winsorizing parameter J: 10
#> Leverage correlation type: pearson
#> --------------------------------------------------
#> Parameter estimates:
#>
#> Parameter Estimate
#> phi_1 0.876343
#> sigma_y 1.036191
#> sigma_v 0.465269
#> rho -0.456356
#>
#> gamma_tilde: 1.750608

Leverage is supported for all three distributions:
sim_lev_t <- sim_svp(2000, phi = 0.90, sigy = 1, sigv = 0.3,
                     errorType = "Student-t", nu = 5,
                     leverage = TRUE, rho = -0.5)
fit_lev_t <- svp(sim_lev_t$y, p = 1, errorType = "Student-t", leverage = TRUE)
summary(fit_lev_t)
#>
#> SVL(1) Model with Student-t Errors
#> --------------------------------------------------
#> Sample size: 2000
#> Winsorizing parameter J: 10
#> Leverage correlation type: pearson
#> --------------------------------------------------
#> Parameter estimates:
#>
#> Parameter Estimate
#> phi_1 0.733567
#> sigma_y 1.028261
#> sigma_v 0.608399
#> nu 10.792081
#> rho -0.372807
#>
#> gamma_tilde: 1.389279

The package provides Local Monte Carlo (LMC) and Maximized Monte Carlo (MMC) tests based on Dufour (2006).
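The exact Monte Carlo p-value in Dufour's (2006) construction simulates N statistics under the null and ranks the observed statistic among them: p = (1 + #{S_i ≥ S_0}) / (N + 1). A generic base-R sketch of this idea (the function name and the chi-squared stand-in null distribution are illustrative, not the package's internals):

```r
# Generic right-tailed Monte Carlo p-value (Dufour, 2006):
# rank the observed statistic S0 among N statistics simulated under H0.
mc_pvalue <- function(S0, S_sim) {
  N <- length(S_sim)
  (1 + sum(S_sim >= S0)) / (N + 1)
}

set.seed(42)
S_sim <- rchisq(49, df = 1)     # stand-in for statistics simulated under H0
mc_pvalue(2.5, S_sim)
```

With N = 49 replications the attainable p-values are multiples of 1/50, which is why the printed p-values below land on values like 0.0600 and 0.9800.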
Test whether SV(1) is sufficient versus SV(2):
y_test <- sim_svp(2000, phi = 0.95, sigy = 1, sigv = 0.3)$y
# H0: SV(1) vs H1: SV(2) — should not reject
test_ar <- lmc_ar(y_test, p_null = 1, p_alt = 2, N = 49)
print(test_ar)
#> LMC AR Order (p0=1 vs p=2) Test
#> ----------------------------------------
#> H0: phi_2 = 0
#> Test statistic (LR): 907.3842
#> p-value: 0.0600
#> MC replications: 49

Test for heavy tails against a specific null value of the tail parameter:
# Test H0: nu = 10 (mild tails) on Student-t data with true nu = 5
test_t <- lmc_t(yt, nu_null = 10, N = 49, Amat = "Weighted")
#> Warning in .svp_t(y, p, J, del, wDecay, logNu, sigvMethod, winsorize_eps):
#> Estimated nu at upper boundary (500); tails indistinguishable from Gaussian.
#> Warning in .svp_t(y, p, J, del, wDecay, logNu, sigvMethod, winsorize_eps):
#> Estimated nu at lower boundary (2.01); extremely heavy tails.
#> [upper-boundary warning repeated for the remaining MC replications]
print(test_t)
#> LMC Student-t Test
#> ----------------------------------------
#> H0: nu = 10
#> Test statistic (LR): 0.0013
#> p-value: 0.9800
#> MC replications: 49
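A one-sided variant orders the Monte Carlo replications by a signed root of the LR statistic rather than by LR itself. The construction sketched here is an assumption inferred from the printed S_T value (S_T ≈ -√LR when the estimate falls below the null), not the package's verified internals:

```r
# Signed root of an LR statistic for a directional (one-sided) alternative.
# Sketch only; the package's internal construction may differ in detail.
signed_root <- function(lr, est, null) sign(est - null) * sqrt(lr)

# Using values from the two-sided test output above (nu_hat ~ 9.81, nu_null = 10):
signed_root(lr = 0.0013, est = 9.81, null = 10)   # negative: estimate below the null
```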
# Directional test: H1: nu < 10 (heavier tails than null)
test_t_dir <- lmc_t(yt, nu_null = 10, N = 49, Amat = "Weighted", direction = "less")
#> Warning in .svp_t(y, p, J, del, wDecay, logNu, sigvMethod, winsorize_eps):
#> Estimated nu at upper boundary (500); tails indistinguishable from Gaussian.
#> [warning repeated for the remaining MC replications]
print(test_t_dir)
#> LMC Student-t Test [less]
#> ----------------------------------------
#> H0: nu = 10
#> Test statistic (LR): 0.0013
#> Signed root (S_T): -0.0364
#> p-value: 0.2800
#> MC replications: 49

In addition to the LMC/MMC pairwise AR-order test above, the package can select the SV(p) lag order by information criteria: svp_IC() computes the criteria for a single fitted model, while svp_AR_order() sweeps over p = 1, ..., pmax and reports the argmin for each criterion.
fit_ic <- svp(y_test, p = 2, J = 10)
svp_IC(fit_ic)
#> BIC_Kalman AIC_Kalman BIC_HR AIC_HR
#> 3164.128 3136.173

sel <- svp_AR_order(y_test, pmax = 4, J = 10)
sel$argmin
#> BIC_Kalman AIC_Kalman BIC_HR AIC_HR
#> 1 1 1 3

Four criteria are returned by default, spanning two estimation families and two penalty philosophies: BIC_Kalman / AIC_Kalman use the QML log-likelihood from the Gaussian mixture Kalman filter, while BIC_HR / AIC_HR use a two-stage Hannan-Rissanen ARMA(p, p) residual variance. Additional criteria (AICc_Kalman, BIC_Whittle, and the Yule-Walker variants) are available opt-in via the criteria argument. Both functions read errorType and leverage from the fitted model, so heavy-tailed and leverage specifications are handled automatically. See Ahsan, Dufour, and Rodriguez-Rondon (2026) for the theoretical motivation and consistency simulations.
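The Kalman-based criteria appear to follow the standard AIC/BIC definitions. A generic sketch of those formulas (assuming k = p + 2 free parameters for a Gaussian SV(p) fit; svp_IC() may differ in its likelihood and penalty details):

```r
# Standard information criteria from a maximized log-likelihood.
aic <- function(loglik, k)    -2 * loglik + 2 * k
bic <- function(loglik, k, n) -2 * loglik + k * log(n)

# With loglik ~ -2387.91, k = 4 (phi_1, phi_2, sigma_y, sigma_v), n = 2000,
# these approximately reproduce the AIC_Kalman / BIC_Kalman values printed above.
aic(loglik = -2387.91, k = 4)
bic(loglik = -2387.91, k = 4, n = 2000)
```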
Three filtering methods are available via filter_svp(), which takes a fitted model:
# Fit model
fit_filt <- svp(y, p = 1, J = 10)
# GMKF (recommended)
filt <- filter_svp(fit_filt, method = "mixture")
plot(filt)

Multi-step-ahead volatility forecasts are produced via Kalman filtering; pass a fitted model object from svp().
Output scales can be chosen: "log-variance" (default),
"variance", or "volatility". All three are
always computed and stored.
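The three scales are deterministic transforms of one another, following the observation equation: variance_t = sigma_y^2 exp(w_t) and volatility_t is its square root. A small sketch of the conversions (assuming "log-variance" refers to the latent w_t; function names are illustrative, since the package stores all three scales itself):

```r
# Convert a log-variance path w_t to the variance and volatility scales,
# per the model: Var(y_t | w_t) = sigma_y^2 * exp(w_t).
logvar_to_variance <- function(w, sigy) sigy^2 * exp(w)
variance_to_vol    <- function(v) sqrt(v)

w   <- c(-0.2, 0.0, 0.3)                 # example log-variance forecasts
vv  <- logvar_to_variance(w, sigy = 1)
variance_to_vol(vv)
```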