This document provides worked examples of Kalman filtering and smoothing using the 'fkf.SP' and 'fks.SP' functions of the 'FKF.SP' package. Sequential processing of the Kalman filter algorithm can substantially decrease computation time. However, it requires the additional assumption that the errors of the measurement equation (i.e. the observations) are independent. This assumption allows filtering to be performed through a sequential processing method (i.e. a univariate treatment of the multivariate process), increasing computational efficiency. The computation time of sequential processing has been empirically shown to increase linearly, rather than exponentially, with the dimensionality of the observation matrix 'yt' (Aspinall et al., 2022, p. 104). Sequential processing is therefore a recommended method when many observations are being filtered, when NA's are present in the observation matrix, and when the assumption of independent observation errors can be adopted.
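To make the univariate treatment concrete, below is a minimal R sketch of the sequential measurement update for a single time step, under the assumption of a diagonal measurement covariance. The function name and arguments are hypothetical and purely illustrative; 'fkf.SP' performs these recursions in compiled C code. Note that each update involves only scalar divisions, avoiding the matrix inversion of the standard multivariate filter, and missing observations are skipped naturally.
## Illustrative sketch only - not the package's implementation.
## 'a' is the m-dimensional predicted state, 'P' its covariance, 'yt_col' one
## column of 'yt', 'Zt' the measurement matrix and 'GGt_diag' the vector of
## (assumed independent) measurement error variances.
sequential_update <- function(a, P, yt_col, Zt, GGt_diag) {
  for (i in seq_along(yt_col)) {
    if (is.na(yt_col[i])) next                        # NA's are simply skipped
    z <- Zt[i, , drop = FALSE]                        # 1 x m row of the measurement matrix
    v <- yt_col[i] - as.numeric(z %*% a)              # scalar innovation
    f <- as.numeric(z %*% P %*% t(z)) + GGt_diag[i]   # scalar innovation variance
    k <- (P %*% t(z)) / f                             # m x 1 Kalman gain
    a <- a + as.numeric(k) * v                        # state update (scalar arithmetic only)
    P <- P - k %*% z %*% P                            # covariance update
  }
  list(att = a, Ptt = P)
}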
The 'fkf' function of the package 'FKF' (Fast Kalman Filter) implements the traditional (multivariate) Kalman filter algorithm and is designed to maximize the computational efficiency of the traditional filtering process. This vignette compares computation times between these two Kalman filter algorithms.
This vignette provides five worked examples comparing the computational efficiency of the 'fkf' and 'fkf.SP' functions for maximum likelihood estimation (MLE). The first three examples were first presented in the associated vignette of the 'FKF' package, with the fourth being unique to this vignette. The fifth example provides a worked example of Kalman filtering and then smoothing a dataset. As well as demonstrating the decrease in processing time achieved by the 'fkf.SP' function, this vignette presents and explains the erroneous difference in log-likelihood values returned by the 'fkf' and 'fkf.SP' functions when there are missing observations (i.e. NA's are present within argument 'yt').
## The packages 'FKF.SP', 'FKF', 'stats' and 'NFCP' are required for this vignette:
library(FKF.SP)
library(FKF)
library(stats)
library(NFCP)
Autoregressive moving average (ARMA) models can be estimated through Kalman filtering. See also help(makeARIMA) and help(KalmanRun).
Step 1 - Sample from an ARMA(2, 1) process through the 'stats' package to simulate observations:
# Set constants:
## Length of series
n <- 10000
## AR parameters
AR <- c(ar1 = 0.6, ar2 = 0.2, ma1 = -0.2, sigma = sqrt(0.2))
# Generate observations:
set.seed(1)
a <- stats::arima.sim(model = list(ar = AR[c("ar1", "ar2")], ma = AR["ma1"]), n = n,
                      innov = rnorm(n) * AR["sigma"])
Step 2 - Create a state space representation from the four ARMA parameters:
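The function below uses the standard state space form of an ARMA(2, 1) process (a sketch; equivalent representations exist):
\[ \alpha_{t+1} = \begin{bmatrix} ar_1 & 1 \\ ar_2 & 0 \end{bmatrix} \alpha_t + \sigma \begin{bmatrix} 1 \\ ma_1 \end{bmatrix} \eta_t, \qquad y_t = \begin{bmatrix} 1 & 0 \end{bmatrix} \alpha_t, \qquad \eta_t \sim N(0, 1) \]
so that 'Tt' is the transition matrix, 'HHt' \(= H H'\) is the transition covariance, 'Zt' is the measurement matrix, and 'GGt' \(= 0\) because the measurement equation is noiseless.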
arma21ss <- function(ar1, ar2, ma1, sigma) {
  Tt <- matrix(c(ar1, ar2, 1, 0), ncol = 2)
  Zt <- matrix(c(1, 0), ncol = 2)
  ct <- matrix(0)
  dt <- matrix(0, nrow = 2)
  GGt <- matrix(0)
  H <- matrix(c(1, ma1), nrow = 2) * sigma
  HHt <- H %*% t(H)
  a0 <- c(0, 0)
  ## Diffuse assumption
  P0 <- matrix(1e6, nrow = 2, ncol = 2)
  return(list(a0 = a0, P0 = P0, ct = ct, dt = dt, Zt = Zt, Tt = Tt, GGt = GGt,
              HHt = HHt))
}
Parameter estimation is performed through MLE, which involves optimizing the log-likelihood returned by the Kalman filter through the 'optim' function. Many other optimization routines are available within R (such as the package 'rgenoud') for more difficult problems, for example when the log-likelihood is discontinuous or multiple state variables are considered. Utilizing both the Kalman filter and Kalman smoother to estimate parameters through expectation-maximization (EM) is another alternative approach.
# The objective function passed to 'optim'
# ('optim' minimizes functions by default, so the negative log-likelihood is returned)
objective <- function(theta, yt, SP) {
  param <- arma21ss(theta["ar1"], theta["ar2"], theta["ma1"], theta["sigma"])
  if (SP) {
    # Kalman filtering through the 'fkf.SP' function:
    ans <- -fkf.SP(a0 = param$a0, P0 = param$P0, dt = param$dt, ct = param$ct,
                   Tt = param$Tt, Zt = param$Zt, HHt = param$HHt, GGt = param$GGt,
                   yt = yt, verbose = TRUE)$logLik
  } else {
    # Kalman filtering through the 'fkf' function:
    ans <- -fkf(a0 = param$a0, P0 = param$P0, dt = param$dt, ct = param$ct,
                Tt = param$Tt, Zt = param$Zt, HHt = param$HHt, GGt = param$GGt,
                yt = yt)$logLik
  }
  return(ans)
}
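As an aside, any optimizer that accepts an objective function in this form can be substituted for 'optim'. Below is a minimal sketch using 'stats::nlminb' on the same objective; it is illustrative only and not part of the timing comparison below:
# Illustrative alternative optimizer (not part of the timing comparison):
nlminb_fit <- stats::nlminb(start = c(ar = c(0, 0), ma1 = 0, sigma = 1),
                            objective = objective, yt = rbind(a), SP = TRUE)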
Step 3 - Estimate parameters through MLE:
# This test estimates parameters through 'optim'.
# Please run the complete chunk for a fair comparison:
# Initial values:
theta <- c(ar = c(0, 0), ma1 = 0, sigma = 1)
### MLE through the 'fkf' function:
start <- Sys.time()
set.seed(1)
FKF_estimation <- optim(theta, objective, yt = rbind(a), hessian = TRUE, SP = FALSE)
FKF_runtime <- Sys.time() - start
### MLE through the 'fkf.SP' function:
start <- Sys.time()
set.seed(1)
FKF.SP_estimation <- optim(theta, objective, yt = rbind(a), hessian = TRUE, SP = TRUE)
FKF.SP_runtime <- Sys.time() - start
The MLE process returns identical estimated parameters for both functions:
print(rbind(FKF.SP = FKF.SP_estimation$par, FKF = FKF_estimation$par))
#> ar1 ar2 ma1 sigma
#> FKF.SP 0.5534615 0.2276404 -0.1413417 0.4525427
#> FKF 0.5534615 0.2276404 -0.1413417 0.4525427
As well as an identical function call count for both functions:
print(c(FKF.SP = FKF.SP_estimation$counts[1], FKF = FKF_estimation$counts[1]))
#> FKF.SP.function FKF.function
#> 265 265
Utilizing Sequential Processing, however, has decreased processing time:
print(c(FKF.SP = FKF.SP_runtime, FKF = FKF_runtime))
#> Time differences in secs
#> FKF.SP FKF
#> 1.019001 1.403578
Finally, for a variety of purposes, such as once the parameters of the system have been estimated, it can be valuable to evaluate the filtered state variables.
The filtered state variables and their filtered covariances are identical between 'FKF' and 'FKF.SP':
FKF.SP_parameters <- arma21ss(FKF.SP_estimation$par[1], FKF.SP_estimation$par[2],
                              FKF.SP_estimation$par[3], FKF.SP_estimation$par[4])
FKF_parameters <- arma21ss(FKF_estimation$par[1], FKF_estimation$par[2],
                           FKF_estimation$par[3], FKF_estimation$par[4])
FKF_output <- FKF::fkf(FKF_parameters$a0, FKF_parameters$P0, FKF_parameters$dt,
                       FKF_parameters$ct, FKF_parameters$Tt, FKF_parameters$Zt,
                       FKF_parameters$HHt, FKF_parameters$GGt, rbind(a))
FKF.SP_output <- FKF.SP::fkf.SP(FKF.SP_parameters$a0, FKF.SP_parameters$P0, FKF.SP_parameters$dt,
                                FKF.SP_parameters$ct, FKF.SP_parameters$Tt, FKF.SP_parameters$Zt,
                                FKF.SP_parameters$HHt, FKF.SP_parameters$GGt, rbind(a),
                                verbose = TRUE)
print(head(t(rbind(
# FKF
FKF_output$att[1,],
# FKF.SP
FKF.SP_output$att[1,]
))))
#> [,1] [,2]
#> [1,] -0.10747402 -0.10747402
#> [2,] 0.03851773 0.03851773
#> [3,] -0.14022187 -0.14022187
#> [4,] -0.17502093 -0.17502093
#> [5,] 0.20129593 0.20129593
#> [6,] 0.27238242 0.27238242
Under the condition that the observations (i.e. the measurement errors) are independent, which holds in many cases, it is therefore significantly beneficial to adopt sequential processing.
This example presents the difference in computation time of the 'fkf.SP' and 'fkf' functions applied to the well-known Nile dataset.
It also shows the difference in log-likelihood values returned by the two functions when NA's are present within the observations.
## Transition equation:
## alpha[t+1] = alpha[t] + eta[t], eta[t] ~ N(0, HHt)
## Measurement equation:
## y[t] = alpha[t] + eps[t], eps[t] ~ N(0, GGt)
## Complete Nile Data - no NA's:
y_complete <- y_incomplete <- Nile
## Incomplete Nile Data - two NA's are present:
y_incomplete[c(3, 10)] <- NA
## Set constant parameters:
dt <- ct <- matrix(0)
Zt <- Tt <- matrix(1)
a0 <- y_incomplete[1] # Estimation of the first year flow
P0 <- matrix(100) # Variance of 'a0'
# 'P0' here is classified as a 'diffuse' initial state: a large estimate of the variance
# of the initial state variable is used when no prior information regarding the state
# variance is known. This is a common approach when performing Kalman filtering, and it
# has been empirically shown to have little influence on estimated parameters, as the
# influence of the initial state on subsequent estimates decays quickly. This is highly
# dependent, however, on the observations filtered, and caution is advised.
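As a quick, illustrative check of this point, the Nile data can be filtered under two very different values of 'P0'; the filtered state estimates agree closely after a short burn-in. The variance values 'HHt_check' and 'GGt_check' below are arbitrary, chosen only for this comparison:
## Illustrative check: sensitivity of filtered states to the diffuse prior 'P0'
## ('HHt_check' and 'GGt_check' are arbitrary values chosen for this check only)
HHt_check <- matrix(1300)
GGt_check <- matrix(15000)
filter_with_P0 <- function(P0_val) {
  fkf.SP(a0 = a0, P0 = matrix(P0_val), dt = dt, ct = ct, Tt = Tt, Zt = Zt,
         HHt = HHt_check, GGt = GGt_check, yt = rbind(y_complete), verbose = TRUE)$att
}
## Maximum absolute difference in filtered states after dropping the first 5 time steps:
max(abs(filter_with_P0(100)[, -(1:5)] - filter_with_P0(1e7)[, -(1:5)]))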
## Parameter estimation - maximum likelihood estimation:
Nile_MLE <- function(yt, SP){
  ## Unknown parameters initial estimates:
  GGt <- HHt <- var(yt, na.rm = TRUE) * .5
  set.seed(1)
  if (SP) {
    # Kalman filtering through the 'fkf.SP' function:
    return(optim(c(HHt = HHt, GGt = GGt),
                 fn = function(par, ...)
                   -fkf.SP(HHt = matrix(par[1]), GGt = matrix(par[2]), ...)$logLik,
                 yt = rbind(yt), a0 = a0, P0 = P0, dt = dt, ct = ct,
                 Zt = Zt, Tt = Tt, verbose = TRUE))
  } else {
    # Kalman filtering through the 'fkf' function:
    return(optim(c(HHt = HHt, GGt = GGt),
                 fn = function(par, ...)
                   -fkf(HHt = matrix(par[1]), GGt = matrix(par[2]), ...)$logLik,
                 yt = rbind(yt), a0 = a0, P0 = P0, dt = dt, ct = ct,
                 Zt = Zt, Tt = Tt))
  }
}
Performing parameter estimation using complete data, the 'fkf' and 'fkf.SP' functions return identical results:
fkf.SP_MLE_complete <- Nile_MLE(y_complete, SP = TRUE)
fkf_MLE_complete <- Nile_MLE(y_complete, SP = FALSE)
'fkf.SP':
print(fkf.SP_MLE_complete[1:3])
#> $par
#> HHt GGt
#> 1300.777 15247.773
#>
#> $value
#> [1] 637.626
#>
#> $counts
#> function gradient
#> 57 NA
'fkf':
print(fkf_MLE_complete[1:3])
#> $par
#> HHt GGt
#> 1300.777 15247.773
#>
#> $value
#> [1] 637.626
#>
#> $counts
#> function gradient
#> 57 NA
Performing parameter estimation using incomplete data returns identical estimated parameters, but different log-likelihood values:
fkf.SP_MLE_incomplete <- Nile_MLE(y_incomplete, SP = TRUE)
fkf_MLE_incomplete <- Nile_MLE(y_incomplete, SP = FALSE)
'fkf.SP':
print(fkf.SP_MLE_incomplete[1:3])
#> $par
#> HHt GGt
#> 1385.066 15124.131
#>
#> $value
#> [1] 625.1676
#>
#> $counts
#> function gradient
#> 53 NA
'fkf':
print(fkf_MLE_incomplete[1:3])
#> $par
#> HHt GGt
#> 1385.066 15124.131
#>
#> $value
#> [1] 627.0055
#>
#> $counts
#> function gradient
#> 53 NA
The difference in log-likelihood values is 1.8378771, which is equal to:
# Number of NA values:
NA_values <- sum(is.na(y_incomplete))
print(0.5 * NA_values * log(2 * pi))
#> [1] 1.837877
The log-likelihood score for the Kalman filter is given by:
\[ -\frac{1}{2}\left(n \times d \times \log(2\pi)\right) - \frac{1}{2}\sum_{t=1}^{n}\left(\log|F_t| + v_t' F_t^{-1} v_t\right) \] where \(n\) is the number of discrete time steps (i.e. the number of columns of object 'yt') and \(d\) is the number of observations at each time point (i.e. the number of rows of object 'yt'). \(v_t\) and \(F_t\) are the measurement error (innovation) and its covariance matrix at time \(t\), respectively. The 'fkf' function instantiates its log-likelihood score by calculating \(-0.5 \times n \times d \times \log(2\pi)\). When there are missing observations, however, \(d\) should instead become \(d_t\), the number of non-missing observations at time \(t\), where \(d_t \leq d \;\forall\; t\). With the two NA's above, the instantiated term should instead be \(-0.5\left((n \times d) - 2\right)\log(2\pi)\), which explains the difference in log-likelihood scores. The 'fkf' function, in this case, therefore instantiates the log-likelihood contribution of two observations that are never actually observed.
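This can be verified directly by evaluating both filters at the same estimated parameters (a quick illustrative check; the difference should equal \(0.5 \times 2 \times \log(2\pi)\)):
## Illustrative check: both filters at the parameters estimated from incomplete data
par_hat <- fkf.SP_MLE_incomplete$par
LL_SP <- fkf.SP(a0 = a0, P0 = P0, dt = dt, ct = ct, Tt = Tt, Zt = Zt,
                HHt = matrix(par_hat[1]), GGt = matrix(par_hat[2]),
                yt = rbind(y_incomplete), verbose = TRUE)$logLik
LL_FKF <- fkf(a0 = a0, P0 = P0, dt = dt, ct = ct, Tt = Tt, Zt = Zt,
              HHt = matrix(par_hat[1]), GGt = matrix(par_hat[2]),
              yt = rbind(y_incomplete))$logLik
## Should be (approximately) zero:
(LL_SP - LL_FKF) - 0.5 * sum(is.na(y_incomplete)) * log(2 * pi)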
# This test uses the parameters estimated from complete data.
# Please run the complete chunk for a fair comparison:
# 'fkf':
set.seed(1)
start <- Sys.time()
for (i in 1:1e4) fkf(a0, P0, dt, ct, Tt, Zt, HHt = matrix(fkf_MLE_complete$par[1]),
                     GGt = matrix(fkf_MLE_complete$par[2]), yt = rbind(y_complete))
FKF_runtime <- Sys.time() - start
# 'fkf.SP':
set.seed(1)
start <- Sys.time()
for (i in 1:1e4) fkf.SP(a0, P0, dt, ct, Tt, Zt, HHt = matrix(fkf.SP_MLE_complete$par[1]),
                        GGt = matrix(fkf.SP_MLE_complete$par[2]), yt = rbind(y_complete),
                        verbose = TRUE)
fkf.SP_runtime <- Sys.time() - start
print(c(FKF.SP = fkf.SP_runtime, FKF = FKF_runtime))
#> Time differences in secs
#> FKF.SP FKF
#> 0.5973079 0.8233068
Utilizing Sequential Processing has decreased processing time.
# This test estimates parameters 10 times through 'optim'.
# Please run the complete chunk for a fair comparison:
## Transition equation:
## alpha[t+1] = alpha[t] + eta[t], eta[t] ~ N(0, HHt)
## Measurement equation:
## y[t] = alpha[t] + eps[t], eps[t] ~ N(0, GGt)
## tree-ring widths in dimensionless units
y <- treering
## Set constant parameters:
dt <- ct <- matrix(0)
Zt <- Tt <- matrix(1)
a0 <- y[1] # Estimation of the first width
P0 <- matrix(100) # Variance of 'a0'
## Time comparison - Estimate parameters 10 times:
### MLE through the 'fkf' function:
start <- Sys.time()
set.seed(1)
for (i in 1:10) fit_fkf <- optim(c(HHt = var(y, na.rm = TRUE) * .5,
                                   GGt = var(y, na.rm = TRUE) * .5),
                                 fn = function(par, ...)
                                   -fkf(HHt = array(par[1], c(1, 1, 1)),
                                        GGt = array(par[2], c(1, 1, 1)), ...)$logLik,
                                 yt = rbind(y), a0 = a0, P0 = P0, dt = dt, ct = ct,
                                 Zt = Zt, Tt = Tt)
run_time_FKF <- Sys.time() - start
### MLE through the 'fkf.SP' function:
start <- Sys.time()
set.seed(1)
for (i in 1:10) fit_fkf.SP <- optim(c(HHt = var(y, na.rm = TRUE) * .5,
                                      GGt = var(y, na.rm = TRUE) * .5),
                                    fn = function(par, ...)
                                      -fkf.SP(HHt = array(par[1], c(1, 1, 1)),
                                              GGt = matrix(par[2]), verbose = TRUE, ...)$logLik,
                                    yt = rbind(y), a0 = a0, P0 = P0, dt = dt, ct = ct,
                                    Zt = Zt, Tt = Tt)
run_time_FKF.SP <- Sys.time() - start
print(c(fkf.SP = run_time_FKF.SP, fkf = run_time_FKF))
#> Time differences in secs
#> fkf.SP fkf
#> 1.560650 1.910009
Utilizing Sequential Processing has decreased processing time.
Additionally, identical filtered covariances are returned:
## Filter tree ring data with estimated parameters using 'fkf':
fkf.obj <- fkf(a0, P0, dt, ct, Tt, Zt, HHt = array(fit_fkf$par[1], c(1, 1, 1)),
               GGt = array(fit_fkf$par[2], c(1, 1, 1)), yt = rbind(y))
## Filter tree ring data with estimated parameters using 'fkf.SP':
fkf.SP.obj <- fkf.SP(a0, P0, dt, ct, Tt, Zt, HHt = array(fit_fkf.SP$par[1], c(1, 1, 1)),
                     GGt = matrix(fit_fkf.SP$par[2]), yt = rbind(y), verbose = TRUE)
print(head(cbind(FKF = fkf.obj$Ptt[1,,], FKF.SP = fkf.SP.obj$Ptt[1,,])))
#> FKF FKF.SP
#> [1,] 0.08216834 0.08216834
#> [2,] 0.04122259 0.04122259
#> [3,] 0.02767374 0.02767374
#> [4,] 0.02097740 0.02097740
#> [5,] 0.01702170 0.01702170
#> [6,] 0.01443543 0.01443543
The Kalman filter can be used to fit stochastic models to time series of quoted commodity futures prices. The following example estimates the parameters of a random walk (i.e. geometric Brownian motion, GBM) model for crude oil through MLE. Quoted futures contracts are available in the 'NFCP' package; see the 'NFCP' documentation for more details on fitting commodity pricing models to term structure data.
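As a sketch of the model being estimated (under the assumptions used in the objective function below), the state is the log spot price \(x_t\), with transition and measurement equations:
\[ x_t = x_{t-1} + \left(\alpha - \tfrac{1}{2}\sigma^2\right)\Delta t + \eta_t, \qquad \eta_t \sim N(0, \sigma^2 \Delta t) \]
\[ \log F(t, T_i) = x_t + \alpha^{RN}(T_i - t) + \varepsilon_{i,t}, \qquad \varepsilon_{i,t} \sim N(0, ME_1^2) \]
where \(F(t, T_i)\) is the quoted price at time \(t\) of the futures contract maturing at \(T_i\), \(\alpha\) and \(\alpha^{RN}\) are drift parameters, and \(ME_1\) is the standard deviation of the measurement error, assumed common across contracts. These terms correspond to the arguments 'dt', 'HHt', 'ct' and 'GGt' constructed below.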
Step 1 - Develop the objective function:
yt <- t(log(NFCP::SS_oil$contracts))       # quoted log futures prices
delta_t <- NFCP::SS_oil$dt                 # discrete time step
## Time to maturity of quoted futures contracts:
TTM <- t(NFCP::SS_oil$contract_maturities)
a0 <- yt[1, 1]    # initial estimate
P0 <- matrix(100) # variance of 'a0'
## GBM Function
gbm_mle <- function(theta, SP){
  ct <- theta["alpha_rn"] * TTM
  dt <- (theta["alpha"] - 0.5 * theta["sigma"]^2) * delta_t
  Zt <- matrix(1, nrow(yt))
  HHt <- matrix(theta["sigma"]^2 * delta_t)
  Tt <- matrix(1)
  ## 'fkf.SP' returns only the log-likelihood value when 'verbose = FALSE',
  ## whilst 'fkf' returns a list of filtered values
  if (SP) {
    ## 'fkf.SP' requires a vector of the diagonal elements of the measurement error variances
    GGt <- rep(theta["ME_1"]^2, nrow(yt))
    logLik <- -fkf.SP(a0 = a0, P0 = P0, dt = dt, ct = ct, Tt = Tt, Zt = Zt,
                      HHt = HHt, GGt = GGt, yt = yt, verbose = TRUE)$logLik
  } else {
    ## 'fkf' instead requires a matrix of the measurement error variances
    GGt <- diag(theta["ME_1"]^2, nrow(yt))
    logLik <- -fkf(a0 = a0, P0 = P0, dt = dt, ct = ct, Tt = Tt, Zt = Zt,
                   HHt = HHt, GGt = GGt, yt = yt)$logLik
  }
  return(logLik)
}
Step 2 - Perform MLE:
# This test estimates parameters through 'optim'.
# Please run the complete chunk for a fair comparison:
# Initial estimates:
gbm_par <- c(alpha = 0, alpha_rn = 0.01, sigma = 0.1, ME_1 = 0.05)
### MLE through the 'fkf.SP' function:
set.seed(1)
start <- Sys.time()
fkf.SP.gbm <- optim(par = gbm_par, fn = gbm_mle, SP = TRUE)
fkf.SP_runtime <- Sys.time() - start
### MLE through the 'fkf' function:
set.seed(1)
start <- Sys.time()
fkf.gbm <- optim(par = gbm_par, fn = gbm_mle, SP = FALSE)
fkf_runtime <- Sys.time() - start
The presence of a large number of NA's in the observation matrix (i.e. object 'yt') has resulted in significantly different log-likelihood values from the two functions (see the Nile example above for more details):
print(rbind(FKF.SP = - fkf.SP.gbm$value, FKF = - fkf.gbm$value))
#> [,1]
#> FKF.SP 10221.345
#> FKF -4778.489
Regardless, the MLE process has returned nearly identical estimated parameters for both functions:
print(rbind(FKF.SP = fkf.SP.gbm$par, FKF = fkf.gbm$par))
#> alpha alpha_rn sigma ME_1
#> FKF.SP -0.02283278 0.001236720 0.2070780 0.03721549
#> FKF -0.02277886 0.001234892 0.2071917 0.03721757
As well as nearly identical function call counts for both functions:
print(c(FKF.SP = fkf.SP.gbm$counts[1], FKF = fkf.gbm$counts[1]))
#> FKF.SP.function FKF.function
#> 145 153
A sequential processing approach, however, has significantly decreased processing time:
print(c(FKF.SP = fkf.SP_runtime, FKF = fkf_runtime))
#> Time differences in secs
#> FKF.SP FKF
#> 0.2015769 3.7294381
Sequential processing is a significantly faster Kalman filtering approach for this particular example due to the large number of observations at each time point, the assumption that the measurement disturbances are independent, the large number of NA's observed as contracts expire or become available, and the significantly reduced dimensionality of argument 'GGt'.
Finally, the filtered values, which in this case correspond to the estimated log of the spot price, are identical between the two functions:
ct <- fkf.SP.gbm$par['alpha_rn'] * TTM
dt <- (fkf.SP.gbm$par['alpha'] - 0.5 * fkf.SP.gbm$par['sigma']^2) * delta_t
Zt <- matrix(1, nrow(yt))
HHt <- matrix(fkf.SP.gbm$par['sigma']^2 * delta_t)
Tt <- matrix(1)
GGt.SP <- rep(fkf.SP.gbm$par['ME_1']^2, nrow(yt))
GGt <- diag(fkf.SP.gbm$par['ME_1']^2, nrow(yt))
GBM_fkf <- fkf(a0 = a0, P0 = P0, dt = dt, ct = ct, Tt = Tt, Zt = Zt, HHt = HHt, GGt = GGt, yt = yt)
GBM_fkf.SP <- fkf.SP(a0 = a0, P0 = P0, dt = dt, ct = ct, Tt = Tt, Zt = Zt, HHt = HHt, GGt = GGt.SP, yt = yt, verbose = TRUE)
Filtered_values <- t(rbind(FKF = GBM_fkf$att, FKF.SP = GBM_fkf.SP$att))
colnames(Filtered_values) <- c("FKF", "FKF.SP")
print(head(Filtered_values))
#> FKF FKF.SP
#> [1,] 3.032519 3.032519
#> [2,] 2.979634 2.979634
#> [3,] 2.970764 2.970764
#> [4,] 2.966605 2.966605
#> [5,] 3.003469 3.003469
#> [6,] 3.007449 3.007449
Kalman smoothing is an algorithm that refines estimates of previous states, conditional on all filtered observations. This is useful when performing expectation-maximization (EM) approaches to parameter estimation. The 'FKF.SP' package allows for Kalman smoothing in two ways. The first is to simply set the argument 'smoothing = TRUE' when performing Kalman filtering through the 'fkf.SP' function. Alternatively, if a state space model has been filtered with the argument 'verbose = TRUE', the 'fks.SP' function may be applied to the returned object. The solution to Kalman smoothing through sequential processing is provided in the documentation of the 'fks.SP' function.
The 'fks.SP' function takes an object of class 'fkf.SP', which is returned whenever the 'fkf.SP' function is called with the argument 'verbose = TRUE'.
This worked example compares the relative speeds of filtering, and then smoothing, the Nile data, between the 'FKF' and 'FKF.SP' packages.
# This test compares processing speeds of function calls using a set number of iterations.
# Please run the complete chunk for a fair comparison:
## Transition equation:
## alpha[t+1] = alpha[t] + eta[t], eta[t] ~ N(0, HHt)
## Measurement equation:
## y[t] = alpha[t] + eps[t], eps[t] ~ N(0, GGt)
##Nile Data:
yt <- Nile
## Set constant parameters:
dt <- ct <- matrix(0)
Zt <- Tt <- matrix(1)
a0 <- yt[1] # Estimation of the first year flow
P0 <- matrix(100) # Variance of 'a0'
# Parameter estimation - maximum likelihood estimation:
# Unknown parameters initial estimates:
GGt <- HHt <- var(yt, na.rm = TRUE) * .5
HHt <- matrix(HHt)
GGt <- matrix(GGt)
yt <- rbind(yt)
# Run each function call 10,000 times to minimize variance:
N_iterations <- 1e4
## Kalman filtering, and then smoothing, through each approach:
# FKF:
set.seed(1)
FKF_start_time <- Sys.time()
for (i in 1:N_iterations) {
  # Filtering:
  FKF_Nile_filtered <- fkf(HHt = HHt, GGt = GGt, a0 = a0, P0 = P0, dt = dt, ct = ct,
                           Zt = Zt, Tt = Tt, yt = yt)
  # Smoothing:
  FKF_Nile_smoothed <- fks(FKF_Nile_filtered)
}
FKF_runtime <- difftime(Sys.time(), FKF_start_time, units = "secs")
# FKF.SP - Approach 1:
# Filtering, and then smoothing, using 'smoothing = TRUE':
set.seed(1)
FKF.SP_1_start_time <- Sys.time()
for (i in 1:N_iterations) {
  FKF.SP_1_Nile_filtered_smoothed <- fkf.SP(HHt = HHt, GGt = GGt, a0 = a0, P0 = P0,
                                            dt = dt, ct = ct, Zt = Zt, Tt = Tt,
                                            yt = yt, smoothing = TRUE)
}
FKF.SP_1_runtime <- difftime(Sys.time(), FKF.SP_1_start_time, units = "secs")
# FKF.SP - Approach 2:
# Filtering a model using 'verbose = TRUE', and then smoothing using the returned object:
set.seed(1)
FKF.SP_2_start_time <- Sys.time()
for (i in 1:N_iterations) {
  # Filtering:
  FKF.SP_Nile_filtered <- fkf.SP(HHt = HHt, GGt = GGt, a0 = a0, P0 = P0, dt = dt,
                                 ct = ct, Zt = Zt, Tt = Tt, yt = yt, verbose = TRUE)
  # Smoothing:
  FKF.SP_2_Nile_smoothed <- fks.SP(FKF.SP_Nile_filtered)
}
FKF.SP_2_runtime <- difftime(Sys.time(), FKF.SP_2_start_time, units = "secs")
print(c("FKF.SP (Approach 1)" = FKF.SP_1_runtime, "FKF.SP (Approach 2)" = FKF.SP_2_runtime, "FKF"= FKF_runtime))
#> Time differences in secs
#> FKF.SP (Approach 1) FKF.SP (Approach 2) FKF
#> 0.9048891 0.8839259 1.2231350
Utilizing Sequential Processing has decreased processing time when both filtering and smoothing. Under Approach 1, both the Kalman filter and Kalman smoother recursions are run within the same compiled C function call.
Finally, the smoothed values returned by the three approaches are identical:
Smoothed_values <- t(rbind(FKF.SP_1_Nile_filtered_smoothed$ahatt, FKF.SP_2_Nile_smoothed$ahatt, FKF_Nile_smoothed$ahatt))
colnames(Smoothed_values) <- c("FKF.SP (Approach 1)","FKF.SP (Approach 2)", "FKF")
print(head(Smoothed_values))
#> FKF.SP (Approach 1) FKF.SP (Approach 2) FKF
#> [1,] 1119.985 1119.985 1119.985
#> [2,] 1117.839 1117.839 1117.839
#> [3,] 1073.533 1073.533 1073.533
#> [4,] 1139.758 1139.758 1139.758
#> [5,] 1135.743 1135.743 1135.743
#> [6,] 1107.470 1107.470 1107.470