dynamite

Introduction

Panel data is common in various fields such as the social sciences. These data consist of multiple subjects, followed over several time points, often with many observations per individual at each time, for example, the family status and income of each individual at each time point of interest. Such data can be analyzed in various ways, depending on the research questions and the characteristics of the data, such as the number of individuals and time points and the assumed distribution of the response variables. Popular, somewhat overlapping modeling approaches include dynamic panel models, fixed effect models, cross-lagged panel models (CLPM), and CLPM with fixed or random effects (Mulder and Hamaker 2021). Parameter estimation is typically based on ordinary least squares, generalized method of moments, or maximum likelihood using structural equation models.

There are several R (R Core Team 2021) packages on CRAN focusing on panel data analysis. plm (Croissant and Millo 2008) provides various estimation methods and tests for linear panel data models, while fixest (Bergé 2018) supports multiple fixed effects and different distributions of response variables (based on stats::family). panelr (Long 2020) contains tools for panel data manipulation and estimation methods for “within-between” models, which combine fixed effect and random effect models; it uses the lme4 (Bates et al. 2015), geepack (Halekoh, Højsgaard, and Yan 2006), and brms (Bürkner 2018) packages as backends. Finally, the lavaan (Rosseel 2012) package provides methods for general structural equation modeling and can thus be used to estimate various panel data models, such as cross-lagged panel models with fixed or random intercepts.

In traditional panel data models such as those provided by the aforementioned packages, the number of time points considered is often assumed to be relatively small, say less than 10, while the number of individuals can be, for example, hundreds or thousands. Due to the small number of time points, the effects of predictors are typically assumed to be time-invariant (although some extensions to time-varying effects have emerged (Hayakawa and Hou 2019)). On the other hand, when the number of time points is moderate or large, say hundreds or thousands, it can be reasonable to assume that the dynamics of the system change over time. There are various methods to model time-varying effects in (generalized) linear models based on state space models (Harvey and Phillips 1982; Durbin and Koopman 2012), with various R implementations (J. Helske 2022; Knaus et al. 2021; Brodersen et al. 2014). The state space modeling approaches are highly flexible but often computationally demanding due to the assumed latent processes for the regression coefficients and an analytically intractable marginal likelihood, so they are most useful in settings with only a few subjects. Methods based on varying coefficient models (Hastie and Tibshirani 1993; Eubank et al. 2004) can be computationally more convenient, especially when the varying coefficients are based on splines. R implementations of time-varying coefficient models include tvReg (Casas and Fernandez-Casal 2019) and tvem (Dziak et al. 2021).

The dynamite package provides an alternative approach to panel data inference that avoids some of the limitations and drawbacks of the aforementioned methods. First, dynamite supports estimating effects that vary smoothly over time according to splines, with penalization based on random walk priors. This allows modeling, for example, effects of interventions that increase or decrease over time. Second, dynamite supports a wide variety of distributions for the response variables, such as the categorical distribution commonly encountered in life-course research (S. Helske, Helske, and Eerola 2018). Third, estimation is done in a fully Bayesian fashion using Stan (Stan Development Team 2022b, 2022a), which leads to transparent and interpretable quantification of parameter and predictive uncertainty. Finally, with dynamite we can model an arbitrary number of measurements per individual (i.e., multiple channels, following S. Helske and Helske (2019)). Jointly modeling all endogenous variables allows us to consider long-term effects of interventions that take into account the interdependencies of several variables of the model, and allows realistic simulation of complex counterfactual scenarios.

The dynamic multivariate panel model

Consider an individual \(i\) with observations \(y_{i,t} = (y^1_{i,t}, \ldots, y^c_{i,t},\ldots, y^C_{i,t})\), \(t=1,\ldots,T\), \(i = 1,\ldots,N\) i.e., at each time point \(t\) we have \(C\) observations from \(N\) individuals, where \(C\) is the number of channels, i.e., different response variables that have been measured. Assume that each element of \(y_{i,t}\) can depend on the past observations \(y_{i,t-\ell}\), \(\ell=1,\ldots\) (where the set of past values can be different for each channel) and also on additional exogenous covariates \(x_{i,t}\). In addition, \(y^c_{i,t}\) can depend on the elements of current \(y_{i,t}\), assuming that this does not lead to cyclic dependencies; this translates to a condition that the channels can be ordered so that \(y^c_{i,t}\) depends only on \(y^1_{i,t}, \ldots, y^{c-1}_{i,t}\), exogenous covariates, and past observations. For simplicity of the presentation, we now assume that the observations of each channel only depend on the previous time points, i.e., \(\ell = 1\) for all channels. We denote the set of all model parameters by \(\theta\). Now, assuming that the elements of \(y_{i,t}\) are independent given \(y_{i, t-1}\), \(x_{i,t}\), and \(\theta\) we have \[ \begin{aligned} y_{i,t} &\sim p_t(y_{i,t} | y_{i,t-1},x_{i,t},\theta) = \prod_{c=1}^C p^c_t(y^c_{i,t} | y^1_{i,t-1}, \ldots, y^C_{i,t-1}, x_{i,t}, \theta). \end{aligned} \] Importantly, the parameters of the conditional distributions \(p^c\) are time-dependent, enabling us to consider the evolution of the dynamics of our system over time.

Given a suitable link function depending on our distributional assumptions, we define a linear predictor \(\eta^c_{i,t}\) for the conditional distribution \(p^c_t\) of each channel \(c\) with the following general form: \[ \begin{aligned} \eta^c_{i,t} &= \alpha^c_{t} + X^{\beta, c}_{i,t} \beta^c + X^{\delta, c}_{i,t} \delta^c_{t} + X^{\nu, c}_{i,t} \nu_i^c + \lambda^c_i \psi^c_t, \end{aligned} \] where \(\alpha^c_t\) is the (possibly time-varying) common intercept term, \(X^{\beta, c}_{i,t}\) defines the predictors corresponding to the vector of time-invariant coefficients \(\beta^c\), and similarly for the time-varying term \(X^{\delta, c}_{i,t} \delta^c_{t}\). The term \(X^{\nu, c}_{i,t} \nu^c_{i}\) corresponds to individual-level random effects, where \(\nu^1_{i},\ldots, \nu^C_{i}\) are assumed to follow a zero-mean Gaussian distribution, either with a diagonal or a full covariance matrix. Note that the predictors in \(X^{\beta, c}_{i,t}\), \(X^{\delta, c}_{i,t}\), and \(X^{\nu, c}_{i,t}\) may contain exogenous covariates or past observations of the response channels (or transformations of either). The final term \(\lambda^c_i \psi^c_t\) is a product of latent individual loadings \(\lambda^c_i\) and a univariate latent dynamic factor \(\psi^c_t\).

For time-varying coefficients \(\delta\) (and similarly for a time-varying \(\alpha\) and latent factor \(\psi\)), we use Bayesian P-splines (penalized B-splines) (Lang and Brezger 2004), where \[ \delta^c_{k,t} = B_t \omega_k^c, \quad k=1,\ldots,K, \] where \(K\) is the number of covariates, \(B_t\) is a vector of B-spline basis function values at time \(t\), and \(\omega_k^c\) is a vector of the corresponding spline coefficients. In general, the number of B-splines \(D\) used for constructing the splines for the study period \(1,\ldots,T\) can be chosen freely, but the actual value is not too important (as long as \(D \leq T\) and larger than the degree of the spline, e.g., three in the case of cubic splines) (Wood 2020). Therefore, we use the same \(D\) basis functions for all time-varying effects. To mitigate overfitting due to too large a value of \(D\), we define a random walk prior for \(\omega^c_k\) as \[ \omega^c_{k,1} \sim p(\omega^c_{k,1}), \quad \omega^c_{k,d} \sim N(\omega^c_{k,d-1}, (\tau^c_k)^2), \quad d=2, \ldots, D, \] with a user-defined prior \(p(\omega^c_{k,1})\) on the first coefficient, which, due to the structure of \(B_1\), corresponds to the prior on \(\delta^c_{k,1}\). Here, the parameter \(\tau^c_k\) controls the smoothness of the spline curves. While the different time-varying coefficients are modeled as independent a priori, the latent factors \(\psi\) can be modeled as correlated via correlated spline coefficients \(\omega\).

Defining the model

The models in the dynamite package are defined by combining channel-specific formulas defined via the function dynamiteformula, for which a shorthand alias obs is available. The function obs takes two arguments: formula and family, which define how the response variable of the channel depends on the predictor variables (in the standard R formula syntax) and the family of the response variable, respectively. These channel-specific definitions are then combined into a single model with +. For example, the following formula

obs(y ~ lag(x), family = "gaussian") + 
obs(x ~ z, family = "poisson")

defines a model with two channels: first we declare that y is a Gaussian variable depending on the previous value of x (lag(x)), and then we add a second channel declaring x as Poisson distributed depending on some exogenous variable z (for which we do not define any distribution). Note that the model formula can be defined without any reference to external data, just as an R formula can. Currently, the dynamite package supports several distributions for the observations, including the Gaussian, Poisson, Bernoulli, and categorical distributions; see the package documentation for the complete list.

Lagged responses and predictors

The dynamite models do not support contemporaneous dependencies, in order to avoid complex cyclic dependencies which would make handling missing data, subsequent predictions, and causal inference challenging or impossible. In other words, response variables can only depend on the previous values of other response variables. This is why we used lag(x) in the previous example; it is a shorthand for lag(x, k = 1), which defines a one-step lag of the variable x. Longer lags can also be defined by adjusting the argument k, as in the sketch below.
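
For example, a minimal sketch of a channel depending on the value of x from two time points earlier:

obs(y ~ lag(x, k = 2), family = "gaussian")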

A special model component lags can also be used to quickly add lagged responses as predictors. This component adds a lagged value of each response in the model as a predictor to every channel. For example, calling

obs(y ~ z, family = "gaussian") + 
obs(x ~ z, family = "poisson") +
lags(k = 1)

would add lag(y, k = 1) as a predictor of x and conversely, lag(x, k = 1) as a predictor of y. Therefore, the previous code would produce the exact same model as writing

obs(y ~ z + lag(x, k = 1), family = "gaussian") + 
obs(x ~ z + lag(y, k = 1), family = "poisson")

Just as with the function lag(), the argument k in lags can be adjusted to add longer lags of each response to each channel, but for lags it can also be a vector, as in the example below. The inclusion of lagged response variables in the model implies that some time points have to be considered fixed in the estimation. The number of fixed time points in the model is equal to the largest lag of any observed response variable in the model (defined either via lag or the global lags).
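
For example, the following sketch adds both the first and second lags of each response as predictors to both channels, which also means that the first two time points are treated as fixed:

obs(y ~ z, family = "gaussian") + 
  obs(x ~ z, family = "poisson") +
  lags(k = 1:2)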

Time-varying and time-invariant predictors

The formula within obs can also contain an additional special function varying, which defines the time-varying part of the model equation. For example, obs(x ~ z + varying(~ -1 + w), family = "poisson") defines a model equation with a constant intercept, a time-invariant effect of z, and a time-varying effect of w. We remove the duplicate intercept with -1 within varying in order to avoid identifiability issues in the model estimation (we could instead define a time-varying intercept, in which case we would write obs(x ~ -1 + z + varying(~ w), family = "poisson")). The part of the formula not wrapped in varying is assumed to correspond to the fixed part of the model, so obs(x ~ z + varying(~ -1 + w), family = "poisson") is identical to obs(x ~ -1 + fixed(~ z) + varying(~ -1 + w), family = "poisson") and obs(x ~ fixed(~ z) + varying(~ -1 + w), family = "poisson"). The use of fixed is therefore optional; the equivalent declarations are shown as code below.
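
Written out, these three equivalent declarations are:

obs(x ~ z + varying(~ -1 + w), family = "poisson")
obs(x ~ -1 + fixed(~ z) + varying(~ -1 + w), family = "poisson")
obs(x ~ fixed(~ z) + varying(~ -1 + w), family = "poisson")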

When defining varying effects, we also need to define how these time-varying regression coefficients behave. For this, a splines component should be added to the model, e.g., obs(x ~ varying(~ -1 + w), family = "poisson") + splines(df = 10) defines a cubic B-spline with 10 degrees of freedom for the time-varying coefficient corresponding to w. If the model contains multiple time-varying coefficients, the same spline basis is used for all coefficients, with unique spline coefficients and their standard deviations, as defined in the previous section.
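
As a complete model declaration, this example reads:

obs(x ~ varying(~ -1 + w), family = "poisson") +
  splines(df = 10)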

Group-level random effects

Random effect terms for each group can be defined using the special term random() within the formula argument of obs(), analogously to varying(). By default, all random effects within a group and across all channels are modeled as zero-mean multivariate normal. The optional model component random_spec() can be used to define uncorrelated random effects via random_spec(correlated = FALSE). In addition, as with the spline coefficients, it is possible to switch between the centered and non-centered (the default) parameterizations of the random effects using the noncentered argument of random_spec(). Random effects are not yet supported for the categorical distribution due to technical reasons.

For example, the following code defines a Gaussian response variable x with a time-invariant common effect of z as well as a group-specific intercept and group-specific effect of z.

obs(x ~ z + random(~1 + z), family = "gaussian")

The variable that defines the groups in the data is provided in the call to the model fitting function dynamite().
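
For instance, a sketch combining the group-specific terms above with uncorrelated random effects and the centered parameterization, using the random_spec() arguments described above:

obs(x ~ z + random(~1 + z), family = "gaussian") +
  random_spec(correlated = FALSE, noncentered = FALSE)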

Latent factors

Instead of a common time-varying intercept term, it is possible to define channel-specific univariate latent factors using the lfactor() model component. Each latent factor is modeled as a spline, with the degrees of freedom and spline degree defined via the splines() component (if the model also contains time-varying effects, the same spline basis definition is currently used for both the latent factors and the time-varying effects). The argument responses of lfactor defines which channels should contain a latent factor, while the argument correlated determines whether the latent factors should be modeled as correlated. Again, users can switch between centered and non-centered parameterizations using the arguments noncentered_lambda and noncentered_psi.

In general, dynamic latent factors are not identifiable without imposing some constraints on the factor loadings \(\lambda\) or the latent factor \(\psi\) (Bai and Wang 2015), especially in the context of DMPMs and dynamite. In the package, these identifiability problems are addressed via internal reparameterization and an additional argument nonzero_lambda, which determines whether we assume that the expected value of the factor loadings is zero or not. The theory and thorough experiments regarding the robustness of these identifiability constraints are a work in progress, so some caution is warranted when using the lfactor() component.
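
With this caveat in mind, a minimal sketch of a Gaussian channel with a latent factor could look as follows (assuming responses accepts the channel name as a string, and recalling that the factor spline is defined via splines()):

obs(y ~ lag(y), family = "gaussian") +
  lfactor(responses = "y") +
  splines(df = 10)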

Auxiliary channels

In addition to declaring response variables via obs, we can also use the function aux to define auxiliary channels, which are deterministic functions of other variables. Defining these auxiliary variables explicitly, instead of defining them implicitly on the right-hand side of the formulas (i.e., by using the “as is” function I()), makes the subsequent prediction steps clearer and allows easier checks of model validity. In fact, we do not allow the use of I() within dynamiteformula. The values of auxiliary variables are computed dynamically during prediction, making the use of lagged values and other transformations possible. An example of a formula using an auxiliary channel could be

obs(y ~ lag(log1x), family = "gaussian") + 
obs(x ~ z, family = "poisson") +
aux(numeric(log1x) ~ log(1 + x) | init(0))

For auxiliary channels, the formula declaration ~ should be understood as a mathematical equality, where the right-hand side provides the defining expression. Thus, the example above defines a new auxiliary channel whose response is log1x, defined as the logarithm of 1 + x, and assigns it to be of type numeric. The type declaration is required, because it might not always be possible to unambiguously determine the type of the response variable from its expression and the data alone, especially for factor responses.

Auxiliary variables can be used directly in the formulas of other channels, just like any other variable. Note that an auxiliary channel can also depend on other response variables without lags, unlike observed channels. The function aux does not take a family argument; the family is automatically set to deterministic, which is a special channel type of obs. Note that lagged values of deterministic aux channels do not imply fixed time points. Instead, they must be given starting values using one of two special functions, init or past, placed after the main formula and separated by the | symbol.

In the example above, because the channel for y contains a lagged value of log1x as a predictor, we also need to supply log1x with a single initial value that determines the value of the lag at the first time point. Here, init(0) defines the initial value of lag(log1x) to be zero. In general, if the model contains higher-order lags of an auxiliary channel, then init can be supplied with a vector initializing each lag, as sketched below.
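
For example, if the other channels also used lag(log1x, k = 2), a sketch initializing both lags with zeros (assuming the vector form init(c(0, 0))) would be:

aux(numeric(log1x) ~ log(1 + x) | init(c(0, 0)))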

Note that init defines the same starting value to be used for all individuals. As an alternative, the function past can be used, which takes an R expression as its argument, and computes the starting value for each individual based on that expression. The expression is evaluated in the context of the data supplied to dynamite. For example, instead of init(0) in the example above, we could write:

obs(y ~ lag(log1x), family = "gaussian") + 
obs(x ~ z, family = "poisson") +
aux(numeric(log1x) ~ log(1 + x) | past(log(z)))

which defines that the value of lag(log1x) at the first time point is log(z) for each individual, using the value of z in the supplied data to compute the actual value of the expression. The function past can also be used if the model contains higher-order lags of auxiliary responses; in that case, more observations of the variables bound by the given expression are simply used.

Fitting the model

The declared model formula is supplied to dynamite, which has the following arguments and default values:

dynamite(dformula, data, time, group = NULL, 
  priors = NULL, backend = "rstan",
  verbose = TRUE, verbose_stan = FALSE, debug = NULL, ...)

The first argument dformula is the dynamiteformula object that defines the model, and the second argument data is a data.frame that contains the variables used in the model formula. The argument group is a column name of data that specifies the unique groups (individuals), and similarly, time is a column name of data that specifies the unique time points. Note that group is optional: when group = NULL, we assume that there is only a single group (or individual). The argument priors supplies optional user-defined priors for the model parameters. The backend argument accepts either "rstan" or "cmdstanr" and selects whether the rstan or cmdstanr package is used for the estimation, respectively. The arguments verbose and verbose_stan control the verbosity of the output from dynamite and Stan, respectively. The debug argument can be used for various debugging options (see ?dynamite for further information on these parameters). Finally, ... supplies additional arguments to the backend sampling function (either rstan::sampling() or the sample() method of the CmdStanModel object).

For example,

dynamite(
  dformula = obs(x ~ varying(~ -1 + w), family = "poisson") + splines(df = 10), 
  data = d, 
  time = "year", 
  group = "id"
  chains = 2, 
  cores = 2
)

estimates the model using the data in the data frame d, which contains the variables year and id (defining the time-index and group-index variables of the data). The arguments chains and cores are passed to rstan::sampling, which then uses two parallel Markov chains in the Markov chain Monte Carlo (MCMC) sampling of the model parameters.

Example of a multichannel model

As an example, consider the following simulated multichannel data:

library(dynamite)
head(multichannel_example)
#>   id time          g  p b
#> 1  1    1 -0.6264538  5 1
#> 2  1    2 -0.2660091 12 0
#> 3  1    3  0.4634939  9 1
#> 4  1    4  1.0451444 15 1
#> 5  1    5  1.7131026 10 1
#> 6  1    6  2.1382398  8 1

The data contains 50 unique groups (variable id) over 20 time points (time), a continuous variable g, a variable with non-negative integer values p, and a binary variable b. We define the following model (which actually matches the data-generating process used in simulating the data):

set.seed(1)
f <- obs(g ~ lag(g) + lag(logp), family = "gaussian") +
  obs(p ~ lag(g) + lag(logp) + lag(b), family = "poisson") +
  obs(b ~ lag(b) * lag(logp) + lag(b) * lag(g), family = "bernoulli") +
  aux(numeric(logp) ~ log(p + 1))

As a directed acyclic graph (DAG), the model for the first three time points is shown in the figure below (the deterministic channel logp is omitted for clarity).

[Figure: A DAG of the multichannel model.]

We fit the model using the dynamite function. In order to satisfy the package size requirements of CRAN, we have to use a small number of iterations and additional thinning:

multichannel_example_fit <- dynamite(
  f, data = multichannel_example, 
  time = "time", group = "id"
  chains = 1, cores = 1, iter = 2000, warmup = 1000, init = 0, refresh = 0,
  thin = 5, save_warmup = FALSE)

You can update the estimated model, which is included in the package, as follows:

multichannel_example_fit <- update(multichannel_example_fit,
  iter = 2000,
  warmup = 1000,
  chains = 4,
  cores = 4,
  refresh = 0,
  save_warmup = FALSE
)

Note that the dynamite call above produces a warning message related to the deterministic channel logp:

#> Warning: Deterministic channel `logp` has a maximum lag of 1 but you've supplied no
#> initial values:
#> ℹ This may result in NA values for `logp`.

This happens because the formulas of the other channels have lag(logp) as a predictor, whose value is undefined at the first time point without an initial value. However, in this case we can safely ignore the warning, because the model also contains lags of b and g, meaning that the first time point in the model is fixed and does not enter the model fitting process. Alternatively, the warning could be avoided by supplying the initial value explicitly, as in the sketch below.
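
A sketch of the same auxiliary channel with an explicit initial value, using the init syntax shown earlier:

aux(numeric(logp) ~ log(p + 1) | init(0))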

We can obtain posterior samples or summary statistics of the model using the as.data.frame, coef, and summary functions, but here we opt for visualizing the results using the plot_betas function:

plot_betas(multichannel_example_fit)

Note the naming convention of the model parameters; for example, beta_b_g_lag1 corresponds to the time-invariant coefficient beta of the lagged predictor g in the channel for b.

Assume now that we are interested in studying the causal effect of variable \(b_5\) on variable \(g_t\) at \(t = 6, \ldots, 20\). There is no direct effect from \(b_5\) to \(g_6\), but since \(b_5\) affects \(b_6\) and \(p_6\), which in turn affect all the variables at \(t = 7\), we should see an indirect effect of \(b_5\) on \(g_t\) from time \(t = 7\) onward.

For this task, we first create a new dataset where the values of our response variables after time 5 are set to NA:

d <- multichannel_example
d <- d |> 
  dplyr::mutate(
    g = ifelse(time > 5, NA, g),
    p = ifelse(time > 5, NA, p),
    b = ifelse(time > 5, NA, b))

We then predict time points 6 to 20 when \(b\) at time 5 is 0 or 1 for everyone:

newdata0 <- d |> 
  dplyr::mutate(
    b = ifelse(time == 5, 0, b))
pred0 <- predict(multichannel_example_fit, newdata = newdata0, type = "mean")
newdata1 <- d |> 
  dplyr::mutate(
    b = ifelse(time == 5, 1, b))
pred1 <- predict(multichannel_example_fit, newdata = newdata1, type = "mean")

By default, the output from predict is a single data frame containing the original new data and the samples from the posterior predictive distribution of the new observations. By defining type = "mean", we specify that we are interested in the posterior distribution of the expected values instead. In this case, the predicted values are in the columns g_mean, p_mean, and b_mean, where the NA values of newdata are replaced with the posterior samples from the model (the output also contains additional columns corresponding to the auxiliary channel logp and the index variable .draw). For a more memory-efficient format, the argument expand = FALSE can be used, in which case the output is returned as a list with two components, simulated and observed, containing the new samples and the original newdata, respectively; a sketch follows below.
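
A minimal sketch of this memory-efficient format, using the same fitted model and new data as above:

pred0_list <- predict(multichannel_example_fit, newdata = newdata0, 
  type = "mean", expand = FALSE)
names(pred0_list) # a list with components "simulated" and "observed"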

head(pred0, n = 10) # first rows are NA as those were treated as fixed
#>      g_mean   p_mean    b_mean      logp id time .draw          g  p  b
#> 1        NA       NA        NA 1.7917595  1    1     1 -0.6264538  5  1
#> 2        NA       NA        NA 2.5649494  1    2     1 -0.2660091 12  0
#> 3        NA       NA        NA 2.3025851  1    3     1  0.4634939  9  1
#> 4        NA       NA        NA 2.7725887  1    4     1  1.0451444 15  1
#> 5        NA       NA        NA 2.3978953  1    5     1  1.7131026 10  0
#> 6  1.836878 3.658028 0.7359881 1.0986123  1    6     1         NA NA NA
#> 7  1.583641 4.057826 0.7063930 0.6931472  1    7     1         NA NA NA
#> 8  1.151348 3.476842 0.6524427 1.6094379  1    8     1         NA NA NA
#> 9  1.247886 2.469857 0.6889052 1.3862944  1    9     1         NA NA NA
#> 10 1.612244 5.014211 0.6963598 1.9459101  1   10     1         NA NA NA

We can now compute summary statistics over the individuals and then over the posterior samples to obtain the posterior distribution of the causal effects as follows:

sumr <- dplyr::bind_rows(
  list(b0 = pred0, b1 = pred1), .id = "case") |> 
  dplyr::group_by(case, .draw, time) |> 
  dplyr::summarise(mean_t = mean(g_mean)) |> 
  dplyr::group_by(case, time) |> 
  dplyr::summarise(
    mean = mean(mean_t),
    q5 = quantile(mean_t, 0.05, na.rm = TRUE), 
    q95 = quantile(mean_t, 0.95, na.rm = TRUE))
#> `summarise()` has grouped output by 'case', '.draw'. You can override using the
#> `.groups` argument.
#> `summarise()` has grouped output by 'case'. You can override using the
#> `.groups` argument.

It is also possible to perform the marginalization over groups inside predict by using the argument funs, which takes a named list of lists containing functions to be applied to the corresponding channel. This approach can save a considerable amount of memory when the number of groups is large. The names of the outermost list should be channel names. In our case, we can write

pred0b <- predict(multichannel_example_fit, newdata = newdata0, type = "mean", 
  funs = list(g = list(mean_t = mean)))$simulated
pred1b <- predict(multichannel_example_fit, newdata = newdata1, type = "mean", 
  funs = list(g = list(mean_t = mean)))$simulated
sumrb <- dplyr::bind_rows(
  list(b0 = pred0b, b1 = pred1b), .id = "case") |> 
  dplyr::group_by(case, time) |> 
  dplyr::summarise(
    mean = mean(mean_t_g),
    q5 = quantile(mean_t_g, 0.05, na.rm = TRUE), 
    q95 = quantile(mean_t_g, 0.95, na.rm = TRUE))

The resulting tibble sumrb is equal to the previous sumr (apart from randomness due to simulation of new trajectories).

We can then visualize our predictions with ggplot2 (Wickham 2016) as follows:

library(ggplot2)
ggplot(sumr, aes(time, mean)) + 
  geom_ribbon(aes(ymin = q5, ymax = q95), alpha = 0.5) +
  geom_line() + 
  scale_x_continuous(n.breaks = 10) + 
  facet_wrap(~ case) +
  theme_bw()
#> Warning: Removed 5 rows containing missing values (`geom_line()`).

Note that these estimates do indeed coincide with the causal effects (assuming, of course, that our model is correct), as we can find the backdoor adjustment formula (Pearl 1995) \[ E(g_t | do(b_5 = x)) = \sum_{g_5, p_5} E(g_t | b_5 = x, g_5, p_5) P(g_5, p_5), \] which is equal to mean_t in the code above.

We can also compute the difference \(P(g_t | do(b_5 = 1)) - P(g_t | do(b_5 = 0))\) as

sumr_diff <- dplyr::bind_rows(
  list(b0 = pred0, b1 = pred1), .id = "case") |> 
  dplyr::group_by(.draw, time) |>
  dplyr::summarise(
    mean_t = mean(g_mean[case == "b1"] - g_mean[case == "b0"])) |>
  dplyr::group_by(time) |>
  dplyr::summarise(
    mean = mean(mean_t),
    q5 = quantile(mean_t, 0.05, na.rm = TRUE),
    q95 = quantile(mean_t, 0.95, na.rm = TRUE))
#> `summarise()` has grouped output by '.draw'. You can override using the
#> `.groups` argument.

ggplot(sumr_diff, aes(time, mean)) +
  geom_ribbon(aes(ymin = q5, ymax = q95), alpha = 0.5) +
  geom_line() +
  scale_x_continuous(n.breaks = 10) +
  theme_bw()
#> Warning: Removed 5 rows containing missing values (`geom_line()`).

This shows that there is a short-term effect of \(b\) on \(g\), although the posterior uncertainty is quite large. Note that for multi-step causal effects, these estimates also contain Monte Carlo variation due to the simulation of the trajectories (e.g., there is uncertainty in mean_t even for fixed parameters, given the finite number of individuals we are marginalizing over).

References

Allison, Paul D., Richard Williams, and Enrique Moral-Benito. 2017. “Maximum Likelihood for Cross-Lagged Panel Models with Fixed Effects.” Socius 3: 2378023117710578. https://doi.org/10.1177/2378023117710578.
Arellano, Manuel, and Stephen Bond. 1991. “Some Tests of Specification for Panel Data: Monte Carlo Evidence and an Application to Employment Equations.” The Review of Economic Studies 58 (2): 277–97. https://doi.org/10.2307/2297968.
Bai, Jushan, and Peng Wang. 2015. “Identification and Bayesian Estimation of Dynamic Factor Models.” Journal of Business & Economic Statistics 33 (2): 221–40.
Bates, Douglas, Martin Mächler, Ben Bolker, and Steve Walker. 2015. “Fitting Linear Mixed-Effects Models Using lme4.” Journal of Statistical Software 67 (1): 1–48. https://doi.org/10.18637/jss.v067.i01.
Bergé, Laurent. 2018. “Efficient Estimation of Maximum Likelihood Models with Multiple Fixed-Effects: The R Package FENmlm.” CREA Discussion Papers, no. 13.
Bollen, Kenneth A., and Jennie E. Brand. 2010. “A General Panel Model with Random and Fixed Effects: A Structural Equations Approach.” Social Forces 89 (1): 1–34. https://www.jstor.org/stable/40927552.
Brodersen, Kay H., Fabian Gallusser, Jim Koehler, Nicolas Remy, and Steven L. Scott. 2014. “Inferring Causal Impact Using Bayesian Structural Time-Series Models.” Annals of Applied Statistics 9: 247–74. https://research.google/pubs/pub41854/.
Bürkner, Paul-Christian. 2018. “Advanced Bayesian Multilevel Modeling with the R Package brms.” The R Journal 10 (1): 395–411. https://doi.org/10.32614/RJ-2018-017.
Casas, Isabel, and Ruben Fernandez-Casal. 2019. “tvReg: Time-Varying Coefficient Linear Regression for Single and Multi-Equations in R.” SSRN Scholarly Paper ID 3363526. Rochester, NY: Social Science Research Network. https://doi.org/10.2139/ssrn.3363526.
Croissant, Yves, and Giovanni Millo. 2008. “Panel Data Econometrics in R: The plm Package.” Journal of Statistical Software 27 (2): 1–43. https://doi.org/10.18637/jss.v027.i02.
Durbin, James, and Siem Jan Koopman. 2012. Time Series Analysis by State Space Methods. 2nd ed. New York: Oxford University Press.
Dziak, John J., Donna L. Coffman, Runze Li, Kaylee Litson, and Yajnaseni Chakraborti. 2021. tvem: Time-Varying Effect Models. https://CRAN.R-project.org/package=tvem.
Eubank, R. L., Chunfeng Huang, Y. Muñoz Maldonado, Naisyin Wang, Suojin Wang, and R. J. Buchanan. 2004. “Smoothing Spline Estimation in Varying-Coefficient Models.” Journal of the Royal Statistical Society. Series B (Statistical Methodology) 66 (3): 653–67. https://www.jstor.org/stable/3647499.
Allison, Paul D. 2009. Fixed Effects Regression Models. Thousand Oaks: SAGE Publications, Inc. https://doi.org/10.4135/9781412993869.
Halekoh, Ulrich, Søren Højsgaard, and Jun Yan. 2006. “The R Package geepack for Generalized Estimating Equations.” Journal of Statistical Software 15 (2): 1–11.
Hamaker, Ellen L., Rebecca M. Kuiper, and Raoul P. P. P. Grasman. 2015. “A Critique of the Cross-Lagged Panel Model.” Psychological Methods 20 (1): 102.
Harvey, A. C., and G. D. A. Phillips. 1982. “The Estimation of Regression Models with Time-Varying Parameters.” In Games, Economic Dynamics, and Time Series Analysis, edited by M. Deistler, E. Fürst, and G. Schwödiauer, 306–21. Heidelberg: Physica-Verlag HD.
Hastie, Trevor, and Robert Tibshirani. 1993. “Varying-Coefficient Models.” Journal of the Royal Statistical Society. Series B (Methodological) 55 (4): 757–96. https://www.jstor.org/stable/2345993.
Hayakawa, Kazuhiko, and Jie Hou. 2019. “Estimation of Time-Varying Coefficient Dynamic Panel Data Models.” Communications in Statistics - Theory and Methods 48 (13): 3311–24. https://doi.org/10.1080/03610926.2018.1476704.
Helske, Jouni. 2022. “Efficient Bayesian Generalized Linear Models with Time-Varying Coefficients: The walker Package in R.” SoftwareX 18: 101016. https://doi.org/10.1016/j.softx.2022.101016.
Helske, Satu, and Jouni Helske. 2019. “Mixture Hidden Markov Models for Sequence Data: The seqHMM Package in R.” Journal of Statistical Software 88 (3): 1–32. https://doi.org/10.18637/jss.v088.i03.
Helske, Satu, Jouni Helske, and Mervi Eerola. 2018. “Combining Sequence Analysis and Hidden Markov Models in the Analysis of Complex Life Sequence Data.” In Sequence Analysis and Related Approaches: Innovative Methods and Applications, edited by Gilbert Ritschard and Matthias Studer, 185–200. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-95420-2_11.
Knaus, Peter, Angela Bitto-Nemling, Annalisa Cadonna, and Sylvia Frühwirth-Schnatter. 2021. “Shrinkage in the Time-Varying Parameter Model Framework Using the R Package shrinkTVP.” Journal of Statistical Software 100 (November): 1–32. https://doi.org/10.18637/jss.v100.i13.
Lang, Stefan, and Andreas Brezger. 2004. “Bayesian P-Splines.” Journal of Computational and Graphical Statistics 13 (1): 183–212. https://doi.org/10.1198/1061860043010.
Long, Jacob A. 2020. panelr: Regression Models and Utilities for Repeated Measures and Panel Data. https://cran.r-project.org/package=panelr.
Mulder, Jeroen D., and Ellen L. Hamaker. 2021. “Three Extensions of the Random Intercept Cross-Lagged Panel Model.” Structural Equation Modeling: A Multidisciplinary Journal 28 (4): 638–48. https://doi.org/10.1080/10705511.2020.1784738.
Pearl, Judea. 1995. “Causal Diagrams for Empirical Research.” Biometrika 82 (4): 669–88. https://doi.org/10.1093/biomet/82.4.669.
R Core Team. 2021. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. https://www.R-project.org/.
Rosseel, Yves. 2012. “lavaan: An R Package for Structural Equation Modeling.” Journal of Statistical Software 48 (2): 1–36. https://doi.org/10.18637/jss.v048.i02.
Stan Development Team. 2022a. “RStan: The R Interface to Stan.” https://mc-stan.org/.
———. 2022b. The Stan C++ Library. https://mc-stan.org/.
Wickham, Hadley. 2016. Ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York. https://ggplot2.tidyverse.org.
Wood, Simon N. 2020. “Inference and Computation with Generalized Additive Models and Their Extensions.” TEST 29 (2): 307–39. https://doi.org/10.1007/s11749-020-00711-5.