This vignette is intended to show the wide variety of lme4::lmer models that can be handled by {equatiomatic}. The output uses the notation from Gelman and Hill. If you notice any errors in the notation, please file an issue. Similarly, if you try to use {equatiomatic} with an lmer model and end up with an error (either in the notation or the code), I would really appreciate it if you could file an issue with a reproducible example.
This vignette displays many of the models that are covered by the package tests, and it illustrates many of the features of {equatiomatic} along the way.
A basic two-level unconditional model:
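The equation below corresponds to a fit of roughly this form, assuming the hsb data used throughout this vignette (the object name um_hsb and the library calls are illustrative, not the vignette's exact chunk):

library(lme4)
library(equatiomatic)

# unconditional model: a random intercept for each school, no predictors (sketch)
um_hsb <- lmer(math ~ 1 + (1 | sch.id), data = hsb)
extract_eq(um_hsb)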
\[ \begin{aligned} \operatorname{math}_{i} &\sim N \left(\alpha_{j[i]}, \sigma^2 \right) \\ \alpha_{j} &\sim N \left(\mu_{\alpha_{j}}, \sigma^2_{\alpha_{j}} \right) \text{, for sch.id j = 1,} \dots \text{,J} \end{aligned} \]
And a model with multiple levels:
um_long3 <- lmer(score ~ 1 + (1 | sid) + (1 | school) + (1 | district),
  data = sim_longitudinal
)
#> boundary (singular) fit: see help('isSingular')
extract_eq(um_long3)
\[ \begin{aligned} \operatorname{score}_{i} &\sim N \left(\alpha_{j[i],k[i],l[i]}, \sigma^2 \right) \\ \alpha_{j} &\sim N \left(\mu_{\alpha_{j}}, \sigma^2_{\alpha_{j}} \right) \text{, for sid j = 1,} \dots \text{,J} \\ \alpha_{k} &\sim N \left(\mu_{\alpha_{k}}, \sigma^2_{\alpha_{k}} \right) \text{, for school k = 1,} \dots \text{,K} \\ \alpha_{l} &\sim N \left(\mu_{\alpha_{l}}, \sigma^2_{\alpha_{l}} \right) \text{, for district l = 1,} \dots \text{,L} \end{aligned} \]
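Level 1 predictors can be added just as with lm. The next equation corresponds to a model of roughly this form (a sketch reconstructed from the equation; the object name hsb1 is assumed and is reused in the display examples that follow):

# random intercept for schools plus three student-level predictors (sketch)
hsb1 <- lmer(math ~ female + ses + minority + (1 | sch.id), data = hsb)
extract_eq(hsb1)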
\[ \begin{aligned} \operatorname{math}_{i} &\sim N \left(\mu, \sigma^2 \right) \\ \mu &=\alpha_{j[i]} + \beta_{1}(\operatorname{female}) + \beta_{2}(\operatorname{ses}) + \beta_{3}(\operatorname{minority}) \\ \alpha_{j} &\sim N \left(\mu_{\alpha_{j}}, \sigma^2_{\alpha_{j}} \right) \text{, for sch.id j = 1,} \dots \text{,J} \end{aligned} \]
Note that in the above, the mean structure at level 1 is broken out onto a separate line. You can override this with the mean_separate argument.
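For example, a call along these lines (assuming the hsb1 sketch above) moves the mean structure back inside the normal density:

# mean_separate = FALSE keeps the level 1 mean inside N(., sigma^2)
extract_eq(hsb1, mean_separate = FALSE)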
\[ \begin{aligned} \operatorname{math}_{i} &\sim N \left(\alpha_{j[i]} + \beta_{1}(\operatorname{female}) + \beta_{2}(\operatorname{ses}) + \beta_{3}(\operatorname{minority}), \sigma^2 \right) \\ \alpha_{j} &\sim N \left(\mu_{\alpha_{j}}, \sigma^2_{\alpha_{j}} \right) \text{, for sch.id j = 1,} \dots \text{,J} \end{aligned} \]
Similarly, just like with standard lm models, you can specify wrapping and how many terms per line:
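The equation below was produced with a call roughly like this (again assuming the hsb1 sketch; wrap and terms_per_line are the relevant extract_eq arguments):

# wrap the fixed-effects line at two terms per line
extract_eq(hsb1, wrap = TRUE, terms_per_line = 2)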
\[ \begin{aligned} \operatorname{math}_{i} &\sim N \left(\mu, \sigma^2 \right) \\ \mu &=\alpha_{j[i]} + \beta_{1}(\operatorname{female})\ + \\ &\quad \beta_{2}(\operatorname{ses}) + \beta_{3}(\operatorname{minority}) \\ \alpha_{j} &\sim N \left(\mu_{\alpha_{j}}, \sigma^2_{\alpha_{j}} \right) \text{, for sch.id j = 1,} \dots \text{,J} \end{aligned} \]
And one more example with multiple levels:
lev1_long <- lmer(score ~ wave + (1 | sid) + (1 | school) + (1 | district),
  data = sim_longitudinal
)
#> boundary (singular) fit: see help('isSingular')
extract_eq(lev1_long)
\[ \begin{aligned} \operatorname{score}_{i} &\sim N \left(\alpha_{j[i],k[i],l[i]} + \beta_{1}(\operatorname{wave}), \sigma^2 \right) \\ \alpha_{j} &\sim N \left(\mu_{\alpha_{j}}, \sigma^2_{\alpha_{j}} \right) \text{, for sid j = 1,} \dots \text{,J} \\ \alpha_{k} &\sim N \left(\mu_{\alpha_{k}}, \sigma^2_{\alpha_{k}} \right) \text{, for school k = 1,} \dots \text{,K} \\ \alpha_{l} &\sim N \left(\mu_{\alpha_{l}}, \sigma^2_{\alpha_{l}} \right) \text{, for district l = 1,} \dots \text{,L} \end{aligned} \]
This should generally work regardless of the complexity.
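For example, the next equation lets minority vary randomly across schools. A sketch of the corresponding fit (the object name hsb2 is assumed):

# random intercept plus a random slope for minority, varying by school (sketch)
hsb2 <- lmer(math ~ female + ses + minority + (minority | sch.id), data = hsb)
extract_eq(hsb2)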
\[ \begin{aligned} \operatorname{math}_{i} &\sim N \left(\mu, \sigma^2 \right) \\ \mu &=\alpha_{j[i]} + \beta_{1}(\operatorname{female}) + \beta_{2}(\operatorname{ses}) + \beta_{3j[i]}(\operatorname{minority}) \\ \left( \begin{array}{c} \begin{aligned} &\alpha_{j} \\ &\beta_{3j} \end{aligned} \end{array} \right) &\sim N \left( \left( \begin{array}{c} \begin{aligned} &\mu_{\alpha_{j}} \\ &\mu_{\beta_{3j}} \end{aligned} \end{array} \right) , \left( \begin{array}{cc} \sigma^2_{\alpha_{j}} & \rho_{\alpha_{j}\beta_{3j}} \\ \rho_{\beta_{3j}\alpha_{j}} & \sigma^2_{\beta_{3j}} \end{array} \right) \right) \text{, for sch.id j = 1,} \dots \text{,J} \end{aligned} \]
Notice that it correctly parses which variable is randomly varying here. We can also make it more complicated.
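For example, letting female and ses vary by school instead (a sketch reconstructed from the equation that follows):

# two random slopes, correlated with the random intercept (sketch)
hsb_ranslopes <- lmer(
  math ~ female + ses + minority + (female + ses | sch.id),
  data = hsb
)
extract_eq(hsb_ranslopes)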
\[ \begin{aligned} \operatorname{math}_{i} &\sim N \left(\mu, \sigma^2 \right) \\ \mu &=\alpha_{j[i]} + \beta_{1j[i]}(\operatorname{female}) + \beta_{2j[i]}(\operatorname{ses}) + \beta_{3}(\operatorname{minority}) \\ \left( \begin{array}{c} \begin{aligned} &\alpha_{j} \\ &\beta_{1j} \\ &\beta_{2j} \end{aligned} \end{array} \right) &\sim N \left( \left( \begin{array}{c} \begin{aligned} &\mu_{\alpha_{j}} \\ &\mu_{\beta_{1j}} \\ &\mu_{\beta_{2j}} \end{aligned} \end{array} \right) , \left( \begin{array}{ccc} \sigma^2_{\alpha_{j}} & \rho_{\alpha_{j}\beta_{1j}} & \rho_{\alpha_{j}\beta_{2j}} \\ \rho_{\beta_{1j}\alpha_{j}} & \sigma^2_{\beta_{1j}} & \rho_{\beta_{1j}\beta_{2j}} \\ \rho_{\beta_{2j}\alpha_{j}} & \rho_{\beta_{2j}\beta_{1j}} & \sigma^2_{\beta_{2j}} \end{array} \right) \right) \text{, for sch.id j = 1,} \dots \text{,J} \end{aligned} \]
And even really complicated models. Note that the model below produces a singular fit message.
hsb3 <- lmer(
  math ~ female * ses + minority +
    (ses * female + minority | sch.id),
  data = hsb
)
#> boundary (singular) fit: see help('isSingular')
extract_eq(hsb3)
\[ \begin{aligned} \operatorname{math}_{i} &\sim N \left(\mu, \sigma^2 \right) \\ \mu &=\alpha_{j[i]} + \beta_{1j[i]}(\operatorname{female}) + \beta_{2j[i]}(\operatorname{ses}) + \beta_{3j[i]}(\operatorname{minority}) + \beta_{4j[i]}(\operatorname{female} \times \operatorname{ses}) \\ \left( \begin{array}{c} \begin{aligned} &\alpha_{j} \\ &\beta_{1j} \\ &\beta_{2j} \\ &\beta_{3j} \\ &\beta_{4j} \end{aligned} \end{array} \right) &\sim N \left( \left( \begin{array}{c} \begin{aligned} &\mu_{\alpha_{j}} \\ &\mu_{\beta_{1j}} \\ &\mu_{\beta_{2j}} \\ &\mu_{\beta_{3j}} \\ &\mu_{\beta_{4j}} \end{aligned} \end{array} \right) , \left( \begin{array}{ccccc} \sigma^2_{\alpha_{j}} & \rho_{\alpha_{j}\beta_{1j}} & \rho_{\alpha_{j}\beta_{2j}} & \rho_{\alpha_{j}\beta_{3j}} & \rho_{\alpha_{j}\beta_{4j}} \\ \rho_{\beta_{1j}\alpha_{j}} & \sigma^2_{\beta_{1j}} & \rho_{\beta_{1j}\beta_{2j}} & \rho_{\beta_{1j}\beta_{3j}} & \rho_{\beta_{1j}\beta_{4j}} \\ \rho_{\beta_{2j}\alpha_{j}} & \rho_{\beta_{2j}\beta_{1j}} & \sigma^2_{\beta_{2j}} & \rho_{\beta_{2j}\beta_{3j}} & \rho_{\beta_{2j}\beta_{4j}} \\ \rho_{\beta_{3j}\alpha_{j}} & \rho_{\beta_{3j}\beta_{1j}} & \rho_{\beta_{3j}\beta_{2j}} & \sigma^2_{\beta_{3j}} & \rho_{\beta_{3j}\beta_{4j}} \\ \rho_{\beta_{4j}\alpha_{j}} & \rho_{\beta_{4j}\beta_{1j}} & \rho_{\beta_{4j}\beta_{2j}} & \rho_{\beta_{4j}\beta_{3j}} & \sigma^2_{\beta_{4j}} \end{array} \right) \right) \text{, for sch.id j = 1,} \dots \text{,J} \end{aligned} \]
In the sim_longitudinal data that comes with the package, the only level 1 predictor is wave. The group and treatment variables are at the student level (level 2) and prop_low is at the school level. Let’s also add a district-level variable (just the average score for each district).
# calculate district means
dist_mean <- tapply(
  sim_longitudinal$score,
  sim_longitudinal$district,
  mean
)

# put them in a df to merge
dist_mean <- data.frame(
  district = names(dist_mean),
  dist_mean = dist_mean
)

# create a new df with dist_mean added
d <- merge(sim_longitudinal, dist_mean, by = "district")
Now we can fit a model that has predictors at every level. We’ll allow wave to vary randomly at each level too.
group_preds_m1 <- lmer(score ~ wave + group + treatment + prop_low + dist_mean +
  (wave | sid) + (wave | school) + (wave | district),
  data = d
)
#> boundary (singular) fit: see help('isSingular')
extract_eq(group_preds_m1)
\[ \begin{aligned} \operatorname{score}_{i} &\sim N \left(\alpha_{j[i],k[i],l[i]} + \beta_{1j[i],k[i],l[i]}(\operatorname{wave}), \sigma^2 \right) \\ \left( \begin{array}{c} \begin{aligned} &\alpha_{j} \\ &\beta_{1j} \end{aligned} \end{array} \right) &\sim N \left( \left( \begin{array}{c} \begin{aligned} &\gamma_{0}^{\alpha} + \gamma_{1}^{\alpha}(\operatorname{group}_{\operatorname{low}}) + \gamma_{2}^{\alpha}(\operatorname{group}_{\operatorname{medium}}) + \gamma_{3}^{\alpha}(\operatorname{treatment}_{\operatorname{1}}) \\ &\mu_{\beta_{1j}} \end{aligned} \end{array} \right) , \left( \begin{array}{cc} \sigma^2_{\alpha_{j}} & \rho_{\alpha_{j}\beta_{1j}} \\ \rho_{\beta_{1j}\alpha_{j}} & \sigma^2_{\beta_{1j}} \end{array} \right) \right) \text{, for sid j = 1,} \dots \text{,J} \\ \left( \begin{array}{c} \begin{aligned} &\alpha_{k} \\ &\beta_{1k} \end{aligned} \end{array} \right) &\sim N \left( \left( \begin{array}{c} \begin{aligned} &\gamma_{0}^{\alpha} + \gamma_{1}^{\alpha}(\operatorname{prop\_low}) \\ &\mu_{\beta_{1k}} \end{aligned} \end{array} \right) , \left( \begin{array}{cc} \sigma^2_{\alpha_{k}} & \rho_{\alpha_{k}\beta_{1k}} \\ \rho_{\beta_{1k}\alpha_{k}} & \sigma^2_{\beta_{1k}} \end{array} \right) \right) \text{, for school k = 1,} \dots \text{,K} \\ \left( \begin{array}{c} \begin{aligned} &\alpha_{l} \\ &\beta_{1l} \end{aligned} \end{array} \right) &\sim N \left( \left( \begin{array}{c} \begin{aligned} &\gamma_{0}^{\alpha} + \gamma_{1}^{\alpha}(\operatorname{dist\_mean}) \\ &\mu_{\beta_{1l}} \end{aligned} \end{array} \right) , \left( \begin{array}{cc} \sigma^2_{\alpha_{l}} & \rho_{\alpha_{l}\beta_{1l}} \\ \rho_{\beta_{1l}\alpha_{l}} & \sigma^2_{\beta_{1l}} \end{array} \right) \right) \text{, for district l = 1,} \dots \text{,L} \end{aligned} \]
Interactions with multilevel models can be tricky because they can be within or across levels, and the notation changes depending on whether the random effect for the lower level (in the interaction term) is specified as varying randomly within the given level. Luckily, {equatiomatic} handles all of this for you.
First, let’s look at a model with interactions only at level 1.
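A sketch of such a fit, reconstructed from the equation below (the object name is assumed):

# level 1 interaction only; the grouping factor gets just a random intercept (sketch)
hsb_l1_int <- lmer(math ~ minority * female + (1 | sch.id), data = hsb)
extract_eq(hsb_l1_int)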
\[ \begin{aligned} \operatorname{math}_{i} &\sim N \left(\mu, \sigma^2 \right) \\ \mu &=\alpha_{j[i]} + \beta_{1}(\operatorname{minority}) + \beta_{2}(\operatorname{female}) + \beta_{3}(\operatorname{female} \times \operatorname{minority}) \\ \alpha_{j} &\sim N \left(\mu_{\alpha_{j}}, \sigma^2_{\alpha_{j}} \right) \text{, for sch.id j = 1,} \dots \text{,J} \end{aligned} \]
And now an interaction at only level 2:
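The equation below implies a model along these lines, assuming school-level variables meanses and sector are available in the data (they appear only in this example, so the exact data object is an assumption):

# both predictors and their interaction are at the school level (sketch)
hsb_l2_int <- lmer(math ~ meanses * sector + (1 | sch.id), data = hsb)
extract_eq(hsb_l2_int)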
\[ \begin{aligned} \operatorname{math}_{i} &\sim N \left(\alpha_{j[i]}, \sigma^2 \right) \\ \alpha_{j} &\sim N \left(\gamma_{0}^{\alpha} + \gamma_{1}^{\alpha}(\operatorname{meanses}) + \gamma_{2}^{\alpha}(\operatorname{sector}) + \gamma_{3}^{\alpha}(\operatorname{meanses} \times \operatorname{sector}), \sigma^2_{\alpha_{j}} \right) \text{, for sch.id j = 1,} \dots \text{,J} \end{aligned} \]
More complicated are cross-level interactions. Here’s a quick example.
cl_long1 <- lmer(score ~ wave * treatment + (wave | sid) + (1 | school) +
  (1 | district),
  data = sim_longitudinal
)
extract_eq(cl_long1)
\[ \begin{aligned} \operatorname{score}_{i} &\sim N \left(\alpha_{j[i],k[i],l[i]} + \beta_{1j[i]}(\operatorname{wave}), \sigma^2 \right) \\ \left( \begin{array}{c} \begin{aligned} &\alpha_{j} \\ &\beta_{1j} \end{aligned} \end{array} \right) &\sim N \left( \left( \begin{array}{c} \begin{aligned} &\gamma_{0}^{\alpha} + \gamma_{1}^{\alpha}(\operatorname{treatment}_{\operatorname{1}}) \\ &\gamma^{\beta_{1}}_{0} + \gamma^{\beta_{1}}_{1}(\operatorname{treatment}_{\operatorname{1}}) \end{aligned} \end{array} \right) , \left( \begin{array}{cc} \sigma^2_{\alpha_{j}} & \rho_{\alpha_{j}\beta_{1j}} \\ \rho_{\beta_{1j}\alpha_{j}} & \sigma^2_{\beta_{1j}} \end{array} \right) \right) \text{, for sid j = 1,} \dots \text{,J} \\ \alpha_{k} &\sim N \left(\mu_{\alpha_{k}}, \sigma^2_{\alpha_{k}} \right) \text{, for school k = 1,} \dots \text{,K} \\ \alpha_{l} &\sim N \left(\mu_{\alpha_{l}}, \sigma^2_{\alpha_{l}} \right) \text{, for district l = 1,} \dots \text{,L} \end{aligned} \]
Note that the treatment variable is shown as a predictor of both the level 1 intercept and the level 1 slope. But if the slope is not randomly varying at this level, then the notation has to change.
cl_long2 <- lmer(score ~ wave * treatment + (1 | sid) + (1 | school) +
  (1 | district),
  data = sim_longitudinal
)
#> boundary (singular) fit: see help('isSingular')
extract_eq(cl_long2)
\[ \begin{aligned} \operatorname{score}_{i} &\sim N \left(\alpha_{j[i],k[i],l[i]} + \beta_{1}(\operatorname{wave}), \sigma^2 \right) \\ \alpha_{j} &\sim N \left(\gamma_{0}^{\alpha} + \gamma_{1}^{\alpha}(\operatorname{treatment}_{\operatorname{1}}) + \gamma_{2}^{\alpha}(\operatorname{treatment}_{\operatorname{1}} \times \operatorname{wave}), \sigma^2_{\alpha_{j}} \right) \text{, for sid j = 1,} \dots \text{,J} \\ \alpha_{k} &\sim N \left(\mu_{\alpha_{k}}, \sigma^2_{\alpha_{k}} \right) \text{, for school k = 1,} \dots \text{,K} \\ \alpha_{l} &\sim N \left(\mu_{\alpha_{l}}, \sigma^2_{\alpha_{l}} \right) \text{, for district l = 1,} \dots \text{,L} \end{aligned} \]
This works even for really complicated models, including three-way interactions that contain a cross-level interaction. For example:
cl_long3 <- lmer(
  score ~ wave * group * treatment + wave * prop_low * treatment +
    (wave | sid) + (wave | school) +
    (wave + treatment | district),
  data = sim_longitudinal
)
#> boundary (singular) fit: see help('isSingular')
extract_eq(cl_long3)
\[ \begin{aligned} \operatorname{score}_{i} &\sim N \left(\alpha_{j[i],k[i],l[i]} + \beta_{1j[i],k[i],l[i]}(\operatorname{wave}), \sigma^2 \right) \\ \left( \begin{array}{c} \begin{aligned} &\alpha_{j} \\ &\beta_{1j} \end{aligned} \end{array} \right) &\sim N \left( \left( \begin{array}{c} \begin{aligned} &\gamma_{0}^{\alpha} + \gamma_{1}^{\alpha}(\operatorname{group}_{\operatorname{low}}) + \gamma_{2}^{\alpha}(\operatorname{group}_{\operatorname{medium}}) + \gamma_{3l[i]}^{\alpha}(\operatorname{treatment}_{\operatorname{1}}) + \gamma_{4}^{\alpha}(\operatorname{group}_{\operatorname{low}} \times \operatorname{treatment}_{\operatorname{1}}) + \gamma_{5}^{\alpha}(\operatorname{group}_{\operatorname{medium}} \times \operatorname{treatment}_{\operatorname{1}}) \\ &\gamma^{\beta_{1}}_{0} + \gamma^{\beta_{1}}_{1}(\operatorname{group}_{\operatorname{low}}) + \gamma^{\beta_{1}}_{2}(\operatorname{group}_{\operatorname{medium}}) + \gamma^{\beta_{1}}_{3}(\operatorname{treatment}_{\operatorname{1}}) + \gamma^{\beta_{1}}_{4}(\operatorname{group}_{\operatorname{low}} \times \operatorname{treatment}_{\operatorname{1}}) + \gamma^{\beta_{1}}_{5}(\operatorname{group}_{\operatorname{medium}} \times \operatorname{treatment}_{\operatorname{1}}) \end{aligned} \end{array} \right) , \left( \begin{array}{cc} \sigma^2_{\alpha_{j}} & \rho_{\alpha_{j}\beta_{1j}} \\ \rho_{\beta_{1j}\alpha_{j}} & \sigma^2_{\beta_{1j}} \end{array} \right) \right) \text{, for sid j = 1,} \dots \text{,J} \\ \left( \begin{array}{c} \begin{aligned} &\alpha_{k} \\ &\beta_{1k} \end{aligned} \end{array} \right) &\sim N \left( \left( \begin{array}{c} \begin{aligned} &\gamma_{0}^{\alpha} + \gamma_{1}^{\alpha}(\operatorname{prop\_low}) + \gamma_{2}^{\alpha}(\operatorname{prop\_low} \times \operatorname{treatment}_{\operatorname{1}}) \\ &\gamma^{\beta_{1}}_{0} + \gamma^{\beta_{1}}_{1}(\operatorname{prop\_low}) + \gamma^{\beta_{1}}_{1}(\operatorname{prop\_low} \times \operatorname{treatment}_{\operatorname{1}}) \end{aligned} \end{array} \right) , \left( \begin{array}{cc} \sigma^2_{\alpha_{k}} & \rho_{\alpha_{k}\beta_{1k}} \\ \rho_{\beta_{1k}\alpha_{k}} & \sigma^2_{\beta_{1k}} \end{array} \right) \right) \text{, for school k = 1,} \dots \text{,K} \\ \left( \begin{array}{c} \begin{aligned} &\alpha_{l} \\ &\beta_{1l} \\ &\gamma_{3l} \end{aligned} \end{array} \right) &\sim N \left( \left( \begin{array}{c} \begin{aligned} &\mu_{\alpha_{l}} \\ &\mu_{\beta_{1l}} \\ &\mu_{\gamma_{3l}} \end{aligned} \end{array} \right) , \left( \begin{array}{ccc} \sigma^2_{\alpha_{l}} & \rho_{\alpha_{l}\beta_{1l}} & \rho_{\alpha_{l}\gamma_{3l}} \\ \rho_{\beta_{1l}\alpha_{l}} & \sigma^2_{\beta_{1l}} & \rho_{\beta_{1l}\gamma_{3l}} \\ \rho_{\gamma_{3l}\alpha_{l}} & \rho_{\gamma_{3l}\beta_{1l}} & \sigma^2_{\gamma_{3l}} \end{array} \right) \right) \text{, for district l = 1,} \dots \text{,L} \end{aligned} \]
Finally, there is some support for alternative variance-covariance specifications. For example, you may want to specify a model with only the variance terms estimated, and not the covariances.
hsb_varsonly <- lmer(math ~ minority * female + (minority * female || sch.id),
  data = hsb
)
#> boundary (singular) fit: see help('isSingular')
extract_eq(hsb_varsonly)
\[ \begin{aligned} \operatorname{math}_{i} &\sim N \left(\mu, \sigma^2 \right) \\ \mu &=\alpha_{j[i]} + \beta_{1j[i]}(\operatorname{minority}) + \beta_{2j[i]}(\operatorname{female}) + \beta_{3j[i]}(\operatorname{female} \times \operatorname{minority}) \\ \left( \begin{array}{c} \begin{aligned} &\alpha_{j} \\ &\beta_{1j} \\ &\beta_{2j} \\ &\beta_{3j} \end{aligned} \end{array} \right) &\sim N \left( \left( \begin{array}{c} \begin{aligned} &\mu_{\alpha_{j}} \\ &\mu_{\beta_{1j}} \\ &\mu_{\beta_{2j}} \\ &\mu_{\beta_{3j}} \end{aligned} \end{array} \right) , \left( \begin{array}{cccc} \sigma^2_{\alpha_{j}} & 0 & 0 & 0 \\ 0 & \sigma^2_{\beta_{1j}} & 0 & 0 \\ 0 & 0 & \sigma^2_{\beta_{2j}} & 0 \\ 0 & 0 & 0 & \sigma^2_{\beta_{3j}} \end{array} \right) \right) \text{, for sch.id j = 1,} \dots \text{,J} \end{aligned} \]
Or maybe you want to group by the same thing multiple times. In this exact model I don’t think this makes any sense, but there are cases where it can. Note that, again, this model produces a warning.
hsb_doublegroup <- lmer(math ~ minority * female +
  (minority | sch.id) + (female | sch.id),
  data = hsb
)
#> Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
#> Model failed to converge with max|grad| = 0.00320004 (tol = 0.002, component 1)
extract_eq(hsb_doublegroup)
\[ \begin{aligned} \operatorname{math}_{i} &\sim N \left(\mu, \sigma^2 \right) \\ \mu &=\alpha_{j[i],k[i]} + \beta_{1j[i]}(\operatorname{minority}) + \beta_{2k[i]}(\operatorname{female}) + \beta_{3}(\operatorname{female} \times \operatorname{minority}) \\ \left( \begin{array}{c} \begin{aligned} &\alpha_{j} \\ &\beta_{1j} \end{aligned} \end{array} \right) &\sim N \left( \left( \begin{array}{c} \begin{aligned} &\mu_{\alpha_{j}} \\ &\mu_{\beta_{1j}} \end{aligned} \end{array} \right) , \left( \begin{array}{cc} \sigma^2_{\alpha_{j}} & \rho_{\alpha_{j}\beta_{1j}} \\ \rho_{\beta_{1j}\alpha_{j}} & \sigma^2_{\beta_{1j}} \end{array} \right) \right) \text{, for sch.id j = 1,} \dots \text{,J} \\ \left( \begin{array}{c} \begin{aligned} &\alpha_{k} \\ &\beta_{2k} \end{aligned} \end{array} \right) &\sim N \left( \left( \begin{array}{c} \begin{aligned} &\mu_{\alpha_{k}} \\ &\mu_{\beta_{2k}} \end{aligned} \end{array} \right) , \left( \begin{array}{cc} \sigma^2_{\alpha_{k}} & \rho_{\alpha_{k}\beta_{2k}} \\ \rho_{\beta_{2k}\alpha_{k}} & \sigma^2_{\beta_{2k}} \end{array} \right) \right) \text{, for sch.id.1 k = 1,} \dots \text{,K} \end{aligned} \]
And finally, you can have a mix of different random effect specifications and it should generally still work.
long_mixed_ranef <- lmer(
  score ~ wave +
    (wave || sid) + (wave | school) + (1 | school) + (wave || district),
  data = sim_longitudinal
)
#> boundary (singular) fit: see help('isSingular')
extract_eq(long_mixed_ranef)
\[ \begin{aligned} \operatorname{score}_{i} &\sim N \left(\alpha_{j[i],k[i],l[i],m[i]} + \beta_{1j[i],k[i],m[i]}(\operatorname{wave}), \sigma^2 \right) \\ \left( \begin{array}{c} \begin{aligned} &\alpha_{j} \\ &\beta_{1j} \end{aligned} \end{array} \right) &\sim N \left( \left( \begin{array}{c} \begin{aligned} &\mu_{\alpha_{j}} \\ &\mu_{\beta_{1j}} \end{aligned} \end{array} \right) , \left( \begin{array}{cc} \sigma^2_{\alpha_{j}} & 0 \\ 0 & \sigma^2_{\beta_{1j}} \end{array} \right) \right) \text{, for sid j = 1,} \dots \text{,J} \\ \left( \begin{array}{c} \begin{aligned} &\alpha_{k} \\ &\beta_{1k} \end{aligned} \end{array} \right) &\sim N \left( \left( \begin{array}{c} \begin{aligned} &\mu_{\alpha_{k}} \\ &\mu_{\beta_{1k}} \end{aligned} \end{array} \right) , \left( \begin{array}{cc} \sigma^2_{\alpha_{k}} & \rho_{\alpha_{k}\beta_{1k}} \\ \rho_{\beta_{1k}\alpha_{k}} & \sigma^2_{\beta_{1k}} \end{array} \right) \right) \text{, for school k = 1,} \dots \text{,K} \\ \alpha_{l} &\sim N \left(\mu_{\alpha_{l}}, \sigma^2_{\alpha_{l}} \right) \text{, for school.1 l = 1,} \dots \text{,L} \\ \left( \begin{array}{c} \begin{aligned} &\alpha_{m} \\ &\beta_{1m} \end{aligned} \end{array} \right) &\sim N \left( \left( \begin{array}{c} \begin{aligned} &\mu_{\alpha_{m}} \\ &\mu_{\beta_{1m}} \end{aligned} \end{array} \right) , \left( \begin{array}{cc} \sigma^2_{\alpha_{m}} & 0 \\ 0 & \sigma^2_{\beta_{1m}} \end{array} \right) \right) \text{, for district m = 1,} \dots \text{,M} \end{aligned} \]
With that said, this is the part of the code that I would consider the most “experimental” at this point, so please do reach out if you have issues or run into use cases that are not supported.
Because of the multilevel nature of the notation used by {equatiomatic}, tracking the estimated coefficients can at times be a little difficult. To help with this, we keep the Greek notation with the estimated coefficients as subscripts. For example, let’s extract the equation from our model with group-level predictors.
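That call looks roughly like this; setting use_coefs = TRUE replaces the Greek symbols with the estimated values while keeping the Greek notation as subscripts:

# show estimated coefficients, with the Greek terms retained as subscripts
extract_eq(group_preds_m1, use_coefs = TRUE)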
\[ \begin{aligned} \operatorname{\widehat{score}}_{i} &\sim N \left(-20.6_{\alpha_{j[i],k[i],l[i]}} + 0.17_{\beta_{1j[i],k[i],l[i]}}(\operatorname{wave}), \sigma^2 \right) \\ \left( \begin{array}{c} \begin{aligned} &\alpha_{j} \\ &\beta_{1j} \end{aligned} \end{array} \right) &\sim N \left( \left( \begin{array}{c} \begin{aligned} &-5.23_{\gamma_{1}^{\alpha}}(\operatorname{group}_{\operatorname{low}}) - 4.03_{\gamma_{2}^{\alpha}}(\operatorname{group}_{\operatorname{medium}}) - 1.97_{\gamma_{3}^{\alpha}}(\operatorname{treatment}_{\operatorname{1}}) \\ &0 \end{aligned} \end{array} \right) , \left( \begin{array}{cc} 9.29 & -0.2 \\ -0.2 & 0.29 \end{array} \right) \right) \text{, for sid j = 1,} \dots \text{,J} \\ \left( \begin{array}{c} \begin{aligned} &\alpha_{k} \\ &\beta_{1k} \end{aligned} \end{array} \right) &\sim N \left( \left( \begin{array}{c} \begin{aligned} &12.81_{\gamma_{1}^{\alpha}}(\operatorname{prop\_low}) \\ &0 \end{aligned} \end{array} \right) , \left( \begin{array}{cc} 2.84 & 1 \\ 1 & 0 \end{array} \right) \right) \text{, for school k = 1,} \dots \text{,K} \\ \left( \begin{array}{c} \begin{aligned} &\alpha_{l} \\ &\beta_{1l} \end{aligned} \end{array} \right) &\sim N \left( \left( \begin{array}{c} \begin{aligned} &1.21_{\gamma_{1}^{\alpha}}(\operatorname{dist\_mean}) \\ &0 \end{aligned} \end{array} \right) , \left( \begin{array}{cc} 0 & -1 \\ -1 & 0 \end{array} \right) \right) \text{, for district l = 1,} \dots \text{,L} \end{aligned} \]
There are still a few things I’m planning to implement that I haven’t quite gotten around to yet. These include some of the options available for lm extraction, including intercept and raw_tex.
Additionally, the Greek notation as subscripts may not always be desirable, particularly if the model is fairly simple. Future developments will likely make this optional.
The range of models that you can fit with lme4::lmer is huge, but I hope this will handle a wide range of them. I also recognize that some users may want a different notation for the models. At this point, I’m not planning to implement other notations, but that could change if there’s enough demand for it. I’m open to any and all suggestions for how to improve the package, and would love pull requests, big or small, to help support it.