Introduction

Barr, Levy, Scheepers, & Tily (2013) suggest that for valid statistical inference, a regression model must control for all possible confounding factors, specifically those coming from random effects such as subjects and items. Bates, Kliegl, Vasishth, & Baayen (2015) counter that this strategy leads to overfitting and that an appropriately parsimonious model must be chosen, preferably based on theory but possibly also using stepwise elimination (Matuschek, Kliegl, Vasishth, Baayen, & Bates, 2017). Both strategies require a maximal model to be identified (for Barr et al. (2013), this is the final model; for Matuschek et al. (2017), this is the basis for backward stepwise elimination), but for many psycholinguistic experiments, the truly maximal model will fail to converge and a reasonable subset model needs to be chosen.

The buildmer package aims to automate two procedures: identifying the maximal model that can still converge, and performing backward stepwise elimination from it based on a variety of criteria (change in log-likelihood, AIC, BIC, change in explained deviance). The package does not contain any model-fitting code; it functions as an administrative loop around other packages, simply building up a maximal formula object and passing it along. Currently, the package supports models that can be fitted by (g)lm, (g)lmer (package lme4), gls, lme (package nlme), bam, gam, gamm (package mgcv), gamm4 (package gamm4), glmmTMB (package glmmTMB), multinom (package nnet), lmertree and glmertree (package glmertree), mixed_model (package GLMMadaptive), clmm (package ordinal), and any other package if you provide your own wrapper functions.
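To give an idea of the calling convention before turning to a real example, here is a minimal sketch using lme4’s sleepstudy data, purely for illustration and not part of this vignette’s analysis: buildmer is handed the maximal formula and the data, and takes care of term ordering and elimination itself using its defaults.

library(buildmer)
library(lme4)
# Order the terms of the maximal model, then eliminate terms that do not
# significantly improve the fit (the default direction=c('order','backward')).
m.sleep <- buildmer(Reaction ~ Days + (Days | Subject),data=lme4::sleepstudy)
summary(m.sleep)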

A vowel study

To illustrate what buildmer can do for you, the package comes with a particularly pathological dataset called vowels. It looks like this:

library(buildmer)
head(vowels)
##   participant   word vowel neighborhood  timepoint       f1       f2 following information stress
## 1           1 Keulen   oey      36.1961 0.05118788 407.8202 1838.611      lOns    4.511475   TRUE
## 2           1 Keulen   oey      36.1961 0.11941466 416.2701 1635.436      lOns    4.511475   TRUE
## 3           1 Keulen   oey      36.1961 0.18764143 450.6488 1654.561      lOns    4.511475   TRUE
## 4           1 Keulen   oey      36.1961 0.25586821 449.7890 1645.075      lOns    4.511475   TRUE
## 5           1 Keulen   oey      36.1961 0.32409498 444.6863 1606.261      lOns    4.511475   TRUE
## 6           1 Keulen   oey      36.1961 0.39232176 438.5311 1607.581      lOns    4.511475   TRUE

This is a pilot study that I conducted when I was just starting my PhD, and attempted to analyze in probably the worst way possible. The research question was whether vowel diphthongization in five Dutch vowels was affected by syllable structure, such that an /l/ within the same syllable would block diphthongization but an /l/ in the onset of the next syllable would permit it. In plain English, the question was whether these five vowels in Dutch were pronounced like the vowel in English ‘fear’, with the tongue held constant for the duration of the vowel, or like the vowel in English ‘fade’, which has an upward tongue movement towards the position of the vowel in English ‘fit’. The position of the tongue can be measured in a simple word-list reading experiment by measuring the speech signal’s so-called ‘first formant’, labeled f1 in this dataset, where a lower F1 corresponds to a higher tongue position. Thus, the research question is whether the F1 changes or remains stable over the duration of each vowel depending on whether the following consonant is an ‘l’ in the same syllable (coded as lCda in column following) or in the onset of the next syllable (coded as lOns). Additionally, I wanted to control for the factors neighborhood (a measure of entropy: ‘if only one sound is changed anywhere in this word, how many new words could be generated?’), information (another measure of entropy derived from the famous Shannon information measure), and stress (a dummy encoding whether the vowel was stressed or unstressed).

An entirely reasonable way to analyze these data, and the approach I ultimately pursued later in my PhD, would be to take samples from each vowel at 75% realization and at 25% realization, subtract these two, and use this ‘delta score’ as the dependent variable: if this score is non-zero, the vowel changes over time; if it is approximately zero, the vowel is stable. In this dataset, however, I instead took as many samples as were present in the part of the wave file corresponding to each vowel, and wanted to fit a linear regression line through all of these samples as a function of the sample number. This number, scaled from 0 to 1 per token, is listed in column timepoint. Making the model even more challenging to fit, only six participants were tested in this pilot study, which makes it very difficult to find an optimum when a full random-slope structure is included.
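For illustration, this delta-score approach could be computed from the present dataset roughly as follows. This is only a sketch, and it assumes that each participant produced each word once, so that the combination of participant and word identifies a single vowel token; it is not part of the analysis below.

# For each token, take the F1 samples closest to 25% and 75% of the vowel's
# realization and compute their difference.
tokens <- split(vowels,list(vowels$participant,vowels$word),drop=TRUE)
delta <- do.call(rbind,lapply(tokens,function (tok) {
    f1.25 <- tok$f1[which.min(abs(tok$timepoint - .25))]
    f1.75 <- tok$f1[which.min(abs(tok$timepoint - .75))]
    data.frame(participant=tok$participant[1],word=tok$word[1],
               vowel=tok$vowel[1],following=tok$following[1],delta=f1.75 - f1.25)
}))
head(delta)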

In lme4 syntax, the fully maximal model would be given by the following formula:

f <- f1 ~ vowel*timepoint*following * neighborhood*information*stress + 
     (vowel*timepoint*following * neighborhood*information*stress | participant) +
     (timepoint | word)

It should go without saying that this is a completely unreasonable model that will never converge. A first step towards reducing the model structure could be to reason that effects of neighborhood, information, and stress, which are all properties of the individual words in this dataset, could be subsumed into the random effects by words. This reduces the maximal model to:

f <- f1 ~ vowel*timepoint*following +
     (vowel*timepoint*following | participant) +
     (timepoint | word)

This model is still somewhat on the large side, so we will now use buildmer to check:

- if this model is capable of converging at all;
- if all of these terms are really necessary.

Finding the maximal feasible model & doing stepwise elimination from it

To illustrate buildmer’s modular capabilities, we’ll fit this model in two steps. We start by identifying the maximal model that is still capable of converging. We do this by running buildmer with an optional buildmerControl argument, in which we set the direction parameter to 'order'. We also set lme4’s optimizer to bobyqa, as this manages to get much further than the default nloptwrap. Note how control parameters intended for lmer are specified in the args list inside buildmerControl. For backward-compatibility reasons, they can also be passed to buildmer itself, but this is now deprecated.

library(lme4)
m <- buildmer(f,data=vowels,buildmerControl=buildmerControl(direction='order',
          args=list(control=lmerControl(optimizer='bobyqa'))))
## Determining predictor order
## Fitting via lm: f1 ~ 1
## Currently evaluating LRT for: following, timepoint, vowel
## Fitting via lm: f1 ~ 1 + following
## Fitting via lm: f1 ~ 1 + timepoint
## Fitting via lm: f1 ~ 1 + vowel
## Updating formula: f1 ~ 1 + vowel
## Currently evaluating LRT for: following, timepoint
## Fitting via lm: f1 ~ 1 + vowel + following
## Fitting via lm: f1 ~ 1 + vowel + timepoint
## Updating formula: f1 ~ 1 + vowel + timepoint
## Currently evaluating LRT for: following, vowel:timepoint
## Fitting via lm: f1 ~ 1 + vowel + timepoint + following
## Fitting via lm: f1 ~ 1 + vowel + timepoint + vowel:timepoint
## Updating formula: f1 ~ 1 + vowel + timepoint + vowel:timepoint
## Currently evaluating LRT for: following
## Fitting via lm: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following
## Updating formula: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following
## Currently evaluating LRT for: timepoint:following, vowel:following
## Fitting via lm: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following + timepoint:following
## Fitting via lm: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following + vowel:following
## Updating formula: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following + timepoint:following
## Currently evaluating LRT for: vowel:following
## Fitting via lm: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following + timepoint:following +
##     vowel:following
## Updating formula: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following + timepoint:following +
##     vowel:following
## Currently evaluating LRT for: vowel:timepoint:following
## Fitting via lm: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following + timepoint:following +
##     vowel:following + vowel:timepoint:following
## Updating formula: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following + timepoint:following +
##     vowel:following + vowel:timepoint:following
## Fitting via gam, with REML: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following +
##     timepoint:following + vowel:following + vowel:timepoint:following
## Currently evaluating LRT for: 1 | participant, 1 | word
## Fitting via lmer, with REML: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following +
##     timepoint:following + vowel:following + vowel:timepoint:following + (1 | participant)
## Fitting via lmer, with REML: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following +
##     timepoint:following + vowel:following + vowel:timepoint:following + (1 | word)
## Updating formula: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following + timepoint:following +
##     vowel:following + vowel:timepoint:following + (1 | participant)
## Currently evaluating LRT for: following | participant, timepoint | participant, vowel |
##     participant, 1 | word
## Fitting via lmer, with REML: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following +
##     timepoint:following + vowel:following + vowel:timepoint:following + (1 + following |
##     participant)
## Fitting via lmer, with REML: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following +
##     timepoint:following + vowel:following + vowel:timepoint:following + (1 + timepoint |
##     participant)
## Fitting via lmer, with REML: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following +
##     timepoint:following + vowel:following + vowel:timepoint:following + (1 + vowel | participant)
## boundary (singular) fit: see ?isSingular
## Fitting via lmer, with REML: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following +
##     timepoint:following + vowel:following + vowel:timepoint:following + (1 | participant) + (1 |
##     word)
## Updating formula: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following + timepoint:following +
##     vowel:following + vowel:timepoint:following + (1 | participant) + (1 | word)
## Currently evaluating LRT for: following | participant, timepoint | participant, vowel |
##     participant, timepoint | word
## Fitting via lmer, with REML: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following +
##     timepoint:following + vowel:following + vowel:timepoint:following + (1 + following |
##     participant) + (1 | word)
## Fitting via lmer, with REML: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following +
##     timepoint:following + vowel:following + vowel:timepoint:following + (1 + timepoint |
##     participant) + (1 | word)
## Fitting via lmer, with REML: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following +
##     timepoint:following + vowel:following + vowel:timepoint:following + (1 + vowel | participant)
##     + (1 | word)
## boundary (singular) fit: see ?isSingular
## Fitting via lmer, with REML: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following +
##     timepoint:following + vowel:following + vowel:timepoint:following + (1 | participant) + (1 +
##     timepoint | word)
## Updating formula: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following + timepoint:following +
##     vowel:following + vowel:timepoint:following + (1 | participant) + (1 + timepoint | word)
## Currently evaluating LRT for: following | participant, timepoint | participant, vowel |
##     participant
## Fitting via lmer, with REML: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following +
##     timepoint:following + vowel:following + vowel:timepoint:following + (1 + following |
##     participant) + (1 + timepoint | word)
## Fitting via lmer, with REML: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following +
##     timepoint:following + vowel:following + vowel:timepoint:following + (1 + timepoint |
##     participant) + (1 + timepoint | word)
## Fitting via lmer, with REML: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following +
##     timepoint:following + vowel:following + vowel:timepoint:following + (1 + vowel | participant)
##     + (1 + timepoint | word)
## boundary (singular) fit: see ?isSingular
## Updating formula: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following + timepoint:following +
##     vowel:following + vowel:timepoint:following + (1 + timepoint | participant) + (1 + timepoint |
##     word)
## Currently evaluating LRT for: following | participant, vowel | participant
## Fitting via lmer, with REML: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following +
##     timepoint:following + vowel:following + vowel:timepoint:following + (1 + timepoint + following
##     | participant) + (1 + timepoint | word)
## Fitting via lmer, with REML: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following +
##     timepoint:following + vowel:following + vowel:timepoint:following + (1 + timepoint + vowel |
##     participant) + (1 + timepoint | word)
## boundary (singular) fit: see ?isSingular
## Updating formula: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following + timepoint:following +
##     vowel:following + vowel:timepoint:following + (1 + timepoint + following | participant) + (1 +
##     timepoint | word)
## Currently evaluating LRT for: timepoint:following | participant, vowel | participant
## Fitting via lmer, with REML: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following +
##     timepoint:following + vowel:following + vowel:timepoint:following + (1 + timepoint + following
##     + timepoint:following | participant) + (1 + timepoint | word)
## Fitting via lmer, with REML: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following +
##     timepoint:following + vowel:following + vowel:timepoint:following + (1 + timepoint + following
##     + vowel | participant) + (1 + timepoint | word)
## boundary (singular) fit: see ?isSingular
## Updating formula: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following + timepoint:following +
##     vowel:following + vowel:timepoint:following + (1 + timepoint + following + timepoint:following
##     | participant) + (1 + timepoint | word)
## Currently evaluating LRT for: vowel | participant
## Fitting via lmer, with REML: f1 ~ 1 + vowel + timepoint + vowel:timepoint + following +
##     timepoint:following + vowel:following + vowel:timepoint:following + (1 + timepoint + following
##     + timepoint:following + vowel | participant) + (1 + timepoint | word)
## boundary (singular) fit: see ?isSingular
## Ending the ordering procedure due to having reached the maximal feasible model - all higher models
##     failed to converge. The types of convergence failure are: Singular fit
## Finalizing by converting the model to lmerTest

The order step is useful if the maximal model includes random effects: buildmer starts out with an empty model and keeps adding terms until convergence can no longer be achieved. The order step adds terms in order of their contribution to a certain criterion, such that the most important random slopes are included first; this criterion is controlled by the crit argument. The default criterion is the significance of the change in log-likelihood (LRT: terms which yield lower \(\chi^2\) \(p\) values are considered more important), but other options are also supported. These are the raw log-likelihood (LL: terms which provide the largest increase in the log-likelihood; this measure will favor categorical predictors with many levels), AIC (AIC), BIC (BIC), explained deviance (deviance), and, for GAMMs fitted using package mgcv, the change in R-squared (F). You can select among them by passing e.g. crit='LRT' within the buildmerControl argument. The default direction is c('order','backward'), i.e. proceeding directly to backward stepwise elimination, but for illustration purposes we separate those steps here. (The crit argument also accepts vectors, such that e.g. direction=c('order','backward'),crit=c('LL','LRT') is allowed.)
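For example, to order terms by their improvement in AIC rather than by the default likelihood-ratio test, one could call (a sketch, not run here):

m.aic <- buildmer(f,data=vowels,buildmerControl=buildmerControl(direction='order',
          crit='AIC',args=list(control=lmerControl(optimizer='bobyqa'))))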

After a lot of model fits, the model converges onto the following maximal model:

(f <- formula(m@model))
## f1 ~ following + vowel + timepoint + vowel:timepoint + following:timepoint + 
##     following:vowel + following:vowel:timepoint + (1 + timepoint + 
##     following + timepoint:following | participant) + (1 + timepoint | 
##     word)

The maximal feasible model, i.e. the maximal model that is actually capable of converging, is one excluding random slopes for vowels by participants. This is not optimal for inference purposes, but for now it will do; we will see below that taking out the correlation parameters in the random effects makes it possible to include random slopes for vowels as well. We now proceed to the next step: stepwise elimination. This could also be done using e.g. lmerTest, but since the machinery was needed for direction='order' anyway, it came at very little cost to also implement stepwise elimination in buildmer (both forward and backward elimination are supported). Elimination uses the same criteria as the ordering step; if left unspecified, crit defaults to 'LRT', the likelihood-ratio test, which is the test preferred for mixed models by Matuschek et al. (2017).
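As an aside, forward stepwise elimination uses exactly the same interface; a sketch (not run here) would be:

m.forward <- buildmer(f,data=vowels,buildmerControl=buildmerControl(direction='forward',
          args=list(control=lmerControl(optimizer='bobyqa'))))

Below, we proceed with backward elimination from the maximal feasible model: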

m <- buildmer(f,data=vowels,buildmerControl=list(direction='backward',
          args=list(control=lmerControl(optimizer='bobyqa'))))
## Fitting ML and REML reference models
## Fitting via lmer, with REML: f1 ~ following + vowel + timepoint + vowel:timepoint +
##     following:timepoint + following:vowel + following:vowel:timepoint + (1 + timepoint + following
##     + timepoint:following | participant) + (1 + timepoint | word)
## 
## Fitting via lmer, with REML: f1 ~ following + vowel + timepoint + vowel:timepoint +
##     following:timepoint + following:vowel + following:vowel:timepoint + (1 + timepoint + following
##     + timepoint:following | participant) + (1 + timepoint | word)
## Testing terms
## Fitting via lmer, with ML: f1 ~ 1 + following + vowel + timepoint + vowel:timepoint +
##     following:timepoint + following:vowel + (1 + timepoint + following + timepoint:following |
##     participant) + (1 + timepoint | word)
## Fitting via lmer, with REML: f1 ~ 1 + following + vowel + timepoint + vowel:timepoint +
##     following:timepoint + following:vowel + following:vowel:timepoint + (1 + timepoint + following
##     | participant) + (1 + timepoint | word)
## Fitting via lmer, with REML: f1 ~ 1 + following + vowel + timepoint + vowel:timepoint +
##     following:timepoint + following:vowel + following:vowel:timepoint + (1 + timepoint + following
##     + timepoint:following | participant) + (1 | word)
##       grouping                      term                              block Iteration           LRT
## 1         <NA>                         1                            NA NA 1         1            NA
## 2         <NA>                 following                    NA NA following         1            NA
## 3         <NA>                     vowel                        NA NA vowel         1            NA
## 4         <NA>                 timepoint                    NA NA timepoint         1            NA
## 5         <NA>           vowel:timepoint              NA NA vowel:timepoint         1            NA
## 6         <NA>       following:timepoint          NA NA following:timepoint         1            NA
## 7         <NA>           following:vowel              NA NA following:vowel         1            NA
## 8         <NA> following:vowel:timepoint    NA NA following:vowel:timepoint         1  3.609316e-30
## 9  participant                         1                   NA participant 1         1            NA
## 10 participant                 timepoint           NA participant timepoint         1            NA
## 11 participant                 following           NA participant following         1            NA
## 12 participant       timepoint:following NA participant timepoint:following         1  1.013211e-10
## 13        word                         1                          NA word 1         1            NA
## 14        word                 timepoint                  NA word timepoint         1 2.198802e-153
## All terms are significant
## Finalizing by converting the model to lmerTest

It appears that in this example, all terms were significant in backward-stepwise elimination.

By default, buildmer automatically calculates summary and ANOVA statistics based on Wald \(z\)-scores (summary) or Wald \(\chi^2\) tests (ANOVA). For answering our research question, we look at the summary:

summary(m)
## Linear mixed model fit by REML
## (p-values based on Wald z-scores) ['lmerMod']
## Formula: f1 ~ following + vowel + timepoint + vowel:timepoint + following:timepoint +  
##     (1 + timepoint | word) + (1 + timepoint + following + timepoint:following |      participant)
##    Data: vowels
## Control: structure(list(optimizer = "bobyqa", restart_edge = TRUE, boundary.tol = 1e-05,  
##     calc.derivs = TRUE, use.last.params = FALSE, checkControl = list( 
##         check.nobs.vs.rankZ = "ignore", check.nobs.vs.nlev = "stop",  
##         check.nlev.gtreq.5 = "ignore", check.nlev.gtr.1 = "stop",  
##         check.nobs.vs.nRE = "stop", check.rankX = "message+drop.cols",  
##         check.scaleX = "warning", check.formula.LHS = "stop"),  
##     checkConv = list(check.conv.grad = list(action = "warning",  
##         tol = 0.002, relTol = NULL), check.conv.singular = list( 
##         action = "message", tol = 1e-04), check.conv.hess = list( 
##         action = "warning", tol = 1e-06)), optCtrl = list()), class = c("lmerControl",  "merControl"))
## 
## REML criterion at convergence: 149448.9
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -6.2347 -0.4168  0.0102  0.3877 21.6815 
## 
## Random effects:
##  Groups      Name                    Variance Std.Dev. Corr             
##  word        (Intercept)              2539.3   50.39                    
##              timepoint               11089.5  105.31   -0.78            
##  participant (Intercept)              2199.7   46.90                    
##              timepoint                3423.5   58.51   -0.50            
##              followinglOns             747.3   27.34    0.11  0.65      
##              timepoint:followinglOns  3053.2   55.26    0.64 -0.88 -0.67
##  Residual                            10018.1  100.09                    
## Number of obs: 12351, groups:  word, 148; participant, 6
## 
## Fixed effects:
##                         Estimate Std. Error t value Pr(>|t|)    
## (Intercept)              551.733     20.075  27.484  < 2e-16 ***
## followinglOns             49.106     14.504   3.386  0.00071 ***
## vowel1                   114.254      8.951  12.765  < 2e-16 ***
## vowel2                   142.083      8.910  15.946  < 2e-16 ***
## vowel3                  -125.955      8.626 -14.601  < 2e-16 ***
## vowel4                   -79.211      9.970  -7.945 1.94e-15 ***
## timepoint                -15.445     26.856  -0.575  0.56524    
## vowel1:timepoint         -39.663     18.239  -2.175  0.02966 *  
## vowel2:timepoint         -91.032     18.171  -5.010 5.45e-07 ***
## vowel3:timepoint         105.886     17.554   6.032 1.62e-09 ***
## vowel4:timepoint          38.924     20.271   1.920  0.05483 .  
## followinglOns:timepoint -136.888     29.411  -4.654 3.25e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) fllwnO vowel1 vowel2 vowel3 vowel4 timpnt vwl1:t vwl2:t vwl3:t vwl4:t
## follwnglOns -0.043                                                                      
## vowel1      -0.014  0.020                                                               
## vowel2      -0.017  0.025 -0.225                                                        
## vowel3      -0.029  0.031 -0.210 -0.207                                                 
## vowel4       0.034 -0.031 -0.278 -0.276 -0.266                                          
## timepoint   -0.533  0.596  0.017  0.020  0.034 -0.041                                   
## vowl1:tmpnt  0.011 -0.016 -0.787  0.177  0.165  0.218 -0.021                            
## vowl2:tmpnt  0.013 -0.020  0.177 -0.787  0.163  0.217 -0.025 -0.226                     
## vowl3:tmpnt  0.023 -0.024  0.166  0.164 -0.788  0.210 -0.044 -0.210 -0.208              
## vowl4:tmpnt -0.027  0.024  0.219  0.218  0.210 -0.788  0.051 -0.277 -0.276 -0.265       
## fllwnglOns:  0.567 -0.720 -0.016 -0.020 -0.024  0.024 -0.788  0.020  0.024  0.031 -0.030

The significant effect for followinglOns:timepoint shows that if the following /l/ is in the onset of the next syllable, there is a much larger change in F1 compared to the reference condition of having the following /l/ in the coda of the same syllable.
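The corresponding Wald \(\chi^2\) tests are available via anova() (output not shown here):

anova(m)
## (output not shown)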

Diagonal random-effect covariances

One hidden feature that is present in buildmer but that has not yet been discussed is the ability to group terms together in blocks for ordering and stepwise-elimination purposes. While the first argument to buildmer functions is normally a formula, it is also possible to pass a ‘buildmer terms list’. This is a data frame as generated by tabulate.formula:

tabulate.formula(f)
##    index    grouping                      term                                code
## 1   <NA>        <NA>                         1                                   1
## 2   <NA>        <NA>                 following                           following
## 3   <NA>        <NA>                     vowel                               vowel
## 4   <NA>        <NA>                 timepoint                           timepoint
## 5   <NA>        <NA>           vowel:timepoint                     vowel:timepoint
## 6   <NA>        <NA>       following:timepoint                 following:timepoint
## 7   <NA>        <NA>           following:vowel                     following:vowel
## 8   <NA>        <NA> following:vowel:timepoint           following:vowel:timepoint
## 9    9 1 participant                         1                   9 1 participant 1
## 10   9 1 participant                 timepoint           9 1 participant timepoint
## 11   9 1 participant                 following           9 1 participant following
## 12   9 1 participant       timepoint:following 9 1 participant timepoint:following
## 13  10 1        word                         1                         10 1 word 1
## 14  10 1        word                 timepoint                 10 1 word timepoint
##                                 block
## 1                             NA NA 1
## 2                     NA NA following
## 3                         NA NA vowel
## 4                     NA NA timepoint
## 5               NA NA vowel:timepoint
## 6           NA NA following:timepoint
## 7               NA NA following:vowel
## 8     NA NA following:vowel:timepoint
## 9                    NA participant 1
## 10           NA participant timepoint
## 11           NA participant following
## 12 NA participant timepoint:following
## 13                          NA word 1
## 14                  NA word timepoint

This is an internal buildmer data structure, but it is rather self-explanatory in how it is used. It is possible to modify the block column to force terms to be evaluated as a single group, rather than separately, by giving these terms the same block value. These values are not used for anything other than selecting which terms are grouped together, and this can be exploited to fit models with diagonal random-effect structures. The first step is to create explicit columns for the factor vowel; if this is not done, only the random-effect correlations between the vowels and the other random slopes will be eliminated, while those between the vowels themselves will remain.

vowels <- cbind(vowels,model.matrix(~vowel,vowels))

We next create a formula for this modified dataset. To make it easier to type, we do not explicitly diagonalize the formula ourselves, but use buildmer’s diag() method for formula objects. We then call tabulate.formula() on the new formula, providing a regular expression that matches terms belonging to the same vowel. Note that we cannot use the simple vowel factor in the fixed-effects part of the formula, as this will break buildmer’s marginality checks when considering which terms are eligible for inclusion or removal.

form <- diag(f1 ~ (vowel1+vowel2+vowel3+vowel4)*timepoint*following + 
         ((vowel1+vowel2+vowel3+vowel4)*timepoint*following | participant) +
         (timepoint | word))
terms <- tabulate.formula(form,group='vowel[^:]')

Finally, we can instruct buildmer to use this specially-crafted terms object by simply passing it along instead of a regular formula. buildmer will recognize what is going on, and use the variable name specified in the dep control argument as the dependent variable in the data frame; this variable name should be provided as a character string.

m <- buildmer(terms,data=vowels,buildmerControl=buildmerControl(dep='f1',
          args=list(control=lmerControl(optimizer='bobyqa'))))
## (output not shown)

This approach allows random slopes for vowel and for vowel:timepoint to make it into the model, both of which significantly improve the fit. This model seems much more adequate for statistical inference.
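As before, the random-effects structure that made it into the final model can be inspected directly from the fitted model (output not shown here):

formula(m@model)
## (output not shown)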

Other options

Because buildmer does not do any model fitting by itself but is only an administrative formula processor around pre-existing modeling functions, it was straightforward to extend it beyond its original purpose of mixed-effects models. The logical extension of buildmer to GAMMs is fully supported, with appropriate safeguards against using likelihood-based tests for bam and gamm models in the generalized case, which use PQL (penalized quasi-likelihood). Relevant functions are available as buildbam, buildgam, buildgamm, and buildgamm4; for buildbam and buildgam, random effects in lme4 form are converted to s(...,bs='re') form automatically. glmmTMB models are also supported via function buildglmmTMB, although their syntax for covariance structures (e.g. diag(timepoint | participant)) is not; these models are still useful for their ability to handle autocorrelation and zero-inflation, and to use REML for GLMMs. From package nlme, gls models are supported via buildgls and lme models via buildlme. At the request of Willemijn Heeren, buildmer was also extended to handle multinomial-logistic-regression models fitted by function multinom from package nnet; see function buildmultinom. buildmertree makes it possible to do term ordering and backward elimination of the random-effects part of glmertree models. buildGLMMadaptive works with function mixed_model from package GLMMadaptive. buildclmm uses functions clm and clmm from package ordinal. Finally, buildcustom allows you to write your own wrapper functions, making it possible to use the buildmer machinery with anything that accepts a formula.
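As an illustration of these wrappers, a GAMM-style version of the vowel model could be sketched with buildbam as follows. This is only a sketch (not run here); the smooth term is purely illustrative and is not part of the analysis above.

library(mgcv)
# lme4-style random-effect terms are converted to s(...,bs='re') form automatically.
m.bam <- buildbam(f1 ~ s(timepoint) + vowel*following +
          (timepoint | participant) + (timepoint | word),data=vowels)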

References

Barr, D. J., Levy, R., Scheepers, C., & Tily, H. J. (2013). Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68(3), 255–278.
Bates, D., Kliegl, R., Vasishth, S., & Baayen, H. (2015). Parsimonious mixed models. arXiv Preprint arXiv:1506.04967.
Matuschek, H., Kliegl, R., Vasishth, S., Baayen, H., & Bates, D. (2017). Balancing Type I error and power in linear mixed models. Journal of Memory and Language, 94, 305–315.
