This vignette provides an overview of the bayesMeanScale package, which is designed to compute model predictions, marginal effects, and comparisons of marginal effects for several different fixed-effect generalized linear models fit using the rstanarm package (https://mc-stan.org/rstanarm/). In particular, these statistics are computed on the mean scale rather than the link scale for easier interpretation. For example, rather than working on the log-odds scale for a logistic regression, we focus on the probability scale.

To get to the mean scale, bayesMeanScale takes a random sample with replacement from the joint posterior distribution of the model coefficients. This matrix of draws is then multiplied by the adjusted data matrix (adjusted according to your values of interest), and finally the inverse link function is applied to transform the predictions to the mean scale.
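
To make the computation concrete, here is a minimal sketch of the same steps done by hand, assuming a hypothetical fitted stanreg model `fit` and a data frame `newX` containing all of the model variables (this is an illustration, not the package's internal code):

draws <- as.matrix(fit)                          # posterior draws of the coefficients
X     <- model.matrix(formula(fit), data = newX) # adjusted data matrix
eta   <- X %*% t(draws[, colnames(X)])           # linear predictor: rows x draws
mu    <- family(fit)$linkinv(eta)                # inverse link -> mean scale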

Predictions are computed by holding one or more explanatory variables fixed at particular values and either averaging over the rows of the data (average marginal predictions) or holding all other covariates at their means (marginal predictions at the mean). Marginal effects can also be calculated by averaging over the data (average marginal effect) or holding covariates at their means (marginal effect at the mean).

Currently, the effects can only be specified in terms of discrete changes. For a continuous variable, this might mean looking at the difference between predictions at the mean and at the mean plus one standard deviation, which is often a useful way to summarize a model; a sketch follows.
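
For example, a one-standard-deviation discrete change for the arsenic variable in the wells data could be specified as below (using the binomialModel fit later in this vignette; here the start and end values are computed from the data rather than chosen by hand):

m <- mean(rstanarm::wells$arsenic) # sample mean of arsenic
s <- sd(rstanarm::wells$arsenic)   # sample standard deviation
# bayesMargEffF(binomialModel, marginal_effect = 'arsenic',
#               start_value = m + s, end_value = m)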

The third workhorse function of the package compares marginal effects against each other. This is particularly useful for testing non-linear interaction effects such as those that appear in generalized linear models that do not use the identity link.

Predictions

Average marginal predictions

The examples in this vignette use the wells data from the rstanarm package. Use ?rstanarm::wells to view the documentation on this dataset.

For average marginal predictions, the goal is to get predictions at important settings of one or more of the model explanatory variables. These predictions are then averaged over the rows of the data.

lapply(c('bayesMeanScale', 'rstanarm', 'flextable', 'magrittr'),
       function(x) library(x, character.only = TRUE))
# Prepare the data #

modelData       <- rstanarm::wells
modelData$assoc <- ifelse(modelData$assoc==1, 'Y', 'N')

binomialModel <- stan_glm(switch ~ dist*educ + arsenic + I(arsenic^2) + assoc, 
                          data    = modelData, 
                          family  = binomial, 
                          refresh = 0)
bayesPredsF(binomialModel, 
            at = list(arsenic = c(.82, 1.3, 2.2), assoc=c("Y", "N")))
##  arsenic assoc   mean  lower  upper
##     0.82     Y 0.4541 0.4214 0.4855
##     1.30     Y 0.5348 0.5028 0.5610
##     2.20     Y 0.6556 0.6233 0.6856
##     0.82     N 0.4846 0.4575 0.5124
##     1.30     N 0.5650 0.5403 0.5884
##     2.20     N 0.6826 0.6577 0.7110

The output contains the unique values for the “at” variables, the posterior means, and the lower and upper bounds of the credible intervals.

Marginal predictions at the mean

For marginal predictions at the mean, the goal is essentially the same except that we want to hold the covariates at their means. In the example below, all explanatory variables except for “arsenic” are held at their means for the computation. Since “assoc” is a discrete variable, we hold it at the proportion of cases that equaled “Y”.

bayesPredsF(binomialModel, 
            at       = list(arsenic = c(.82, 1.3, 2.2), assoc=c("Y", "N")), 
            at_means = TRUE)
##  arsenic assoc   mean  lower  upper
##     0.82     Y 0.4485 0.4193 0.4815
##     1.30     Y 0.5329 0.5028 0.5620
##     2.20     Y 0.6598 0.6264 0.6917
##     0.82     N 0.4804 0.4521 0.5097
##     1.30     N 0.5647 0.5397 0.5879
##     2.20     N 0.6880 0.6608 0.7158

The results are slightly different from the average marginal predictions. From a computational standpoint, setting “at_means” to “TRUE” makes for a substantially faster computation. For relatively small models this speed advantage is likely trivial, but it can make a noticeable difference for models fit to large datasets.
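
To gauge the difference on your own machine, a quick timing sketch (results will vary with hardware and data size):

system.time(bayesPredsF(binomialModel, at = list(arsenic = c(.82, 1.3, 2.2))))
system.time(bayesPredsF(binomialModel, at = list(arsenic = c(.82, 1.3, 2.2)),
                        at_means = TRUE))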

Average marginal predictions for count probabilities

You can also get predictions for the count probabilities from a Poisson or negative binomial model. Here, rather than looking at the rate (mean) parameter, we investigate the probabilities of particular counts. This is an effective way to summarize count models in more depth.

crabs <- read.table("https://users.stat.ufl.edu/~aa/cat/data/Crabs.dat", header = TRUE)

poissonModel  <- stan_glm(sat ~ weight + width, 
                          data    = crabs, 
                          family  = poisson, 
                          refresh = 0)

bayesCountPredsF(poissonModel,
                 counts = c(0,1,2),
                 at     = list(weight=c(2,3,4)))
##  weight count   mean  lower  upper
##       2     0 0.1103 0.0738 0.1478
##       3     0 0.0342 0.0115 0.0611
##       4     0 0.0097 0.0000 0.0376
##       2     1 0.2368 0.1864 0.2843
##       3     1 0.1092 0.0588 0.1650
##       4     1 0.0363 0.0003 0.1120
##       2     2 0.2596 0.2364 0.2707
##       3     2 0.1809 0.1315 0.2234
##       4     2 0.0751 0.0013 0.1749
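
These probabilities follow from the Poisson pmf evaluated at each posterior draw of the mean. As a rough sketch of the count-0, weight-2 row above (an illustrative reimplementation, not the package's internal code):

mu <- rstanarm::posterior_epred(poissonModel,
                                newdata = transform(crabs, weight = 2))
p0 <- dpois(0, lambda = mu)  # P(Y = 0) for every draw and observation
mean(rowMeans(p0))           # posterior mean of the average marginal prediction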

Marginal effects

Average marginal effects

The concept of average marginal effects is straightforward: we simply take two posterior distributions of average marginal predictions and subtract one from the other. In the example below, we see that the average expected probability of switching wells for families with an arsenic level of 2.2 is roughly 27 percentage points greater than for families with an arsenic level of .82.

binomialAME <- bayesMargEffF(binomialModel,
                             marginal_effect = 'arsenic',
                             start_value     = 2.2,
                             end_value       = .82)

binomialAME
##  marg_effect start_val end_val   mean  lower  upper
##      arsenic       2.2    0.82 0.2669 0.2184 0.3178
head(binomialAME$diffDraws)
##         diff   comparison marg_effect
##        <num>       <char>      <char>
## 1: 0.3014283 2.2 vs. 0.82     arsenic
## 2: 0.2118536 2.2 vs. 0.82     arsenic
## 3: 0.2924712 2.2 vs. 0.82     arsenic
## 4: 0.2693329 2.2 vs. 0.82     arsenic
## 5: 0.2781593 2.2 vs. 0.82     arsenic
## 6: 0.2782404 2.2 vs. 0.82     arsenic

Also note that we can access the full posterior distribution of the marginal effects through the “diffDraws” element. This is useful for graphing and for describing the marginal effect distributions in more detail.
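
For example, a quick histogram of the draws (base graphics; any plotting package would do):

hist(binomialAME$diffDraws$diff,
     main = "Posterior of the arsenic marginal effect",
     xlab = "Difference in the probability of switching wells")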

We can also compute multiple marginal effects. When doing so, it is necessary to specify the start and end values in a list.

bayesMargEffF(binomialModel,
              marginal_effect = c('arsenic', 'dist'),
              start_value     = list(2.2, 64.041),
              end_value       = list(.82, 21.117))
##  marg_effect start_val end_val    mean   lower   upper
##      arsenic     2.200   0.820  0.2663  0.2174  0.3187
##         dist    64.041  21.117 -0.0961 -0.1152 -0.0758

The “at” argument allows the user to specify particular values for one or more covariates, and they must be specified in a list. The example below specifies at values for “educ” given that we have an interaction between “dist” and “educ” in the model. If there is an interaction effect on the mean (probability) scale, we would expect the marginal effect of “dist” to be different at various levels of “educ.” The final section will cover how to test these marginal effects against each other.

binomialAMEInteraction <- bayesMargEffF(binomialModel,
                                        marginal_effect = 'dist',
                                        start_value     = 64.041,
                                        end_value       = 21.117,
                                        at              = list(educ=c(0, 5, 8)))

binomialAMEInteraction
##  educ marg_effect start_val end_val    mean   lower   upper
##     0        dist    64.041  21.117 -0.1436 -0.1791 -0.1151
##     5        dist    64.041  21.117 -0.0948 -0.1154 -0.0756
##     8        dist    64.041  21.117 -0.0657 -0.0914 -0.0437

Average marginal effects for count probabilities

You can also get the marginal effects for the count probabilities from a Poisson or negative binomial model.

countMarg <- bayesCountMargEffF(poissonModel,
                                counts          = c(0,1,2),
                                marginal_effect = 'width',
                                start_value     = 25,
                                end_value       = 20,
                                at              = list(weight=c(2,3,4)))

countMarg
##  weight count marg_effect start_val end_val    mean   lower  upper
##       2     0       width        25      20 -0.0746 -0.2167 0.0537
##       3     0       width        25      20 -0.0555 -0.1752 0.0115
##       4     0       width        25      20 -0.0343 -0.1464 0.0008
##       2     1       width        25      20 -0.0499 -0.1281 0.0492
##       3     1       width        25      20 -0.0686 -0.1558 0.0291
##       4     1       width        25      20 -0.0594 -0.1690 0.0039
##       2     2       width        25      20  0.0186 -0.0077 0.0696
##       3     2       width        25      20 -0.0212 -0.0664 0.0508
##       4     2       width        25      20 -0.0497 -0.1184 0.0093

Marginal effects at the mean

Marginal effects at the mean compute the differences while holding the covariates at their means. As with the marginal predictions at the mean, specifying “at_means=TRUE” allows for a much faster computation than the default “at_means=FALSE”.

binomialMEMInteraction <- bayesMargEffF(binomialModel,
                                        marginal_effect = 'dist',
                                        start_value     = 64.041,
                                        end_value       = 21.117,
                                        at              = list(educ=c(0, 5, 8)),
                                        at_means        = TRUE)

binomialMEMInteraction
##  educ marg_effect start_val end_val    mean   lower   upper
##     0        dist    64.041  21.117 -0.1528 -0.1883 -0.1212
##     5        dist    64.041  21.117 -0.1003 -0.1224 -0.0794
##     8        dist    64.041  21.117 -0.0693 -0.0959 -0.0439

Comparing marginal effects

After computing multiple marginal effects for a model, you might like to compare them against one another. The “bayesMargCompareF” function calculates tests for all the unique pairs of marginal effects that you computed with “bayesMargEffF”. In the example below, we are able to investigate the interaction effect between “dist” and “educ”. We see that the marginal effect of “dist” is meaningfully different at different levels of “educ”.

bayesMargCompareF(binomialAMEInteraction)
##  educ1       comparison1 marg_effect1 educ2       comparison2 marg_effect2
##      5 64.041 vs. 21.117         dist     0 64.041 vs. 21.117         dist
##      8 64.041 vs. 21.117         dist     0 64.041 vs. 21.117         dist
##      8 64.041 vs. 21.117         dist     5 64.041 vs. 21.117         dist
##    mean  lower  upper
##  0.0488 0.0247 0.0729
##  0.0779 0.0394 0.1159
##  0.0291 0.0148 0.0431

You can also compare the marginal effects on count probabilities, shown in the example below.

bayesMargCompareF(countMarg)
##  weight1 count1 comparison1 marg_effect1 weight2 count2 comparison2
##        3      0   25 vs. 20        width       2      0   25 vs. 20
##        4      0   25 vs. 20        width       2      0   25 vs. 20
##        4      0   25 vs. 20        width       3      0   25 vs. 20
##        2      1   25 vs. 20        width       2      0   25 vs. 20
##        2      1   25 vs. 20        width       3      0   25 vs. 20
##        2      1   25 vs. 20        width       4      0   25 vs. 20
##        3      1   25 vs. 20        width       2      0   25 vs. 20
##        3      1   25 vs. 20        width       3      0   25 vs. 20
##        3      1   25 vs. 20        width       4      0   25 vs. 20
##        3      1   25 vs. 20        width       2      1   25 vs. 20
##        4      1   25 vs. 20        width       2      0   25 vs. 20
##        4      1   25 vs. 20        width       3      0   25 vs. 20
##        4      1   25 vs. 20        width       4      0   25 vs. 20
##        4      1   25 vs. 20        width       2      1   25 vs. 20
##        4      1   25 vs. 20        width       3      1   25 vs. 20
##        2      2   25 vs. 20        width       2      0   25 vs. 20
##        2      2   25 vs. 20        width       3      0   25 vs. 20
##        2      2   25 vs. 20        width       4      0   25 vs. 20
##        2      2   25 vs. 20        width       2      1   25 vs. 20
##        2      2   25 vs. 20        width       3      1   25 vs. 20
##        2      2   25 vs. 20        width       4      1   25 vs. 20
##        3      2   25 vs. 20        width       2      0   25 vs. 20
##        3      2   25 vs. 20        width       3      0   25 vs. 20
##        3      2   25 vs. 20        width       4      0   25 vs. 20
##        3      2   25 vs. 20        width       2      1   25 vs. 20
##        3      2   25 vs. 20        width       3      1   25 vs. 20
##        3      2   25 vs. 20        width       4      1   25 vs. 20
##        3      2   25 vs. 20        width       2      2   25 vs. 20
##        4      2   25 vs. 20        width       2      0   25 vs. 20
##        4      2   25 vs. 20        width       3      0   25 vs. 20
##        4      2   25 vs. 20        width       4      0   25 vs. 20
##        4      2   25 vs. 20        width       2      1   25 vs. 20
##        4      2   25 vs. 20        width       3      1   25 vs. 20
##        4      2   25 vs. 20        width       4      1   25 vs. 20
##        4      2   25 vs. 20        width       2      2   25 vs. 20
##        4      2   25 vs. 20        width       3      2   25 vs. 20
##  marg_effect2    mean   lower  upper
##         width  0.0192 -0.0326 0.0462
##         width  0.0404 -0.0412 0.0985
##         width  0.0212 -0.0100 0.0478
##         width  0.0248 -0.0086 0.0972
##         width  0.0056 -0.0327 0.0796
##         width -0.0156 -0.0642 0.0694
##         width  0.0061 -0.0263 0.0596
##         width -0.0131 -0.0456 0.0300
##         width -0.0343 -0.0863 0.0347
##         width -0.0187 -0.0522 0.0033
##         width  0.0153 -0.0530 0.0469
##         width -0.0039 -0.0304 0.0083
##         width -0.0251 -0.0711 0.0032
##         width -0.0095 -0.0716 0.0333
##         width  0.0092 -0.0311 0.0397
##         width  0.0932 -0.0465 0.2681
##         width  0.0740 -0.0104 0.2387
##         width  0.0528 -0.0036 0.2090
##         width  0.0684 -0.0528 0.1784
##         width  0.0871 -0.0247 0.2139
##         width  0.0779 -0.0030 0.2319
##         width  0.0535 -0.0092 0.2179
##         width  0.0343 -0.0320 0.1888
##         width  0.0131 -0.0529 0.1637
##         width  0.0287 -0.0183 0.1361
##         width  0.0474 -0.0043 0.1631
##         width  0.0382 -0.0363 0.1872
##         width -0.0398 -0.0888 0.0321
##         width  0.0250 -0.0747 0.1602
##         width  0.0058 -0.0394 0.1344
##         width -0.0154 -0.0871 0.1057
##         width  0.0002 -0.0737 0.0955
##         width  0.0189 -0.0384 0.1072
##         width  0.0097 -0.0360 0.1334
##         width -0.0683 -0.1538 0.0072
##         width -0.0285 -0.0809 0.0140

List of models currently supported

Class     Family           Links
stanreg   beta             logit; probit; cloglog
stanreg   binomial         logit; probit; cloglog
stanreg   Gamma            inverse; log; identity
stanreg   gaussian         identity
stanreg   neg_binomial_2   identity; log; sqrt
stanreg   poisson          identity; log; sqrt

