
Basic area-level model

The basic area-level model (Fay and Herriot 1979; Rao and Molina 2015) is given by \[ y_i | \theta_i \stackrel{\mathrm{ind}}{\sim} {\cal N} (\theta_i, \psi_i) \,, \\ \theta_i = \beta' x_i + v_i \,, \] where \(i\) runs from 1 to \(m\), the number of areas, \(\beta\) is a vector of regression coefficients for given covariates \(x_i\), and \(v_i \stackrel{\mathrm{iid}}{\sim} {\cal N} (0, \sigma_v^2)\) are independent random area effects. For each area a direct estimate \(y_i\) is available with known sampling variance \(\psi_i\).
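
For given \(\beta\) and \(\sigma_v^2\), the conditional posterior mean of \(\theta_i\) under this model is the well-known composite estimator \[ \tilde{\theta}_i = \gamma_i y_i + (1 - \gamma_i)\, \beta' x_i \,, \qquad \gamma_i = \frac{\sigma_v^2}{\sigma_v^2 + \psi_i} \,, \] so that areas with a large sampling variance \(\psi_i\) are shrunk more strongly towards the synthetic regression estimate \(\beta' x_i\) (see Rao and Molina 2015).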

First we generate some data according to this model:

m <- 75L  # number of areas
df <- data.frame(
  area=1:m,      # area indicator
  x=runif(m)     # covariate
)
v <- rnorm(m, sd=0.5)    # true area effects
theta <- 1 + 3*df$x + v  # quantity of interest
psi <- runif(m, 0.5, 2) / sample(1:25, m, replace=TRUE)  # given variances
df$y <- rnorm(m, theta, sqrt(psi))
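
To get a feel for how much pooling the model will do for these data, we can compute the shrinkage factors \(\gamma_i\) implied by the variance components used in the simulation (a quick sketch; the fitted model will of course estimate \(\sigma_v\) rather than use the true value 0.5):

gamma <- 0.5^2 / (0.5^2 + psi)  # shrinkage factors based on the true sigma_v = 0.5
summary(gamma)                  # weight given to the direct estimate in each area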

A sampler function for a model with a regression component and a random intercept is created by

library(mcmcsae)
model <- y ~ reg(~ 1 + x, name="beta") + gen(factor = ~iid(area), name="v")
sampler <- create_sampler(model, sigma.fixed=TRUE, Q0=1/psi, linpred="fitted", data=df)

The meaning of the arguments used is as follows. Setting sigma.fixed=TRUE fixes the overall scale parameter of the model at 1, so that the data-level variances are completely determined by Q0; Q0=1/psi supplies the known precisions, i.e. the inverses of the given sampling variances; linpred="fitted" requests that the full linear predictor \(\theta_i = \beta' x_i + v_i\) be computed and stored for each area; and data=df specifies the data frame holding the response and covariate.

An MCMC simulation using this sampler function is then carried out as follows:

sim <- MCMCsim(sampler, store.all=TRUE, verbose=FALSE)
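
By default a few chains of moderate length are run. For a longer run the number of chains, the burn-in and the number of stored iterations can be set explicitly; the argument names below are assumptions based on the MCMCsim documentation and should be checked against ?MCMCsim:

# a longer run (argument names assumed; see ?MCMCsim)
sim_long <- MCMCsim(sampler, n.chain=4, burnin=500, n.iter=2500,
                    store.all=TRUE, verbose=FALSE)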

A summary of the results is obtained by

(summ <- summary(sim))
## llh_ :
##       Mean   SD t-value  MCSE q0.05  q0.5 q0.95 n_eff R_hat
## llh_ -26.7 6.01   -4.45 0.122 -37.2 -26.4 -17.3  2427     1
## 
## linpred_ :
##    Mean    SD t-value    MCSE q0.05 q0.5 q0.95 n_eff R_hat
## 1  1.97 0.203    9.68 0.00387 1.636 1.96  2.31  2752 1.001
## 2  1.98 0.221    8.93 0.00407 1.615 1.97  2.34  2954 1.000
## 3  2.76 0.249   11.12 0.00465 2.361 2.76  3.18  2857 0.999
## 4  2.11 0.470    4.49 0.00892 1.319 2.11  2.86  2773 1.000
## 5  2.34 0.177   13.24 0.00323 2.051 2.34  2.64  3000 1.001
## 6  3.84 0.238   16.14 0.00441 3.446 3.84  4.22  2910 1.000
## 7  3.07 0.179   17.16 0.00327 2.771 3.08  3.36  3000 0.999
## 8  1.66 0.253    6.57 0.00462 1.244 1.66  2.08  3000 0.999
## 9  1.31 0.241    5.45 0.00439 0.923 1.32  1.71  3000 0.999
## 10 3.62 0.295   12.30 0.00554 3.131 3.62  4.12  2825 0.999
## ... 65 elements suppressed ...
## 
## beta :
##             Mean    SD t-value    MCSE q0.05 q0.5 q0.95 n_eff R_hat
## (Intercept) 1.14 0.132    8.59 0.00242 0.918 1.14  1.35  3000     1
## x           2.84 0.229   12.39 0.00419 2.469 2.84  3.22  3000     1
## 
## v_sigma :
##          Mean     SD t-value    MCSE q0.05  q0.5 q0.95 n_eff R_hat
## v_sigma 0.501 0.0606    8.26 0.00151 0.408 0.497 0.604  1621     1
## 
## v :
##       Mean    SD t-value    MCSE   q0.05    q0.5   q0.95 n_eff R_hat
## 1  -0.3207 0.212 -1.5118 0.00411 -0.6678 -0.3244  0.0404  2662 1.001
## 2  -0.4823 0.227 -2.1249 0.00417 -0.8574 -0.4842 -0.1077  2963 0.999
## 3   0.3862 0.255  1.5167 0.00486 -0.0278  0.3845  0.8105  2742 0.999
## 4  -0.0203 0.464 -0.0438 0.00847 -0.8037 -0.0081  0.7119  3000 1.000
## 5   0.5788 0.192  3.0109 0.00351  0.2646  0.5796  0.8887  3000 1.000
## 6   0.3765 0.252  1.4959 0.00470 -0.0412  0.3767  0.7879  2869 1.000
## 7  -0.5813 0.205 -2.8342 0.00374 -0.9325 -0.5779 -0.2517  3000 0.999
## 8  -0.1300 0.261 -0.4977 0.00477 -0.5712 -0.1281  0.3025  3000 0.999
## 9  -0.3602 0.251 -1.4337 0.00459 -0.7792 -0.3642  0.0593  3000 0.999
## 10 -0.0229 0.305 -0.0752 0.00584 -0.5288 -0.0213  0.4789  2719 0.999
## ... 65 elements suppressed ...
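
One way to see the gain in precision relative to the direct estimates is to plot the posterior standard deviations of the linear predictor against the standard errors \(\sqrt{\psi_i}\) of the direct estimates (a rough sketch based on the summary object above):

plot(sqrt(psi), summ$linpred_[, "SD"],
     xlab="standard error of direct estimate", ylab="posterior standard deviation")
abline(0, 1)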

In this example we can compare the model parameter estimates to the ‘true’ parameter values that have been used to generate the data. In the next plots we compare the estimated and ‘true’ random effects, as well as the model estimates and ‘true’ estimands. In the latter plot, the original ‘direct’ estimates are added as red triangles.

plot(v, summ$v[, "Mean"], xlab="true v", ylab="posterior mean"); abline(0, 1)
plot(theta, summ$linpred_[, "Mean"], xlab="true theta", ylab="estimated"); abline(0, 1)
points(theta, df$y, col=2, pch=2)
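
Since the data were simulated, the true values \(\theta_i\) are known and the two sets of estimates can also be compared numerically, for example by their average absolute error (a quick sketch):

mean(abs(summ$linpred_[, "Mean"] - theta))  # model-based estimates
mean(abs(df$y - theta))                     # direct estimates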

We can compute model selection measures DIC and WAIC by

compute_DIC(sim)
##       DIC     p_DIC 
## 104.29610  50.82223
compute_WAIC(sim, show.progress=FALSE)
##    WAIC1  p_WAIC1    WAIC2  p_WAIC2 
## 74.80406 21.33525 97.34085 32.60365
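
As a sanity check on these numbers, DIC equals \(\bar{D} + p_{\mathrm{DIC}}\), where \(\bar{D}\) is \(-2\) times the posterior mean of the log-likelihood reported in the summary above; the small computation below (a sketch) should approximately reproduce the reported DIC:

d <- compute_DIC(sim)
-2 * summ$llh_[, "Mean"] + d[["p_DIC"]]  # should be close to d[["DIC"]]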

Posterior means of residuals can be extracted from the simulation output using the residuals method. Here is a plot of the posterior means of the residuals against the covariate \(x\):

plot(df$x, residuals(sim, mean.only=TRUE), xlab="x", ylab="residual"); abline(h=0)

A linear predictor in a linear model can be expressed as a weighted sum of the response variable. If we set compute.weights=TRUE, such weights are computed for all linear predictors specified in the linpred argument. Here this means that a set of weights is computed for each area.

sampler <- create_sampler(model, sigma.fixed=TRUE, Q0=1/psi,
             linpred="fitted", data=df, compute.weights=TRUE)
sim <- MCMCsim(sampler, store.all=TRUE, verbose=FALSE)

Now the weights method returns a matrix of weights, in this case a 75 \(\times\) 75 matrix \(w_{ij}\) holding the weight of direct estimate \(i\) in the linear predictor for area \(j\). To verify that the weights applied to the direct estimates yield the model-based estimates, we plot them against each other. Also shown is a plot of each area's own weight, i.e. the diagonal of the weight matrix, against the variance of its direct estimate.

plot(summ$linpred_[, "Mean"], crossprod(weights(sim), df$y),
     xlab="estimate", ylab="weighted average")
abline(0, 1)
plot(psi, diag(weights(sim)), xlab="variance of direct estimate", ylab="weight")
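
Note that summ was computed from the first simulation run; for a sharper numerical check the current run can be summarized and compared directly (a sketch; agreement should hold up to numerical and Monte Carlo error):

summ2 <- summary(sim)
max(abs(crossprod(weights(sim), df$y) - summ2$linpred_[, "Mean"]))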

References

Fay, R. E., and R. A. Herriot. 1979. “Estimates of Income for Small Places: An Application of James-Stein Procedures to Census Data.” Journal of the American Statistical Association 74 (366): 269–77.
Rao, J. N. K., and I. Molina. 2015. Small Area Estimation. John Wiley & Sons.
