
1 Introduction

Weighted Quantile Sum (WQS) regression is a statistical approach for multivariate regression in high-dimensional data with complex correlation patterns, commonly encountered in environmental exposures, epi/genomics, and metabolomics studies, among others. The model constructs an empirically weighted index estimating the mixture effect of predictor (e.g., exposure) variables on an outcome, which may then be used in a regression model with relevant covariates to test the association of the index with a dependent variable or outcome. The contribution of each individual predictor to the overall index effect is assessed by the relative strength of the estimated weights, since the components are quantiled and are therefore on the same scale.
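Formally, for a single positive index the WQS model can be written as in Carrico et al. (2014):

\[
g(\mu) = \beta_0 + \beta_1 \left( \sum_{i=1}^{c} w_i q_i \right) + \mathbf{z}^{\prime}\boldsymbol{\varphi}
\]

where \(q_i\) is the quantiled version of the \(i\)-th of the \(c\) mixture components, the weights \(w_i\) are constrained to be non-negative and to sum to 1, \(\mathbf{z}\) is the vector of covariates with coefficients \(\boldsymbol{\varphi}\), and \(g\) is the link function.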

The gWQS package extends WQS regression to applications with continuous and categorical outcomes and implements the random subset WQS and the repeated holdout WQS. In practical terms, the primary outputs of an analysis are the parameter estimates and significance tests for the overall index effect of predictor variables, and the estimated weights assigned to each predictor, which identify the relevant contribution of each variable to the relationship between the WQS index and the outcome variable.

For additional theoretical background on WQS regression and its extensions, see the references provided below.

2 How to use the gWQS package

The main functions of the gWQS package are gwqs and gwqs_multinom. The first extends WQS regression to applications with continuous, categorical and count outcomes; the second extends WQS regression to categorical outcomes with more than 2 non-ordinal categories. Both functions include the option rs, which applies a random subset implementation of WQS, and the argument rh, which, if set greater than 1, implements a repeated holdout validation procedure. In this vignette we only show the application of WQS to a continuous outcome. We created the wqs_data dataset (available once the package is installed and loaded) to show how to use this function. These data reflect 59 exposure concentrations simulated from a distribution of 34 PCB exposures and 25 phthalate biomarkers measured in subjects participating in the NHANES study (2001-2002). Additionally, 8 outcome measures were simulated by applying different distributions and fixed beta coefficients to the predictors: y and yLBX were simulated from a normal distribution, ybin and ybinLBX from a binomial distribution, ymultinom and ymultinomLBX from a multinomial distribution, and ycount and ycountLBX from a Poisson distribution. The sex variable was also simulated to allow adjustment for a covariate in the model (see the wqs_data help page for more details). This dataset can thus be used to test the gWQS package by analyzing the mixture effect of the simulated chemicals on the different outcomes, with adjustments for covariates.
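Before fitting any model, a quick way to get familiar with the simulated dataset is to inspect it directly (a minimal sketch, assuming the package has already been installed):

library(gWQS)

# dimensions of the simulated dataset: subjects by variables
dim(wqs_data)
# names of the mixture components, outcomes and covariates
names(wqs_data)
# distribution of the continuous outcome used in this vignette
summary(wqs_data$yLBX)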

2.1 Example

2.1.1 Step 1

Following the algorithm proposed in Renzetti et al. (2023), we start by fitting a WQS regression with two indices, one exploring the positive and one exploring the negative direction, specified through the terms pwqs and nwqs, respectively. We also opted for the repeated holdout validation procedure to obtain more stable results. The following script calls a WQS model for a continuous outcome using the function gwqs, which returns an object of class gwqs; the functions gwqs_barplot, gwqs_scatterplot, gwqs_fitted_vs_resid and gwqs_boxplot allow plotting the figures shown in figure 2.1:

library(gWQS)
## Welcome to Weighted Quantile Sum (WQS) Regression.
## If you are using a Mac you have to install XQuartz.
## You can download it from: https://www.xquartz.org/
library(ggplot2)
library(knitr)
library(kableExtra)
library(reshape2)

# we save the names of the mixture variables in the variable "PCBs"
PCBs <- names(wqs_data)[1:34]
# we run the model and save the results in the variable "results2i"
results2i <- gwqs(yLBX ~ pwqs + nwqs, mix_name = PCBs, data = wqs_data, 
                  q = 10, validation = 0.6, b = 5, b1_pos = TRUE, rh = 5,
                  family = "gaussian", seed = 2016)

This WQS model tests the relationship between our dependent variable, yLBX, and a WQS index estimated from ranking exposure concentrations in deciles (q = 10); in the gwqs formula the wqs term (when we want to build a single index) or the pwqs and nwqs terms (when we want to build a double index) must be included as if they were present in the dataset. The data were divided into 40% for training and 60% for validation (validation = 0.6), this splitting procedure was repeated 5 times (rh = 5), and 5 bootstrap samples (b = 5) were used for parameter estimation (in practical applications we suggest using at least 100 repeated holdout validations and 100 bootstrap samples). We first examined a bidirectional association including both the positive (pwqs) and negative (nwqs) indices. We linked our model to a gaussian distribution to test for relationships between the continuous outcome and the exposures (family = "gaussian"; other families available within the gwqs function are "binomial", "poisson", "quasipoisson" and "negbin", while the function gwqs_multinom allows fitting a WQS regression for multinomial outcomes), and fixed the seed to 2016 for reproducible results (seed = 2016).
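The same syntax applies to the other supported families; as an illustrative sketch (not run in this vignette), a call for the simulated binary outcome ybinLBX with a single positive index could look like this:

# sketch: WQS regression with a single positive index for a binary outcome
results_bin <- gwqs(ybinLBX ~ wqs, mix_name = PCBs, data = wqs_data, 
                    q = 10, validation = 0.6, b = 5, b1_pos = TRUE,
                    family = "binomial", seed = 2016)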

To test the statistical significance of the association between the variables in the model, the following code can be run, as for a classical R regression function:

summary(results2i)
## 
## Call:
## gwqs(formula = yLBX ~ pwqs + nwqs, data = wqs_data, mix_name = PCBs, 
##     rh = 5, b = 5, b1_pos = TRUE, q = 10, validation = 0.6, family = "gaussian", 
##     seed = 2016)
## 
## 
## Coefficients:
##             Estimate Std. Error    2.5 % 97.5 %
## (Intercept) -4.58713    0.32556 -5.22523 -3.949
## pwqs         1.10822    0.10478  0.90286  1.314
## nwqs        -0.06928    0.05128 -0.16979  0.031
## 
## (Mean dispersion parameter for gaussian family taken to be 1.194416)
## 
##     Mean null deviance: 633.40  on 301  degrees of freedom
## Mean residual deviance: 352.68  on 299  degrees of freedom
## 
## Mean AIC:  902.91

From this output we can observe a statistically significant association in the positive direction (\(\beta\) = 1.108, 95% CI: 0.903, 1.314) but not in the negative direction (\(\beta\) = -0.069, 95% CI: -0.170, 0.031). We can also draw some useful plots using the following secondary functions built into the gWQS package:

# bar plot
gwqs_barplot(results2i)
# scatter plot y vs wqs
gwqs_scatterplot(results2i)
# scatter plot residuals vs fitted values
gwqs_fitted_vs_resid(results2i)
# boxplot of the weights estimated at each repeated holdout step
gwqs_boxplot(results2i)

Figure 2.1: Plots displayed for linear outcomes.

Figure 2.1 A is a barplot showing the weights assigned to each variable, ordered from the highest to the lowest weight of the positive index (by default). These results indicate that the variables LBXF07LA, LBX138LA and LBXD02LA are the largest contributors to the mixture effect in the positive direction, while the negative direction has different contributors but no significant association. The dashed red line represents the cutoff \(\tau\) (by default equal to the inverse of the number of elements in the mixture, as suggested in Carrico et al. 2014) used to discriminate which elements are of most importance.
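As a reference point, the value of this default cutoff can be computed directly from the number of mixture components:

# default cutoff tau: the inverse of the number of elements in the mixture
1/length(PCBs)
## [1] 0.02941176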

In plot B of figure 2.1 we have a representation of the WQS index vs the outcome (adjusted for covariates when they are included in the model) that shows the direction and the shape of the association between the exposure and the outcome. In this case we observe a linear, positive relationship between the mixture and the yLBX variable for the positive index and an almost horizontal line for the negative index (meaning there is no negative effect of the mixture on the outcome).

In plot C a diagnostic graph of the residuals vs the fitted values is shown to check if they are randomly spread around zero or if there is a trend. All these plots are built using the ggplot2 package.

In plot D the boxplots of the distribution of the weights estimated at each repeated holdout step are displayed.

To obtain the exact values of the estimated weights we can use the command results2i$final_weights. The following code shows the six highest weights; the full list of weights can be obtained by omitting the head function:

head(results2i$final_weights)
##          mix_name Estimate pos   2.5% pos  97.5% pos Estimate neg    2.5% neg
## LBXF07LA LBXF07LA   0.16729739 0.10844549 0.19962573   0.02149915 0.011537008
## LBX138LA LBX138LA   0.14928055 0.10519286 0.19051842   0.01540007 0.005008844
## LBXD02LA LBXD02LA   0.12407915 0.09151787 0.13531136   0.01765891 0.008046490
## LBX105LA LBX105LA   0.07756679 0.05474206 0.10227365   0.01543074 0.008554874
## LBXF06LA LBXF06LA   0.06283293 0.03878633 0.07848501   0.01011549 0.005945021
## LBX157LA LBX157LA   0.05488138 0.01329546 0.09745631   0.01486860 0.003005004
##           97.5% neg
## LBXF07LA 0.04110117
## LBX138LA 0.02550952
## LBXD02LA 0.03442848
## LBX105LA 0.03382734
## LBXF06LA 0.01763744
## LBX157LA 0.02829744

These same tables are also shown in the Viewer window through the functions gwqs_summary_tab and gwqs_weights_tab, respectively. Both functions use the kableExtra package to produce the output. The outputs (tables 2.1 and 2.2) and the respective code are shown below:

gwqs_summary_tab(results2i)
Table 2.1: Summary results of the WQS regression for linear outcomes.

              Estimate  Std. Error   2.5 %  97.5 %
(Intercept)    -4.5900      0.3260  -5.230 -3.9500
pwqs            1.1100      0.1050   0.903  1.3100
nwqs           -0.0693      0.0513  -0.170  0.0312
mf_df <- as.data.frame(signif(coef(summary(results2i)), 3))
kable_styling(kable(mf_df, row.names = TRUE))
gwqs_weights_tab(results2i)
Table 2.2: Weights table of the WQS regression for linear outcomes.
mix_name Estimate pos 2.5% pos 97.5% pos Estimate neg 2.5% neg 97.5% neg
LBXF07LA 0.16700 1.08e-01 0.20000 0.0215 0.01150 0.0411
LBX138LA 0.14900 1.05e-01 0.19100 0.0154 0.00501 0.0255
LBXD02LA 0.12400 9.15e-02 0.13500 0.0177 0.00805 0.0344
LBX105LA 0.07760 5.47e-02 0.10200 0.0154 0.00855 0.0338
LBXF06LA 0.06280 3.88e-02 0.07850 0.0101 0.00595 0.0176
LBX157LA 0.05490 1.33e-02 0.09750 0.0149 0.00301 0.0283
LBX170LA 0.02490 1.27e-02 0.04460 0.0184 0.00401 0.0348
LBXD04LA 0.02480 6.20e-03 0.04010 0.0202 0.00626 0.0362
LBXF05LA 0.02390 1.01e-02 0.03790 0.0285 0.00554 0.0756
LBX167LA 0.02070 8.67e-03 0.03280 0.0155 0.00628 0.0263
LBX074LA 0.02040 2.47e-03 0.02920 0.0191 0.00758 0.0335
LBX099LA 0.01950 8.04e-03 0.02730 0.0176 0.00717 0.0303
LBXF04LA 0.01920 3.32e-03 0.04020 0.0368 0.00429 0.0610
LBX187LA 0.01880 3.88e-03 0.03020 0.0233 0.01120 0.0411
LBXF08LA 0.01800 6.70e-03 0.02750 0.0291 0.01240 0.0419
LBX153LA 0.01780 6.64e-03 0.02850 0.0253 0.01090 0.0479
LBX180LA 0.01460 4.62e-03 0.03290 0.0314 0.00137 0.0781
LBXD05LA 0.01430 5.96e-03 0.02790 0.0311 0.01500 0.0520
LBX194LA 0.01300 1.14e-03 0.02730 0.0212 0.00638 0.0522
LBXD01LA 0.01270 3.87e-03 0.03040 0.0297 0.00106 0.0866
LBX118LA 0.01140 2.04e-03 0.02980 0.0489 0.01800 0.1350
LBXPCBLA 0.01060 1.73e-03 0.02890 0.0291 0.00416 0.0470
LBXHXCLA 0.01030 2.15e-03 0.02110 0.0329 0.00232 0.0707
LBXD03LA 0.00954 1.38e-03 0.01660 0.0287 0.00806 0.0798
LBX189LA 0.00837 4.94e-03 0.01050 0.0254 0.00824 0.0565
LBXF01LA 0.00754 8.33e-04 0.01520 0.0406 0.00287 0.0815
LBX199LA 0.00736 5.86e-04 0.01240 0.0429 0.03350 0.0587
LBXF09LA 0.00619 2.51e-04 0.01500 0.0446 0.00398 0.1010
LBXD07LA 0.00594 4.54e-04 0.01830 0.0260 0.00694 0.0453
LBXF03LA 0.00590 9.17e-04 0.01060 0.0464 0.01350 0.0730
LBX196LA 0.00579 7.18e-05 0.01090 0.0667 0.01260 0.1470
LBXF02LA 0.00568 5.19e-05 0.01300 0.0475 0.02860 0.0759
LBX156LA 0.00511 6.05e-04 0.00846 0.0337 0.00758 0.0511
LBXTCDLA 0.00178 6.61e-05 0.00405 0.0444 0.02300 0.0640


final_weight <- results2i$final_weights
final_weight[, -1] <- signif(final_weight[, -1], 3)
kable_styling(kable(final_weight, row.names = FALSE))

The gwqs function returns other outputs as well. If a repeated holdout was performed, we obtain the dataset y_wqs_df, which contains the dependent variable (adjusted for other covariates if present in the model) and the wqs index or the pwqs and nwqs indices, depending on how the model was specified; the dataset used in the analysis, including the WQS indices specified in the model (results2i$data); the list of vectors containing the cutoffs used to determine the quantiles of each variable in the mixture (results2i$qi); a matrix, or a list of two matrices if two indices were included in the model, with all the estimated weights for each repeated holdout; and the list of outputs from the repeated WQS regressions (results2i$gwqslist). Each element of this list contains the information we would obtain from a WQS regression with a single split: the vector of values indicating whether the solver converged (0) or not (1) (results2i$gwqslist[[1]]$conv, where [[1]] accesses the information of the first model); the matrix, or list of two matrices if two indices were included in the model, with all the estimated weights and the associated \(\beta_1\), standard errors, statistics and p-values for each bootstrap sample (results2i$gwqslist[[1]]$bres); the list of vectors containing the rows of the subjects included in each bootstrap dataset (results2i$gwqslist[[1]]$bindex); a logical vector identifying the rows used to estimate the parameters of the final model (results2i$gwqslist[[1]]$vindex); the vector of values of the objective function at the optimal parameter estimates obtained at each bootstrap step (results2i$gwqslist[[1]]$objfn_values); and any messages from the optim function (results2i$gwqslist[[1]]$optim_messages).
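The following lines give a minimal sketch of how these components can be inspected (all object names as described above):

# outcome (adjusted for covariates) together with the estimated indices
head(results2i$y_wqs_df)
# cutoffs defining the deciles of the first mixture component
results2i$qi[[1]]
# convergence indicator (0 = converged) of each bootstrap in the first holdout
results2i$gwqslist[[1]]$conv
# bootstrap-level weights and coefficients from the first holdout
str(results2i$gwqslist[[1]]$bres, max.level = 1)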

The following script allows reproducing the figures that are automatically generated by the plotting functions:

# bar plot
w_ord <- order(results2i$final_weights$`Estimate pos`)
mean_weight_pos <- results2i$final_weights$`Estimate pos`[w_ord]
mean_weight_neg <- results2i$final_weights$`Estimate neg`[w_ord]
mix_name <- factor(results2i$final_weights$mix_name[w_ord], 
                   levels = results2i$final_weights$mix_name[w_ord])
data_plot <- data.frame(mean_weight = c(mean_weight_pos, mean_weight_neg), 
                        mix_name = rep(mix_name, 2),
                        index = factor(rep(c("pwqs", "nwqs"), each = length(w_ord)), 
                                       levels = c("pwqs", "nwqs")))
ggplot(data_plot, aes(x = mix_name, y = mean_weight)) + 
  geom_bar(stat = "identity", color = "black") + theme_bw() +
  theme(axis.ticks = element_blank(),
        axis.title = element_blank(),
        axis.text.x = element_text(color='black'),
        legend.position = "none") + coord_flip() + 
  geom_hline(yintercept = 1/length(PCBs), linetype="dashed", color = "red") +
  facet_wrap(~ index)
#
# scatter plot y vs wqs
ggplot(melt(results2i$y_wqs_df, measure.vars = c("pwqs", "nwqs")), aes(value, y_adj)) + 
  geom_point() + facet_wrap(~ variable) + xlab("wqs") + 
  stat_smooth(method = "loess", se = FALSE, linewidth = 1.5) + theme_bw() 
#
# scatter plot residuals vs fitted values
fit_df <- data.frame(fitted = fitted(results2i), 
                     resid = residuals(results2i, type = "response"))
res_vs_fitted <- ggplot(fit_df, aes(x = fitted, y = resid)) + geom_point() + 
  theme_bw() + xlab("Fitted values") + ylab("Residuals")

2.1.2 Step 2

We then run three WQS regressions setting three different values of the shrinkage parameter:

1. one equal to the magnitude of the AIC of the regression fitted at step 1;
2. one equal to a lower order of magnitude;
3. one equal to a greater order of magnitude.

Additional models can then be fitted following a bisection algorithm, setting lambda between the two values with the smallest AICs (a sketch of one such step is shown after the AIC comparison below).

# we run the model setting the penalization term equal to 90
results2i_l90 <- gwqs(yLBX ~ pwqs + nwqs, mix_name = PCBs, data = wqs_data, 
                    q = 10, validation = 0.6, b = 5, b1_pos = TRUE, rh = 5,
                    family = "gaussian", seed = 2016, lambda = 90)

# we run the model setting the penalization term equal to 900
results2i_l900 <- gwqs(yLBX ~ pwqs + nwqs, mix_name = PCBs, data = wqs_data, 
                     q = 10, validation = 0.6, b = 5, b1_pos = TRUE, rh = 5,
                     family = "gaussian", seed = 2016, lambda = 900)

# we run the model setting the penalization term equal to 9000
results2i_l9000 <- gwqs(yLBX ~ pwqs + nwqs, mix_name = PCBs, data = wqs_data, 
                      q = 10, validation = 0.6, b = 5, b1_pos = TRUE, rh = 5,
                      family = "gaussian", seed = 2016, lambda = 9000)

Based on the results obtained from this analysis we can choose the value of lambda that minimizes the AIC. In our case, lambda = 90 is the optimal value as shown by the table below:

lambda_AIC_2i <- data.frame(lambda = c(0, 90, 900, 9000),
                            AIC = c(results2i$fit$aic, results2i_l90$fit$aic, 
                                    results2i_l900$fit$aic, results2i_l9000$fit$aic))
kable(lambda_AIC_2i) %>% kable_styling()
lambda       AIC
     0  902.9091
    90  892.4126
   900  901.4783
  9000  960.5558
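As a minimal sketch of the bisection step mentioned above, an additional model could be fitted with lambda set between the two values with the smallest AICs (here 90 and 900) and its AIC compared again; in this example we simply retain lambda = 90:

# illustrative bisection step between the two best values of lambda
results2i_l495 <- gwqs(yLBX ~ pwqs + nwqs, mix_name = PCBs, data = wqs_data, 
                       q = 10, validation = 0.6, b = 5, b1_pos = TRUE, rh = 5,
                       family = "gaussian", seed = 2016, lambda = 495)
results2i_l495$fit$aic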

The results still show a significant positive association and a non-significant negative effect of the mixture on the outcome:

summary(results2i_l90)
## 
## Call:
## gwqs(formula = yLBX ~ pwqs + nwqs, data = wqs_data, mix_name = PCBs, 
##     rh = 5, b = 5, b1_pos = TRUE, q = 10, validation = 0.6, family = "gaussian", 
##     seed = 2016, lambda = 90)
## 
## 
## Coefficients:
##              Estimate Std. Error     2.5 % 97.5 %
## (Intercept) -4.130048   0.374702 -4.864464 -3.396
## pwqs         0.944386   0.095736  0.756743  1.132
## nwqs        -0.007178   0.020509 -0.047376  0.033
## 
## (Mean dispersion parameter for gaussian family taken to be 1.154911)
## 
##     Mean null deviance: 633.40  on 301  degrees of freedom
## Mean residual deviance: 340.16  on 299  degrees of freedom
## 
## Mean AIC:  892.41

The identification and ranking of the significant weights did not change compared to the non-penalized WQS, but we can appreciate how the non-relevant weights have shrunk towards zero:

gwqs_barplot(results2i_l90)

2.1.3 Step 3

Since the mixture did not show a negative association with the outcome we can fit a final model with a single positive index and follow the same procedure:

results1i <- gwqs(yLBX ~ wqs, mix_name = PCBs, data = wqs_data, 
                  q = 10, validation = 0.6, b = 5, b1_pos = TRUE, rh = 5,
                  family = "gaussian", seed = 2016)

# we run the model setting the penalization term equal to 90
results1i_l90 <- gwqs(yLBX ~ wqs, mix_name = PCBs, data = wqs_data, 
                    q = 10, validation = 0.6, b = 5, b1_pos = TRUE, rh = 5,
                    family = "gaussian", seed = 2016, lambda = 90)

# we run the model setting the penalization term equal to 900
results1i_l900 <- gwqs(yLBX ~ wqs, mix_name = PCBs, data = wqs_data, 
                     q = 10, validation = 0.6, b = 5, b1_pos = TRUE, rh = 5,
                     family = "gaussian", seed = 2016, lambda = 900)

# we run the model setting the penalization term equal to 9000
results1i_l9000 <- gwqs(yLBX ~ wqs, mix_name = PCBs, data = wqs_data, 
                      q = 10, validation = 0.6, b = 5, b1_pos = TRUE, rh = 5,
                      family = "gaussian", seed = 2016, lambda = 9000)

Based on the results obtained, also in this case we can observe that lambda = 90 gives the lowest AIC:

lambda_AIC_1i <- data.frame(lambda = c(0, 90, 900, 9000),
                            AIC = c(results1i$fit$aic, results1i_l90$fit$aic, 
                                    results1i_l900$fit$aic, results1i_l9000$fit$aic))
kable(lambda_AIC_1i) %>% kable_styling()
lambda       AIC
     0  900.7376
    90  891.5712
   900  898.0019
  9000  956.8478

We can then choose the WQS regression with a single index and a penalization term set to lambda = 90, with the results displayed below, confirming a significant positive association of the mixture with the outcome and LBXF07LA, LBX138LA and LBXD02LA as the elements that contribute the most to this association:

summary(results1i_l90)
## 
## Call:
## gwqs(formula = yLBX ~ wqs, data = wqs_data, mix_name = PCBs, 
##     rh = 5, b = 5, b1_pos = TRUE, q = 10, validation = 0.6, family = "gaussian", 
##     seed = 2016, lambda = 90)
## 
## 
## Coefficients:
##             Estimate Std. Error    2.5 % 97.5 %
## (Intercept) -4.09652    0.29922 -4.68299 -3.510
## wqs          0.92952    0.07201  0.78839  1.071
## 
## (Mean dispersion parameter for gaussian family taken to be 1.153663)
## 
##     Mean null deviance: 633.40  on 301  degrees of freedom
## Mean residual deviance: 341.16  on 300  degrees of freedom
## 
## Mean AIC:  891.57
gwqs_barplot(results1i_l90)
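To inspect the corresponding numeric weights of the final single-index model, the same accessor used earlier applies:

# six highest weights of the final penalized single-index model
head(results1i_l90$final_weights)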

3 References

Carrico C, Gennings C, Wheeler D, Factor-Litvak P. Characterization of a weighted quantile sum regression for highly correlated data in a risk analysis setting. J Agric Biol Environ Stat. 2014:1-21. ISSN: 1085-7117. DOI: 10.1007/s13253-014-0180-3. http://dx.doi.org/10.1007/s13253-014-0180-3.

Czarnota J, Gennings C, Colt JS, De Roos AJ, Cerhan JR, Severson RK, Hartge P, Ward MH, Wheeler D. 2015. Analysis of environmental chemical mixtures and non-Hodgkin lymphoma risk in the NCI-SEER NHL study. Environmental Health Perspectives.

Czarnota J, Gennings C, Wheeler D. 2015. Assessment of weighted quantile sum regression for modeling chemical mixtures and cancer risk. Cancer Informatics, 2015:14(S2) 159-171.

Curtin P, Kellogg J, Cech N, and Gennings C. A random subset implementation of weighted quantile sum (wqsrs) regression for analysis of high-dimensional mixtures. Communications in Statistics - Simulation and Computation, 0(0):1–16, 2019. doi: 10.1080/03610918.2019.1577971.

Tanner EM, Bornehag CG, and Gennings C. Repeated holdout validation for weighted quantile sum regression. MethodsX, 6:2855-2860, 2019. doi: https://doi.org/10.1016/j.mex.2019.11.008.

Renzetti S, Gennings C and Calza S (2023) A weighted quantile sum regression with penalized weights and two indices. Front Public Health 11:1151821. doi: 10.3389/fpubh.2023.1151821.

4 Acknowledgements

This package was developed at the CHEAR Data Center (Dept. of Environmental Medicine and Public Health, Icahn School of Medicine at Mount Sinai) with funding and support from NIEHS (U2C ES026555-01) with additional support from the Empire State Development Corporation.
