Introduction

Modeling the emerging high-dimensional clinical data with survival outcomes is a challenging task. Thanks to their simplicity and efficiency, penalized Cox models are particularly useful for accomplishing such tasks.

hdnom streamlines the workflow of high-dimensional Cox model building, nomogram plotting, model validation, calibration, and comparison.

Build survival models

To build a penalized Cox model with good predictive performance, some parameter tuning is usually needed. For example, the elastic-net model requires tuning the \(\ell_1\)-\(\ell_2\) penalty trade-off parameter \(\alpha\) and the regularization parameter \(\lambda\).

To free the users from the tedious and error-prone parameter tuning process, hdnom provides several functions for automatic parameter tuning and model selection, including the following model types:

Function name   Model type               Auto-tuned hyperparameters
fit_lasso()     Lasso                    \(\lambda\)
fit_alasso()    Adaptive lasso           \(\lambda\)
fit_enet()      Elastic-net              \(\lambda\), \(\alpha\)
fit_aenet()     Adaptive elastic-net     \(\lambda\), \(\alpha\)
fit_mcp()       MCP                      \(\gamma\), \(\lambda\)
fit_mnet()      Mnet (MCP + \(\ell_2\))  \(\gamma\), \(\lambda\), \(\alpha\)
fit_scad()      SCAD                     \(\gamma\), \(\lambda\)
fit_snet()      Snet (SCAD + \(\ell_2\)) \(\gamma\), \(\lambda\), \(\alpha\)
fit_flasso()    Fused lasso              \(\lambda_1\), \(\lambda_2\)

Next, we will use the imputed SMART study data to demonstrate a complete process of model building, nomogram plotting, model validation, calibration, and comparison with hdnom.

Load the packages and the smart dataset:

library("hdnom")
data("smart")
x <- as.matrix(smart[, -c(1, 2)])
time <- smart$TEVENT
event <- smart$EVENT
y <- survival::Surv(time, event)

The dataset contains 3873 observations with the corresponding survival outcome (time, event). 27 clinical variables (x) are available as predictors. See ?smart for a detailed explanation of the variables.
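
Before fitting, it can help to confirm the data layout. A quick check in base R (time is measured in days, and event follows the usual 0 = censored, 1 = event coding):

dim(x)        # 3873 x 27
summary(time) # follow-up time in days
table(event)  # 0 = censored, 1 = event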

Fit a penalized Cox model by adaptive elastic-net regularization with fit_aenet() and enable the parallel parameter tuning:

suppressMessages(library("doParallel"))
registerDoParallel(detectCores())  # register a parallel backend for the tuning

fit <- fit_aenet(x, y, nfolds = 10, rule = "lambda.1se", seed = c(5, 7), parallel = TRUE)
names(fit)
## [1] "model"       "alpha"       "lambda"      "model_init"  "alpha_init" 
## [6] "lambda_init" "pen_factor"  "type"        "seed"        "call"  

The adaptive elastic-net involves two estimation steps (hence the two random seeds above). The random seed used for parameter tuning, the selected best \(\alpha\), the selected best \(\lambda\), the model fitted in each estimation step, and the penalty factor applied to the model coefficients in the second estimation step are all stored in the model object fit.
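
For instance, the tuning results from both steps can be inspected directly (element names as listed by names(fit) above; the *_init elements presumably belong to the initial step):

fit$alpha_init; fit$lambda_init  # hyperparameters selected in the initial step
fit$alpha; fit$lambda            # hyperparameters selected in the adaptive step
head(fit$pen_factor)             # adaptive penalty factors used in the second step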

Nomogram visualization

Before plotting the nomogram, we extract the necessary information from the fitted model: the model object and the selected hyperparameters:

model <- fit$model
alpha <- fit$alpha
lambda <- fit$lambda
adapen <- fit$pen_factor

Let’s generate a nomogram object with as_nomogram() and plot it:

nom <- as_nomogram(
  fit, x, time, event,
  pred.at = 365 * 2,
  funlabel = "2-Year Overall Survival Probability"
)

plot(nom)

According to the nomogram, the adaptive elastic-net model selected 6 variables from the original set of 27 variables, effectively reducing the model complexity.

Information about the nomogram itself, such as the point-linear predictor unit mapping and total points-survival probability mapping, can be viewed by printing the nom object directly.
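
For example:

print(nom)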

Model validation

It is common practice to use resampling-based methods to validate the predictive performance of a penalized Cox model. Bootstrap, \(k\)-fold cross-validation, and repeated \(k\)-fold cross-validation are the most commonly employed methods for this purpose.

hdnom supports both internal model validation and external model validation. Internal validation takes the dataset used to build the model and evaluates the predictive performance on the data internally with the above resampling-based methods, while external validation evaluates the model's predictive performance on a dataset which is independent of the dataset used in model building.

Internal validation

validate() allows us to assess the model performance internally by time-dependent AUC (Area Under the ROC Curve) with the above three resampling methods.

Here, we validate the performance of the adaptive elastic-net model with bootstrap resampling, at every half year from the first year to the fifth year:

val_int <- validate(
  x, time, event,
  model.type = "aenet",
  alpha = alpha, lambda = lambda, pen.factor = adapen,
  method = "bootstrap", boot.times = 10,
  tauc.type = "UNO", tauc.time = seq(1, 5, 0.5) * 365,
  seed = 42, trace = FALSE
)

print(val_int)
#> High-Dimensional Cox Model Validation Object
#> Random seed: 42 
#> Validation method: bootstrap
#> Bootstrap samples: 10 
#> Model type: aenet 
#> glmnet model alpha: 0.15 
#> glmnet model lambda: 0.4322461 
#> glmnet model penalty factor: specified
#> Time-dependent AUC type: UNO 
#> Evaluation time points for tAUC: 365 547.5 730 912.5 1095 1277.5 1460 1642.5 1825
summary(val_int)
#> Time-Dependent AUC Summary at Evaluation Time Points
#>                365     547.5       730     912.5      1095    1277.5      1460
#> Mean     0.6736908 0.6980663 0.6883938 0.6877201 0.7171392 0.7329408 0.6801665
#> Min      0.6702972 0.6943275 0.6852397 0.6838341 0.7104875 0.7275005 0.6713858
#> 0.25 Qt. 0.6726018 0.6964805 0.6873354 0.6851884 0.7140499 0.7293569 0.6776290
#> Median   0.6740445 0.6983919 0.6879848 0.6884829 0.7163701 0.7321725 0.6799320
#> 0.75 Qt. 0.6749241 0.6996926 0.6896133 0.6896262 0.7210469 0.7371302 0.6825069
#> Max      0.6760826 0.7029078 0.6916850 0.6916987 0.7236970 0.7389818 0.6919914
#>             1642.5      1825
#> Mean     0.6841580 0.6935303
#> Min      0.6770945 0.6800316
#> 0.25 Qt. 0.6821133 0.6924729
#> Median   0.6831368 0.6956285
#> 0.75 Qt. 0.6864527 0.6966638
#> Max      0.6939574 0.6997908

The mean, median, and 25% and 75% quantiles of the time-dependent AUC at each time point across all bootstrap predictions are listed above. The median and the mean can be considered bias-corrected estimates of the model performance.

It is also possible to plot the model validation result:

plot(val_int)

The solid line represents the mean of the AUC, and the dashed line represents the median. The darker interval in the plot shows the 25% to 75% quantile range of the AUC; the lighter interval shows the minimum to maximum range.

It seems that the bootstrap-based validation result is stable: the median and the mean value at each evaluation time point are close; the 25% and 75% quantiles are also close to the median at each time point.

Bootstrap-based validation often gives relatively stable results. Many of the established nomograms in clinical oncology research are validated by bootstrap methods. \(K\)-fold cross-validation provides a stricter evaluation scheme than the bootstrap, and repeated cross-validation gives similar results to \(k\)-fold cross-validation while usually being more robust. These two methods are more widely used in the machine learning community. Check ?hdnom::validate for more examples of internal model validation.
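
As a sketch of the cross-validation-based alternatives (argument names follow ?hdnom::validate; in particular, rep.times as the number of repeats is an assumption to verify against the documentation):

val_repcv <- validate(
  x, time, event,
  model.type = "aenet",
  alpha = alpha, lambda = lambda, pen.factor = adapen,
  method = "repeated.cv", nfolds = 5, rep.times = 3,
  tauc.type = "UNO", tauc.time = seq(1, 5, 0.5) * 365,
  seed = 42, trace = FALSE
)
summary(val_repcv)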

External validation

Now we have the internally validated model. To perform external validation, we usually need an independent dataset (preferably collected in other studies) which has the same variables as the dataset used to build the model. For penalized Cox models, the external dataset should contain at least the variables that were selected in the model.

For demonstration purposes, here we take 1000 samples (rows 1001 to 2000) from the smart data and assume that they form an external validation dataset, then use validate_external() to perform external validation:

x_new <- as.matrix(smart[, -c(1, 2)])[1001:2000, ]
time_new <- smart$TEVENT[1001:2000]
event_new <- smart$EVENT[1001:2000]

val_ext <- validate_external(
  fit, x, time, event,
  x_new, time_new, event_new,
  tauc.type = "UNO",
  tauc.time = seq(0.25, 2, 0.25) * 365
)

print(val_ext)
#> High-Dimensional Cox Model External Validation Object
#> Model type: aenet 
#> Time-dependent AUC type: UNO 
#> Evaluation time points for tAUC: 91.25 182.5 273.75 365 456.25 547.5 638.75 730
summary(val_ext)
#> Time-Dependent AUC Summary at Evaluation Time Points
#>         91.25     182.5    273.75       365    456.25     547.5   638.75
#> AUC 0.4328909 0.5713055 0.6371661 0.6351403 0.6575692 0.6768453 0.683239
#>           730
#> AUC 0.6956754
plot(val_ext)

The time-dependent AUC on the external dataset is shown above.

Model calibration

Measuring how far the model predictions deviate from the actual survival outcomes is known as calibration. Calibration can be assessed by plotting the predicted survival probabilities from the model against the observed survival probabilities. Similar to model validation, both internal and external model calibration are supported in hdnom.

Internal calibration

calibrate() provides non-resampling and resampling methods for internal model calibration, including direct fitting, bootstrap resampling, \(k\)-fold cross-validation, and repeated cross-validation.

For example, to calibrate the model internally with the bootstrap method:

cal_int <- calibrate(
  x, time, event,
  model.type = "aenet",
  alpha = alpha, lambda = lambda, pen.factor = adapen,
  method = "bootstrap", boot.times = 10,
  pred.at = 365 * 5, ngroup = 3,
  seed = 42, trace = FALSE
)

print(cal_int)
#> High-Dimensional Cox Model Calibration Object
#> Random seed: 42 
#> Calibration method: bootstrap
#> Bootstrap samples: 10 
#> Model type: aenet 
#> glmnet model alpha: 0.15 
#> glmnet model lambda: 0.4322461 
#> glmnet model penalty factor: specified
#> Calibration time point: 1825 
#> Number of groups formed for calibration: 3
summary(cal_int)
#>   Calibration Summary Table
#>   Predicted  Observed Lower 95% Upper 95%
#> 1 0.7937694 0.7556172 0.7275288 0.7847901
#> 2 0.8940039 0.8956157 0.8744876 0.9172544
#> 3 0.9376169 0.9424155 0.9256913 0.9594419

We split the samples into three risk groups. In practice, the number of risk groups is decided by the user according to their needs.

The model calibration results (the median of the predicted survival probabilities; the median of the observed survival probabilities estimated by the Kaplan-Meier method, with 95% CI) are summarized above.

Plot the calibration result:

plot(cal_int, xlim = c(0.5, 1), ylim = c(0.5, 1))

In practice, you may want to perform calibration for multiple time points separately, and put the plots together in one figure. See ?hdnom::calibrate for more examples about internal model calibration.
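
A minimal sketch of such a figure, assuming plot() returns a ggplot object (hdnom's plots are ggplot2-based) and using the gridExtra package for the arrangement:

# Calibrate at 1, 3, and 5 years, then arrange the three plots in one figure:
pred_times <- c(1, 3, 5) * 365
cal_list <- lapply(pred_times, function(t) {
  calibrate(
    x, time, event,
    model.type = "aenet",
    alpha = alpha, lambda = lambda, pen.factor = adapen,
    method = "bootstrap", boot.times = 10,
    pred.at = t, ngroup = 3, seed = 42, trace = FALSE
  )
})
plots <- lapply(cal_list, plot, xlim = c(0.5, 1), ylim = c(0.5, 1))
gridExtra::grid.arrange(grobs = plots, ncol = 3)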

External calibration

To perform external calibration with an external dataset, use calibrate_external():

cal_ext <- calibrate_external(
  fit, x, time, event,
  x_new, time_new, event_new,
  pred.at = 365 * 5, ngroup = 3
)

print(cal_ext)
#> High-Dimensional Cox Model External Calibration Object
#> Model type: aenet 
#> Calibration time point: 1825 
#> Number of groups formed for calibration: 3
summary(cal_ext)
#>   External Calibration Summary Table
#>   Predicted  Observed Lower 95% Upper 95%
#> 1 0.7937879 0.7471369 0.6991861 0.7983762
#> 2 0.8917521 0.8727998 0.8363680 0.9108185
#> 3 0.9214463 0.9387588 0.9122184 0.9660715
plot(cal_ext, xlim = c(0.5, 1), ylim = c(0.5, 1))

The external calibration results have similar interpretations to the internal calibration results, except that external calibration is performed on the external dataset.

Kaplan-Meier analysis for risk groups

Internal calibration and external calibration both classify the testing set into different risk groups. For internal calibration, the testing set consists of all the samples in the dataset used to build the model; for external calibration, the testing set consists of the samples from the external dataset.

We can further analyze the differences in survival time for different risk groups with Kaplan-Meier survival curves and a number at risk table. For example, here we plot the Kaplan-Meier survival curves and evaluate the number at risk from one year to six years for the three risk groups, with the function kmplot():

kmplot(
  cal_int,
  group.name = c("High risk", "Medium risk", "Low risk"),
  time.at = 1:6 * 365
)


kmplot(
  cal_ext,
  group.name = c("High risk", "Medium risk", "Low risk"),
  time.at = 1:6 * 365
)

The \(p\)-value of the log-rank test is also shown in the plot.

Log-rank test for risk groups

To compare the differences between the survival curves, the log-rank test is often applied. logrank_test() performs such tests on the internal and external calibration results:

cal_int_logrank <- logrank_test(cal_int)
cal_int_logrank
#> Call:
#> survdiff(formula = formula("Surv(time, event) ~ grp"))
#> 
#> n=3872, 1 observation deleted due to missingness.
#> 
#>          N Observed Expected (O-E)^2/E (O-E)^2/V
#> grp=1 1290      291      156     117.9     178.5
#> grp=2 1291      112      157      12.7      19.4
#> grp=3 1291       56      147      56.1      82.7
#> 
#>  Chisq= 187  on 2 degrees of freedom, p= <2e-16
cal_int_logrank$pval
#> [1] 2.532113e-41

cal_ext_logrank <- logrank_test(cal_ext)
cal_ext_logrank
#> Call:
#> survdiff(formula = formula("Surv(time, event) ~ grp"))
#> 
#> n=999, 1 observation deleted due to missingness.
#> 
#>         N Observed Expected (O-E)^2/E (O-E)^2/V
#> grp=1 333       83     45.0     32.14      46.5
#> grp=2 333       40     49.8      1.92       2.9
#> grp=3 333       24     52.3     15.28      23.7
#> 
#>  Chisq= 49.5  on 2 degrees of freedom, p= 2e-11
cal_ext_logrank$pval
#> [1] 1.761037e-11

The exact \(p\)-values for log-rank tests are stored as cal_int_logrank$pval and cal_ext_logrank$pval. Here \(p < 0.001\) indicates significant differences between the survival curves for different risk groups.
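
For reporting, the exact values can be rounded with base R:

# Round the exact log-rank p-values to three significant digits:
signif(c(internal = cal_int_logrank$pval, external = cal_ext_logrank$pval), 3)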

Model comparison

Given all the available model types, it is natural to ask: which type of model performs best for my data? Such questions about model type selection can be answered by the built-in model comparison functions in hdnom.

Model comparison by validation

We can compare the model performance using time-dependent AUC by the same (internal) model validation approach as before. For example, here we compare lasso and adaptive lasso by 5-fold cross-validation:

cmp_val <- compare_by_validate(
  x, time, event,
  model.type = c("lasso", "alasso"),
  method = "cv", nfolds = 5, tauc.type = "UNO",
  tauc.time = seq(0.25, 2, 0.25) * 365,
  seed = 42, trace = FALSE
)

print(cmp_val)
#> High-Dimensional Cox Model Validation Object
#> Random seed: 42 
#> Validation method: k-fold cross-validation
#> Cross-validation folds: 5 
#> Model type: lasso 
#> glmnet model alpha: 1 
#> glmnet model lambda: 0.01117274 
#> glmnet model penalty factor: not specified
#> Time-dependent AUC type: UNO 
#> Evaluation time points for tAUC: 91.25 182.5 273.75 365 456.25 547.5 638.75 730
#> 
#> High-Dimensional Cox Model Validation Object
#> Random seed: 42 
#> Validation method: k-fold cross-validation
#> Cross-validation folds: 5 
#> Model type: alasso 
#> glmnet model alpha: 1 
#> glmnet model lambda: 0.04226386 
#> glmnet model penalty factor: specified
#> Time-dependent AUC type: UNO 
#> Evaluation time points for tAUC: 91.25 182.5 273.75 365 456.25 547.5 638.75 730
summary(cmp_val)
#> Model type: lasso 
#>              91.25     182.5    273.75       365    456.25     547.5    638.75
#> Mean     0.4862766 0.6235047 0.6554862 0.6645661 0.6709065 0.6905955 0.6859931
#> Min      0.2039134 0.5628978 0.6125746 0.6259622 0.6288554 0.6480062 0.6258751
#> 0.25 Qt. 0.4868413 0.6095040 0.6296803 0.6518906 0.6636125 0.6740643 0.6677914
#> Median   0.5470612 0.6207525 0.6669674 0.6753767 0.6780298 0.6956816 0.7034205
#> 0.75 Qt. 0.5840546 0.6536918 0.6724515 0.6765498 0.6818180 0.6959422 0.7126816
#> Max      0.6095124 0.6706775 0.6957573 0.6930510 0.7022166 0.7392832 0.7201967
#>                730
#> Mean     0.6789647
#> Min      0.6251706
#> 0.25 Qt. 0.6258098
#> Median   0.6935016
#> 0.75 Qt. 0.7180132
#> Max      0.7323283
#> 
#> Model type: alasso 
#>              91.25     182.5    273.75       365    456.25     547.5    638.75
#> Mean     0.4814586 0.6106500 0.6475280 0.6612647 0.6675871 0.6834418 0.6813224
#> Min      0.2479919 0.5538453 0.6009056 0.6208453 0.6310650 0.6504086 0.6354844
#> 0.25 Qt. 0.4747967 0.5918396 0.6394882 0.6539436 0.6509640 0.6548892 0.6608927
#> Median   0.5320552 0.6000809 0.6534756 0.6651832 0.6601551 0.6643520 0.6634682
#> 0.75 Qt. 0.5755143 0.6290886 0.6546811 0.6665424 0.6841356 0.6999959 0.7108894
#> Max      0.5769350 0.6783959 0.6890896 0.6998091 0.7116160 0.7475634 0.7358773
#>                730
#> Mean     0.6774869
#> Min      0.6340813
#> 0.25 Qt. 0.6386258
#> Median   0.6791506
#> 0.75 Qt. 0.6995719
#> Max      0.7360047
plot(cmp_val)

plot(cmp_val, interval = TRUE)

The solid line, dashed line, and intervals have the same interpretation as above. For this comparison, there seems to be no substantial difference (AUC difference \(< 5\%\)) between lasso and adaptive lasso in predictive performance: lasso performs slightly better for the first three time points, while adaptive lasso performs slightly better for the last few time points.

The model comparison functions in hdnom have a minimal input design, so you do not have to set the parameters for each model type manually. The functions will automatically try to determine the best parameter settings for each model type.
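
The same interface extends to the other model types; for example, adding MCP and SCAD to the comparison (a sketch, assuming the model.type identifiers match the fit_*() function suffixes; runtime grows with the number of models compared):

cmp_more <- compare_by_validate(
  x, time, event,
  model.type = c("lasso", "alasso", "mcp", "scad"),
  method = "cv", nfolds = 5, tauc.type = "UNO",
  tauc.time = seq(0.25, 2, 0.25) * 365,
  seed = 42, trace = FALSE
)
plot(cmp_more, interval = TRUE)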

Model comparison by calibration

We can also compare the models by their (internal) calibration performance. To continue the example, we split the samples into five risk groups, and compare lasso to adaptive lasso via calibration:

cmp_cal <- compare_by_calibrate(
  x, time, event,
  model.type = c("lasso", "alasso"),
  method = "cv", nfolds = 5,
  pred.at = 365 * 9, ngroup = 5,
  seed = 42, trace = FALSE
)

print(cmp_cal)
#> High-Dimensional Cox Model Calibration Object
#> Random seed: 42 
#> Calibration method: k-fold cross-validation
#> Cross-validation folds: 5 
#> Model type: lasso 
#> glmnet model alpha: 1 
#> glmnet model lambda: 0.01117274 
#> glmnet model penalty factor: not specified
#> Calibration time point: 3285 
#> Number of groups formed for calibration: 5 
#> 
#> High-Dimensional Cox Model Calibration Object
#> Random seed: 42 
#> Calibration method: k-fold cross-validation
#> Cross-validation folds: 5 
#> Model type: alasso 
#> glmnet model alpha: 1 
#> glmnet model lambda: 0.04226386 
#> glmnet model penalty factor: specified
#> Calibration time point: 3285 
#> Number of groups formed for calibration: 5
summary(cmp_cal)
#>   Model type: lasso 
#>   Calibration Summary Table
#>   Predicted  Observed Lower 95% Upper 95%
#> 1 0.5697146 0.4934145 0.4240717 0.5740960
#> 2 0.7049550 0.6934715 0.5838120 0.8237287
#> 3 0.7667393 0.7894857 0.7203442 0.8652637
#> 4 0.8124527 0.8658548 0.7886324 0.9506387
#> 5 0.8571816 0.8858418 0.8476030 0.9258058
#> 
#>   Model type: alasso 
#>   Calibration Summary Table
#>   Predicted  Observed Lower 95% Upper 95%
#> 1 0.5477459 0.4881734 0.4120040 0.5784246
#> 2 0.7193159 0.7164990 0.6459081 0.7948047
#> 3 0.7928640 0.8172891 0.7657546 0.8722919
#> 4 0.8368744 0.8430378 0.7511746 0.9461351
#> 5 0.8786381 0.8926692 0.8524805 0.9347526
plot(cmp_cal, xlim = c(0.3, 1), ylim = c(0.3, 1))

The summary output and the plot show the calibration results for each model type being compared. Lasso and adaptive lasso have comparable performance in this case, since their predicted overall survival probabilities are both close to the observed survival probabilities to a similar degree. Adaptive lasso seems to be slightly more stable than lasso in calibration.

Prediction on new data

To predict the overall survival probability at certain time points for new samples with the established models, simply use predict() on the model objects and the new data.

As an example, we will use samples 101 to 105 in the smart dataset as the new samples, and predict their overall survival probabilities from one year to ten years:

predict(fit, x, y, newx = x[101:105, ], pred.at = 1:10 * 365)
#>            365       730      1095      1460      1825      2190      2555
#> [1,] 0.9477525 0.9204206 0.8884055 0.8573295 0.8171765 0.7840601 0.7427595
#> [2,] 0.9715008 0.9563035 0.9382349 0.9204068 0.8969238 0.8771525 0.8519471
#> [3,] 0.9786512 0.9672021 0.9535296 0.9399734 0.9220153 0.9068029 0.8872827
#> [4,] 0.8973088 0.8458249 0.7874714 0.7328425 0.6651912 0.6118826 0.5485474
#> [5,] 0.9736465 0.9595694 0.9428104 0.9262503 0.9044000 0.8859696 0.8624274
#>           2920      3285      3650
#> [1,] 0.7043892 0.6542044 0.6542044
#> [2,] 0.8279442 0.7956203 0.7956203
#> [3,] 0.8685571 0.8431207 0.8431207
#> [4,] 0.4928337 0.4245071 0.4245071
#> [5,] 0.8399589 0.8096226 0.8096226
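
The result is a numeric matrix with one row per new sample and one column per evaluation time point (in days). Individual columns can be extracted for downstream reporting; a small sketch:

surv_prob <- predict(fit, x, y, newx = x[101:105, ], pred.at = 1:10 * 365)
surv_prob[, 5]  # overall survival probability at 1825 days (5 years), per sample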

Customize color palette

The hdnom package has 4 built-in color palettes, inspired by the colors commonly used by scientific journals, available for all of the above plots. Users can use the col.pal argument to select the color palette. Possible values for this argument are listed in the table below:

Value      Color palette inspiration
"JCO"      Journal of Clinical Oncology
"Lancet"   Lancet journals, such as Lancet Oncology
"NPG"      NPG journals, such as Nature Reviews Cancer
"AAAS"     AAAS journals, such as Science

By default, hdnom will use the JCO color palette (col.pal = "JCO").
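
For example, to redraw the internal validation plot with the Lancet-inspired palette:

plot(val_int, col.pal = "Lancet")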
