library(Colossus)
library(data.table)
Colossus supports two methods of calculating a confidence interval for model parameters in Cox proportional hazards models: the Wald method and a likelihood-based bound method. This vignette focuses on the differences between them and the issues that may arise.
When Colossus finishes a Cox regression, it returns both parameter estimates and standard errors. The simplest form of confidence interval assumes a normal distribution with the parameter estimate as the mean and the standard error as the deviation. The standard errors are calculated from the covariance matrix, which is the negative inverse of the information matrix (\(I\)), equal to the second derivative of the log-likelihood.
\[\begin{gather} I \approx \frac{\partial^2 LL}{\partial \vec{\beta}^2} \end{gather}\]
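As a minimal sketch of that relationship, the lines below compute the covariance matrix and standard errors from a two-parameter Hessian in base R; the numbers are made up and stand in for the output of a real fit.
# Hypothetical Hessian of the log-likelihood at the fitted estimates (not from a real Colossus fit)
hessian <- matrix(c(-25.0, 4.0,
                      4.0, -9.0), nrow = 2, byrow = TRUE)
covariance <- -solve(hessian)       # negative inverse of the information matrix
std_err <- sqrt(diag(covariance))   # standard error for each parameter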
For every row (\(i\)) with an event (\(\delta_i=1\)), the second derivative is a function of the relative risk and its derivatives at the event row (\(A, B\)) and the sums of relative risks and derivatives over every row at risk (\(C, D, E\)), based on the rows in each risk group (\(l \in R_i\)).
\[\begin{gather} \frac{\partial^2 LL}{\partial \vec{\beta}_j \partial \vec{\beta}_k} = \sum_{i=0}^N \left( \delta_i \left( (A_{i,j,k} - B_{i,j}*B_{i,k} ) - (C_{i,j,k} - D_{i,j}*D_{i,k} ) \right)\right)\\ A_{i,j,k} = \frac{\frac{\partial^2 r_i}{\partial \vec{\beta}_j \partial \vec{\beta}_k}}{r_i}, B_{i,j} = \frac{\frac{\partial r_i}{\partial \vec{\beta}_j}}{r_i}\\ C_{i,j,k} = \frac{\sum_{l \in R_i} \frac{\partial^2 r_l}{\partial \vec{\beta}_j \partial \vec{\beta}_k}}{E_i}, D_{i,j} = \frac{\sum_{l \in R_i} \frac{\partial r_l}{\partial \vec{\beta}_j}}{E_i}, E_{i} = \sum_{l \in R_i} r_l \end{gather}\]
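The sketch below transcribes this sum for a single log-linear parameter, \(r_i = \exp(x_i \beta)\), on a small made-up dataset; the data and variable names are illustrative and are not Colossus internals.
x     <- c(0.5, 1.2, 0.3, 2.0)   # covariate values (made up)
delta <- c(1, 0, 1, 0)           # event indicators
time  <- c(2, 3, 5, 7)           # event/censoring times
beta  <- 0.1
r   <- exp(x * beta)             # relative risks r_i
dr  <- x * r                     # first derivatives of r_i
d2r <- x^2 * r                   # second derivatives of r_i
d2LL <- 0
for (i in seq_along(x)) {
  if (delta[i] == 1) {
    in_risk <- time >= time[i]                 # risk group R_i
    A <- d2r[i] / r[i]
    B <- dr[i] / r[i]
    E <- sum(r[in_risk])
    C <- sum(d2r[in_risk]) / E
    D <- sum(dr[in_risk]) / E
    d2LL <- d2LL + (A - B * B) - (C - D * D)   # contribution of event row i
  }
}
d2LL                             # second derivative of the log-likelihood at beta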
This gives a symmetric parameter confidence interval, which is often accurate for single-term log-linear models. However, the approximation is often inaccurate for models with linear effects, particularly if the confidence interval includes negative values. Because the Wald method only approximates the likelihood confidence interval, there is no guarantee that the model is defined over the interval.
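A small illustration of that last point, using made-up numbers for a linear risk \(r = 1 + \beta x\): the symmetric Wald interval can include values of \(\beta\) at which some relative risks are non-positive, so the likelihood is not defined there.
x_lin    <- c(0.2, 1.5, 15.0)                             # hypothetical covariate values
beta_hat <- 0.4                                           # hypothetical estimate
se_hat   <- 0.25                                          # hypothetical standard error
wald_ci  <- beta_hat + c(-1, 1) * qnorm(0.975) * se_hat   # roughly (-0.09, 0.89)
any(1 + wald_ci[1] * x_lin <= 0)                          # TRUE: a risk is non-positive at the lower limit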
The more exact solution is to directly solve for the boundary. The basic premise is that the model can be optimized with a parameter (\(\beta\)) fixed. Each value of \(\beta\) has a corresponding maximum log-likelihood. The confidence interval is the range of values of \(\beta\) such that the maximum log-likelihood is above a threshold, taken from the asymptotic \(\chi^2\) distribution of the generalized likelihood ratio test. Colossus uses the Venzon-Moolgavkar algorithm to iteratively solve for the interval endpoints.
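To make the boundary condition concrete, the sketch below reuses the toy data and log-linear model from above and solves directly for the endpoints where the partial log-likelihood drops by \(\chi^2_{0.95,1}/2\) from its maximum. It illustrates the condition being solved, not the Venzon-Moolgavkar iteration that Colossus actually uses.
cox_LL <- function(b) {                     # Cox partial log-likelihood in beta
  r <- exp(x * b)
  sum(sapply(which(delta == 1), function(i) log(r[i]) - log(sum(r[time >= time[i]]))))
}
fit    <- optimize(cox_LL, c(-10, 10), maximum = TRUE)
cutoff <- fit$objective - qchisq(0.95, 1) / 2        # log-likelihood threshold
upper  <- uniroot(function(b) cox_LL(b) - cutoff,
                  c(fit$maximum, fit$maximum + 20))$root
lower  <- uniroot(function(b) cox_LL(b) - cutoff,
                  c(fit$maximum - 20, fit$maximum))$root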
There is one main issue that can arise from this method. The algorithm uses a Newton-Raphson approach, which may converge to local solutions instead of the global solution. As with general Colossus regressions, limits can be placed on the step size to reduce these effects; however, there is no analogue to selecting multiple starting locations. This is the basis for ongoing work to add an alternative that can more directly solve for the true optimum.
This method directly solves for the confidence interval, which for linear cases may be non-symmetric or may lack an upper or lower bound entirely. Linear models may be defined only above a parameter threshold; if the optimum value at that threshold is above the \(\chi^2\) threshold value, then the interval has no lower bound.
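As a rough check of that statement, the sketch below uses the same toy data with a linear risk \(r_i = 1 + \beta x_i\), which is only defined for \(\beta > -1/\max(x_i)\). For these made-up values the log-likelihood just inside that threshold stays above the \(\chi^2\) cutoff, so no finite lower bound exists.
lin_LL <- function(b) {                     # partial log-likelihood, linear risk
  r <- 1 + x * b
  sum(sapply(which(delta == 1), function(i) log(r[i]) - log(sum(r[time >= time[i]]))))
}
b_min   <- -1 / max(x) + 1e-6               # just inside the defined region
fit_lin <- optimize(lin_LL, c(b_min, 10), maximum = TRUE)
cutoff  <- fit_lin$objective - qchisq(0.95, 1) / 2
lin_LL(b_min) > cutoff                      # TRUE: no finite lower confidence bound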