Running PK models with nlmixr

2019-08-23

nlmixr uses a unified interface for specifying and running models. Let’s start with a very simple PK example, using the single-dose theophylline dataset generously provided by Dr. Robert A. Upton of the University of California, San Francisco:


## Load libraries
library(ggplot2)
library(nlmixr)
str(theo_sd)
#> 'data.frame':    144 obs. of  7 variables:
#>  $ ID  : int  1 1 1 1 1 1 1 1 1 1 ...
#>  $ TIME: num  0 0 0.25 0.57 1.12 2.02 3.82 5.1 7.03 9.05 ...
#>  $ DV  : num  0 0.74 2.84 6.57 10.5 9.66 8.58 8.36 7.47 6.89 ...
#>  $ AMT : num  320 0 0 0 0 ...
#>  $ EVID: int  101 0 0 0 0 0 0 0 0 0 ...
#>  $ CMT : int  1 2 2 2 2 2 2 2 2 2 ...
#>  $ WT  : num  79.6 79.6 79.6 79.6 79.6 79.6 79.6 79.6 79.6 79.6 ...

ggplot(theo_sd, aes(TIME, DV)) +
  geom_line(aes(group=ID), col="red") +
  scale_x_continuous("Time (h)") +
  scale_y_continuous("Concentration") +
  labs(title="Theophylline single-dose",
       subtitle="Concentration vs. time by individual")

We can try fitting a simple one-compartment PK model to this small dataset. We write the model as follows:

one.cmt <- function() {
    ini({
        ## You may label each parameter with a comment
        tka <- 0.45 # Log Ka
        tcl <- 1 # Log Cl
        ## This works with interactive models
        ## You may also label the preceding line with label("label text")
        tv <- 3.45; label("log V")
        ## the label("Label name") works with all models
        eta.ka ~ 0.6
        eta.cl ~ 0.3
        eta.v ~ 0.1
        add.err <- 0.7
    })
    model({
        ka <- exp(tka + eta.ka)
        cl <- exp(tcl + eta.cl)
        v <- exp(tv + eta.v)
        linCmt() ~ add(add.err)
    })
}

We can now run the model…

fit <- nlmixr(one.cmt, theo_sd, est="nlme")
#> 
#> **Iteration 1
#> LME step: Loglik: -183.2149, nlminb iterations: 1
#> reStruct  parameters:
#>       ID1       ID2       ID3 
#> 0.2198819 0.9924203 1.6504986 
#>  Beginning PNLS step: ..  completed fit_nlme() step.
#> PNLS step: RSS =  64.12713 
#>  fixed effects: 0.445205  1.01864  3.449068  
#>  iterations: 7 
#> Convergence crit. (must all become <= tolerance = 1e-05):
#>      fixed   reStruct 
#> 0.01829916 0.87183794 
#> 
#> **Iteration 2
#> LME step: Loglik: -179.7485, nlminb iterations: 7
#> reStruct  parameters:
#>       ID1       ID2       ID3 
#> 0.1098768 0.9674593 1.6403262 
#>  Beginning PNLS step: ..  completed fit_nlme() step.
#> PNLS step: RSS =  64.05998 
#>  fixed effects: 0.4465211  1.01853  3.449222  
#>  iterations: 7 
#> Convergence crit. (must all become <= tolerance = 1e-05):
#>       fixed    reStruct 
#> 0.002947356 0.043196770 
#> 
#> **Iteration 3
#> LME step: Loglik: -179.7363, nlminb iterations: 5
#> reStruct  parameters:
#>       ID1       ID2       ID3 
#> 0.1132783 0.9676319 1.6413587 
#>  Beginning PNLS step: ..  completed fit_nlme() step.
#> PNLS step: RSS =  64.06906 
#>  fixed effects: 0.4465211  1.01853  3.449222  
#>  iterations: 1 
#> Convergence crit. (must all become <= tolerance = 1e-05):
#>       fixed    reStruct 
#> 0.000000000 0.005870934 
#> 
#> **Iteration 4
#> LME step: Loglik: -179.7363, nlminb iterations: 1
#> reStruct  parameters:
#>       ID1       ID2       ID3 
#> 0.1132693 0.9676245 1.6413627 
#>  Beginning PNLS step: ..  completed fit_nlme() step.
#> PNLS step: RSS =  64.06906 
#>  fixed effects: 0.4465211  1.01853  3.449222  
#>  iterations: 1 
#> Convergence crit. (must all become <= tolerance = 1e-05):
#>        fixed     reStruct 
#> 0.000000e+00 2.138859e-10
#> Using sympy via reticulate
#> Load into sympy...done
#> Freeing Python/SymPy memory...done
#> ################################################################################
#> Optimizing expressions in Predictions/EBE model...done
#> Compiling Predictions/EBE model...done
#> Standardized prediction/ebe models produced.
#> Calculating residuals/tables
#> done.
#> Warning in (function (uif, data, est = NULL, control = list(), ...,
#> sum.prod = FALSE, : Initial condition for additive error ignored with nlme
print(fit)
#> ── nlmixr nlme by maximum likelihood (Solved; μ-ref & covs) nlme OBF fit ── 
#>          OBJF      AIC      BIC Log-likelihood Condition Number
#> nlme 116.8727 373.4725 393.6521      -179.7363         17.08747
#> 
#> ── Time (sec; $time): ───────────────────────────────────────────────────── 
#>          nlme    setup table    other
#> elapsed 7.095 10.01971 0.021 1.651293
#> 
#> ── Population Parameters ($parFixed or $parFixedDf): ──────────────────────
#>         Parameter  Est.     SE %RSE Back-transformed(95%CI) BSV(CV%)
#> tka        Log Ka 0.447  0.192   43       1.56 (1.07, 2.28)     68.7
#> tcl        Log Cl  1.02 0.0847 8.31       2.77 (2.35, 3.27)     26.9
#> tv          log V  3.45 0.0464 1.35       31.5 (28.7, 34.5)     13.6
#> add.err           0.697                               0.697         
#>         Shrink(SD)%
#> tka         0.241% 
#> tcl          3.78% 
#> tv           10.0% 
#> add.err             
#> 
#>   Covariance Type ($covMethod): nlme
#>   No correlations in between subject variability (BSV) matrix
#>   Full BSV covariance ($omega) or correlation ($omegaR; diagonals=SDs) 
#>   Distribution stats (mean/skewness/kurtosis/p-value) available in $shrink 
#> 
#> ── Fit Data (object is a modified tibble): ──────────────────────────────── 
#> # A tibble: 132 x 18
#>   ID     TIME    DV  EVID  PRED    RES IPRED   IRES  IWRES eta.ka eta.cl
#>   <fct> <dbl> <dbl> <int> <dbl>  <dbl> <dbl>  <dbl>  <dbl>  <dbl>  <dbl>
#> 1 1     0      0.74     0  0     0.74   0     0.74   1.06   0.101 -0.479
#> 2 1     0.25   2.84     0  3.25 -0.410  3.84 -1.00  -1.44   0.101 -0.479
#> 3 1     0.570  6.57     0  5.83  0.744  6.78 -0.212 -0.305  0.101 -0.479
#> # … with 129 more rows, and 7 more variables: eta.v <dbl>, rx1c <dbl>,
#> #   ka <dbl>, cl <dbl>, v <dbl>, depot <dbl>, central <dbl>

We can alternatively express the same model by ordinary differential equations (ODEs):

one.compartment <- function() {
    ini({
        tka <- 0.45 # Log Ka
        tcl <- 1 # Log Cl
        tv <- 3.45    # Log V
        eta.ka ~ 0.6
        eta.cl ~ 0.3
        eta.v ~ 0.1
        add.err <- 0.7
    })
    model({
        ka <- exp(tka + eta.ka)
        cl <- exp(tcl + eta.cl)
        v <- exp(tv + eta.v)
        d/dt(depot) = -ka * depot
        d/dt(center) = ka * depot - cl / v * center
        cp = center / v
        cp ~ add(add.err)
    })
}

We can apply the Stochastic Approximation EM (SAEM) method to this model:

fit <- nlmixr(one.compartment, theo_sd, est="saem")
#> Loading model already run (/tmp/RtmpRip6T8/Rinst384d532043a0/nlmixr/nlmixr-one.compartment-theo_sd-saem-34940a3d459170d6b6c7285d181df2ce.rds)

And if we wanted to, we could even apply the First-Order Conditional Estimation (FOCEi) method to this model:

fitF <- nlmixr(one.compartment, theo_sd, est="focei")
#> Loading model already run (/tmp/RtmpRip6T8/Rinst384d532043a0/nlmixr/nlmixr-one.compartment-theo_sd-focei-c80161900a3a3ae78b92bdefa30d49d6.rds)

Each of these calls delivers a complete model fit as the fit object, including the parameter history, the fixed-effect estimates, and the random effects for all included subjects.

Now, back to the SAEM fit: let’s look at it using nlmixr’s built-in diagnostics…

plot(fit)

print(fit)
#> ── nlmixr SAEM(ODE); OBJF by Gaussian Quadrature (n.nodes=3, n.sd=1.6) fit  
#>                OBJF      AIC      BIC Log-likelihood Condition Number
#> gauss3_1.6 122.9719 379.5717 399.7513      -182.7858         18.16221
#> 
#> ── Time (sec; $time): ───────────────────────────────────────────────────── 
#>           saem    setup table covariance logLik    other
#> elapsed 18.364 3.197184 0.007       0.01  0.024 0.376816
#> 
#> ── Population Parameters ($parFixed or $parFixedDf): ────────────────────── 
#>         Parameter  Est.     SE %RSE Back-transformed(95%CI) BSV(CV%)
#> tka        Log Ka 0.451  0.196 43.5       1.57 (1.07, 2.31)     71.9
#> tcl        Log Cl  1.02 0.0836 8.22       2.77 (2.35, 3.26)     27.0
#> tv          Log V  3.45 0.0469 1.36       31.5 (28.7, 34.5)     14.0
#> add.err           0.692                               0.692         
#>         Shrink(SD)%
#> tka         0.411% 
#> tcl          3.36% 
#> tv           10.0% 
#> add.err             
#> 
#>   Covariance Type ($covMethod): linFim
#>   No correlations in between subject variability (BSV) matrix
#>   Full BSV covariance ($omega) or correlation ($omegaR; diagonals=SDs) 
#>   Distribution stats (mean/skewness/kurtosis/p-value) available in $shrink 
#> 
#> ── Fit Data (object is a modified tibble): ──────────────────────────────── 
#> # A tibble: 132 x 18
#>   ID     TIME    DV  EVID  PRED    RES IPRED   IRES  IWRES eta.ka eta.cl
#>   <fct> <dbl> <dbl> <int> <dbl>  <dbl> <dbl>  <dbl>  <dbl>  <dbl>  <dbl>
#> 1 1     0      0.74     0  0     0.74   0     0.74   1.07   0.105 -0.487
#> 2 1     0.25   2.84     0  3.26 -0.423  3.86 -1.02  -1.48   0.105 -0.487
#> 3 1     0.570  6.57     0  5.84  0.725  6.81 -0.235 -0.340  0.105 -0.487
#> # … with 129 more rows, and 7 more variables: eta.v <dbl>, ka <dbl>,
#> #   cl <dbl>, v <dbl>, cp <dbl>, depot <dbl>, center <dbl>
fit$eta
#>  ID   eta.ka  eta.cl    eta.v
#>   1   0.105  -0.487   -0.08
#>   2   0.221   0.144    0.0206
#>   3   0.368   0.0311   0.058
#>   4  -0.277  -0.015   -0.00723
#>   5  -0.0458 -0.155   -0.142
#>   6  -0.382   0.367    0.203
#>   7  -0.791   0.16     0.0466
#>   8  -0.181   0.168    0.0958
#>   9   1.42    0.0423   0.0121
#>  10  -0.738  -0.391   -0.17
#>  11   0.79    0.281    0.146
#>  12  -0.527  -0.126   -0.198

Default trace plots can be generated using:

traceplot(fit)

but with a little more work, we can get a nicer set of iteration trace plots (“wriggly worms”)…


iter <- fit$par.hist.stacked
iter$Parameter[iter$par=="add.err"] <- "Additive error"
iter$Parameter[iter$par=="eta.cl"]  <- "IIV CL/F"
iter$Parameter[iter$par=="eta.v"]   <- "IIV V/F"
iter$Parameter[iter$par=="eta.ka"]  <- "IIV ka"
iter$Parameter[iter$par=="tcl"]     <- "log(CL/F)"
iter$Parameter[iter$par=="tv"]      <- "log(V/F)"
iter$Parameter[iter$par=="tka"]     <- "log(ka)"
iter$Parameter <- ordered(iter$Parameter, c("log(CL/F)", "log(V/F)", "log(ka)",
                                            "IIV CL/F", "IIV V/F", "IIV ka",
                                            "Additive error"))

ggplot(iter, aes(iter, val)) +
  geom_line(col="red") + 
  scale_x_continuous("Iteration") +
  scale_y_continuous("Value") +
  facet_wrap(~ Parameter, scales="free_y") +
  labs(title="Theophylline single-dose", subtitle="Parameter estimation iterations")

… and some random-effects histograms…


etas <- data.frame(eta = c(fit$eta$eta.ka, fit$eta$eta.cl, fit$eta$eta.v),
                   lab = rep(c("eta(ka)", "eta(CL/F)", "eta(V/F)"), each=nrow(fit$eta)))
etas$lab <- ordered(etas$lab, c("eta(CL/F)","eta(V/F)","eta(ka)"))

ggplot(etas, aes(eta)) +
  geom_histogram(fill="red", col="white") + 
  geom_vline(xintercept=0) +
  scale_x_continuous(expression(paste(eta))) +
  scale_y_continuous("Count") +
  facet_grid(~ lab) +
  coord_cartesian(xlim=c(-1.75,1.75)) +
  labs(title="Theophylline single-dose", subtitle="IIV distributions")
#> `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.

xpose

This is all very nice. But what we really want is a complete suite of model diagnostic tools, like those available in xpose, right?

Restart R, and install xpose from CRAN, if you haven’t already…
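For example:

install.packages("xpose")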

Now install the extension for nlmixr:

 devtools::install_github("nlmixrdevelopment/xpose.nlmixr")

… and convert your nlmixr fit object into an xpose fit object.

library(xpose.nlmixr)
xp <- xpose_data_nlmixr(fit)
## Optionally, save the converted object for later use, e.g.:
## save(xp, file=xpdbLoc)  # xpdbLoc: a file path of your choosing

We can also replicate some of nlmixr’s internal plots…
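For instance, the standard xpose goodness-of-fit plots can be generated directly from the converted object (a minimal sketch; dv_vs_pred() and dv_vs_ipred() are xpose plotting functions, and xp is the object created above):

dv_vs_pred(xp)
dv_vs_ipred(xp)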

For more information about using xpose, see the Uppsala pharmacometrics group’s comprehensive xpose documentation site.

The UI

The nlmixr modeling dialect, inspired by R and NONMEM, can be used to fit models using all current and future estimation algorithms within nlmixr. Using these widely-used tools as inspiration has the advantage of delivering a model-specification syntax that is instantly familiar to the majority of analysts working in pharmacometrics and related fields.

Overall model structure

Model specifications for nlmixr are written using functions containing ini and model blocks. These functions can be called anything, but must contain these two components. Let’s look at a very simple one-compartment model with no covariates.
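For reference, here is the same annotated structure used earlier in this vignette: a function containing an ini block followed by a model block.

one.cmt <- function() {
    ini({
        ## Initial estimates for the population parameters,
        ## between-subject variability (ETAs) and residual error
        tka <- 0.45 # Log Ka
        tcl <- 1 # Log Cl
        tv <- 3.45; label("log V")
        eta.ka ~ 0.6
        eta.cl ~ 0.3
        eta.v ~ 0.1
        add.err <- 0.7
    })
    model({
        ## The model is written in terms of the parameters defined above
        ka <- exp(tka + eta.ka)
        cl <- exp(tcl + eta.cl)
        v <- exp(tv + eta.v)
        linCmt() ~ add(add.err)
    })
}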

The ini block

The ini block specifies initial conditions, including initial estimates and boundaries for those algorithms which support them (currently, the built-in nlme and saem methods do not). Nomenclature is similar to that used in NONMEM, Monolix and other similar packages. In the NONMEM world, the ini block is analogous to $THETA, $OMEGA and $SIGMA blocks.

As shown in the above example:

  • Simple parameter values are specified using an R-compatible assignment.
  • Boundaries may be specified by c(lower, est, upper), as in the sketch after this list.
  • Like NONMEM, c(lower, est) is equivalent to c(lower, est, Inf).
  • Also like NONMEM, c(est) does not specify a lower bound, and is equivalent to specifying the parameter without using R’s c() function.
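A minimal sketch of these possibilities (the parameter names and values are illustrative only):

ini({
    tcl <- 1               # simple assignment: initial estimate only
    tv  <- c(2, 3.45, 5)   # c(lower, est, upper)
    tka <- c(0, 0.45)      # c(lower, est); the upper bound defaults to Inf
})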

These parameters can be named using almost any R-compatible name. Please note that:

  • Residual error estimates should be coded as population estimates (i.e. using = or <-, not ~).
  • Variable names that start with _ are not supported. Note that R does not allow variables starting with _ to be assigned without quoting them.
  • Naming variables that start with rx_ or nlmixr_ is not allowed, since RxODE and nlmixr use these prefixes internally for certain estimation routines and for calculating residuals.
  • Variable names are case-sensitive, just like they are in R. CL is not the same as Cl.

In nonlinear mixed-effects models, multivariate normal individual deviations from the population parameters are estimated (in NONMEM these are called “ETA” parameters). Additionally, the variance/covariance matrix of these deviations (in NONMEM, the “OMEGA” matrix) is also estimated. These also take initial estimates. In nlmixr, they are specified by the ~ operator, which is typically used in R statistical models to mean “modeled by”, and was chosen to distinguish these estimates from the population and residual-error parameters.

Continuing from the prior example, we can annotate the estimates for the between-subject error distribution…
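Here is a sketch, with illustrative values, in which eta.cl and eta.v share a correlated (block) structure while eta.ka keeps a simple variance:

ini({
    tka <- 0.45; label("Log Ka")
    tcl <- 1; label("Log Cl")
    tv <- 3.45; label("Log V")
    ## Simple variance for eta.ka
    eta.ka ~ 0.6
    ## Correlated eta.cl and eta.v: the lower triangle of their
    ## variance-covariance block, i.e. var(cl), cov(cl, v), var(v)
    eta.cl + eta.v ~ c(0.3,
                       0.005, 0.1)
    add.err <- 0.7
})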

As shown in the above example:

  • Simple variances are specified by the variable name and the estimate, separated by ~.
  • Correlated parameters are specified as a sum of the variable names on the left-hand side of ~, with the lower triangular elements of their variance-covariance matrix given on the right-hand side.
  • The initial estimates are specified on the variance scale; in analogy with NONMEM, the square roots of the diagonal elements correspond approximately to coefficients of variation when used in the exponential IIV implementation.

Currently, comments inside the lower triangular matrix are not allowed.

The model block

The model block specifies the model, and is analogous to the $PK, $PRED and $ERROR blocks in NONMEM.

Once the initialization block has been defined, you can define a model in terms of the variables defined in the ini block. You can also mix RxODE blocks into the model if needed.

The current method of defining an nlmixr model is to specify the parameters first, and then any required RxODE lines.
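Continuing the annotated example, a sketch of the model block for the one-compartment ODE model shown earlier:

model({
    ## Individual parameters, defined from the population parameters
    ## and the ETAs declared in the ini block
    ka <- exp(tka + eta.ka)
    cl <- exp(tcl + eta.cl)
    v <- exp(tv + eta.v)
    ## RxODE-style differential equations
    d/dt(depot) = -ka * depot
    d/dt(center) = ka * depot - cl / v * center
    ## Concentration and residual-error model
    cp = center / v
    cp ~ add(add.err)
})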

A few points to note:

  • Parameters are defined before the differential equations. Currently, defining the differential equations directly in terms of the population parameters is not supported.
  • The differential equations, parameters and error terms are all in a single block, instead of multiple sections.
  • State names and calculated variables also cannot start with either rx_ or nlmixr_, since these prefixes are used internally in some estimation routines.
  • Errors are specified using the tilde, ~. Currently you can use either add(parameter) for additive error, prop(parameter) for proportional error, or add(parameter1) + prop(parameter2) for combined additive and proportional error. You can also specify norm(parameter) for additive error, since it follows a normal distribution.
  • Some routines, like saem, require parameters to be expressed in terms of Pop.Parameter + Individual.Deviation.Parameter + Covariate*Covariate.Parameter. The order of these terms does not matter. This is similar to NONMEM’s mu-referencing, though not as restrictive. It means that for saem, a parameterization of the form Cl <- Cl*exp(eta.Cl) is not allowed (see the sketch after this list).
  • The type of parameter in the model is determined by the ini block; covariates used in the model are not included in the ini block. These variables need to be present in the modeling dataset for the model to run.
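To illustrate the mu-referencing requirement above, a sketch (wt.cl and WT stand in for a hypothetical covariate effect and covariate):

## Accepted by saem: population parameter, ETA and covariate effect
## combine additively inside the transformation (mu-referenced)
cl <- exp(tcl + eta.cl + wt.cl * WT)

## Not accepted by saem: the population parameter multiplies the ETA term
cl <- tcl * exp(eta.cl)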

Running models

Models can be fitted in several ways, including via the magrittr forward-pipe operator (%>%).

fit <- nlmixr(one.compartment) %>% saem.fit(data=theo_sd)
fit2 <- nlmixr(one.compartment, data=theo_sd, est="saem")
fit3 <- one.compartment %>% saem.fit(data=theo_sd)

Options to the estimation routines can be specified using nlmeControl for nlme estimation:

fit4 <- nlmixr(one.compartment, theo_sd, est="nlme", control=nlmeControl(pnlsTol=0.5))

where the available options are described in the nlme documentation. Options for saem can be specified using saemControl:

fit5 <- nlmixr(one.compartment, theo_sd, est="saem", control=saemControl(n.burn=250, n.em=350, print=50))

This example specifies 250 burn-in iterations, 350 EM iterations, and progress printed every 50 iterations.

Model syntax for solved PK systems

Solved PK systems are also currently supported by nlmixr with the ‘linCmt()’ pseudo-function. An annotated example of a solved system is below:
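(This sketch is written to be consistent with the nlmixr(f) output shown under “Checking model syntax”; the initial estimate for prop.err is assumed, since it does not appear in that output.)

f <- function() {
    ini({
        lCl <- 1.6      # log Cl (L/hr)
        lVc <- log(90)  # log Vc (L)
        lKA <- 0.1      # log Ka (1/hr)
        prop.err <- 0.2 # proportional error (initial value assumed)
        eta.Cl ~ 0.1    ## BSV Cl
        eta.Vc ~ 0.1    ## BSV Vc
        eta.KA ~ 0.1    ## BSV Ka
    })
    model({
        Cl <- exp(lCl + eta.Cl)
        Vc <- exp(lVc + eta.Vc)
        KA <- exp(lKA + eta.KA)
        ## Instead of specifying the ODEs, you can use
        ## the linCmt() function to use the solved system.
        ##
        ## This function determines the type of PK solved system
        ## to use by the parameters that are defined.  In this case
        ## it knows that this is a one-compartment model with first-order
        ## absorption.
        linCmt() ~ prop(prop.err)
    })
}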

A few things to keep in mind:

  • Currently the solved systems support either oral dosing, IV dosing or IV infusion dosing, and do not allow mixing of dosing types.
  • While RxODE allows mixing of solved systems and ODEs, this has not been implemented in nlmixr yet.
  • The solved systems implemented are the one-, two- and three-compartment models, with or without first-order absorption. Each of these models supports a lag time via a tlag parameter.
  • In general, the linear compartment model determines the model structure from the parameter names. nlmixr currently knows about numbered volumes, Vc/Vp, clearances in terms of both Cl and Q/CLD, and elimination micro-constants (i.e. K12). Mixing of these parameterizations within a model is currently not supported.

Checking model syntax

After specifying the model syntax, you can check that nlmixr is interpreting it correctly by calling nlmixr() on the model function. Using the function above, we get:

nlmixr(f)
#> ▂▂ RxODE-based 1-compartment model with first-order absorption ▂▂▂▂▂▂▂▂▂▂▂▂ 
#> ── Initialization: ──────────────────────────────────────────────────────── 
#> Fixed Effects ($theta): 
#>     lCl     lVc     lKA 
#> 1.60000 4.49981 0.10000 
#> 
#> Omega ($omega): 
#>        eta.Cl eta.Vc eta.KA
#> eta.Cl    0.1    0.0    0.0
#> eta.Vc    0.0    0.1    0.0
#> eta.KA    0.0    0.0    0.1
#> ── μ-referencing ($muRefTable): ─────────────────────────────────────────── 
#> ┌─────────┬─────────┐
#> │ theta   │ eta     │
#> ├─────────┼─────────┤
#> │ lCl     │ eta.Cl  │
#> ├─────────┼─────────┤
#> │ lVc     │ eta.Vc  │
#> ├─────────┼─────────┤
#> │ lKA     │ eta.KA  │
#> └─────────┴─────────┘
#> ── Model: ───────────────────────────────────────────────────────────────── 
#>         Cl <- exp(lCl + eta.Cl)
#>         Vc = exp(lVc + eta.Vc)
#>         KA <- exp(lKA + eta.KA)
#>         ## Instead of specifying the ODEs, you can use
#>         ## the linCmt() function to use the solved system.
#>         ##
#>         ## This function determines the type of PK solved system
#>         ## to use by the parameters that are defined.  In this case
#>         ## it knows that this is a one-compartment model with first-order
#>         ## absorption.
#>         linCmt() ~ prop(prop.err) 
#> ▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂

In general, this gives you information about the model (what type of solved system or RxODE model it is), the initial estimates, and the code for the model block.