The article by Dong and Buxton (2006) describes a Markov model for the assessment of a new Computer-Assisted Surgery (CAS) technology for Total Knee Replacement (TKR). This vignette follows their model to calculate the cost-effectiveness of conventional and computer-assisted TKR surgery using rdecision.
The authors present both a point estimate and a probabilistic interpretation. Both approaches are shown here to highlight the similarities and differences when using rdecision.
The patient journey is represented by 9 possible states, with each patient assigned to exactly one state at any time:
states <- c(
  A = "TKR operation for knee problems",
  B = "TKR with serious complications",
  C = "TKR with minor complications",
  D = "Normal health after primary TKR",
  E = "Complex revision",
  F = "Simple revision",
  G = "Other treatments",
  H = "Normal health after TKR revision",
  I = "Death"
)
To assess cost effectiveness, the model must incorporate utilities and costs for each state.
Utilities apply in a continuous manner: the longer a patient spends in a particular state, the longer a particular utility value applies. This is represented in rdecision by assigning the utility to a particular MarkovState. This ensures that utilities are correctly scaled to different cycle lengths, and that the annual discount rate is applied correctly.
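As a numeric sketch of this scaling (plain Python rather than rdecision internals; `cycle_qaly` is a hypothetical helper, assuming the per-cycle contribution is utility multiplied by cycle length in years):

```python
# Sketch: QALY accrued by occupying a state for one cycle, under the
# assumption that the contribution is utility * cycle length (in years).
def cycle_qaly(utility, cycle_days):
    return utility * (cycle_days / 365.25)

# One month in state D ("Normal health after primary TKR", utility 0.78)
q_month = cycle_qaly(0.78, 365.25 / 12)  # 0.78 / 12 = 0.065
# The same utility over a one-year cycle accrues twelve times as much
q_year = cycle_qaly(0.78, 365.25)
```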
utility_A <- 0.72
utility_B <- 0.35
utility_C <- 0.66
utility_D <- 0.78
utility_E <- 0.51
utility_F <- 0.66
utility_G <- 0.72
utility_H <- 0.68
utility_I <- 0.00
Costs, in this case, represent one-time expenses associated with the delivery of a surgical procedure, such as a TKR operation or a revision. As such, they do not depend on the time the patient spends in a particular state, and are represented by a cost assigned to a Transition. Ongoing costs that apply for as long as the patient remains in a state, for example medication, would instead be assigned to the relevant MarkovState. The costs associated with revisions and further treatments (states E, F and G) can be assigned to the transitions from any other state into these states. For the cost of the primary TKR operation (state A), the Markov model is designed with no incoming transitions to the state, so the cost of this operation can be included in the model by linking it to the transitions from A to other states. When applying costs to transitions, it is important to consider the possible transitions and ensure that no "double-counting" occurs (for example by transitions that return to the same state).
cost_A <- 5197.0
cost_B <- 0.0
cost_C <- 0.0
cost_D <- 0.0
cost_E <- 7326.0
cost_F <- 6234.0
cost_G <- 2844.0
cost_H <- 0.0
cost_I <- 0.0
cost_CAS <- 235.0
The initial deterministic analysis uses numerical best estimates for each parameter (cost, utility, transition probability etc.). This results in a model with an exact solution, but no estimate of the uncertainty, or of the impact of individual parameters on the uncertainty of the outcome.
The Markov state transition model specifies which transitions are permitted between states. Each of the 9 states is represented in rdecision by a MarkovState object, with a description, a state cost and a state utility. Each allowed transition is represented by a Transition object, defined by the source state, the target state and a transition cost. In all cases, cost defaults to 0.0 if not set, and utility to 1.0.
In their model, Dong and Buxton incorporate the cost of the CAS technology as an additional cost attached to interventions; in this case, this would affect cost_A, cost_E and cost_F (but not cost_G, which represents non-surgical procedures). Utilities are unchanged and therefore the same Markov state definitions can be used by both models.
# Markov states
sA <- MarkovState$new(states["A"], utility = utility_A)
sB <- MarkovState$new(states["B"], utility = utility_B)
sC <- MarkovState$new(states["C"], utility = utility_C)
sD <- MarkovState$new(states["D"], utility = utility_D)
sE <- MarkovState$new(states["E"], utility = utility_E)
sF <- MarkovState$new(states["F"], utility = utility_F)
sG <- MarkovState$new(states["G"], utility = utility_G)
sH <- MarkovState$new(states["H"], utility = utility_H)
sI <- MarkovState$new(states["I"], utility = utility_I)
States <- list(sA, sB, sC, sD, sE, sF, sG, sH, sI)
# Transitions
tAD <- Transition$new(sA, sD, cost = cost_A)
tAC <- Transition$new(sA, sC, cost = cost_A)
tAB <- Transition$new(sA, sB, cost = cost_A)
tBC <- Transition$new(sB, sC)
tBE <- Transition$new(sB, sE, cost = cost_E)
tBF <- Transition$new(sB, sF, cost = cost_F)
tBG <- Transition$new(sB, sG, cost = cost_G)
tCB <- Transition$new(sC, sB)
tCD <- Transition$new(sC, sD)
tCF <- Transition$new(sC, sF, cost = cost_F)
tCG <- Transition$new(sC, sG, cost = cost_G)
tCC <- Transition$new(sC, sC)
tDC <- Transition$new(sD, sC)
tDB <- Transition$new(sD, sB)
tDD <- Transition$new(sD, sD)
tEB <- Transition$new(sE, sB)
tEH <- Transition$new(sE, sH)
tFB <- Transition$new(sF, sB)
tFC <- Transition$new(sF, sC)
tFG <- Transition$new(sF, sG, cost = cost_G)
tFH <- Transition$new(sF, sH)
tGB <- Transition$new(sG, sB)
tGC <- Transition$new(sG, sC)
tGF <- Transition$new(sG, sF, cost = cost_F)
tGD <- Transition$new(sG, sD)
tHE <- Transition$new(sH, sE, cost = cost_E)
tHF <- Transition$new(sH, sF, cost = cost_F)
tHH <- Transition$new(sH, sH)
tBI <- Transition$new(sB, sI)
tCI <- Transition$new(sC, sI)
tDI <- Transition$new(sD, sI)
tEI <- Transition$new(sE, sI)
tFI <- Transition$new(sF, sI)
tGI <- Transition$new(sG, sI)
tHI <- Transition$new(sH, sI)
tII <- Transition$new(sI, sI)
# Transitions incorporating CAS cost
tAD_CAS <- Transition$new(sA, sD, cost = cost_A + cost_CAS)
tAC_CAS <- Transition$new(sA, sC, cost = cost_A + cost_CAS)
tAB_CAS <- Transition$new(sA, sB, cost = cost_A + cost_CAS)
tBE_CAS <- Transition$new(sB, sE, cost = cost_E + cost_CAS)
tBF_CAS <- Transition$new(sB, sF, cost = cost_F + cost_CAS)
tCF_CAS <- Transition$new(sC, sF, cost = cost_F + cost_CAS)
tGF_CAS <- Transition$new(sG, sF, cost = cost_F + cost_CAS)
tHE_CAS <- Transition$new(sH, sE, cost = cost_E + cost_CAS)
tHF_CAS <- Transition$new(sH, sF, cost = cost_F + cost_CAS)
Transitions_base <- list(
  tAD, tAC, tAB, tBC, tBE, tBF, tBG, tCB, tCD, tCF, tCG, tCC,
  tDC, tDB, tDD, tEB, tEH, tFB, tFC, tFG, tFH, tGB, tGC, tGF,
  tGD, tHE, tHF, tHH, tBI, tCI, tDI, tEI, tFI, tGI, tHI, tII
)
Transitions_CAS <- list(
  tAD_CAS, tAC_CAS, tAB_CAS, tBC, tBE_CAS, tBF_CAS, tBG, tCB, tCD, tCF_CAS,
  tCG, tCC, tDC, tDB, tDD, tEB, tEH, tFB, tFC, tFG, tFH, tGB, tGC, tGF_CAS,
  tGD, tHE_CAS, tHF_CAS, tHH, tBI, tCI, tDI, tEI, tFI, tGI, tHI, tII
)
To fully define a semi-Markov model, we assign the States, the Transitions, the duration of a cycle (as a difftime object) and, optionally, the annual discount.cost and discount.utility.
SMM_base <- SemiMarkovModel$new(
  V = States, E = Transitions_base,
  tcycle = as.difftime(365.25 / 12L, units = "days"), # cycles in months
  discount.cost = 0.035,
  discount.utility = 0.035
)
SMM_CAS <- SemiMarkovModel$new(
  V = States, E = Transitions_CAS,
  tcycle = as.difftime(365.25 / 12L, units = "days"), # cycles in months
  discount.cost = 0.035,
  discount.utility = 0.035
)
pander::pander(
  SMM_base$tabulate_states()[, c("Name", "Utility")], justify = "ll"
)
Name | Utility |
---|---|
Simple revision | 0.66 |
TKR operation for knee problems | 0.72 |
Complex revision | 0.51 |
Normal health after TKR revision | 0.68 |
TKR with minor complications | 0.66 |
TKR with serious complications | 0.35 |
Other treatments | 0.72 |
Normal health after primary TKR | 0.78 |
Death | 0 |
The SemiMarkovModel object can be represented graphically in the DOT format using the as_DOT() method. The DiagrammeR package can then be used to plot DOT files with the grViz function.
DiagrammeR::grViz(SMM_base$as_DOT(rankdir = "TB", width = 7.0, height = 7.0))
For each allowed transition between a pair of states, probabilities are reported in Table 2. These are added to the model using the set_probabilities method, which takes as input a numerical matrix of size N(states) x N(states).
For death outcomes, the probabilities are reported for primary TKR, TKR revision and all reasons: for each state, the relevant probability of death (transition to state I) will therefore be death(all reasons) + death(primary TKR or TKR revision). Note that Table 5 reports the death rates for TKR-specific death (primary or revision), which can be recovered by introducing separate Markov states in the model to represent the 3 possible causes of death, each with relevant transitions.
The sum of all outgoing transition probabilities from each state must be 1, so one transition per state should be calculated as 1 - sum(all other transitions). This is also verified by the set_probabilities method, which will return an error if the probability matrix does not have row-wise sums of 1. Alternatively, exactly one cell in each row can be left undefined as NA, and it will automatically be populated to ensure a row-wise total of 1.
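The row-completion rule can be sketched as follows (a plain-Python illustration, not rdecision's implementation; `complete_row` is a hypothetical helper):

```python
# Sketch: fill the single undefined (None) entry of a probability row so
# that the row sums to 1, mirroring the NA-completion behaviour described.
def complete_row(row):
    missing = [i for i, p in enumerate(row) if p is None]
    assert len(missing) == 1, "exactly one entry per row may be undefined"
    total = sum(p for p in row if p is not None)
    assert total <= 1.0, "defined probabilities must not exceed 1"
    row = list(row)
    row[missing[0]] = 1.0 - total
    return row

# Row A: transitions to B and C, with the remainder going to D
row_a = complete_row([0.0, 0.01495, 0.04285, None, 0.0, 0.0, 0.0, 0.0, 0.0])
# row_a[3] is 1 - 0.01495 - 0.04285, i.e. about 0.9422
```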
# Death
p_death_all <- 0.00341
p_death_primary <- 0.00046
p_death_revision <- 0.00151

# Transitions
p_AtoB <- 0.01495
p_AtoC <- 0.04285
p_AtoD <- NA # alternatively: 1 - p_AtoB - p_AtoC
p_BtoC <- 0.01385
p_BtoE <- 0.02469
p_BtoF <- 0.00523
p_BtoI <- p_death_all + p_death_primary
p_BtoG <- NA # alternatively: 1 - p_BtoC - p_BtoE - p_BtoF - p_BtoI
p_CtoB <- 0.00921
p_CtoC <- 0.02505
p_CtoF <- 0.00250
p_CtoG <- 0.01701
p_CtoI <- p_death_all + p_death_primary
p_CtoD <- NA # alternatively: 1 - p_CtoB - p_CtoC - p_CtoF - p_CtoG - p_CtoI
p_DtoB <- 0.00921
p_DtoC <- 0.01385
p_DtoI <- p_death_all + p_death_primary
p_DtoD <- NA # alternatively: 1 - p_DtoB - p_DtoC - p_DtoI
p_EtoB <- 0.02545
p_EtoI <- p_death_all + p_death_revision
p_EtoH <- NA # alternatively: 1 - p_EtoB - p_EtoI
p_FtoB <- 0.01590
p_FtoC <- 0.00816
p_FtoG <- 0.01701
p_FtoI <- p_death_all + p_death_revision
p_FtoH <- NA # alternatively: 1 - p_FtoB - p_FtoC - p_FtoG - p_FtoI
p_GtoB <- 0.00921
p_GtoC <- 0.01385
p_GtoF <- 0.00250
p_GtoI <- p_death_all + p_death_primary
p_GtoD <- NA # alternatively: 1 - p_GtoB - p_GtoC - p_GtoF - p_GtoI
p_HtoE <- 0.02003
p_HtoF <- 0.01038
p_HtoI <- p_death_all + p_death_revision
p_HtoH <- NA # alternatively: 1 - p_HtoE - p_HtoF - p_HtoI
p_ItoI <- 1.0
# Set transition probabilities
Pt <- matrix(c(
  0L, p_AtoB, p_AtoC, p_AtoD, 0L, 0L, 0L, 0L, 0L,
  0L, 0L, p_BtoC, 0L, p_BtoE, p_BtoF, p_BtoG, 0L, p_BtoI,
  0L, p_CtoB, p_CtoC, p_CtoD, 0L, p_CtoF, p_CtoG, 0L, p_CtoI,
  0L, p_DtoB, p_DtoC, p_DtoD, 0L, 0L, 0L, 0L, p_DtoI,
  0L, p_EtoB, 0L, 0L, 0L, 0L, 0L, p_EtoH, p_EtoI,
  0L, p_FtoB, p_FtoC, 0L, 0L, 0L, p_FtoG, p_FtoH, p_FtoI,
  0L, p_GtoB, p_GtoC, p_GtoD, 0L, p_GtoF, 0L, 0L, p_GtoI,
  0L, 0L, 0L, 0L, p_HtoE, p_HtoF, 0L, p_HtoH, p_HtoI,
  0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, p_ItoI
), nrow = 9L, byrow = TRUE, dimnames = list(
  source = states[LETTERS[1L:9L]], target = states[LETTERS[1L:9L]]
))
SMM_base$set_probabilities(Pt)
The CAS technology affects patient outcomes by reducing the probability of serious complications by about 34%. This applies to all transition probabilities into state B, which correspond to column 2 in the transition matrix. Note that this also requires adjusting the row-wise sum to 1, by increasing the probability of making a transition to another state. It is assumed the states with increased transition probabilities are those defined as NA in matrix Pt. The function txeffect adjusts the transition probability matrix, given a relative risk.
txeffect <- function(Pt, rr) {
  # copy transition matrix
  pr <- Pt
  # calculate reduced probability of transition to state B
  derisk <- states[["B"]]
  dpb <- pr[, derisk] * rr
  # reduce the probability of making a transition to state B and increase the
  # probability of making a transition elsewhere by the same amount
  uprisks <- c("D", "G", "D", "D", "H", "H", "D", "H", "I")
  for (i in 1L:9L) {
    s <- states[[i]]
    pr[[s, derisk]] <- pr[[s, derisk]] - dpb[[i]]
    uprisk <- states[[uprisks[[i]]]]
    pr[[s, uprisk]] <- pr[[s, uprisk]] + dpb[[i]]
  }
  return(pr)
}
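The same adjustment can be checked numerically outside R (a Python re-statement of the logic described above, with the hypothetical name `tx_effect`; not the vignette's function): moving probability mass out of the state-B column into a designated column leaves every row sum unchanged.

```python
# Sketch: apply a relative risk rr to column b_col of a transition matrix,
# moving the removed probability into a designated 'uprisk' column per row.
def tx_effect(pt, rr, b_col, uprisk_cols):
    out = [list(row) for row in pt]
    for i, row in enumerate(out):
        delta = row[b_col] * rr
        row[b_col] -= delta          # reduced risk of serious complications
        row[uprisk_cols[i]] += delta  # compensating increase elsewhere
    return out

# Toy 3-state matrix; each row sums to 1
pt = [[0.8, 0.15, 0.05],
      [0.1, 0.30, 0.60],
      [0.0, 0.20, 0.80]]
adj = tx_effect(pt, 0.34, b_col=1, uprisk_cols=[2, 2, 2])
# every row of adj still sums to 1, and column 1 entries shrink by 34%
```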
The effect is applied to the transition matrix for conventional surgery to get the transition matrix for computer-assisted surgery, as follows.
# apply CAS_effect to the transition matrix
CAS_effect <- 0.34
Pt_CAS <- txeffect(Pt, CAS_effect)
SMM_CAS$set_probabilities(Pt_CAS)
The simulation described in the paper starts with 1000 patients and progresses for 10 years, or 120 one-month cycles. At the start of the model, all patients are in state A. The initial state of the simulation can be entered into the model by applying the reset function with a named array describing the number of simulated patients for each named state (in this case, 1000 in state A and none in the other states).
The cycles function simulates the progress of the initial population through the specified number of cycles and, optionally, can apply half-cycle correction to population and cost (hcc.pop and hcc.cost). In this example, half-cycle correction is not used.
# create starting populations
N <- 1000L
populations <- c(N, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L)
names(populations) <- states
# run 120 one-month cycles for both models
SMM_base$reset(populations)
SMM_base_10years <- SMM_base$cycles(
  ncycles = 120L, hcc.pop = FALSE, hcc.cost = FALSE
)
SMM_CAS$reset(populations)
SMM_CAS_10years <- SMM_CAS$cycles(
  ncycles = 120L, hcc.pop = FALSE, hcc.cost = FALSE
)
Running the cycles function returns a data frame with the population in each state, the progress of the simulation (in cycles and years), the total cost (including both transition and occupancy costs), and the calculated QALYs. Costs and QALYs are discounted by the rates specified in the model definition and are normalised by the initial population.
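The discounting itself follows the standard present-value formula (a generic sketch assuming discrete annual-rate discounting; not rdecision's internal code): a value accruing at time t years is divided by (1 + rate)^t.

```python
# Sketch: discount a value accruing at time t_years at an annual rate,
# using the standard present-value formula value / (1 + rate) ** t.
def discount(value, t_years, rate=0.035):
    return value / (1.0 + rate) ** t_years

# A cost of 100 incurred 10 years into the model, at the 3.5% rate above
d = discount(100.0, 10.0)  # about 70.89
```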
The table below displays the first few cycles of the model, showing the transition of patients from state A at cycle 0 (starting state) through the 3 allowed transitions to states B, C and D, and subsequently to further states depending on their assigned transition probabilities. Each cycle accrues a cost (the cost of TKR surgery is attached here to cycle 1, as it was assigned to the transitions out of A) and this, with the utilities associated with each state, can be used to calculate the QALY.
Cycle | Years | Cost | QALY | Simple revision | TKR operation for knee problems | Complex revision | Normal health after TKR revision | TKR with minor complications | TKR with serious complications | Other treatments | Normal health after primary TKR | Death |
---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0 | 0 | 0 | 0 | 1000 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
1 | 0.08333 | 5182 | 0.06193 | 0 | 0 | 0 | 0 | 42.85 | 14.95 | 0 | 942.2 | 0 |
2 | 0.1667 | 46.16 | 0.06384 | 0.1853 | 0 | 0.3691 | 0 | 14.33 | 9.072 | 14.97 | 957.2 | 3.87 |
3 | 0.25 | 27.43 | 0.06363 | 0.1207 | 0 | 0.224 | 0.5347 | 13.95 | 9.098 | 8.887 | 959.5 | 7.726 |
4 | 0.3333 | 27.42 | 0.06321 | 0.1102 | 0 | 0.2353 | 0.8481 | 13.89 | 9.055 | 8.904 | 955.4 | 11.57 |
This progression can also be visualised using the base graphics function barplot:
Dong and Buxton (2006) report results yearly (corresponding to cycles 12, 24, 36 etc.) with some cumulative values. The cycles function of SemiMarkovModel returns a Markov trace (state occupancies at the end of each cycle, with per-cycle costs and QALYs) in the form of a data frame. The Markov trace can be converted to their Table 4 form via the function as_table4.
# convert a Markov trace (matrix) with monthly cycles to an annual summary
# matrix with cumulative values, as per Dong and Buxton, Table 4
as_table4 <- function(m_trace) {
  # calculate cumulative event counts
  m_cum <- apply(m_trace, 2L, cumsum)
  # create annual summary table
  m_t4 <- matrix(
    data = c(
      m_trace[, "Years"],
      m_cum[, "TKR with serious complications"] / 10L,
      m_cum[, "TKR with minor complications"] / 10L,
      m_cum[, "Complex revision"] / 10L,
      m_cum[, "Simple revision"] / 10L,
      m_trace[, "Death"] / 10L,
      m_cum[, "Cost"] * 1000L,
      m_cum[, "QALY"] * 1000L
    ),
    dimnames = list(
      NULL,
      c(
        "Year",
        "Cumulative serious complication (%)",
        "Cumulative minor complication (%)",
        "Cumulative complex revision (%)",
        "Cumulative simple revision (%)",
        "Cumulative death (%)",
        "Discounted costs (£)",
        "Discounted QALYs"
      )
    ),
    nrow = 121L, ncol = 8L
  )
  # return in table 4 format
  yearly <- (1L:10L) * 12L + 1L
  return(m_t4[yearly, ])
}
Replication of their Table 4 is given in the next two sections. Note that these figures do not exactly match the published figures for costs and QALYs. This might be due to a different application of the discount rate, or to rounding errors associated with the representation of currency in Excel. Small differences in the CAS calculations can probably be traced to rounding of the 34% CAS effect figure.
t4_CS <- as_table4(SMM_base_10years)
Year | Cumulative serious complication (%) | Cumulative minor complication (%) | Cumulative complex revision (%) | Cumulative simple revision (%) | Cumulative death (%) | Discounted costs (£) | Discounted QALYs |
---|---|---|---|---|---|---|---|
1 | 11.33 | 19.40 | 0.29 | 0.14 | 4.18 | 5,497,782 | 743.2 |
2 | 21.55 | 35.09 | 0.66 | 0.32 | 8.54 | 5,804,816 | 1,430.8 |
3 | 31.28 | 50.00 | 1.08 | 0.52 | 12.71 | 6,094,488 | 2,064.7 |
4 | 40.54 | 64.18 | 1.56 | 0.76 | 16.69 | 6,367,468 | 2,648.9 |
5 | 49.34 | 77.67 | 2.08 | 1.02 | 20.49 | 6,624,444 | 3,187.4 |
6 | 57.71 | 90.50 | 2.64 | 1.30 | 24.13 | 6,866,113 | 3,683.7 |
7 | 65.68 | 102.70 | 3.23 | 1.59 | 27.60 | 7,093,174 | 4,141.0 |
8 | 73.26 | 114.30 | 3.84 | 1.90 | 30.91 | 7,306,325 | 4,562.6 |
9 | 80.47 | 125.34 | 4.48 | 2.22 | 34.08 | 7,506,253 | 4,951.0 |
10 | 87.32 | 135.83 | 5.13 | 2.55 | 37.10 | 7,693,634 | 5,309.0 |
t4_CAS <- as_table4(SMM_CAS_10years)
Year | Cumulative serious complication (%) | Cumulative minor complication (%) | Cumulative complex revision (%) | Cumulative simple revision (%) | Cumulative death (%) | Discounted costs (£) | Discounted QALYs |
---|---|---|---|---|---|---|---|
1 | 7.50 | 19.41 | 0.19 | 0.11 | 4.18 | 5,630,136 | 744.7 |
2 | 14.28 | 35.12 | 0.44 | 0.24 | 8.54 | 5,838,261 | 1,433.9 |
3 | 20.73 | 50.07 | 0.73 | 0.40 | 12.70 | 6,035,213 | 2,069.2 |
4 | 26.88 | 64.31 | 1.06 | 0.57 | 16.68 | 6,221,358 | 2,654.9 |
5 | 32.74 | 77.86 | 1.42 | 0.76 | 20.48 | 6,397,084 | 3,194.8 |
6 | 38.31 | 90.77 | 1.81 | 0.97 | 24.11 | 6,562,793 | 3,692.5 |
7 | 43.62 | 103.05 | 2.22 | 1.18 | 27.58 | 6,718,897 | 4,151.3 |
8 | 48.67 | 114.75 | 2.65 | 1.41 | 30.88 | 6,865,813 | 4,574.2 |
9 | 53.48 | 125.89 | 3.09 | 1.64 | 34.04 | 7,003,958 | 4,964.0 |
10 | 58.06 | 136.49 | 3.55 | 1.89 | 37.06 | 7,133,747 | 5,323.3 |
PSA adds a randomisation-based process to estimate not only the central value of the estimate, but also its uncertainty. This is done by representing individual parameters not as single numerical values but as distributions, from which a range of possible values can be extracted at random. The choice of distribution type and parametrisation depend on the type of variable and the values reported in the literature or observed in a relevant patient cohort.
Utilities are constrained to values between 0 and 1, and have been described with a Beta distribution:
A Beta function, with the mean equal to the point estimate and a high variance to reflect the uncertainty, was used to generate a random utility for each Markov state
The specific parameters for the distributions are not reported in the paper, which instead reports a point estimate and a range (95% CI) for each utility distribution.
The Beta distribution can be parametrised using mean \(\mu\) and sample size \(\nu\) (which relate to the more common parameters as \(\alpha = \mu * \nu\) and \(\beta = (1 - \mu) * \nu\)), using the point estimate as the mean and adjusting the sample size numerically until the confidence intervals match the reported range. Not all the reported ranges map to a valid Beta distribution with the given mean, suggesting that the authors used a different method to derive their Beta distributions. Note also that the authors report that
To ensure a plausible relationship between the utilities of states “Normal health after primary TKR,” “TKR with minor complications,” and “TKR with serious complications,” the process was structured so that the hierarchical relationship was retained but the differences between the utilities were randomly drawn from the distribution.
but there is no description of the method used and/or how the resulting values are constrained to the [0, 1] interval.
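The mean/sample-size parametrisation can be illustrated with a short numeric sketch (plain Python, not vignette code; `beta_params` is a hypothetical helper):

```python
# Sketch: Beta distribution parametrised by mean mu and sample size nu,
# with alpha = mu * nu and beta = (1 - mu) * nu.
def beta_params(mu, nu):
    return mu * nu, (1.0 - mu) * nu

alpha, beta = beta_params(0.72, 0.42)  # utility of state A
# alpha = 0.3024 and beta = 0.1176; the mean alpha / (alpha + beta)
# recovers the point estimate 0.72
mean = alpha / (alpha + beta)
```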
utility_A_nu <- 0.42
utility_B_nu <- 0.80
utility_C_nu <- 0.32
utility_D_nu <- 0.34
utility_E_nu <- 0.55
utility_F_nu <- 0.57
utility_G_nu <- 0.34
utility_H_nu <- 0.38
These parameters can now be used to describe the distributions from which each utility value should be drawn. This is represented in rdecision by a ModVar variable, in this case specifically the BetaModVar child class. This type of ModVar requires the alpha and beta parameters, which can be calculated according to the equations above. Note that for state I (Death), there is no need to define a ModVar, as the utility of this state is not associated with any uncertainty.
utility_A_beta <- BetaModVar$new(
  "Utility of state A", "",
  utility_A * utility_A_nu, (1.0 - utility_A) * utility_A_nu
)
utility_B_beta <- BetaModVar$new(
  "Utility of state B", "",
  utility_B * utility_B_nu, (1.0 - utility_B) * utility_B_nu
)
utility_C_beta <- BetaModVar$new(
  "Utility of state C", "",
  utility_C * utility_C_nu, (1.0 - utility_C) * utility_C_nu
)
utility_D_beta <- BetaModVar$new(
  "Utility of state D", "",
  utility_D * utility_D_nu, (1.0 - utility_D) * utility_D_nu
)
utility_E_beta <- BetaModVar$new(
  "Utility of state E", "",
  utility_E * utility_E_nu, (1.0 - utility_E) * utility_E_nu
)
utility_F_beta <- BetaModVar$new(
  "Utility of state F", "",
  utility_F * utility_F_nu, (1.0 - utility_F) * utility_F_nu
)
utility_G_beta <- BetaModVar$new(
  "Utility of state G", "",
  utility_G * utility_G_nu, (1.0 - utility_G) * utility_G_nu
)
utility_H_beta <- BetaModVar$new(
  "Utility of state H", "",
  utility_H * utility_H_nu, (1.0 - utility_H) * utility_H_nu
)
Costs can take any non-negative value, and have been described with a Gamma distribution:
Variance for the Gamma distribution was estimated from the ranges. A Gamma function was used to generate a random cost for each Markov state.
These distributions are also reported as means and 95% CI. In this case, the means of the Gamma distributions are sufficiently far from 0 that the approximation to a Gaussian distribution is acceptable to estimate the value of the standard deviation.
The GammaModVar class requires the parameters shape and scale, which relate to mean \(\mu\) and variance \(\sigma^2\) as \(shape = \mu^2 / \sigma^2\) and \(scale = \sigma^2 / \mu\).
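These relations can be checked with a short numeric sketch (Python, standing in for the R expressions below; `gamma_from_ci` is a hypothetical helper): the standard deviation is recovered from the reported 95% CI under the Gaussian approximation, then converted to shape and scale.

```python
# Sketch: Gamma parameters from a mean and a reported 95% CI, assuming the
# CI is approximately mean +/- 1.96 * sd (Gaussian approximation).
def gamma_from_ci(mean, lo, hi):
    sd = (hi - lo) / (2.0 * 1.96)
    shape = mean ** 2 / sd ** 2
    scale = sd ** 2 / mean
    return shape, scale

# Cost of state A: mean 5197, 95% CI (4218, 6217) as in the vignette
shape, scale = gamma_from_ci(5197.0, 4218.0, 6217.0)
# shape * scale recovers the mean, 5197.0
```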
cost_A_stdev <- (6217L - 4218L) / (2L * 1.96)
cost_E_stdev <- (11307L - 5086L) / (2L * 1.96)
cost_F_stdev <- (7972L - 5043L) / (2L * 1.96)
cost_G_stdev <- (5579L - 1428L) / (2L * 1.96)

cost_A_gamma <- GammaModVar$new(
  "Cost of state A", "", cost_A ^ 2L / cost_A_stdev ^ 2L,
  cost_A_stdev ^ 2L / cost_A
)
cost_E_gamma <- GammaModVar$new(
  "Cost of state E", "", cost_E ^ 2L / cost_E_stdev ^ 2L,
  cost_E_stdev ^ 2L / cost_E
)
cost_F_gamma <- GammaModVar$new(
  "Cost of state F", "", cost_F ^ 2L / cost_F_stdev ^ 2L,
  cost_F_stdev ^ 2L / cost_F
)
cost_G_gamma <- GammaModVar$new(
  "Cost of state G", "", cost_G ^ 2L / cost_G_stdev ^ 2L,
  cost_G_stdev ^ 2L / cost_G
)
The cost of CAS is also modelled with a Gamma distribution, but in this case the authors estimate the distribution parameters directly:
We assumed that the ratio of its standard deviation over mean was four times as large as that of conventional primary TKR
As a result, the total cost of a surgical procedure is the sum of the surgery cost and the CAS cost, each of which is independently drawn from a defined Gamma distribution. This can be easily represented in rdecision using an ExprModVar: a model variable that uses other model variables as operands. The expression needs to be quoted using the rlang::quo command so that it is stored rather than evaluated immediately.
cost_CAS_stdev <- 4L * cost_A_stdev / cost_A * cost_CAS
cost_CAS_gamma <- GammaModVar$new(
  "Cost of CAS", "", cost_CAS ^ 2L / cost_CAS_stdev ^ 2L,
  cost_CAS_stdev ^ 2L / cost_CAS
)
cost_A_CAS <- ExprModVar$new(
  "Cost of state A with CAS", "", rlang::quo(cost_A_gamma + cost_CAS_gamma)
)
cost_E_CAS <- ExprModVar$new(
  "Cost of state E with CAS", "", rlang::quo(cost_E_gamma + cost_CAS_gamma)
)
cost_F_CAS <- ExprModVar$new(
  "Cost of state F with CAS", "", rlang::quo(cost_F_gamma + cost_CAS_gamma)
)
The CAS effect in this model is a reduction in the transition probabilities to state B, and was modelled using a lognormal distribution:
Lognormal function was used to generate a random “effect of CAS.” The variance of the “effect of CAS” was estimated from the clinical trials
Because a lognormally distributed value is always positive, the resulting transition probabilities will be reduced by a non-zero fraction (note, however, that values > 100% are possible, which would result in an invalid negative transition probability).
While it is possible to reconstruct the exact parameters of a lognormal distribution given a mean and 95% CI, the article does not report a range for this variable. Because exact replication is therefore impossible, an arbitrary parametrisation is used here that is unlikely to yield an illegal value.
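As a sketch of this parametrisation (plain Python; the values mirror the arbitrary choice used below, with median 0.34 and sdlog = log(1.25)): sampled effects are always positive, and values above 1 are possible in principle but vanishingly rare here.

```python
import math
import random

# Sketch: lognormal "effect of CAS" with meanlog = log(0.34), so the
# median effect is 0.34, and sdlog = log(1.25) (arbitrary, as in the text).
meanlog, sdlog = math.log(0.34), math.log(1.25)

random.seed(1)
draws = [math.exp(random.gauss(meanlog, sdlog)) for _ in range(10000)]
# every draw is strictly positive; draws above 1 would be an invalid
# >100% risk reduction, and are extremely unlikely at this sdlog
frac_invalid = sum(d > 1.0 for d in draws) / len(draws)
```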
CAS_effect_mean <- 0.34
CAS_effect_sd <- 1.25
CAS_effect_lognorm <- LogNormModVar$new(
  "Effect of CAS", "", log(CAS_effect_mean), log(CAS_effect_sd)
)
The same structure as the model used for the deterministic solution can be used again, but in this case the costs and utilities will be represented by distributions.
# Markov states as Beta distributions
sA_PSA <- MarkovState$new(states["A"], utility = utility_A_beta)
sB_PSA <- MarkovState$new(states["B"], utility = utility_B_beta)
sC_PSA <- MarkovState$new(states["C"], utility = utility_C_beta)
sD_PSA <- MarkovState$new(states["D"], utility = utility_D_beta)
sE_PSA <- MarkovState$new(states["E"], utility = utility_E_beta)
sF_PSA <- MarkovState$new(states["F"], utility = utility_F_beta)
sG_PSA <- MarkovState$new(states["G"], utility = utility_G_beta)
sH_PSA <- MarkovState$new(states["H"], utility = utility_H_beta)
# state I has no uncertainty associated with it, so a probabilistic
# representation is not required
States_PSA <- list(
  sA_PSA, sB_PSA, sC_PSA, sD_PSA, sE_PSA, sF_PSA, sG_PSA, sH_PSA, sI
)
# Transition costs as Gamma distributions
tAD_PSA <- Transition$new(sA_PSA, sD_PSA, cost = cost_A_gamma)
tAC_PSA <- Transition$new(sA_PSA, sC_PSA, cost = cost_A_gamma)
tAB_PSA <- Transition$new(sA_PSA, sB_PSA, cost = cost_A_gamma)
tBC_PSA <- Transition$new(sB_PSA, sC_PSA)
tBE_PSA <- Transition$new(sB_PSA, sE_PSA, cost = cost_E_gamma)
tBF_PSA <- Transition$new(sB_PSA, sF_PSA, cost = cost_F_gamma)
tBG_PSA <- Transition$new(sB_PSA, sG_PSA, cost = cost_G_gamma)
tCB_PSA <- Transition$new(sC_PSA, sB_PSA)
tCD_PSA <- Transition$new(sC_PSA, sD_PSA)
tCF_PSA <- Transition$new(sC_PSA, sF_PSA, cost = cost_F_gamma)
tCG_PSA <- Transition$new(sC_PSA, sG_PSA, cost = cost_G_gamma)
tCC_PSA <- Transition$new(sC_PSA, sC_PSA)
tDC_PSA <- Transition$new(sD_PSA, sC_PSA)
tDB_PSA <- Transition$new(sD_PSA, sB_PSA)
tDD_PSA <- Transition$new(sD_PSA, sD_PSA)
tEB_PSA <- Transition$new(sE_PSA, sB_PSA)
tEH_PSA <- Transition$new(sE_PSA, sH_PSA)
tFB_PSA <- Transition$new(sF_PSA, sB_PSA)
tFC_PSA <- Transition$new(sF_PSA, sC_PSA)
tFG_PSA <- Transition$new(sF_PSA, sG_PSA, cost = cost_G_gamma)
tFH_PSA <- Transition$new(sF_PSA, sH_PSA)
tGB_PSA <- Transition$new(sG_PSA, sB_PSA)
tGC_PSA <- Transition$new(sG_PSA, sC_PSA)
tGF_PSA <- Transition$new(sG_PSA, sF_PSA, cost = cost_F_gamma)
tGD_PSA <- Transition$new(sG_PSA, sD_PSA)
tHE_PSA <- Transition$new(sH_PSA, sE_PSA, cost = cost_E_gamma)
tHF_PSA <- Transition$new(sH_PSA, sF_PSA, cost = cost_F_gamma)
tHH_PSA <- Transition$new(sH_PSA, sH_PSA)
tBI_PSA <- Transition$new(sB_PSA, sI)
tCI_PSA <- Transition$new(sC_PSA, sI)
tDI_PSA <- Transition$new(sD_PSA, sI)
tEI_PSA <- Transition$new(sE_PSA, sI)
tFI_PSA <- Transition$new(sF_PSA, sI)
tGI_PSA <- Transition$new(sG_PSA, sI)
tHI_PSA <- Transition$new(sH_PSA, sI)
# transition I to I also has no probabilistic elements

# Transitions incorporating CAS cost
tAD_CAS_PSA <- Transition$new(sA_PSA, sD_PSA, cost = cost_A_CAS)
tAC_CAS_PSA <- Transition$new(sA_PSA, sC_PSA, cost = cost_A_CAS)
tAB_CAS_PSA <- Transition$new(sA_PSA, sB_PSA, cost = cost_A_CAS)
tBE_CAS_PSA <- Transition$new(sB_PSA, sE_PSA, cost = cost_E_CAS)
tBF_CAS_PSA <- Transition$new(sB_PSA, sF_PSA, cost = cost_F_CAS)
tCF_CAS_PSA <- Transition$new(sC_PSA, sF_PSA, cost = cost_F_CAS)
tGF_CAS_PSA <- Transition$new(sG_PSA, sF_PSA, cost = cost_F_CAS)
tHE_CAS_PSA <- Transition$new(sH_PSA, sE_PSA, cost = cost_E_CAS)
tHF_CAS_PSA <- Transition$new(sH_PSA, sF_PSA, cost = cost_F_CAS)
Transitions_base_PSA <- list(
  tAD_PSA, tAC_PSA, tAB_PSA,
  tBC_PSA, tBE_PSA, tBF_PSA, tBG_PSA, tBI_PSA,
  tCB_PSA, tCD_PSA, tCF_PSA, tCG_PSA, tCC_PSA, tCI_PSA,
  tDC_PSA, tDB_PSA, tDD_PSA, tDI_PSA,
  tEB_PSA, tEH_PSA, tEI_PSA,
  tFB_PSA, tFC_PSA, tFG_PSA, tFH_PSA, tFI_PSA,
  tGB_PSA, tGC_PSA, tGF_PSA, tGD_PSA, tGI_PSA,
  tHE_PSA, tHF_PSA, tHH_PSA, tHI_PSA,
  tII
)

Transitions_CAS_PSA <- list(
  tAD_CAS_PSA, tAC_CAS_PSA, tAB_CAS_PSA,
  tBC_PSA, tBE_CAS_PSA, tBF_CAS_PSA, tBG_PSA, tBI_PSA,
  tCB_PSA, tCD_PSA, tCF_CAS_PSA, tCG_PSA, tCC_PSA, tCI_PSA,
  tDC_PSA, tDB_PSA, tDD_PSA, tDI_PSA,
  tEB_PSA, tEH_PSA, tEI_PSA,
  tFB_PSA, tFC_PSA, tFG_PSA, tFH_PSA, tFI_PSA,
  tGB_PSA, tGC_PSA, tGF_CAS_PSA, tGD_PSA, tGI_PSA,
  tHE_CAS_PSA, tHF_CAS_PSA, tHH_PSA, tHI_PSA,
  tII
)
SMM_base_PSA <- SemiMarkovModel$new(
  V = States_PSA, E = Transitions_base_PSA,
  tcycle = as.difftime(365.25 / 12L, units = "days"), # cycles in months
  discount.cost = 0.035,
  discount.utility = 0.035
)
SMM_CAS_PSA <- SemiMarkovModel$new(
  V = States_PSA, E = Transitions_CAS_PSA,
  tcycle = as.difftime(365.25 / 12L, units = "days"), # cycles in months
  discount.cost = 0.035,
  discount.utility = 0.035
)
The current model includes a number of model variables, which can be displayed using the modvar_table method. This tabulation can also be used to compare the 95% CI with the values reported in Table 3.
pander::pander(
  SMM_base_PSA$modvar_table()[, c(-2L, -5L, -9L)], justify = "llrrrr",
  round = 5L
)
Description | Distribution | Mean | SD | Q2.5 | Q97.5 |
---|---|---|---|---|---|
Cost of state A | Ga(103.861,50.038) | 5197 | 509.9 | 4246 | 6243 |
Cost of state F | Ga(69.609,89.557) | 6234 | 747.2 | 4856 | 7781 |
Cost of state E | Ga(21.31,343.781) | 7326 | 1587 | 4553 | 10748 |
Cost of state G | Ga(7.213,394.279) | 2844 | 1059 | 1163 | 5264 |
Utility of state G | Be(0.2448,0.0952) | 0.72 | 0.3879 | 5e-05 | 1 |
Utility of state E | Be(0.2805,0.2695) | 0.51 | 0.4015 | 2e-05 | 1 |
Utility of state C | Be(0.2112,0.1088) | 0.66 | 0.4123 | 0 | 1 |
Utility of state B | Be(0.28,0.52) | 0.35 | 0.3555 | 1e-05 | 0.9954 |
Utility of state D | Be(0.2652,0.0748) | 0.78 | 0.3579 | 0.00025 | 1 |
Utility of state A | Be(0.3024,0.1176) | 0.72 | 0.3768 | 0.00029 | 1 |
Utility of state H | Be(0.2584,0.1216) | 0.68 | 0.3971 | 4e-05 | 1 |
Utility of state F | Be(0.3762,0.1938) | 0.66 | 0.3781 | 0.00077 | 1 |
Note that the list of ModVars associated with the CAS model includes not only the variables that were explicitly assigned to States and Transitions, such as Utility of state A, but also the implicit ModVars underlying ExprModVars, such as Cost of CAS, which is used to calculate Cost of state A with CAS.
pander::pander(
  SMM_CAS_PSA$modvar_table()[, c(-2L, -5L, -9L)], justify = "llrrrr", round = 5L
)
Description | Distribution | Mean | SD | Q2.5 | Q97.5 |
---|---|---|---|---|---|
Cost of state E with CAS | cost_E_gamma + cost_CAS_gamma | 7561 | 1568 | 4911 | 10936 |
Cost of state E | Ga(21.31,343.781) | 7326 | 1587 | 4553 | 10748 |
Cost of CAS | Ga(6.491,36.202) | 235 | 92.24 | 90.47 | 447.3 |
Cost of state F with CAS | cost_F_gamma + cost_CAS_gamma | 6469 | 740.3 | 5157 | 8069 |
Cost of state F | Ga(69.609,89.557) | 6234 | 747.2 | 4856 | 7781 |
Cost of state G | Ga(7.213,394.279) | 2844 | 1059 | 1163 | 5264 |
Cost of state A with CAS | cost_A_gamma + cost_CAS_gamma | 5432 | 520.5 | 4437 | 6474 |
Cost of state A | Ga(103.861,50.038) | 5197 | 509.9 | 4246 | 6243 |
Utility of state C | Be(0.2112,0.1088) | 0.66 | 0.4123 | 0 | 1 |
Utility of state D | Be(0.2652,0.0748) | 0.78 | 0.3579 | 0.00025 | 1 |
Utility of state H | Be(0.2584,0.1216) | 0.68 | 0.3971 | 4e-05 | 1 |
Utility of state E | Be(0.2805,0.2695) | 0.51 | 0.4015 | 2e-05 | 1 |
Utility of state B | Be(0.28,0.52) | 0.35 | 0.3555 | 1e-05 | 0.9954 |
Utility of state G | Be(0.2448,0.0952) | 0.72 | 0.3879 | 5e-05 | 1 |
Utility of state A | Be(0.3024,0.1176) | 0.72 | 0.3768 | 0.00029 | 1 |
Utility of state F | Be(0.3762,0.1938) | 0.66 | 0.3781 | 0.00077 | 1 |
> To generate a logical multi-Markov state probabilistic transition matrix from the initial point estimates, the Dirichlet distribution was used (11). We estimated a count for each Markov state by the transition probabilities (we assumed that the total counts equalled 1,000). We used random number and Gamma distribution formulae to generate a one-parameter (standard) Gamma distribution for each cell of the transition matrix. The one-parameter Gamma distribution in Excel was obtained by setting the first (alpha) parameter equal to the estimated count and the second (beta) parameter equal to 1. The final step was to “normalize” the realizations from the Gamma distribution back to the 0–1 scale by dividing each realization through by the corresponding row total.
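The recipe quoted above works because normalising independent Gamma(count, 1) draws by their row total is exactly how a Dirichlet sample is constructed. A minimal Python sketch of that equivalence, using hypothetical counts for one state's exiting transitions (the vignette itself uses rdecision's DirichletDistribution for this):

```python
import random

def dirichlet_row(counts, rng):
    # Draw one Gamma(count, 1) variate per transition, then normalise the
    # draws to sum to 1 -- this yields a Dirichlet(counts) sample.
    draws = [rng.gammavariate(alpha, 1.0) for alpha in counts]
    total = sum(draws)
    return [d / total for d in draws]

rng = random.Random(42)
# hypothetical counts for one state's exiting transitions (total 1000)
row = dirichlet_row([850.0, 100.0, 50.0], rng)
```

Averaged over many samples, each element approaches its normalised count (0.85, 0.10, 0.05 here), while individual rows vary with the spread implied by the assumed population of 1,000.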
The Excel recipe quoted above is an approximation of a Dirichlet
distribution, constrained by the functionality available in Excel. In
rdecision, variables drawn from a multinomial probabilistic
distribution can be defined directly with the in-built
DirichletDistribution, constructed from
the known counts or proportions for each exiting transition. If using
proportions, a population should also be specified to correctly assign
variance to the probabilistic distribution; in this example, the authors
assumed a population of 1000. At each run, the distribution is
randomised (sample() method) and used to
populate the allowed transitions with numerical values (r()
method). This is performed by the dirichletify function.

dirichletify <- function(Pt, population = 1L) {
  # check argument
  stopifnot(
    is.matrix(Pt),
    is.numeric(Pt),
    nrow(Pt) == ncol(Pt),
    all(dimnames(Pt)[[2L]] == dimnames(Pt)[[1L]])
  )
  nNA <- rowSums(is.na(Pt))
  sumP <- rowSums(Pt, na.rm = TRUE)
  # check if a count or proportion representation
  is_count <- any(Pt > 1.0, na.rm = TRUE)
  if (is_count) {
    # counts cannot have NAs
    stopifnot(all(nNA == 0L))
    # normalise into proportions
    Pt <- Pt / rowSums(Pt)
  } else {
    # proportions can have NA values, but only 1/row
    stopifnot(all(nNA <= 1L))
    # store information about which variable was set as NA
    whichNA <- apply(Pt, 1L, function(row) which(is.na(row)))
    # populate missing values to a total of 1
    for (row in names(nNA)[nNA > 0L]) {
      Pt[row, whichNA[[row]]] <- 1.0 - sumP[row]
    }
  }
  # build state-wise Dirichlet distributions
  for (r in seq_len(nrow(Pt))) {
    non0 <- which(Pt[r, ] != 0.0)
    # if multiple outgoing transitions are possible, model as Dirichlet
    if (length(non0) > 1L) {
      dist <- DirichletDistribution$new(Pt[r, non0] * population)
      dist$sample() # randomise
      Pt[r, non0] <- dist$r()
    }
    # if only 1 transition is possible, leave as given originally
  }
  return(Pt)
}
The following code executes 250 runs of the model for conventional surgery and for computer-assisted surgery, each of 120 cycles. For each run, the set of model variables is sampled from their uncertainty distributions. The Markov traces for each run, in matrix form, are converted to the format used by Dong et al. in their Table 4, and added to 3D arrays which store the traces from all the runs, separately for conventional and computer-assisted surgery.
The transition probability matrix is sampled from an assumed matrix
of counts using the dirichletify
function.
In the case of CAS, there is an additional step in the simulation,
because the transition probabilities into state
B (column 2 of the transition matrix) are modified by a
multiplier, Effect of CAS, which is itself drawn from a
distribution. The relevant ModVar must therefore be
randomised at every run and applied to the transition matrix.
This is a “paired” simulation; that is, we are interested in the difference between computer-assisted surgery and conventional surgery, taking account of the variation in inputs. Many variables are common to the two models, so at each run we randomise all variables once, before running both models. This is analogous, in statistics, to a paired t-test rather than a two-sample t-test. When there are many common variables, the results of the two models will be correlated.
nruns <- 250L
t4_CS_PSA <- array(
  dim = c(10L, ncol(t4_CS), nruns),
  dimnames = list(NULL, colnames(t4_CS), NULL)
)
t4_CAS_PSA <- array(
  dim = c(10L, ncol(t4_CS), nruns),
  dimnames = list(NULL, colnames(t4_CAS), NULL)
)
for (run in seq_len(nruns)) {
  # reset populations
  SMM_base_PSA$reset(populations)
  SMM_CAS_PSA$reset(populations)
  # find unique modvars and randomise them
  mv <- unique(c(
    SMM_base_PSA$modvars(),
    SMM_CAS_PSA$modvars(),
    CAS_effect_lognorm
  ))
  for (m in mv) {
    m$set("random")
  }
  # set the transition matrix, applying the CAS effect for CAS model
  pt_cs <- dirichletify(Pt, population = 1000L)
  SMM_base_PSA$set_probabilities(pt_cs)
  pt_cas <- txeffect(pt_cs, CAS_effect_lognorm$get())
  SMM_CAS_PSA$set_probabilities(pt_cas)
  # cycle the CS model
  mtrace <- SMM_base_PSA$cycles(
    ncycles = 120L, hcc.pop = FALSE, hcc.cost = FALSE
  )
  t4 <- as_table4(as.matrix(mtrace))
  t4_CS_PSA[, , run] <- t4
  # cycle the CAS model
  mtrace <- SMM_CAS_PSA$cycles(
    ncycles = 120L, hcc.pop = FALSE, hcc.cost = FALSE
  )
  t4 <- as_table4(as.matrix(mtrace))
  t4_CAS_PSA[, , run] <- t4
}
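The benefit of pairing is that shared input variation cancels in the difference between the two arms. A small Python sketch, with made-up distributions unrelated to the model's variables, illustrates the effect on the spread of the estimated difference:

```python
import random

rng = random.Random(7)

def sd(xs):
    # sample standard deviation
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

paired, unpaired = [], []
for _ in range(4000):
    # paired design: both "models" see the same draw of the shared variable
    shared = rng.gauss(100.0, 20.0)
    a = shared + rng.gauss(5.0, 2.0)   # model A adds its own small effect
    b = shared + rng.gauss(0.0, 2.0)   # model B, same shared input
    paired.append(a - b)
    # two-sample design: each model draws the shared variable independently
    a2 = rng.gauss(100.0, 20.0) + rng.gauss(5.0, 2.0)
    b2 = rng.gauss(100.0, 20.0) + rng.gauss(0.0, 2.0)
    unpaired.append(a2 - b2)
```

Both designs estimate the same mean difference, but the paired differences have a far smaller standard deviation because the shared draw cancels; this is why the PSA loop above randomises the common variables once per run before cycling both models.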
The means of the cohort simulation results should be very similar, but not necessarily identical, to those in Table 4 (and replicated analytically above); each run of this script is expected to yield a slightly different result. The addition of probabilistic elements describes not only the means (which should approximate the deterministic cohort model results), but also the distributions of the estimates, giving the uncertainties associated with the two models.
In the following two sections, the means of all runs are presented as per Table 4 of Dong & Buxton, and the uncertainties as per their Table 5, with the addition of the upper and lower confidence limits of the mean.
t4_CS_PSA_mean <- apply(t4_CS_PSA, MARGIN = c(1L, 2L), FUN = mean)
pander::pander(
  t4_CS_PSA_mean, round = 2L, row.names = FALSE, big.mark = ",",
  justify = "lrrrrrrr", keep.trailing.zeros = TRUE
)
Year | Cumulative serious complication (%) | Cumulative minor complication (%) | Cumulative complex revision (%) | Cumulative simple revision (%) | Cumulative death (%) | Discounted costs (£) | Discounted QALYs |
---|---|---|---|---|---|---|---|
1 | 11.32 | 19.35 | 0.29 | 0.14 | 3.96 | 5,537,058 | 709.5 |
2 | 21.60 | 35.00 | 0.66 | 0.32 | 8.08 | 5,853,356 | 1,364.9 |
3 | 31.41 | 49.93 | 1.10 | 0.53 | 11.98 | 6,152,238 | 1,970.7 |
4 | 40.76 | 64.17 | 1.58 | 0.77 | 15.69 | 6,434,448 | 2,530.9 |
5 | 49.69 | 77.76 | 2.12 | 1.04 | 19.22 | 6,700,732 | 3,049.0 |
6 | 58.21 | 90.74 | 2.69 | 1.32 | 22.57 | 6,951,832 | 3,528.3 |
7 | 66.36 | 103.14 | 3.29 | 1.63 | 25.76 | 7,188,482 | 3,972.0 |
8 | 74.14 | 114.99 | 3.93 | 1.94 | 28.79 | 7,411,399 | 4,382.8 |
9 | 81.59 | 126.32 | 4.58 | 2.27 | 31.68 | 7,621,284 | 4,763.3 |
10 | 88.71 | 137.16 | 5.26 | 2.62 | 34.43 | 7,818,816 | 5,115.8 |
fields <- c(
  "Cumulative serious complication (%)",
  "Cumulative complex revision (%)",
  "Cumulative simple revision (%)"
)
t5_CS <- matrix(
  data = NA_real_, nrow = length(fields), ncol = 4L,
  dimnames = list(fields, c("Mean", "SD", "Q2.5", "Q97.5"))
)
for (f in fields) {
  t5_CS[[f, "Mean"]] <- mean(t4_CS_PSA[10L, f, ])
  t5_CS[[f, "SD"]] <- sd(t4_CS_PSA[10L, f, ])
  t5_CS[[f, "Q2.5"]] <- quantile(t4_CS_PSA[10L, f, ], probs = 0.025)
  t5_CS[[f, "Q97.5"]] <- quantile(t4_CS_PSA[10L, f, ], probs = 0.975)
}
pander::pander(
  t5_CS, round = 2L, justify = "lrrrr", keep.trailing.zeros = TRUE
)
Mean | SD | Q2.5 | Q97.5 | |
---|---|---|---|---|
Cumulative serious complication (%) | 88.71 | 27.41 | 42.53 | 149.77 |
Cumulative complex revision (%) | 5.26 | 1.93 | 2.39 | 9.58 |
Cumulative simple revision (%) | 2.62 | 1.08 | 1.15 | 5.30 |
t4_CAS_PSA_mean <- apply(t4_CAS_PSA, MARGIN = c(1L, 2L), FUN = mean)
pander::pander(
  t4_CAS_PSA_mean, round = 2L, row.names = FALSE, big.mark = ",",
  justify = "lrrrrrrr", keep.trailing.zeros = TRUE
)
Year | Cumulative serious complication (%) | Cumulative minor complication (%) | Cumulative complex revision (%) | Cumulative simple revision (%) | Cumulative death (%) | Discounted costs (£) | Discounted QALYs |
---|---|---|---|---|---|---|---|
1 | 7.42 | 19.36 | 0.19 | 0.11 | 3.96 | 5,654,081 | 711 |
2 | 14.16 | 35.04 | 0.44 | 0.24 | 8.07 | 5,866,556 | 1,368 |
3 | 20.59 | 50.01 | 0.73 | 0.40 | 11.97 | 6,067,925 | 1,975 |
4 | 26.74 | 64.30 | 1.06 | 0.58 | 15.67 | 6,258,604 | 2,537 |
5 | 32.61 | 77.97 | 1.43 | 0.77 | 19.19 | 6,439,022 | 3,056 |
6 | 38.23 | 91.03 | 1.82 | 0.98 | 22.53 | 6,609,611 | 3,537 |
7 | 43.60 | 103.52 | 2.24 | 1.20 | 25.71 | 6,770,805 | 3,982 |
8 | 48.74 | 115.48 | 2.67 | 1.43 | 28.73 | 6,923,034 | 4,394 |
9 | 53.66 | 126.92 | 3.13 | 1.67 | 31.61 | 7,066,721 | 4,776 |
10 | 58.37 | 137.88 | 3.60 | 1.92 | 34.35 | 7,202,282 | 5,130 |
fields <- c(
  "Cumulative serious complication (%)",
  "Cumulative complex revision (%)",
  "Cumulative simple revision (%)"
)
t5_CAS <- matrix(
  data = NA_real_, nrow = length(fields), ncol = 4L,
  dimnames = list(fields, c("Mean", "SD", "Q2.5", "Q97.5"))
)
for (f in fields) {
  t5_CAS[[f, "Mean"]] <- mean(t4_CAS_PSA[10L, f, ])
  t5_CAS[[f, "SD"]] <- sd(t4_CAS_PSA[10L, f, ])
  t5_CAS[[f, "Q2.5"]] <- quantile(t4_CAS_PSA[10L, f, ], probs = 0.025)
  t5_CAS[[f, "Q97.5"]] <- quantile(t4_CAS_PSA[10L, f, ], probs = 0.975)
}
pander::pander(
  t5_CAS, round = 2L, justify = "lrrrr", keep.trailing.zeros = TRUE
)
Mean | SD | Q2.5 | Q97.5 | |
---|---|---|---|---|
Cumulative serious complication (%) | 58.37 | 19.25 | 27.62 | 101.80 |
Cumulative complex revision (%) | 3.60 | 1.34 | 1.60 | 6.57 |
Cumulative simple revision (%) | 1.92 | 0.81 | 0.83 | 3.86 |
Note that the means correlate well with those reported in Table 5, but the assertion that “[CAS] had lower variances for each indicator” is not reliably replicated for every variable. This is likely because the parameters for the distribution of the Effect of CAS could not be extracted from the paper, and the values chosen here may assign excessive uncertainty to this variable.
dcost_psa <- t4_CAS_PSA[10L, "Discounted costs (£)", ] / 1000L -
  t4_CS_PSA[10L, "Discounted costs (£)", ] / 1000L
dutil_psa <- t4_CAS_PSA[10L, "Discounted QALYs", ] / 1000L -
  t4_CS_PSA[10L, "Discounted QALYs", ] / 1000L
icer_psa <- dcost_psa / dutil_psa
From the PSA, after 10 years the mean (95% CI) cost change per patient was -£616.53 (-£1,706.99 to -£1), and the mean change in utility was 0.01 (-0.07 to 0.08) QALYs, compared with the deterministic values of -£583 and +0.0148 QALYs respectively, as reported in the paper. The incremental cost-effectiveness ratio was below £30,000 per QALY in 87.6% of runs. The distribution of results on the cost-effectiveness plane is shown in the figure below (for comparison with Figure 2 of the paper).
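The quoted 87.6% is the share of runs falling below the £30,000/QALY threshold. A robust way to compute such a share from paired (Δcost, ΔQALY) samples is via net monetary benefit, because the raw ICER ratio changes sign across the quadrants of the cost-effectiveness plane. A Python sketch with made-up values, not the vignette's simulation output:

```python
# Willingness-to-pay threshold (pounds per QALY)
threshold = 30_000.0

# Hypothetical per-patient (delta cost, delta QALY) pairs from a PSA --
# illustrative values only
runs = [(-616.0, 0.01), (200.0, 0.02), (-100.0, -0.05), (500.0, 0.005)]

# Net monetary benefit: positive when the intervention is cost-effective
# at the chosen threshold
nmb = [threshold * dq - dc for dc, dq in runs]
share_cost_effective = sum(b > 0.0 for b in nmb) / len(runs)
```

Applying the same calculation to `dcost_psa` and `dutil_psa` across all 250 runs yields the proportion of the cost-effectiveness plane lying below the threshold line.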