Archive for the ‘clinic’ Category

Shrinkage of predicted random effects in logistic regression models

July 30, 2015

As a follow-up to our initial investigation of the bias of random effect predictions in generalized linear mixed regression models, I undertook a limited simulation experiment. In this experiment, we varied the population average complication rate and the associated standard deviation of the individual random effect, and generated a small number of replications (N=200) for each combination of population mean and standard deviation. Limited by the computational resources of my (otherwise wonderful!) tablet, I had to forego a more thorough and more precise (larger number of replicates) examination of bias, variance and coverage. The focus is on general patterns rather than precise estimates; since the code is public, more precise estimates can be obtained by anyone with access to the R computing environment.

The basic code that simulates data from these logistic regression models is shown below. Although we only examine small cluster sizes (20-100 observations per cluster) and a moderate number of individual random effects (200), it is straightforward to modify these aspects of the simulation study. For these simulations we examined a limited number of values for the standard deviation of the random effects (0.1, 0.2 and 0.5) and for the overall complication rate (0.05, 0.10, 0.15, 0.20), to reflect the potential range of values compatible with the raw data in the Surgeon Scorecard.

library(lme4)
library(mgcv)
## helper functions
logit<-function(x) log(x/(1-x))
invlogit<-function(x) exp(x)/(1+exp(x))

## code allows a sensitivity analysis to the order of the nAGQ
## explanation of symbols/functions appearing inside the
## simulation function
## fit: glmer fit using a X point nAGQ integration rule
## ran: random effects using a X point nAGQ integration rule
##   cR: correlation between fitted and simulated random effect
##       the fitted were obtained with a X point nAGQ integrator
## int: intercept obtained with a X point nAGQ integration rule
## biG: bias of the fitted v.s. the simulated random effects
##      this is obtained as the intercept of a GAM regression of
##      the fitted versus the simulated values
## bi0: bias of the intercept from the glmer fit
## sdR: estimated standard deviation of the random eff distro
## bs0: bias of the standard deviation of the random effects (SDR)
## edf: non-linearity of the slope of a GAM regression of the
##      fitted v.s. the simulated residuals
## slQ: derivative of the non-linear slope at the Qth quantile
simFit<-function(X,complications,ind,pall,SDR,Q=0:10/10) {
fit<-glmer(ev~1+(1|id),data=complications,family=binomial,nAGQ=X)
ran<-ranef(fit)[["id"]][,1] ## predicted (mode) of random effects
cR<-cor(ran,ind)
int<-fixef(fit)
## regress on the residuals to assess performance
fitg<-gam(ran~s(ind,bs="ts")) ## gam fit
edf<-sum(fitg$edf[-1])## edf of the non-linear "slope"
biG<-fitg$coef[1]
bi0<-int-logit(pall)
sdR<-sqrt(as.data.frame(VarCorr(fit))$vcov)
bs0<-sdR-SDR
sQ<-quantile(ind,prob=Q)
slQ<-(predict(fitg,newdata=data.frame(ind=sQ+0.0001),type="terms")[,1]-
predict(fitg,newdata=data.frame(ind=sQ),type="terms")[,1])/0.0001
names(slQ)<-paste("slQ",Q,sep="")
ret<-cbind(int=int,edf=edf,biG=biG,bi0=bi0,bs0=bs0,sdR=sdR,cR=cR,t(slQ),pall=pall,SDR=SDR)
ret ## return the row of performance metrics for this fit
}

## function to simulate cases
simcase<-function(N,p) rbinom(N,1,p)
## simulation scenarios: fixed for each simulation
Nsurgeon<-200; # number of surgeons
Nmin<-20; # min number of surgeries per surgeon
Nmax<-100; # max number of surgeries per surgeon
Nsim<-sample(Nmin:Nmax,Nsurgeon,replace=TRUE); # number of cases per surgeon

## simulate individual surgical performance
## the reality consists of different combos of pall and the sd of the
## random effect

Nscenariosim<-200 ## number of simulations per scenario
scenarios<-expand.grid(iter=1:Nscenariosim,pall=seq(0.05,.20,.05),
sd=c(0.1,0.2,.5))

## simulate individual scenarios
simScenario<-function(seed,pall,sd,X,Q)
{
set.seed(seed)
ind<-rnorm(Nsurgeon,0,sd) ; # surgical random effects
logitind<-logit(pall)+ind ; # convert to logits
pind<-invlogit(logitind); # convert to probabilities

complications<-data.frame(ev=do.call(c,mapply(simcase,Nsim,pind,SIMPLIFY=TRUE)),
id=factor(do.call(c,mapply(function(i,N) rep(i,N),1:Nsurgeon,Nsim))))
simFit(X,complications,ind,pall,sd,Q)
}
X<-0
Q=0:10/10

system.time(perf<-mapply(simScenario,scenarios$iter,scenarios$pall,scenarios$sd,
MoreArgs=list(X=X,Q=Q),SIMPLIFY=FALSE))

perf<-do.call(rbind,perf)

The datasets are generated by the function simScenario which, when applied over all combinations of the population mean (pall, treated as a fixed effect) and the random effect standard deviation (sd), generates the synthetic dataset (the ‘complications’ variable). Once the data have been generated, the function simFit receives the synthetic dataset, fits the mixed logistic regression model and generates the random effect predictions.
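
A minimal post-processing sketch is shown below (it assumes the perf matrix assembled by the code above; the choice of which metrics to average is mine):

## hedged post-processing sketch: average key metrics within each scenario
## (assumes the 'perf' matrix assembled above; column choice is illustrative)
rownames(perf)<-NULL
perfdf<-as.data.frame(perf)
aggregate(cbind(bi0,bs0,biG,edf,slQ0.5)~pall+SDR,data=perfdf,FUN=mean)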

To assess and quantify shrinkage in these regression models, we compare the predicted random effects against the simulated values. Due to the potentially large number of these effects (tens of thousands in the Surgeon Scorecard), we developed a methodology that looks for patterns in these bivariate relationships, taking into account the magnitude of each simulated and predicted random effect pair. This methodology rests on a flexible regression of the predicted on the simulated random effects in each of the synthetic datasets. This flexible regression, using smoothing splines, generalizes linear regression and thus avoids the assumption that the shrinkage (as assessed by the slope of the linear regression line) is the same irrespective of the value of the simulated random effect. By numerically differentiating the spline at various quantiles of the simulated random effect, we obtain an estimate of shrinkage: the change in the value of the predicted random effect for a unit change in the simulated one. The relevant code that fits the smoothing spline and differentiates it on a (user defined) grid of random effect quantiles is shown below:

fitg<-gam(ran~s(ind,bs="ts")) ## gam fit
edf<-sum(fitg$edf[-1])## edf of the non-linear "slope"
biG<-fitg$coef[1]
bi0<-int-logit(pall)
sdR<-sqrt(as.data.frame(VarCorr(fit))$vcov)
bs0<-sdR-SDR
sQ<-quantile(ind,prob=Q)
slQ<-(predict(fitg,newdata=data.frame(ind=sQ+0.0001),type="terms")[,1]-
predict(fitg,newdata=data.frame(ind=sQ),type="terms")[,1])/0.0001
names(slQ)<-paste("slQ",Q,sep="")

A representative smoothing spline analysis of a mixed logistic regression model is shown in the figure below, along with the corresponding linear fit and the line signifying zero shrinkage: a straight line passing through the origin with unit slope. If the latter relationship is observed, then the predicted random effects are unbiased with respect to the simulated ones.
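
A figure of this kind can be redrawn from the objects created inside simFit; the following is a minimal plotting sketch (it assumes the vectors ind and ran and the gam fit fitg from the code above, with the colors of the caption below):

## hedged plotting sketch for a single synthetic dataset
## (assumes 'ind', 'ran' and the gam fit 'fitg' created inside simFit above)
plot(ind,ran,pch=19,cex=0.5,xlab="Simulated random effect",
ylab="Predicted random effect")
o<-order(ind)
lines(ind[o],predict(fitg)[o],col="green",lwd=2) ## flexible spline fit
abline(lm(ran~ind),col="red",lwd=2)              ## linear regression fit
abline(a=0,b=1,col="blue",lwd=2)                 ## line of no shrinkage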

 

Predicted (y-axis) v.s. simulated (x-axis) random effects in a logistic regression model. 2000 random effects were simulated with small cluster sizes (20-100). Shown are the flexible spline fits (green), linear regression fit (red) and the line of no shrinkage (blue)

This figure shows the potential for random effect predictions to be excessively shrunken relative to the simulated ones (note that the green and red lines are more horizontal than the blue one). One should also observe that the amount of shrinkage is not the same throughout the range of the random effects: positive values (corresponding to individuals with higher complication rates) are shrunk relatively less (the green line is closer to the blue line) than negative random effects (complication rates less than the population average). The net effect of this non-linear relationship is that the separation between “good” (negative RE) and “bad” (positive RE) surgeons is decreased in the predicted relative to the simulated random effects. As we will see below, this can be a rather drastic compression in dynamic range for small cluster sizes.

From each mixed logistic regression model we extract several pieces of information: i) the bias in the fixed effect estimate, ii) the bias in the population standard deviation of the random effects, iii) the bias in the overall relationship between predicted and simulated random effects (indirectly assessing whether the random effects are centered around the fixed effect), iv) the non-linearity of the relationship between the predicted and the simulated random effects (given by the estimated degrees of freedom of the spline, with edf = 1 signifying a linear fit, higher values a non-linear relationship, and edf < 1 a more complex pattern in which a variable proportion of the random effects is shrunk towards a single point, i.e. the fixed effect), and v) the slope of this relationship, evaluated at various quantiles of the random effect distribution.

There was no obvious bias in the estimation of the intercept (the only fixed effect in our simulation study) for various combinations of overall event rate and random effect standard deviation (but note higher dispersion for large standard deviation values, indicating the need for a higher number of simulation replicates in order to improve precision):

Bias in the estimation of the intercept in the mixed logistic regression model (shown in log scale)

 

Similarly there was no bias in the estimation of the population standard deviation of the random effects:

Bias in the standard deviation of the random effects

The estimated relationship between predicted and simulated random effects was linear for approximately 50% of the simulated samples at small to moderate standard deviations. However, it was non-linear (implying differential shrinkage according to random effect magnitude) for larger standard deviations, typical of the standard deviation of the surgeon random effect in the Surgeon Score Card.

Estimated degrees of freedom (a measure of non-linearity) in the relationship between predicted and simulated random effects over simulation replicates. The red horizontal line corresponds to a linear relationship between predicted and simulated random effects (edf=1). Edf<1 signifies the shrinkage of a variable proportion of random effects to zero.

 

The magnitude of the slopes of the smoothing splines at different quantiles of the random effect distribution, for the various combinations of overall rate and random effect standard deviation, is shown below:

Slope of the smoothing spline at different quantiles (20%, 50%, 80%) of the random effect distribution. Boxplots are over 200 replicates of each combination of overall rate and random effect standard deviation

There are several things to note in this figure:

  1. the amount of shrinkage decreases as the population standard deviation increases (going from left to right in each row of the figure the slopes increase from zero towards one)
  2. the amount of shrinkage decreases as the overall average increases for the same value of the standard deviation (going from top to bottom in the same column)
  3. the degree of shrinkage appears to be the same across the range of random effect quantiles for small to moderate values of the population standard deviation
  4. the discrepancy in the degree of shrinkage across quantiles is maximized for larger values of the standard deviation of the random effects and small overall rates (top right subpanel). This is the situation most relevant for the Surgeon Score Card, based on the analyses reported by that project.

What are the practical implications of these observations for the individual surgeon comparisons reported in the Score Card project? These are best understood with a numerical example from one of these 200 datasets, shown in the top right subpanel. To understand this example, note that the individual random effects have the interpretation of log-odds ratios, irrespective of whether they are the predicted or the (simulated) true effects. Hence, the difference between two such random effects, when exponentiated, yields the odds ratio of suffering a complication in the hands of one surgeon relative to another. By comparing these random effects for good and bad surgeons who are equally far from the population mean (symmetric quantiles around the median), one can get an idea of the impact of using the predicted random effects to carry out individual comparisons.

Good Bad Quantile (Good) Quantile (Bad) True OR Pred OR Shrinkage Factor
-0.050 0.050 48.0 52.0 0.905 0.959 1.06
-0.100 0.100 46.0 54.0 0.819 0.920 1.12
-0.150 0.150 44.0 56.0 0.741 0.883 1.19
-0.200 0.200 42.1 57.9 0.670 0.847 1.26
-0.250 0.250 40.1 59.9 0.607 0.813 1.34
-0.300 0.300 38.2 61.8 0.549 0.780 1.42
-0.350 0.350 36.3 63.7 0.497 0.749 1.51
-0.400 0.400 34.5 65.5 0.449 0.719 1.60
-0.450 0.450 32.6 67.4 0.407 0.690 1.70
-0.500 0.500 30.9 69.1 0.368 0.662 1.80
-0.550 0.550 29.1 70.9 0.333 0.635 1.91
-0.600 0.600 27.4 72.6 0.301 0.609 2.02
-0.650 0.650 25.8 74.2 0.273 0.583 2.14
-0.700 0.700 24.2 75.8 0.247 0.558 2.26
-0.750 0.750 22.7 77.3 0.223 0.534 2.39
-0.800 0.800 21.2 78.8 0.202 0.511 2.53
-0.850 0.850 19.8 80.2 0.183 0.489 2.68
-0.900 0.900 18.4 81.6 0.165 0.467 2.83
-0.950 0.950 17.1 82.9 0.150 0.447 2.99
-1.000 1.000 15.9 84.1 0.135 0.427 3.15
-1.050 1.050 14.7 85.3 0.122 0.408 3.33
-1.100 1.100 13.6 86.4 0.111 0.390 3.52
-1.150 1.150 12.5 87.5 0.100 0.372 3.71
-1.200 1.200 11.5 88.5 0.091 0.356 3.92
-1.250 1.250 10.6 89.4 0.082 0.340 4.14
-1.300 1.300 9.7 90.3 0.074 0.325 4.37
-1.350 1.350 8.9 91.1 0.067 0.310 4.62
-1.400 1.400 8.1 91.9 0.061 0.297 4.88
-1.450 1.450 7.4 92.6 0.055 0.283 5.15
-1.500 1.500 6.7 93.3 0.050 0.271 5.44
-1.550 1.550 6.1 93.9 0.045 0.259 5.74
-1.600 1.600 5.5 94.5 0.041 0.247 6.07
-1.650 1.650 4.9 95.1 0.037 0.236 6.41
-1.700 1.700 4.5 95.5 0.033 0.226 6.77
-1.750 1.750 4.0 96.0 0.030 0.216 7.14
-1.800 1.800 3.6 96.4 0.027 0.206 7.55
-1.850 1.850 3.2 96.8 0.025 0.197 7.97
-1.900 1.900 2.9 97.1 0.022 0.188 8.42
-1.950 1.950 2.6 97.4 0.020 0.180 8.89
-2.000 2.000 2.3 97.7 0.018 0.172 9.39
-2.050 2.050 2.0 98.0 0.017 0.164 9.91

From this table it can be seen that the predicted odds ratios are always larger (i.e. closer to one, and hence less extreme) than the true ones. The ratio of the two (the shrinkage factor) grows larger the more extreme the comparisons contemplated.
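
For completeness, here is a hedged sketch of how such a table could be assembled from a single simulated dataset (it assumes the gam fit fitg of the predicted on the simulated effects from the code above; the grid of symmetric effect values is my choice and the quantile columns of the table are omitted):

## hedged sketch of the true v.s. predicted odds-ratio comparison above
## (assumes 'fitg', the gam of predicted on simulated effects, from the code
## above; the grid of symmetric effects is illustrative)
good<- -seq(0.05,2.05,by=0.05); bad<- -good
predRE<-function(x) as.numeric(predict(fitg,newdata=data.frame(ind=x)))
trueOR<-exp(good-bad)                 ## OR implied by the simulated effects
predOR<-exp(predRE(good)-predRE(bad)) ## OR implied by the predicted effects
round(data.frame(Good=good,Bad=bad,TrueOR=trueOR,PredOR=predOR,
Shrinkage=predOR/trueOR),3)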

In summary, the use of random effects models for the small cluster sizes (number of observations per surgeon) considered here is likely to lead to estimates (or rather predictions) of individual effects that are smaller in magnitude than their true values. Even though one should expect the differences to decrease with larger cluster sizes, this is unlikely to be observed in real world datasets (how often does one come across a surgeon who has performed thousands of operations of the same type before they retire?). Furthermore, the comparison of surgeon performance based on predicted random effects is likely to be misleading due to over-shrinkage.

 

 

 

 


Empirical bias analysis of random effects predictions in linear and logistic mixed model regression

July 30, 2015

In this first technical post in the series, a numerical investigation of the bias of random effect predictions in generalized linear mixed models (GLMM) such as the ones used in the Surgeon Scorecard, I decided to undertake two explorations: first, the behavior of these predictions as more and more data are gathered for each individual surgeon, and second, whether their limiting behavior critically depends on the underlying GLMM family. Note that the first question directly assesses whether the random effect predictors reflect the underlying (but unobserved) “true” value of the individual practitioner effect in logistic regression models for surgical complications. The second simulation examines a separate issue, namely whether the non-linearity of the logistic regression model affects the convergence rate of the random effect predictions towards their true values.

For these simulations we will examine three different ranges of dataset sizes for each surgeon:

  • small data (complication data from 20-100 cases/surgeon)
  • large data (complication data from 200-1000 cases/surgeon)
  • extra large data (complication data from 1000-2000 cases/surgeon)

We simulated 200 surgeons (“random effects”) from a normal distribution with a mean of zero and a standard deviation of 0.26, while the population average complication rate was set to 5%. These numbers were chosen to reflect the range of values (average and population standard deviation of the random effects) in the Score Card dataset, while the use of 200 random effects was a realistic compromise with the computational capabilities of the Asus Transformer T100 2-in-1 laptop/tablet that I used for these analyses.

The following code was used to simulate the logistic case for small data (the large and extra large cases were simulated by changing the values of the Nmin and Nmax variables).

library(lme4)
library(mgcv)
## helper functions
logit<-function(x) log(x/(1-x))
invlogit<-function(x) exp(x)/(1+exp(x))

## simulate cases
simcase<-function(N,p) rbinom(N,1,p)
## simulation scenario
pall<-0.05; # global average
Nsurgeon<-200; # number of surgeons
Nmin<-20; # min number of surgeries per surgeon
Nmax<-100; # max number of surgeries per surgeon

## simulate individual surgical performance
## how many simulations of each scenario
set.seed(123465); # reproducibility
ind<-rnorm(Nsurgeon,0,.26) ; # surgical random effects
logitind<-logit(pall)+ind ; # convert to logits
pind<-invlogit(logitind); # convert to probabilities
Nsim<-sample(Nmin:Nmax,Nsurgeon,replace=TRUE); # number of cases per surgeon

complications<-data.frame(ev=do.call(c,mapply(simcase,Nsim,pind,SIMPLIFY=TRUE)),
id=do.call(c,mapply(function(i,N) rep(i,N),1:Nsurgeon,Nsim)))
complications$id<-factor(complications$id)

 

A random effect and fixed effect model were fit to these data (the fixed effect model is simply a series of independent fits to the data for each random effect):


## Random Effects

fit2<-glmer(ev~1+(1|id),data=complications,family=binomial,nAGQ=2)
ran2<-ranef(fit2)[["id"]][,1]
c2<-cor(ran2,ind)
int2<-fixef(fit2)
ranind2<-ran2+int2

## Fixed Effects

fixfit<-vector("numeric",Nsurgeon)
for(i in 1:Nsurgeon) {
fixfit[i]<-glm(ev~1,data=subset(complications,id==i),family="binomial")$coef[1]
}

The corresponding Gaussian GLMM cases were simulated by making minor changes to these codes. These are shown below:


simcase<-function(N,p) rnorm(N,p,1)

fit2<-glmer(ev~1+(1|id),data=complications,nAGQ=2)

fixfit[i]<-glm(ev~1,data=subset(complications,id==i),family="gaussian")$coef[1]

The predicted random effects were assessed against the simulated truth using smoothing regression splines. In these regressions, the intercept yields the bias of the average of the predicted random effects vis-a-vis the truth, while the slope quantifies the amount of shrinkage effected by the mixed model formulation. For unbiased estimation we would like not only the intercept to be zero, but also the slope to be equal to one; in that case the predicted random effect would be equal to its true (simulated) value. Excessive shrinkage would result in a slope that is substantially less than one. Assuming that the bias (intercept) is not different from zero, the relaxation of the slope towards one as cluster size grows quantifies, via simulation, the consistency of these predictors and the rate at which their bias vanishes (or so it seems to me).

The use of smoothing (flexible), rather than simple linear regression, to quantify these relationships does away with a restrictive assumption: that the amount of shrinkage is the same throughout the range of the random effects:

## smoothing spline (flexible) fit
fitg<-gam(ran2~s(ind))
## linear regression
fitl<-lm(ran2~ind)
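
The bias and shrinkage just described can be read off these two fits directly; a minimal sketch, assuming the fitg and fitl objects created above:

## hedged sketch: read the bias and shrinkage off the two fits above
coef(fitg)[1]           ## intercept of the flexible fit: bias of the predictions
coef(fitl)              ## intercept (bias) and slope (shrinkage) of the linear fit
summary(fitl)$r.squared ## how well a single straight line summarizes the relation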

The following figure shows the results of the flexible regression (black with 95% CI, dashed black) v.s. the linear regression (red) and the expected (blue) line (intercept of zero, slope of one).

Predicted v.s. simulated random effects for logistic and linear mixed regression as a function of the number of observations per random effect (cluster size)

Several observations are worth noting in this figure.
First, the flexible regression was indistinguishable from a linear regression in all cases; hence the red and black lines overlap. Stated in other terms, the amount of shrinkage was the same across the range of the random effect values.
Second, the intercept in all flexible models was (within machine precision) equal to zero. Consequently, when estimating a group of random effects their overall mean is (unbiasedly) estimated.
Third, the amount of shrinkage of individual random effects appears to be excessive for small sample sizes (i.e. few cases per surgeon). Increasing the number of cases decreases the shrinkage, i.e. the black and red lines come closer to the blue line as N is increased from 20-100 to 1000-2000. Conversely, for small cluster sizes the amount of shrinkage is so excessive that one may lose the ability to distinguish between individuals with very different complication rates. This is reflected by a regression line between the predicted and the simulated random effect value that is nearly horizontal.
Fourth, the rate of convergence of the predicted random effects to their true values critically depends upon the linearity of the regression model. In particular, the shrinkage of a logistic regression model with 1000-2000 observations per cluster is almost the same as that of a linear model with 20-100, for the parameter values considered in this simulation.

An interesting question is whether these observations (overshrinkage of random effects for small sample sizes in logistic mixed regression) reflect the use of random effects in modeling, or whether they are simply due to the interplay between sample size and the non-linearity of the statistical model. Hence, I turned to fixed effects modeling of the same datasets. The results of these analyses are summarized in the following figure:

Difference between fixed effect estimates of random effects (black histograms) v.s. random effect predictions (density estimators: red lines) relative to their simulated (true) values

One notes that the distribution of the differences between the random and the fixed effect estimates relative to the true (simulated) values is nearly identical for the linear case (second row). In other words, the implicit constraint of the mixed model offers no additional advantage when estimating individual performance in this model. On the other hand, there is value in applying mixed modeling techniques in the logistic regression case. In particular, outliers (such as those arising from small samples) are eliminated by the use of random effect modeling. The difference between the fixed and the random effect approach progressively decreases for larger sample sizes, implying that the benefit of the latter approach is lost for “extra large” cluster sizes.

One way to put these differences into perspective is to realize that the random effects in the logistic model correspond to log-odds ratios relative to the population mean. Hence the difference between the predicted random effect and its true value, when exponentiated, corresponds to an odds ratio (OR). A summary of these odds ratios over the population of random effects, as a function of cluster size, is shown below.
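
A minimal sketch of how one column of this summary could be computed (assuming the ran2 and ind vectors from the code above, for a given cluster-size scenario):

## hedged sketch: differences between predicted and true random effects,
## expressed as odds ratios, for one cluster-size scenario
## (assumes 'ran2' and 'ind' from the code above)
ORdiff<-exp(ran2-ind)
round(summary(ORdiff),4) ## Min/Q25/Median/Mean/Q75/Max, one column of the table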


Metric 20-100  200-1000 1000-2000
Min    0.5082   0.6665    0.7938
Q25    0.8901   0.9323    0.9536
Median 1.0330   1.0420    1.0190 
Mean   1.0530   1.0410    1.0300  
Q75    1.1740   1.1340    1.1000   
Max    1.8390   1.5910    1.3160 

 

Even though the average odds ratio is close to 1, a substantial number of the predicted random effects are far from their true values, yielding ORs that are off by more than 11% in either direction for small cluster sizes. These observations have implications for the Score Card (or similar projects): if one were to use random effects modeling to focus on individuals, then unless the cluster sizes (observations per individual) are substantial, one runs a substantial risk of misclassifying individuals, even though one would be right on average!

One could wonder whether these differences between the simulated truth and the predicted random effects arise as a result of the numerical algorithms of the lme4 package. The latter was used by both the Surgeon Score Card project and our simulations so far, and thus it is important to verify that it performs up to spec. The major tuning variable for the algorithm is the order of the adaptive Gaussian quadrature (argument nAGQ). We did not find any substantial departures when the order of the quadrature was varied between 0, 1 and 2. However, there is a possibility that the algorithm fails for all nAGQ orders, as it has to calculate probabilities that are numerically close to the boundary of the parameter space. We thus decided to fit the same model from a Bayesian perspective using Markov Chain Monte Carlo (MCMC) methods. The following code fits the Bayesian model and graphs the true values of the effects used in the simulated dataset against the Bayesian estimates (the posterior means) and the lme4 predictions. The latter tend to be numerically close to the posterior modes of the random effects when a Bayesian perspective is adopted.


## Fit the mixed effects logistic model from R using openbugs

library("glmmBUGS")
library(nlme)
fitBUGS = glmmBUGS(ev ~ 1, data=complications, effects="id", family="bernoulli")
startingValues = fitBUGS$startingValues
source("getInits.R")
require(R2WinBUGS)
fitBUGSResult = bugs(fitBUGS$ragged, getInits, parameters.to.save = names(getInits()),
model.file="model.txt", n.chain=3, n.iter=6000, n.burnin=2000, n.thin=10,
program="openbugs", working.directory=getwd())

fitBUGSParams = restoreParams(fitBUGSResult , fitBUGS$ragged)
sumBUGS<-summaryChain(fitBUGSParams )
checkChain(fitBUGSParams )

## extract random effects

cnames<-as.character(sort(as.numeric(row.names(sumBUGS$FittedRid))))
fitBUGSRE<-sumBUGS$Rid[cnames,1]

## plot against the simulated (true) effects and the lme4 estimates

hist(ind,xlab="RE",ylim=c(0,3.8),freq=FALSE,main="")
lines(density(fitBUGSRE),main="Bayesian",xlab="RE",col="blue")
lines(density(ran2),col="red")
legend(legend=c("Truth","lme4","MCMC"),col=c("black","red","blue"),
bty="n",x=0.2,y=3,lwd=1)

The following figure shows the histogram of the true values of the random effects (black), the frequentist(lme4) estimates (red) and the Bayesian posterior means (blue).

 

[Figure: MCMClme4Truth — histogram of simulated random effects with lme4 and MCMC density estimates]

It can be appreciated that both the Bayesian estimates and the lme4 predictions demonstrate considerable shrinkage relative to the true values for small cluster sizes (20-100). Hence, an lme4 numerical quirk seems an unlikely explanation for the shrinkage observed in the simulation.

Summing up:

  • Random effect modeling of binomial outcomes requires a substantial number of observations per individual (cluster size) for the procedure to yield predictions of individual effects that are numerically indistinguishable from the true values
  • Fixed effect modeling is an even worse approach for this problem
  • Bayesian fitting procedures do not appear to yield numerically different effects from their frequentist counterparts

These features should raise the barrier for accepting a random effects logistic modeling approach when the focus is on individual rather than population average effects. Even though the procedure is certainly preferable to fixed effects regression, the direct use of the predicted individual random effects as an effect measure will be problematic for small cluster sizes (e.g. a small number of procedures per surgeon). In particular, a substantial proportion of these predicted effects is likely to be far from the truth even if the model is unbiased on average. These observations are of direct relevance to the Surgeon Score Card, in which the number of observations per surgeon was far lower than the average values in our simulations of 60 (small), 600 (large) and 1500 (extra large) cases per surgeon:

Procedure Code N (procedures) N(surgeons) Procedures per surgeon
51.23 201,351 21,479 9.37
60.5 78,763 5,093 15.46
60.29 73,752 7,898 9.34
81.02 52,972 5,624 9.42
81.07 106,689 6,214 17.17
81.08 102,716 6,136 16.74
81.51 494,576 13,414 36.87
81.54 1,190,631 18,029 66.04
Total 2,301,450 83,887 27.44

Of means, standard deviations and blood pressure trials

December 26, 2013

Last week, the long awaited Joint National Committee (JNC) released the 8th revision of their hypertension management guidelines (more than 10 years after the 7th version!). JNC 8 will join the plethora of guidelines (ASH/ISH, ESH, KDIGO and the ACC/AHA advisory that will be upgraded to a full guideline later on) that may be used (or not) to justify (or defend) treating (or not treating) patients with elevated blood pressure (BP) down to pre-approved, committee blessed targets. Though it is not my intention to go into the guidelines themselves (since opinions are like GI tracts: everyone has one, whereas biases are like kidneys: everyone has at least one and usually two), it is interesting to seize this opportunity and revisit one of the largest blood pressure trials, one that is used to defend certain aspects of the guideline recommendations.

The Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT) is one of the largest hypertension trials and was all the rage 12 years ago when, after failing to meet its primary endpoint (the superiority of various first line antihypertensive agents), it reported differences in various secondary outcomes (including blood pressure control) favoring the oldest (chlorthalidone) v.s. the newer (lisinopril/amlodipine) medications. Since 2002 all agents used in ALLHAT have gone generic (and thus have the same direct costs), diluting the economic rationale for choosing one therapy v.s. the other. Nevertheless, the literature reverberated for years (2003, 2007 v.s. 2009) with intense debates about what ALLHAT showed (or did not show). In particular, the question of the blood pressure differences observed among the treatment arms was invoked as one of the explanations for the diverging secondary outcomes in ALLHAT. In its simplest form this argument says that the differences of 0.8 (amlodipine) / 2 (lisinopril) mmHg of systolic BP over chlorthalidone may account for some of the differences in outcomes.

I was exposed to this argument when I was an internal medicine resident (more than a decade ago) and it did not really make a lot of sense to me: could I be raising the risk for cardiovascular disease, stroke and heart failure by 10-19% for the patient I was about to see in my next clinic appointment by not going the extra mile to bring his blood pressure down by 2 mmHg? Was my medical license, like the 00 in 007’s designation, a license to kill (rather than heal)? Even after others explained that the figures reported in the paper referred to the difference in the average blood pressure between the treatment arms, not the difference in blood pressure within each patient, I could not get myself to understand the argument. So, more than a decade after the publication of ALLHAT, I revisited the paper and tried to infer what the actual blood pressures may have been (I have no access to individual blood pressure data) during the five years of the study. The relevant information may be found in Figure 3 (with a hidden bonus in Figure 1) of the JAMA paper, which is reproduced in a slightly different form below (for copyright reasons):

[Figure: BPdiffs — SBP and DBP by treatment arm during ALLHAT follow-up]

The figure shows the systolic (SBP) and diastolic (DBP) pressures of the participants who continued to be followed up during the ALLHAT study; unsurprisingly for a randomized study, there were almost no differences before the study (T=0). Differences did emerge during follow-up, and in fact the mean/average (the “dot” in the graph) was lower for chlorthalidone v.s. both amlodipine and lisinopril during the study. However, the standard deviation (the dispersion of the individuals receiving the study drugs around the mean, shown as the height of the vertical line) was also larger (by about 17%) for lisinopril, suggesting that there were both more people with higher and more people with lower blood pressures compared to chlorthalidone.

This pattern is also evident if one bothers to take a look at the flowchart of the study (Figure 1) which lists patterns/reasons for discontinuation during the 5th year. Among the many reasons, “Blood Pressure Elevation” and “Blood Pressure Too Low” are listed:

                                   Chlorthalidone  Amlodipine  Lisinopril
N (total)                                    8083        4821        5004
BP High                                        84          38         131
BP Too Low                                     91          51          76
Other reasons for discontinuation            1698         963        1192

A garden variety logistic regression for the odds of discontinuation shows no difference between amlodipine and chlorthalidone due to high (p=0.16) or low (p=0.74) BP. On the other hand, the comparison between lisinopril and chlorthalidone is more interesting (a sketch of this type of calculation is given after the list below):

  • Odds ratio for discontinuation due to low BP: 1.44, 95% CI 1.00-2.06 (p=0.044)
  • Odds ratio for discontinuation due to high BP: 3.38, 95% CI 2.35-4.87 (p<0.001)
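
A hedged sketch of one way such a comparison could be run from the aggregated counts in the table above is shown below; the denominators and model behind the original calculation are not spelled out in the post, so the exact figures quoted above may not be reproduced:

## hedged sketch: odds ratio of discontinuation due to low BP, lisinopril v.s.
## chlorthalidone, from the aggregated counts in the table above
## (the year-5 totals are used as denominators; results are illustrative only)
lowBP<-data.frame(drug=factor(c("chlorthalidone","lisinopril"),
levels=c("chlorthalidone","lisinopril")),
events=c(91,76),total=c(8083,5004))
fit.low<-glm(cbind(events,total-events)~drug,data=lowBP,family=binomial)
exp(coef(fit.low)["druglisinopril"])             ## odds ratio
exp(confint.default(fit.low)["druglisinopril",]) ## Wald 95% CI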

So despite the higher mean, the higher standard deviation implies that there were more patients with low (and too low) BP among the recipients of lisinopril. In other words, blood pressure control was more variable with lisinopril compared to the other two drugs. This variability can be quantified by using the reported means/standard deviations to look at the cumulative percentage of patients with BP below a given cutoff (for simplicity the calculations are based on the year 3 data):

[Figure: BPcum — cumulative percentage of patients below a given BP cutoff, by treatment arm]
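
The calculation behind such a curve is simply the normal cumulative distribution function applied to the reported arm means and standard deviations. A minimal sketch follows; the means and SDs used here are placeholders rather than the actual year-3 ALLHAT values:

## hedged sketch: cumulative % of patients below a given SBP cutoff, rebuilt
## from per-arm means and standard deviations under a normality assumption
## (the means/SDs below are placeholders, not the actual year-3 ALLHAT values)
sbp.mean<-c(chlorthalidone=134,amlodipine=135,lisinopril=136)
sbp.sd<-c(chlorthalidone=15,amlodipine=15,lisinopril=17.5)
cutoffs<-seq(100,180,by=5)
pct.below<-sapply(seq_along(sbp.mean),
function(i) 100*pnorm(cutoffs,mean=sbp.mean[i],sd=sbp.sd[i]))
colnames(pct.below)<-names(sbp.mean)
round(cbind(SBP=cutoffs,pct.below),1)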

So it appears that lisinopril (at least as used in ALLHAT) was able to bring more patients below 120/70 (low levels based on the current guidelines), but controlled fewer patients at the higher end of the BP spectrum. The clinical translation of this observation is that there will be patients who are ideal candidates for lisinopril (maybe to the point where dose reduction is necessary) and others who will fail to respond, so that individualization of therapy, rather than one size fits all, is warranted. Such individualization may be achieved on the basis of shared physician/patient decision making, n-of-1 trials, biomarker levels (e.g. home blood pressure measurements) or demographic profiling (as is done in JNC 8 for African American patients).

Notwithstanding these comments, one is left scratching one’s head with the following questions:

  • who were the patients with an exaggerated or a dampened response to lisinopril in ALLHAT?
  • could the variability in BP control provide a much better explanation for the variability in secondary outcomes in ALLHAT? (the investigators did apply what is known as a time-updated analysis using the observed BPs during the trial, but this is not the statistically proper way to analyze this effect in the presence of loss to follow-up and possibly informative censoring)
  • what are the clinical implications of lowering BP to a given level when this is done with different classes of agents? This question is related to the previous one, and neither is answerable with time-updated models of endogenous variables (such as BP readings)

At a more abstract level, should we scrutinize paper tables for the standard deviations as well as the means of response variables, looking for hidden patterns that may not be evident at first glance? In the clinic one is impressed by the variability of patient responses to interventions, yet this variability is passed over when analyzing, reporting and discussing trial results, in which we only look at the means. To me this seems a rather deep inconsistency between our behaviours and discourse with our Clinician v.s. our Evidence Based Medicine hats on, one which may even decrease the chances of finding efficacious and effective treatments. Last but certainly not least, how can we begin to acknowledge variability in trial design, execution, analysis and reporting so as to better reflect what actually happens in the physical world, rather than in the ivory towers of our statistical simulators?

 

Illusion of Effectiveness in the ‘definitive’ clinical trial

September 2, 2013

The believer’s attitude is one of unconditional trust in the results of the randomized clinical trial (RCT). The latter not only provides “unbiased” estimates of the relative efficacy of two or more therapies, but also furnishes numerical estimates of the absolute efficacy that translate more or less directly into the outcomes of real world clinical practice. The believer thus views the results obtained in the clinic as interchangeable with the ones observed in the RCT, so that the mathematically consistent way to jointly examine them is to simply add together the corresponding successes and failures. This approach will work just fine if the underlying premise of equivalence between effectiveness and efficacy is true, yet it will backfire otherwise.

To see why, consider what happens in the hypothetical thought experiment previously outlined:
So for real world experience reflective of a single individual (or even a single practice, i.e. 20-100 patients), the magnitude of effectiveness will likely be overstated (since most therapies do not work as well as advertised in the papers). It will take a considerable number of patients (>1000 and likely 10000) to align the believer’s expectations with real world results.
Such large numbers of patients with a single condition are unlikely to be encountered in a single individual’s professional lifetime (especially if the condition is rare), so the believer is stuck in an “evidential black hole”. Trapped by the large number of patients (the gravity) of the definitive clinical trial, he or she is forced to discount personal experience for results that are only partially relevant to the patients they actually treat!
Furthermore, the believer will substantially underestimate the uncertainty of the estimate. When asked to produce a value below which the effectiveness is expected to fall with only a small probability (e.g. 5%), even a physician who sounds confident that the therapy works in 45-50% of patients will produce a gross overestimate, one that does not even bracket the “true” effectiveness unless the outcomes of a large number of real world patients are examined.
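
To make the arithmetic concrete, the sketch below works through the believer’s pooling rule with purely hypothetical numbers (a trial reporting 100 successes in 200 patients and a true real-world effectiveness of half the trial efficacy, with a flat Beta(1,1) prior used for the 5% lower bound); it illustrates how slowly the pooled estimate moves towards the truth as real world patients accumulate:

## hedged illustration of the believer's pooling rule under hypothetical
## numbers: trial with 100/200 successes (50% efficacy) and an assumed true
## real-world effectiveness of 25% (efficacy/2)
set.seed(1)
trial.succ<-100; trial.n<-200
true.eff<-0.25
for(N in c(20,100,1000,10000)) {  ## real world patients accumulated so far
rw.succ<-rbinom(1,N,true.eff)
post.a<-1+trial.succ+rw.succ                ## pooled successes (+ prior)
post.b<-1+(trial.n-trial.succ)+(N-rw.succ)  ## pooled failures (+ prior)
cat(N,"patients: pooled estimate",round((trial.succ+rw.succ)/(trial.n+N),3),
"| 5% lower bound",round(qbeta(0.05,post.a,post.b),3),"\n")
}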

Four basic attitudes towards efficacy and its relation to effectiveness

September 1, 2013

I will continue this series of posts regarding the appraisal of efficacy (“how well a therapeutic intervention worked in a randomized experiment”) and its translation into statements about effectiveness (“how well the intervention worked in the real world”) by considering the attitudes that one may adopt towards these issues. The aim is to develop a sophisticated approach, or rather a vantage point, that one would almost always want to adopt when considering the implications of having data about efficacy (the results of a trial) and effectiveness (the success rate in real world practice). However, this vantage point will only become evident by considering a basic set of attitudes, which are described here: (more…)

Reflections on Effectiveness = Efficacy / 2

August 20, 2013

My recent post on the initial appraisal of a therapy’s effectiveness based on a randomized trial reporting specific data about its efficacy generated some interesting comments on twitter. In particular, David Juurlink (@DavidJuurlink) commented:

@ChristosArgyrop @medskep meant on E=E/2, which I’d say the post-RALES experience with spironolactone invalidates

The point that David made seems to be that the formula Effectiveness = Efficacy/2 is way too conservative and that the post-RALES experience illustrates this. This is a great objection, one that lies at the heart of inductive reasoning, which is what we essentially engage in when we speak about either effectiveness or efficacy. To answer this objection (both in its specific post-RALES form and in its more general form) I will need a couple of posts, but first a little bit of background is called for.

RALES was a landmark trial, published almost 15 years ago, about a novel approach (a drug called spironolactone) to treating heart failure, a condition with a very high mortality and hospitalization rate. RALES showed an almost 30% reduction in the risk of death and was a paradigm shifting study: immediately after its publication, prescriptions of spironolactone increased worldwide, and in 2013 many heart failure patients are taking spironolactone or spironolactone-like drugs.
It is the personal opinion of many cardiologists (and mine) that spironolactone has saved the lives of their real world patients (i.e. the drug is effective), yet the published track record is not that clear, with partially mixed evaluations of outcomes at least in the elderly, and safety concerns (in my opinion, also held by others, almost entirely due to wild extrapolation of study results, inappropriate use and inadequate monitoring by prescribing physicians). It is precisely such considerations that called for further evaluation of the efficacy, effectiveness and safety of the drug almost immediately (on the time scale of clinical research) after the publication of RALES.

So the Effectiveness = Efficacy/2 shorthand formula seems to be vindicated by the track record of spironolactone since RALES. However, I would go even further and claim that the drug would not have missed out on its potential to save lives in the real world, and would be more widely used today, had this viewpoint been adopted from the outset.

To see why note that the rule has a companion concerning what I call the ‘sail-through rate’: the proportion of real-world patients who will not experience an adverse event while taking the therapy. The healthy skeptic who is using the trial data and nothing else should also expect the sail-through rate to be half the one reported in the trial publication.
Hence, the combination of these two evaluations (which are really flip sides of the same mathematical coin) might have led to a more cautious adoption of spironolactone, as physicians would have halved their initial expectations about both the benefit and the lack of trouble. What happened instead is that prescriptions increased by 700% and complications (mainly hyperkalemia and renal failure/insufficiency/injury) sky-rocketed as physicians handed out the scripts in expectation of benefit for their patients. The skeptic approach might have led one to become more familiar with the drug’s pharmacological and safety profile. This could have taken the form, for example, of spending some quality time with a clinical pharmacology textbook to refresh one’s memory. Even better, one could adopt the dosing/monitoring protocol used by the randomized trial while trying to reproduce its results in his or her practice. Such a physician can probably be much less conservative in his or her assessment of effectiveness: rather than saying that the effectiveness (proportion of responders) can be any number between zero and the trial efficacy (a heuristic with which to understand the mathematics behind the rule), this physician might even expect the trial data to reflect the real world experience.

This is a crucial point and one that is rarely made: one of the causes of the apparent failure of increased efficacy to translate into increased effectiveness has to do with contextual elements of the trial that are not adopted in the real world. For spironolactone these include pre-treatment screening and frequent monitoring of renal function and potassium levels, as the post-RALES publication record reveals.

In summary, I feel that the mathematics of skepticism were in fact validated by RALES. However, one would like to do better and answer David’s objection in the general case: at what point, and how, do we dispense with the skepticism? When is initial skepticism justified? How do we combine real world experience with expectations based on trial data and a priori beliefs about a given therapy’s benefits to inform our knowledge and advise our patients?

These will be covered in follow up posts.

Medical diagnosis is a choice among narratives

June 22, 2013

The availability of narratives, i.e. experience-based, organized stories about the world, is an important tool allowing us to navigate our everyday tasks. During clinical work in particular, selecting one among many narratives is a very common task and one fraught with risk for physicians and patients alike. Consider the following, extremely common, scenario in my practice: a patient walks into a physician’s office with an elevated reading of his or her Blood Pressure (BP), asking for an evaluation. The physician’s task at this point is two-fold:

  • first, to confirm the presence of elevated BP (giving the patient the diagnosis of a hypertensive condition)
  • second, to decide whether the patient has one of the many known disease states causing high BP (this is known as Secondary Hypertension) or, finding no known cause, to give the patient the diagnosis of Essential/Idiopathic (“don’t know what is going on”) Hypertension

(more…)