## Archive for the ‘R’ Category

### Sequential Fitting Strategies For Models of short RNA Sequencing Data

June 18, 2017

After a (really long!) hiatus I am reactivating my statistical blog. The first article concerns the clarification of a point made in the manual of our recently published statistical model for short RNA sequencing data.
The background for this post, in case one wants to skip reading the manuscript (please do read it!), centers on the limitations of existing methods for the analysis of data for this very promising class of biomarkers. To overcome these limitations, our group, comprising investigators from the Division of Nephrology, University of New Mexico and the Galas Lab at Pacific Northwest Research Institute, introduced a novel method for the analysis of short RNA sequencing (sRNAseq) data. This method (RNASeqGAMLSS), which was derived from first-principles modeling of the short RNAseq process, was shown to have a number of desirable properties in an analysis of nearly 200 public and internal datasets:

1. It can quantify the intrinsic, sequencing-specific bias of sRNAseq from calibration, synthetic equimolar mixes of the target short RNAs (e.g. microRNAs)
2. It can use such estimates to correct for the bias present in experimental samples of different composition and input than the calibration runs. This in turn opens the way for the use of short RNAseq measurements in personalized medicine applications (as explained here)
3. Adapted to the problem of differential expression analysis, our method exhibited greater accuracy, higher sensitivity and specificity than six existing algorithms (DESeq2, edgeR, EBSeq, limma, DSS, voom) for the analysis of short RNA-seq data.
4. In contrast to these popular methods, which force the expression profiles to have a certain form of symmetry (equal number and magnitude of over-expressed and under-expressed sequences), our method can be used to discover global, directional changes in expression profiles which are missed by the aforementioned methods. Accounting for such a possibility may be appropriate in certain instances, in which the disease process leads to loss or gain in the number of cells of origin of the affected organ/tissue.

The proposed methodology, which is based on Generalized Additive Models for Location, Scale and Shape (GAMLSS), involves fitting simultaneous regression models for the location (mean) and the scale (dispersion) of sequence counts, using either the Negative Binomial or a particular parameterization of the Normal distribution. However, there is a price to pay for the advantages of RNASeqGAMLSS over alternatives: this comes in the form of a small (but not infinitesimal) probability[1] that the fitting algorithm will fail to execute successfully. In the manual of our method (Section 6.4) we explain that a numerically more stable way of fitting these complex models exists and should be adopted if one encounters numerical optimization errors with the default approach used in the Nucleic Acids Research (NAR) manuscript. The three steps of this sequential fitting strategy are as follows:

1. One fits a Poisson count mixed model to the RNAseq data, to obtain estimates of the relative mean expression for the different short RNA species in the expression profile
2. These estimates are used to fix the values of the mean parameter model of the RNASeqGAMLSS model while estimating the values of the dispersion parameters.
3. Finally, one uses the values of the mean (Step 1) and dispersion (Step 2) parameters to fit the general RNASeqGAMLSS model

In essence, one ignores the overdispersion (additional variability) of the short RNAseq data (Step 1) in order to guide the algorithm towards estimates of the dispersion parameters (Step 2). Finally, one uses the separate estimates of the mean (Step 1) and dispersion (Step 2) parameters as an initial point for the simultaneous estimation of both (Step 3). The reason this approach works is that the dispersion parameters do not affect the mean parameters, so the Poisson distribution of Step 1 has the same mean as the RNASeqGAMLSS model. Hence the estimates produced by this step are identical (to the limit of numerical precision) to those that would have been produced by a successful execution of the RNASeqGAMLSS optimization algorithms. Fixing these values when fitting the RNASeqGAMLSS model in Step 2 facilitates estimation of the dispersion parameters. Having very good initial guesses for these parameters virtually guarantees convergence of the third step (which is the only step in the NAR paper).

A fully worked out example is shown below (note that the data used in the NAR paper, the source code, and a manual that includes instructions to compile the RNASeqGAMLSS source code and the Win64 DLL libraries are all available in the BitBucket repository for this project).

First, we load the data and the C++ libraries, and extract the data for the two groups we would like to compare:

library(TMB) ## the TMB framework for optimization
library(lme4)
## load the DE C++ libraries
## Note about the form of data storage for use with this software
##================================================================================
## the long format should be employed when storing microRNA data for
## GAMLSS type of analyses: in this format, the data are arranged in columns:
## - column miRNA : yields the name of the miRNA
## - column reads : reads of the miRNA from a *single sample*
## - column SampleID: the sample the reads were recorded at
## this is called long format, because data from experiments are stored in
## different rows (rather than appearing as consecutive columns)
##================================================================================
## loads the data from the 286 series
## Obtain data for GAMLSS - we will compare the two ratiometric series
datRat<-subset(dat286.long,(Series=="RatioB" | Series =="RatioA") & Amount=="100 fmoles")
datRat$SampleID<-factor(datRat$SampleID)
datRat$Series<-factor(datRat$Series)

## STEP 0: PREPARE THE DATA FOR THE RNASeqGAMLSS FIT
u_X<-as.numeric(factor(datRat$miRNA)) ## maps readings to the identity of the miRNA
u_G<-as.numeric(factor(datRat$Series)) ## maps counts to group
y=datRat$reads ## extract the actual counts
X<-model.matrix(~Series,data=datRat) ## design matrix (ANOVA) for group comparisons

Secondly, we fit the Poisson model (Step 1), using the facilities of the lme4 R package:

## STEP 1: USE A POISSON MODEL TO OBTAIN ESTIMATES FOR THE MU SUBMODEL
##==========================================================================================
## fit the parameters for the mu submodel using the poisson GLM
gl<-glmer(reads~Series+(0+Series|miRNA),data=datRat,family="poisson")

Then we extract the values of these parameters and use them to fix the values of the mean submodel (Step 2):

## STEP 2: USE THE MU MODEL ESTIMATES TO FIT THE PHI SUBMODEL
##==========================================================================================
## initializes standard deviation of RE for the mu submodel
sigmu=sqrt(diag((summary(gl)[["varcor"]])[[1]]))
sigsig=rep(1,max(u_G)) ## initializes standard deviation of RE for the phi submodel
b=fixef(gl) ## initial values for the overall group means (mean submodel)
## initial values for the variation of miRNAs around their group mean (mean submodel)
u_m=as.matrix(ranef(gl)$miRNA)
## Very rough initial values for the phi submodel parameters
s_b=rep(0,ncol(X)) ## initial values for the overall group means (phi submodel)
## initial values for the variation of miRNAs around their group mean (phi submodel)
u_s= matrix(0,max(u_X),max(u_G))
## MAP list that allow us to fix some parameters to their values
MAP<-NULL
MAP[["b"]]<-factor(rep(NA,length(b)))
MAP[["u_m"]]<-factor(rep(NA,length(c(u_m))))
MAP[["sigmu"]]<-factor(rep(NA,length(sigmu)))
## construct the AD object - note that we fix the mu parameters at their values while
## estimating the phi submodel
obj.TMB<-MakeADFun(data=list(y=y,X=X,u_X=u_X,u_G=u_G),
parameters=list(b=b,s_b=s_b,u_m=u_m,u_s=u_s,
sigmu=sigmu,sigsig=sigsig),
map=MAP,DLL="LQNO_DE",random=c("u_s"),hessian=FALSE,silent=TRUE,
method="BFGS",random.start=expression(last.par.best[random]),
ADReport=TRUE)
## parameter estimation - note errors may be generated during some iterations
f.TMB<-nlminb(obj.TMB$par,obj.TMB$fn,obj.TMB$gr,
control=list(eval.max=10000,iter.max=10000),lower=-30,upper=30)
## obtain the report on the parameters to extract the fitted values of the gamlss model
rep<-sdreport(obj.TMB)
u_s=matrix(summary(rep,"random",p.value=FALSE)[,1],ncol=max(u_G))
dummy<-summary(rep,"fixed",p.value=FALSE)[,1] ## parameter estimates
s_b=dummy[1:max(u_G)]
sigsig=dummy[-(1:max(u_G))]

Finally, we refit the model letting all parameters vary:

## STEP 3: REFIT THE MODEL WITHOUT FIXING ANY PARAMETERS
##==========================================================================================
obj.TMB<-MakeADFun(data=list(y=y,X=X,u_X=u_X,u_G=u_G),
parameters=list(b=b,s_b=s_b,u_m=u_m,u_s=u_s,
sigmu=sigmu,sigsig=sigsig),
DLL="LQNO_DE",random=c("u_m","u_s"),hessian=TRUE,silent=TRUE,
method="BFGS",random.start=expression(last.par.best[random]),
ADReport=TRUE)
## scale objective by the magnitude of the deviance of the fitted Poisson model
f.TMB<-nlminb(obj.TMB$par,obj.TMB$fn,obj.TMB$gr,
control=list(eval.max=10000,iter.max=10000,scale=deviance(gl)),lower=-30,upper=30)
## obtain the report on the parameters
rep<-sdreport(obj.TMB)
## differential expression ratios, standard errors z and p-values
gamlssAD<-summary(rep,"report",p.value=TRUE)[1:nlevels(datRat$miRNA),]
rownames(gamlssAD)<-levels(datRat$miRNA)
## rownames are the miRNAs; columns are estimates, standard error, z value and pvalue

## the final estimates with their standard errors



These steps (and the RNASeqGAMLSS code) are going to be incorporated in an upcoming Bioconductor package for the analysis of short RNA sequencing data by Dr Lorena Pantano. Until this package becomes available, the aforementioned code snippets can be adapted very easily to one's own application (i.e. by changing the names of the columns corresponding to the RNA species, sample identifiers and experimental groups).

1. This is particularly likely when the underlying software implementations are not compiled against the Intel® Math Kernel Libraries.

### Estimating the mean and standard deviation from the median and the range

December 3, 2015

While preparing the data for a meta-analysis, I ran into the problem that a few of my sources did not report the outcome of interest as means and standard deviations, but rather as medians and ranges of values. After looking around, I found this interesting paper which derived (and validated through simple simulations) formulas that can be used to convert the median/range into a mean and a variance in a distribution-free fashion. With

• a = min of the data
• b = max of the data
• m = median
• n = size of the sample

the formulas are as follows:

Mean  $\bar{m} = \frac{a+2 m+b}{4} +\frac{a-2 m+b}{4 n}$

Variance  $S^2 = \frac{1}{n-1} \Big(a^2+m^2+b^2+\frac{n-3}{2} \, \frac{(a+m)^2+(m+b)^2}{4}-n \bar{m}^2 \Big)$

The following R function will carry out these calculations

## distribution-free estimates of the mean and standard deviation from
## the minimum (a), median (m), maximum (b) and sample size (n)
f<-function(a,m,b,n)
{
mn<-(a+2*m+b)/4+(a-2*m+b)/(4*n)
s<-sqrt((a*a+m*m+b*b+(n-3)*((a+m)^2+(m+b)^2)/8-n*mn*mn)/(n-1))
c(mn,s)
}
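As a quick illustrative check of the function above, one can apply it to a small simulated sample and compare the result with the sample mean and standard deviation (the numbers below are hypothetical and not from any of the meta-analysis sources):

```r
## f() as defined above (repeated here so the snippet is self-contained)
f <- function(a, m, b, n) {
  mn <- (a + 2*m + b)/4 + (a - 2*m + b)/(4*n)
  s  <- sqrt((a*a + m*m + b*b + (n - 3)*((a + m)^2 + (m + b)^2)/8 - n*mn*mn)/(n - 1))
  c(mn, s)
}

## illustrative check on a small simulated sample
set.seed(123)
x <- rnorm(15, mean = 10, sd = 2)
est <- f(min(x), median(x), max(x), length(x))
est              # approximate mean and sd recovered from median/range
c(mean(x), sd(x))  # sample moments for comparison
```

For a sample of this size the approximation should land close to the sample moments; the agreement degrades as the range becomes a poorer summary of the spread.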

Edit

Surfing around arxiv, I found another paper that handles additional scenarios and proposes alternative formulas

## The Surgeon Scorecard

July 30, 2015

Two weeks ago, the world of medical journalism was rocked by the public release of ProPublica’s Surgeon Scorecard. In this project ProPublica “calculated death and complication rates for surgeons performing one of eight elective procedures in Medicare, carefully adjusting for differences in patient health, age and hospital quality.” By making the dataset available through a user-friendly portal, the intended use of this public resource was to “use this database to know more about a surgeon before your operation”.

The criticism to date has largely focused on the potential for selection effects (as the Scorecard is based on Medicare data and does not include data from private insurers), the incomplete adjustment for confounders, the paucity of data for individual surgeons, the counting of complications and re-admission rates, decisions about risk category classification boundaries, and even data errors (ProPublica’s response, arguing that the Scorecard matters, may be found here). With a few exceptions (e.g. see Schloss’s blogpost, in which the complexity of the statistical approach is mentioned), the commentary (including my own comments on Twitter) has largely concentrated on these data-related issues.

On the other hand, the underlying statistical methodology (here and there) that powers the Scorecard has not received much attention. Therefore I undertook a series of simulation experiments to explore the impact of the statistical procedures on the inferences afforded by the Scorecard.

## The mixed model that could – a short non-technical summary of ProPublica’s approach

ProPublica’s approach to the Scorecard is based on a logistic regression model, in which individual surgeon (and hospital) performance (probability of suffering a complication) is modelled using Gaussian random effects, while patient-level characteristics that may act as confounders are adjusted for using fixed effects. In a nutshell, this approach implies fitting a model of the average complication rate that is a function of the fixed effects (e.g. patient age) for the entire universe of surgeries performed in the USA. Individual surgeon and hospital factors modify this complication rate, so that a given surgeon and hospital will have an individual rate that varies around the population average. These individual surgeon and hospital factors are constrained to follow a Gaussian (bell-shaped) distribution when analyzing complication data. After model fitting, these predicted random effects are used to quantify and compare surgical performance. A feature of mixed modeling approaches is the unavoidable shrinkage of the raw complication rate towards the population mean. Shrinkage implies that the dynamic range of the actually observed complication rates is compressed. This is readily appreciated in the figures generated by the ProPublica analytical team:
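The shrinkage phenomenon can be illustrated with a small simulated example (the numbers and model below are a hypothetical sketch, not ProPublica's code or data): the raw complication rates span a much wider range than the mixed-model predictions.

```r
library(lme4)

## simulate 100 surgeons with true complication rates centered around 4%
set.seed(1)
n_surg  <- 100
true_re <- rnorm(n_surg, 0, 0.5)                 # surgeon random effects (log-odds)
p_surg  <- plogis(qlogis(0.04) + true_re)        # per-surgeon complication probability
n_cases <- sample(20:60, n_surg, replace = TRUE) # few cases per surgeon
events  <- rbinom(n_surg, n_cases, p_surg)
dat <- data.frame(id = factor(1:n_surg), events = events, n = n_cases)

## random-intercept logistic model, one intercept per surgeon
fit <- glmer(cbind(events, n - events) ~ 1 + (1 | id), data = dat, family = binomial)

raw_rate  <- events / n_cases                        # raw complication rates
pred_rate <- plogis(fixef(fit) + ranef(fit)$id[, 1]) # model-predicted rates

range(raw_rate)   # wide
range(pred_rate)  # compressed towards the population average
```

The range of the predicted rates comes out substantially narrower than that of the raw rates, mirroring the compression the ProPublica team reports below.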

In their methodological white paper the ProPublica team notes:

“While raw rates ranged from 0.0 to 29%, the Adjusted Complication Rate goes from 1.1 to 5.7%. … shrinkage is largely a result of modeling in the first place, not due to adjusting for case mix. This shrinkage is another piece of the measured approach we are taking: we are taking care not to unfairly characterize surgeons and hospitals.”

These features should alert us that something is going on. For if a model can distort the data to such a large extent, then the model should be closely scrutinized before being accepted. In fact, given these observations,  it is possible that one mistakes the noise from the model for the information hidden in the empirical data. Or, even more likely, that one is not using the model in the most productive manner.

Note that these comments  should not be interpreted as a criticism against the use of mixed models in general, or even for the particular aspects of the Scorecard project. They are rather a call for re-examining the modeling assumptions and for gaining a better understanding of the model “mechanics of prediction” before releasing the Kraken to the world.

## The little mixed model that shouldn’t

There are many technical aspects one could potentially get wrong in a Generalized Linear Mixed Model for complication rates. Getting the shape of the random effects distribution wrong (e.g. assuming it is bell shaped when it is not) may or may not be of concern. Getting the underlying model wrong, e.g. assuming the binomial model for complication rates while a model with many more zeros (a zero-inflated model) may be more appropriate, is yet another potential problem area. However, even if these factors are not operational, one may still be misled when using the results of the model. In particular, the major area of concern for such models is the cluster size: the number of observations per individual random effect (e.g. surgeon) in the dataset. It is this factor, rather than the actual size of the dataset, that determines the precision of the individual random effects. Using a toy example, we show that the number of observations per surgeon typical of the Scorecard dataset leads to predicted random effects that may be far from their true values. This seems to stem from the non-linear nature of the logistic regression model. As we conclude in our first technical post:

• Random Effect modeling of binomial outcomes require a substantial number of observations per individual (in the order of thousands) for the procedure to yield estimates of individual effects that are numerically indistinguishable  from the true values.

Contrast this conclusion to the cluster size in the actual scorecard:

| Procedure Code | N (procedures) | N (surgeons) | Procedures per surgeon |
|----------------|----------------|--------------|------------------------|
| 51.23 | 201,351 | 21,479 | 9.37 |
| 60.5 | 78,763 | 5,093 | 15.46 |
| 60.29 | 73,752 | 7,898 | 9.34 |
| 81.02 | 52,972 | 5,624 | 9.42 |
| 81.07 | 106,689 | 6,214 | 17.17 |
| 81.08 | 102,716 | 6,136 | 16.74 |
| 81.51 | 494,576 | 13,414 | 36.87 |
| 81.54 | 1,190,631 | 18,029 | 66.04 |
| Total | 2,301,450 | 83,887 | 27.44 |

In a follow-up simulation study we demonstrate that this feature results in predicted individual effects that are non-uniformly shrunk towards their average value. This compromises the ability of mixed model predictions to separate the good from the bad “apples”.

In the second technical post, we undertook a simulation study to understand the implications of over-shrinkage for the Scorecard project. These are best understood by a numerical example from one of the simulated datasets. To understand this example one should note that the individual random effects have the interpretation of (log-) odds ratios. Hence, the difference in these random effects, when exponentiated, yields the odds ratio of suffering a complication in the hands of a good relative to a bad surgeon. By comparing these random effects for good and bad surgeons who are equally far from the mean (symmetric quantiles around the median), one can get an idea of the impact of using the predicted random effects to carry out individual comparisons.
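The True OR and quantile columns of the table below follow directly from these definitions; a minimal sketch, assuming standard-normal random effects on the log-odds scale (which is what the quantile columns imply):

```r
## log-odds random effects of a "good" surgeon and an equally extreme "bad" one
good <- -0.10
bad  <-  0.10

## percentile of each surgeon in a standard-normal random effects distribution
q_good <- 100 * pnorm(good)  # ~46.0
q_bad  <- 100 * pnorm(bad)   # ~54.0

## odds ratio of a complication in the hands of the good vs the bad surgeon
true_or <- exp(good - bad)   # ~0.819
```

The shrinkage factor in the last column is then simply the ratio of the predicted to the true odds ratio.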

| Good | Bad | Quantile (Good) | Quantile (Bad) | True OR | Pred OR | Shrinkage Factor |
|------|-----|-----------------|----------------|---------|---------|------------------|
| -0.050 | 0.050 | 48.0 | 52.0 | 0.905 | 0.959 | 1.06 |
| -0.100 | 0.100 | 46.0 | 54.0 | 0.819 | 0.920 | 1.12 |
| -0.150 | 0.150 | 44.0 | 56.0 | 0.741 | 0.883 | 1.19 |
| -0.200 | 0.200 | 42.1 | 57.9 | 0.670 | 0.847 | 1.26 |
| -0.250 | 0.250 | 40.1 | 59.9 | 0.607 | 0.813 | 1.34 |
| -0.300 | 0.300 | 38.2 | 61.8 | 0.549 | 0.780 | 1.42 |
| -0.350 | 0.350 | 36.3 | 63.7 | 0.497 | 0.749 | 1.51 |
| -0.400 | 0.400 | 34.5 | 65.5 | 0.449 | 0.719 | 1.60 |
| -0.450 | 0.450 | 32.6 | 67.4 | 0.407 | 0.690 | 1.70 |
| -0.500 | 0.500 | 30.9 | 69.1 | 0.368 | 0.662 | 1.80 |
| -0.550 | 0.550 | 29.1 | 70.9 | 0.333 | 0.635 | 1.91 |
| -0.600 | 0.600 | 27.4 | 72.6 | 0.301 | 0.609 | 2.02 |
| -0.650 | 0.650 | 25.8 | 74.2 | 0.273 | 0.583 | 2.14 |
| -0.700 | 0.700 | 24.2 | 75.8 | 0.247 | 0.558 | 2.26 |
| -0.750 | 0.750 | 22.7 | 77.3 | 0.223 | 0.534 | 2.39 |
| -0.800 | 0.800 | 21.2 | 78.8 | 0.202 | 0.511 | 2.53 |
| -0.850 | 0.850 | 19.8 | 80.2 | 0.183 | 0.489 | 2.68 |
| -0.900 | 0.900 | 18.4 | 81.6 | 0.165 | 0.467 | 2.83 |
| -0.950 | 0.950 | 17.1 | 82.9 | 0.150 | 0.447 | 2.99 |
| -1.000 | 1.000 | 15.9 | 84.1 | 0.135 | 0.427 | 3.15 |
| -1.050 | 1.050 | 14.7 | 85.3 | 0.122 | 0.408 | 3.33 |
| -1.100 | 1.100 | 13.6 | 86.4 | 0.111 | 0.390 | 3.52 |
| -1.150 | 1.150 | 12.5 | 87.5 | 0.100 | 0.372 | 3.71 |
| -1.200 | 1.200 | 11.5 | 88.5 | 0.091 | 0.356 | 3.92 |
| -1.250 | 1.250 | 10.6 | 89.4 | 0.082 | 0.340 | 4.14 |
| -1.300 | 1.300 | 9.7 | 90.3 | 0.074 | 0.325 | 4.37 |
| -1.350 | 1.350 | 8.9 | 91.1 | 0.067 | 0.310 | 4.62 |
| -1.400 | 1.400 | 8.1 | 91.9 | 0.061 | 0.297 | 4.88 |
| -1.450 | 1.450 | 7.4 | 92.6 | 0.055 | 0.283 | 5.15 |
| -1.500 | 1.500 | 6.7 | 93.3 | 0.050 | 0.271 | 5.44 |
| -1.550 | 1.550 | 6.1 | 93.9 | 0.045 | 0.259 | 5.74 |
| -1.600 | 1.600 | 5.5 | 94.5 | 0.041 | 0.247 | 6.07 |
| -1.650 | 1.650 | 4.9 | 95.1 | 0.037 | 0.236 | 6.41 |
| -1.700 | 1.700 | 4.5 | 95.5 | 0.033 | 0.226 | 6.77 |
| -1.750 | 1.750 | 4.0 | 96.0 | 0.030 | 0.216 | 7.14 |
| -1.800 | 1.800 | 3.6 | 96.4 | 0.027 | 0.206 | 7.55 |
| -1.850 | 1.850 | 3.2 | 96.8 | 0.025 | 0.197 | 7.97 |
| -1.900 | 1.900 | 2.9 | 97.1 | 0.022 | 0.188 | 8.42 |
| -1.950 | 1.950 | 2.6 | 97.4 | 0.020 | 0.180 | 8.89 |
| -2.000 | 2.000 | 2.3 | 97.7 | 0.018 | 0.172 | 9.39 |
| -2.050 | 2.050 | 2.0 | 98.0 | 0.017 | 0.164 | 9.91 |

From this table it can be seen that the predicted odds ratios are always larger (closer to one) than the true ones. The ratio of the predicted to the true odds ratio (the shrinkage factor) grows as more extreme comparisons are contemplated.

In summary, the use of random effects models with the small cluster sizes (number of observations per surgeon) seen here is likely to lead to estimates (or rather predictions) of individual effects that are smaller in magnitude than their true values. Even though one should expect the differences to decrease with larger cluster sizes, this is unlikely to happen in real world datasets (how often does one come across a surgeon who has performed thousands of operations of the same type before they retire?). Hence, the comparison of surgeon performance based on these random effect predictions is likely to be misleading due to over-shrinkage.

## Where to go from here?

ProPublica should be congratulated for taking up such an ambitious, and ultimately useful, project. However, the limitations of the adopted approach should make one very skeptical about accepting the inferences from their modeling tool. In particular, the small number of observations per surgeon limits the utility of the predicted random effects for directly comparing surgeons due to over-shrinkage. Further studies are required before one could use the results of mixed effects modeling for this application. Based on some limited simulation experiments (that we do not present here), it seems that relative rankings of surgeons may be more robust measures of surgical performance than the absolute rates used by the Scorecard. Adding my voice to that of Dr Schloss, I think it is time for an open and transparent dialogue (and possibly a “crowdsourced” research project) to better define the best measure of surgical performance given the limitations of the available data. Such a project could also explore other directions, e.g. the explicit handling of zero inflation, and even go beyond the ubiquitous bell-shaped curve. By making the R code available, I hope that someone (possibly ProPublica) who can access more powerful computational resources can perform more extensive simulations. These may better define other aspects of the modeling approach and suggest improvements in the Scorecard methodology. In the meantime, it is probably a good idea not to rely exclusively on the numerical measures of the Scorecard when picking the surgeon who will perform your next surgery.

### Empirical bias analysis of random effects predictions in linear and logistic mixed model regression

July 30, 2015

In this first technical post in the series, I conduct a numerical investigation of the bias of random effect predictions in generalized linear mixed models (GLMMs), such as the ones used in the Surgeon Scorecard. I decided to undertake two explorations: firstly, the behavior of these estimates as more and more data are gathered for each individual surgeon, and secondly, whether the limiting behavior of these estimators critically depends on the underlying GLMM family. Note that the first question directly assesses whether the random effect estimators reflect the underlying (but unobserved) “true” value of the individual practitioner effect in logistic regression models for surgical complications. The second simulation examines a separate issue: whether the non-linearity of the logistic regression model affects the convergence rate of the random effect predictions towards their true values.

For these simulations we will examine three different ranges of dataset sizes for each surgeon:

• small data (complication data from between 20-100 cases/surgeon are available)
• large data (complications from between 200-1000 cases/surgeon)
• extra large data (complications from between 1000-2000 cases/surgeon)

We simulated 200 surgeons (“random effects”) from a normal distribution with a mean of zero and a standard deviation of 0.26, while the population average complication rate was set to 5%. These numbers were chosen to reflect the range of values (average and population standard deviation) of the random effects in the Scorecard dataset, while the use of 200 random effects was a realistic compromise with the computational capabilities of the Asus Transformer T100 2-in-1 laptop/tablet that I used for these analyses.

The following code was used to simulate the logistic case for small data (the large and extra large cases were simulated by changing the values of the Nmin and Nmax variables).

library(lme4)
library(mgcv)
## helper functions
logit<-function(x) log(x/(1-x))
invlogit<-function(x) exp(x)/(1+exp(x))

## simulate cases
simcase<-function(N,p) rbinom(N,1,p)
## simulation scenario
pall<-0.05; # global average
Nsurgeon<-200; # number of surgeons
Nmin<-20; # min number of surgeries per surgeon
Nmax<-100; # max number of surgeries per surgeon

## simulate individual surgical performance
## how many simulations of each scenario
set.seed(123465); # reproducibility
ind<-rnorm(Nsurgeon,0,.26) ; # surgical random effects
logitind<-logit(pall)+ind ; # convert to logits
pind<-invlogit(logitind); # convert to probabilities
Nsim<-sample(Nmin:Nmax,Nsurgeon,replace=TRUE); # number of cases per surgeon

complications<-data.frame(ev=do.call(c,mapply(simcase,Nsim,pind,SIMPLIFY=TRUE)),
id=do.call(c,mapply(function(i,N) rep(i,N),1:Nsurgeon,Nsim)))
complications$id<-factor(complications$id)



A random effect and fixed effect model were fit to these data (the fixed effect model is simply a series of independent fits to the data for each random effect):


## Random Effects

fit2<-glmer(ev~1+(1|id),data=complications,family=binomial,nAGQ=2)
ran2<-ranef(fit2)[["id"]][,1]
c2<-cor(ran2,ind)
int2<-fixef(fit2)
ranind2<-ran2+int2

## Fixed Effects

fixfit<-vector("numeric",Nsurgeon)
for(i in 1:Nsurgeon) {
fixfit[i]<-glm(ev~1,data=subset(complications,id==i),family="binomial")$coef[1]
}

The corresponding Gaussian GLMM cases were simulated by making minor changes to this code. These are shown below:

simcase<-function(N,p) rnorm(N,p,1)
fit2<-glmer(ev~1+(1|id),data=complications,nAGQ=2)
fixfit[i]<-glm(ev~1,data=subset(complications,id==i),family="gaussian")$coef[1]



The predicted random effects were assessed against the simulated truth by smoothing regression splines. In these regressions, the intercept yields the bias of the average of the predicted random effects vis-a-vis the truth, while the slope of the regression quantifies the amount of shrinkage effected by the mixed model formulation. For unbiased estimation not only would we like the intercept to be zero, but also the slope to be equal to one. In this case, the predicted random effect would be equal to its true (simulated) value. Excessive shrinkage would result in a slope that is substantially different from one. Assuming that the bias (intercept) is not different from zero, the relaxation of the slope towards one quantifies the consistency and the bias (or rather its rate of convergence) of these estimators using simulation techniques (or so it seems to me).

The use of smoothing (flexible), rather than simple linear regression, to quantify these relationships does away with a restrictive assumption: that the amount of shrinkage is the same throughout the range of the random effects:

## smoothing spline (flexible) fit
fitg<-gam(ran2~s(ind))
## linear regression
fitl<-lm(ran2~ind)


The following figure shows the results of the flexible regression (black, with 95% CI as dashed black) vs. the linear regression (red) and the expected line (blue; intercept of zero, slope of one).

Predicted vs. simulated random effects for logistic and linear mixed regression as a function of the number of observations per random effect (cluster size)

Several observations are worth noting in this figure.
First, the flexible regression was indistinguishable from a linear regression in all cases; hence the red and black lines overlap. Stated in other terms, the amount of shrinkage was the same across the range of the random effect values.
Second, the intercept in all flexible models was (within machine precision) equal to zero. Consequently, when estimating a group of random effects their overall mean is (unbiasedly) estimated.
Third, the amount of shrinkage of individual random effects appears to be excessive for small sample sizes (i.e. few cases per surgeon). Increasing the number of cases decreases the shrinkage, i.e. the black and red lines come closer to the blue line as N is increased from 20-100 to 1000-2000. Conversely, for small cluster sizes the amount of shrinkage is so excessive that one may lose the ability to distinguish between individuals with very different complication rates. This is reflected by a regression line between the predicted and the simulated random effect value that is nearly horizontal.
Fourth, the rate of convergence of the predicted random effects to their true values critically depends upon the linearity of the regression model. In particular, the shrinkage of the logistic regression model with 1000-2000 observations per cluster is almost the same as that of a linear model with 20-100, for the parameter values considered in this simulation.

An interesting question is whether these observations (over-shrinkage of random effects from small sample sizes in logistic mixed regression) reflect the use of random effects in modeling, or whether they are simply due to the interplay between sample size and the non-linearity of the statistical model. Hence, I turned to fixed effects modeling of the same datasets. The results of these analyses are summarized in the following figure:

Difference between fixed effect estimates of random effects (black histograms) vs. random effect predictions (density estimates: red lines) relative to their simulated (true) values

One notes that the distribution of the differences between the random and fixed effects relative to the true (simulated) values is nearly identical for the linear case (second row). In other words, the implicit constraint of the mixed model offers no additional advantage when estimating individual performance in this model. On the other hand, there is value in applying mixed modeling techniques to the logistic regression case. In particular, outliers (such as those arising from small samples) are eliminated by the use of random effect modeling. The difference between the fixed and the random effect approach progressively decreases for large sample sizes, implying that the benefit of the latter approach is lost for “extra large” cluster sizes.

One way to put these differences into perspective is to realize that the random effects for the logistic model correspond to log-odds ratios relative to the population mean. Hence the difference between the predicted random effect and its true value, when exponentiated, corresponds to an Odds Ratio (OR). A summary of the odds ratios over the population of the random effects as a function of cluster size is shown below.


| Metric | 20-100 | 200-1000 | 1000-2000 |
|--------|--------|----------|-----------|
| Min    | 0.5082 | 0.6665   | 0.7938    |
| Q25    | 0.8901 | 0.9323   | 0.9536    |
| Median | 1.0330 | 1.0420   | 1.0190    |
| Mean   | 1.0530 | 1.0410   | 1.0300    |
| Q75    | 1.1740 | 1.1340   | 1.1000    |
| Max    | 1.8390 | 1.5910   | 1.3160    |
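A summary of this kind can be sketched with toy numbers (this is an illustration of the calculation, not the simulation above: `shrink` is a hypothetical shrinkage factor standing in for the mixed-model fit):

```r
## distribution of ORs implied by shrunken predictions (toy sketch)
set.seed(7)
ind    <- rnorm(200, 0, 0.26)  # "true" random effects on the log-odds scale
shrink <- 0.6                  # hypothetical shrinkage factor for small clusters
ran    <- shrink * ind         # mimic shrunken mixed-model predictions

## exponentiated prediction errors are odds ratios relative to the truth
or_err <- exp(ran - ind)
summary(or_err)
```

As in the table, the median of `or_err` sits near 1 (right on average) while the tails stray well away from 1 (wrong for individuals).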



Even though the average Odds Ratio is close to 1, a substantial number of predicted random effects are far from the true value and yield ORs that deviate by more than 11% in either direction for small cluster sizes. These observations have implications for the Scorecard (or similar projects): if one were to use random effects modeling to focus on individuals, then unless the cluster sizes (observations per individual) are substantial, one would run a substantial risk of misclassifying individuals, even though one would be right on average!

One could wonder whether these differences between the simulated truth and the predicted random effects arise as a result of the numerical algorithms of the lme4 package. The latter was used both by the Surgeon Scorecard project and in our simulations so far, and thus it would be important to verify that it performs up to spec. The major tuning variable for the algorithm is the order of the Adaptive Gaussian Quadrature (argument nAGQ). We did not find any substantial departures when the order of the quadrature was varied from 0 to 1 and 2. However, there is a possibility that the algorithm fails for all AGQ orders, as it has to calculate probabilities that are numerically close to the boundary of the parameter space. We thus decided to fit the same model from a Bayesian perspective using Markov Chain Monte Carlo (MCMC) methods. The following code fits the Bayesian model and graphs the true values of the effects used in the simulated dataset against the Bayesian estimates (the posterior means) and the lme4 predictions. The latter tend to be numerically close to the posterior mode of the random effects when a Bayesian perspective is adopted.


## Fit the mixed effects logistic model from R using openbugs

library("glmmBUGS")
library(nlme)
fitBUGS = glmmBUGS(ev ~ 1, data=complications, effects="id", family="bernoulli")
startingValues = fitBUGS$startingValues
source("getInits.R")
require(R2WinBUGS)
fitBUGSResult = bugs(fitBUGS$ragged, getInits, parameters.to.save = names(getInits()),
model.file="model.txt", n.chain=3, n.iter=6000, n.burnin=2000, n.thin=10,
program="openbugs", working.directory=getwd())

fitBUGSParams = restoreParams(fitBUGSResult, fitBUGS$ragged)
sumBUGS<-summaryChain(fitBUGSParams)
checkChain(fitBUGSParams)
## extract random effects
cnames<-as.character(sort(as.numeric(row.names(sumBUGS$FittedRid))))
fitBUGSRE<-sumBUGS$Rid[cnames,1]
## plot against the simulated (true) effects and the lme4 estimates
hist(ind,xlab="RE",ylim=c(0,3.8),freq=FALSE,main="")
lines(density(fitBUGSRE),main="Bayesian",xlab="RE",col="blue")
lines(density(ran2),col="red")
legend(legend=c("Truth","lme4","MCMC"),col=c("black","red","blue"),
bty="n",x=0.2,y=3,lwd=1)

The following figure shows the histogram of the true values of the random effects (black), the frequentist (lme4) estimates (red) and the Bayesian posterior means (blue). It can be appreciated that both the Bayesian estimates and the lme4 predictions demonstrate considerable shrinkage relative to the true values for small cluster sizes (20-100). Hence, an lme4 numerical quirk seems an unlikely explanation for the shrinkage observed in the simulation.

Summing up:

• Random effects modeling of binomial outcomes requires a substantial number of observations per individual (cluster size) for the procedure to yield estimates of individual effects that are numerically indistinguishable from the true values
• Fixed effects modeling is an even worse approach for this problem
• Bayesian fitting procedures do not appear to yield numerically different estimates from their frequentist counterparts

These features should raise the barrier for accepting a random effects logistic modeling approach when the focus is on individual, rather than population average, effects. Even though the procedure is certainly preferable to fixed effects regression, the direct use of the predicted individual random effects as an effect measure will be problematic for small cluster sizes (e.g. a small number of procedures per surgeon). In particular, a substantial proportion of these estimated effects is likely to be far from the truth even if the model is unbiased on average.
These observations are of direct relevance to the Surgical Score Card, in which the number of observations per surgeon was far lower than the average values in our simulations of 60 (small), 600 (large) and 1500 (extra large):

Procedure Code  N (procedures)  N (surgeons)  Procedures per surgeon
51.23                  201,351        21,479                    9.37
60.5                    78,763         5,093                   15.46
60.29                   73,752         7,898                    9.34
81.02                   52,972         5,624                    9.42
81.07                  106,689         6,214                   17.17
81.08                  102,716         6,136                   16.74
81.51                  494,576        13,414                   36.87
81.54                1,190,631        18,029                   66.04
Total                2,301,450        83,887                   27.44

### Survival Analysis With Generalized Additive Models: Part V (stratified baseline hazards)

May 9, 2015

In the fifth part of this series we will examine the capabilities of Poisson GAMs to stratify the baseline hazard for survival analysis. In a stratified Cox model, the baseline hazard is not the same for all individuals in the study. Rather, it is assumed that the baseline hazard may differ between members of different groups, even though it will be the same for members of the same group. Stratification is one of the ways one may address the violation of the proportionality assumption for a categorical covariate in the Cox model. The stratified Cox model resolves the overall hazard in the study as:

$h_{g}(t,X) = h_{0_{g}}(t)\exp(\boldsymbol{x\beta}), \quad g=1,2,\dotsc,K$

In the logarithmic scale, the multiplicative model for the stratified baseline hazard becomes an additive one. In particular, the specification of a different baseline hazard for the different levels of a factor amounts to specifying an interaction between the factor and the smooth baseline hazard in the PGAM.

We turn to the PBC dataset to provide an example of a stratified analysis with either the Cox model or the PGAM. In that dataset the covariate edema is a categorical variable assuming the values of 0 (no edema), 0.5 (untreated or successfully treated) and 1 (edema despite treatment).
An analysis of the Schoenfeld residual test shows that this covariate violates the proportionality assumption:

> f<-coxph(Surv(time,status)~trt+age+sex+factor(edema),data=pbc)
> Schoen<-cox.zph(f)
> Schoen
                       rho    chisq      p
trt              -0.089207 1.12e+00 0.2892
age              -0.000198 4.72e-06 0.9983
sexf             -0.075377 7.24e-01 0.3950
factor(edema)0.5 -0.202522 5.39e+00 0.0203
factor(edema)1   -0.132244 1.93e+00 0.1651
GLOBAL                  NA 8.31e+00 0.1400

To fit a stratified GAM model, we should transform the dataset to include additional variables, one for each level of the edema covariate. To make the PGAM directly comparable to the stratified Cox model, we have to fit the former without an intercept term. This requires that we include additional dummy variables for any categorical covariates that we would like to adjust our model for. In this particular case, the only other such covariate is female gender:

pbcGAM<-transform(pbcGAM,edema0=as.numeric(edema==0),
edema05=as.numeric(edema==0.5),edema1=as.numeric(edema==1),
sexf=as.numeric(sex=="f"))

Then the stratified Cox and PGAM models are fit as:

fGAM<-gam(gam.ev~s(stop,bs="cr",by=edema0)+s(stop,bs="cr",by=edema05)+
s(stop,bs="cr",by=edema1)+trt+age+sexf+offset(log(gam.dur))-1,
data=pbcGAM,family="poisson",scale=1,method="REML")

fs<-coxph(Surv(time,status)~trt+age+sex+strata(edema),data=pbc)

In general the values of the covariates of the stratified Cox and PGAM models are similar, with the exception of the trt variable.
However, the standard error of this variable estimated by either model is so large that the estimates are statistically no different from zero, despite their difference in magnitude:

> fs
Call:
coxph(formula = Surv(time, status) ~ trt + age + sex + strata(edema),
    data = pbc)

        coef exp(coef) se(coef)     z       p
trt   0.0336     1.034  0.18724  0.18 0.86000
age   0.0322     1.033  0.00923  3.49 0.00048
sexf -0.3067     0.736  0.24314 -1.26 0.21000

Likelihood ratio test=15.8 on 3 df, p=0.00126
n= 312, number of events= 125 (106 observations deleted due to missingness)

> summary(fGAM)

Family: poisson
Link function: log

Formula:
gam.ev ~ s(stop, bs = "cr", by = edema0) + s(stop, bs = "cr",
    by = edema05) + s(stop, bs = "cr", by = edema1) + trt + age +
    sexf + offset(log(gam.dur)) - 1

Parametric coefficients:
      Estimate Std. Error z value Pr(>|z|)
trt   0.002396   0.187104   0.013 0.989782
age   0.033280   0.009170   3.629 0.000284 ***
sexf -0.297481   0.240578  -1.237 0.216262
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Approximate significance of smooth terms:
                  edf Ref.df Chi.sq p-value
s(stop):edema0  2.001  2.003  242.0  <2e-16 ***
s(stop):edema05 2.001  2.001  166.3  <2e-16 ***
s(stop):edema1  2.000  2.001  124.4  <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

R-sq.(adj) = -0.146   Deviance explained = -78.4%
REML score = 843.96   Scale est. = 1   n = 3120

### Survival Analysis With Generalized Additive Models : Part IV (the survival function)

May 3, 2015

The ability of PGAMs to estimate the log-baseline hazard rate endows them with the capability to be used as smooth alternatives to the Kaplan Meier curve. If we assume for the sake of simplicity that there are no proportional covariates in the PGAM regression, then the quantity modeled corresponds to the log-hazard of the survival function.
Note that the only assumptions made by the PGAM are that a) the log-hazard is a smooth function, b) with a given maximum complexity (number of degrees of freedom) and c) with continuous second derivatives. A PGAM provides estimates of the log-hazard constant, $\beta_{0}$, and the time-varying deviation, $\lambda(t_{i,j})$. These can be used to predict the value of the survival function, $S(t)$, by approximating the integral appearing in the definition of $S(t)$ by numerical quadrature:

$S(t_{i})=\exp\left(-\int_{0}^{t_{i}}h(t)\mathrm{\, d}t\right)\approx\exp\left(-\sum_{j=1}^{N_{i}}w_{i,j}\exp(\beta_{0}+\lambda(t_{i,j}))\right)$

From the above definition it is obvious that the value of the survival distribution at any given time point is a non-linear function of the PGAM estimates. Consequently, the predicted survival value, $S_{pred}(t)$, cannot be derived in closed form; as with all non-linear PGAM estimates, a simple Monte Carlo simulation algorithm may be used to derive both the expected value of $\hat{S}_{pred}(t)$ and its uncertainty. For the case of the survival function, the simulation steps are provided in the Appendix (Section A3) of our paper.

The following R function can be used to predict the survival function and an associated confidence interval at a grid of time points. It accepts as arguments a) the vector of time points, b) a PGAM object for the fitted log-hazard function, c) a list with the nodes and weights of a Gauss-Lobatto rule for the integration of the predicted survival, d) the number of Monte Carlo samples to obtain and, optionally, e) the seed of the random number generator. Of note, the order of the quadrature used to predict the survival function is not necessarily the same as the order used to fit the log-hazard function.
## Calculate survival and confidence interval over a grid of points
## using a GAM
SurvGAM<-Vectorize(function(t,gm,gl.rule,CI=0.95,Nsim=1000,seed=0)
## t       : time at which to calculate relative risk
## gm      : gam model for the fit
## gl.rule : GL rule (list of weights and nodes)
## CI      : CI to apply
## Nsim    : Number of replicates to draw
## seed    : RNG seed
{
q<-(1-CI)/2.0
## create the nonlinear contrast
pdfnc<-data.frame(stop=t,gam.dur=1)
L<-length(gl.rule$x)
start<-0; ## only for right cens data
## map the weights from [-1,1] to [start,t]
gl.rule$w<-gl.rule$w*0.5*(t-start)
## expand the dataset
df<-Survdataset(gl.rule,pdfnc,fu=1)
## linear predictor at each node
Xp <- predict(gm,newdata=df,type="lpmatrix")
## draw samples
set.seed(seed)
br <- rmvn(Nsim,coef(gm),gm$Vp)
res1<-rep(0,Nsim)
for(i in 1:Nsim){
## hazard function at the nodes
hz<-exp(Xp%*%br[i,])
## cumulative hazard
chz1<-gl.rule$w %*% hz[1:L,]
##survival
res1[i]<-exp(-chz1)
}
ret<-data.frame(t=t,S=mean(res1),
LCI=quantile(res1,prob=q),
UCI=quantile(res1,prob=1-q))
ret

},vectorize.args=c("t"))

The function makes use of another function, Survdataset, that expands internally the vector of time points into a survival dataset. This dataset is used to obtain predictions of the log-hazard function by calling the predict function from the mgcv package.

## Function that expands a prediction dataset
## so that a GL rule may be applied
## Used in num integration when generating measures of effect
Survdataset<-function(GL,data,fu)
## GL  : Gauss Lobatto rule
## data: survival data
##   fu: column number containing fu info

{
## append artificial ID in the set
data$id<-1:nrow(data)
Gllx<-data.frame(stop=rep(GL$x,length(data$id)),
t=rep(data[,fu],each=length(GL$x)),
id=rep(data$id,each=length(GL$x)),
start=0)
## Change the final indicator to what
## was observed, map node positions,
## weights from [-1,1] back to the
## study time
Gllx<-transform(Gllx,
stop=0.5*(stop*(t-start)+(t+start)))
## now merge the remaining covariate info

Gllx<-merge(Gllx,data[,-c(fu)])
nm<-match(c("t","start","id"),colnames(Gllx))
Gllx[,-nm]
}
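The affine map inside Survdataset, which sends the Gauss-Lobatto reference interval $[-1,1]$ to a subject's follow-up interval $[start,t]$, is easy to check in isolation. A minimal sketch (hard-coding the 5-point node positions for illustration):

```r
## Minimal check of the node mapping used above: the transform
## 0.5*(x*(t-start)+(t+start)) sends -1 to start and +1 to t.
## The node vector below is the 5-point Gauss-Lobatto rule on [-1,1].
x <- c(-1, -0.6546537, 0, 0.6546537, 1)
start <- 0; t <- 400             ## e.g. a subject followed for 400 days
stop <- 0.5 * (x * (t - start) + (t + start))
stop                             ## nodes now live on the study time scale
```

The first node lands exactly on study entry and the last on the follow-up time, so observed lifetimes are always mapped to a node of the integration scheme.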

The ability to draw samples from the multivariate normal distribution corresponding to the model estimates and its covariance matrix is provided by another function, rmvn:

## function that draws multivariate normal random variates with
## a given mean vector and covariance matrix
##    n : number of samples to draw
##   mu : mean vector
##  sig : covariance matrix
rmvn <- function(n,mu,sig) { ## MVN random deviates
L <- mroot(sig);m <- ncol(L);
t(mu + L%*%matrix(rnorm(m*n),m,n))
}
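rmvn relies on mgcv's mroot to obtain a matrix square root of the covariance. A quick sanity check of the sampler's logic (a sketch substituting base R's chol for mroot, which is interchangeable for a positive definite covariance) confirms that the sample moments approach the requested ones:

```r
## Sanity check of the multivariate normal sampler: with a Cholesky
## factor in place of mgcv::mroot (equivalent for positive definite sig),
## the sample mean and covariance should approach mu and sig.
rmvn.chol <- function(n, mu, sig) {
  L <- t(chol(sig)); m <- ncol(L)        ## lower-triangular root: L %*% t(L) = sig
  t(mu + L %*% matrix(rnorm(m * n), m, n))
}
set.seed(0)
mu  <- c(1, -2)
sig <- matrix(c(2, 0.5, 0.5, 1), 2, 2)
X <- rmvn.chol(1e5, mu, sig)
colMeans(X)   ## close to (1, -2)
cov(X)        ## close to sig
```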

To illustrate the use of these functions we revisit the PBC example from the 2nd part of this blog series. Firstly, let’s obtain a few Gauss-Lobatto lists of weights/nodes for the integration of the survival function:


## Obtain a few Gauss Lobatto rules to integrate the survival
## distribution
GL5<-GaussLobatto(5);
GL10<-GaussLobatto(10);
GL20<-GaussLobatto(20);

Subsequently, we fit the log-hazard rate to the coarsely (5 nodes) and more finely discretized (using a 10 point Gauss Lobatto rule) versions of the PBC dataset, created in Part 2. The third command obtains the Kaplan Meier estimate in the PBC dataset.


fSurv1<-gam(gam.ev~s(stop,bs="cr")+offset(log(gam.dur)),
data=pbcGAM,family="poisson",scale=1,method="REML")
fSurv2<-gam(gam.ev~s(stop,bs="cr")+offset(log(gam.dur)),
data=pbcGAM2,family="poisson",scale=1,method="REML")

KMSurv<-survfit(Surv(time,status)~1,data=pbc)


We obtained survival probability estimates for the 6 combinations of time discretization for fitting (either a 5th or 10th order Lobatto rule) and prediction (a 5th, 10th or 20th order rule):

t<-seq(0,4500,100)
s1<-SurvGAM(t,fSurv1,GL5)
s2<-SurvGAM(t,fSurv1,GL10)
s3<-SurvGAM(t,fSurv1,GL20)
s4<-SurvGAM(t,fSurv2,GL5)
s5<-SurvGAM(t,fSurv2,GL10)
s6<-SurvGAM(t,fSurv2,GL20)

In all cases 1000 Monte Carlo samples were obtained for the calculation of survival probability estimates and their pointwise 95% confidence intervals. We can plot these against the Kaplan Meier curve estimates:

par(mfrow=c(2,3))
plot(KMSurv,xlab="Time (days)",ylab="Surv Prob",ylim=c(0.25,1),main="Fit(GL5)/Predict(GL5)")
lines(s1[1,],s1[2,],col="blue",lwd=2)
lines(s1[1,],s1[3,],col="blue",lwd=2,lty=2)
lines(s1[1,],s1[4,],col="blue",lwd=2,lty=2)

plot(KMSurv,xlab="Time (days)",ylab="Surv Prob",ylim=c(0.25,1),main="Fit(GL5)/Predict(GL10)")
lines(s2[1,],s2[2,],col="blue",lwd=2)
lines(s2[1,],s2[3,],col="blue",lwd=2,lty=2)
lines(s2[1,],s2[4,],col="blue",lwd=2,lty=2)

plot(KMSurv,xlab="Time (days)",ylab="Surv Prob",ylim=c(0.25,1),main="Fit(GL5)/Predict(GL20)")
lines(s3[1,],s3[2,],col="blue",lwd=2)
lines(s3[1,],s3[3,],col="blue",lwd=2,lty=2)
lines(s3[1,],s3[4,],col="blue",lwd=2,lty=2)

plot(KMSurv,xlab="Time (days)",ylab="Surv Prob",ylim=c(0.25,1),main="Fit(GL10)/Predict(GL5)")
lines(s4[1,],s4[2,],col="blue",lwd=2)
lines(s4[1,],s4[3,],col="blue",lwd=2,lty=2)
lines(s4[1,],s4[4,],col="blue",lwd=2,lty=2)

plot(KMSurv,xlab="Time (days)",ylab="Surv Prob",ylim=c(0.25,1),main="Fit(GL10)/Predict(GL10)")
lines(s5[1,],s5[2,],col="blue",lwd=2)
lines(s5[1,],s5[3,],col="blue",lwd=2,lty=2)
lines(s5[1,],s5[4,],col="blue",lwd=2,lty=2)

plot(KMSurv,xlab="Time (days)",ylab="Surv Prob",ylim=c(0.25,1),main="Fit(GL10)/Predict(GL20)")
lines(s6[1,],s6[2,],col="blue",lwd=2)
lines(s6[1,],s6[3,],col="blue",lwd=2,lty=2)
lines(s6[1,],s6[4,],col="blue",lwd=2,lty=2)



Survival probability estimates: Kaplan Meier curve (black) v.s. the PGAM estimates for different orders of Gauss Lobatto (GL) quadrature

Overall, there is a close agreement between the Kaplan Meier estimate and the PGAM estimates despite the different function spaces in which the corresponding estimators "live": the space of all piecewise constant functions (KM) v.s. that of smooth functions with bounded, continuous second derivatives (PGAM). Furthermore, the 95% confidence interval of each estimator (dashed lines) contains the expected value of the other estimator. This suggests that there is no systematic difference between the KM and the PGAM survival estimators. This was confirmed in simulated datasets (see Fig 2 in our PLoS One paper).

### Survival Analysis With Generalized Additive Models : Part III (the baseline hazard)

May 2, 2015

In the third part of the series on survival analysis with GAMs we will review the use of the baseline hazard estimates provided by this regression model. In contrast to the Cox model, the log-baseline hazard is estimated along with other quantities (e.g. the log hazard ratios) by the Poisson GAM (PGAM) as:

$log(h(t_{i,j})) = \beta_{0}+\lambda(t_{i,j})+\boldsymbol{x\beta} = \lambda_{I}(t_{i,j})+\boldsymbol{x\beta}$

In the aforementioned expression, the baseline hazard is equivalently modeled as a time-varying deviation ($\lambda(t)$) from a constant (the intercept $\beta_{0}$) , or as a time-varying function ($\lambda_{I}(t)$). In the latter case, the constant is absorbed into the smooth term. The choice between these equivalent forms is dictated by the application at hand; in particular, the intercept may be switched on or off by centering the smooth terms appearing in the call to the gam function. Hence, in the PGAM formulation the log-baseline hazard is yet another covariate that one estimates by a smooth function; other covariates may modify this hazard in a proportional fashion by additively shifting the log-baseline hazard ($\boldsymbol{x\beta}$).

In the “standard” way of fitting a PGAM by mgcv, the log-baseline hazard is estimated in the constant+deviation form. Exponentiation may be used to derive the baseline hazard and its standard errors. Continuing the analysis of the Primary Biliary Cirrhosis example from the second part of the series, we may write:

par(mfrow=c(2,2))
plot(fGAM,main="Gauss Lobatto (5)",ylab="log-basehaz")
plot(fGAM2,main="Gauss Lobatto (10)",ylab="log-basehaz")
plot(fGAM,main="Gauss Lobatto (5)",ylab="basehaz",trans=exp)
plot(fGAM2,main="Gauss Lobatto (10)",ylab="basehaz",trans=exp)

Log-baseline (top row) and baseline (second row) hazard function in the PBC dataset for two different discretizations of the data. In all these cases, the baseline hazard (or its log) is given as a time-varying deviation from a constant (the value of the log-hazard where the confidence interval vanishes)

There is no substantial difference in the estimates obtained by the coarse (Gauss Lobatto (5)) and finer (Gauss Lobatto (10)) discretizations. Note that, as a result of fitting the log-hazard as a constant + time-varying deviation, the standard error of the curve vanishes at ~1050: the value of the log-hazard at that instant in events per unit time is provided by the intercept term.

Estimation of the log-baseline hazard allows the PGAM to function as a parametric, smooth alternative to the Kaplan Meier estimator. This will be examined in the fourth part of this series.

### Survival Analysis With Generalized Additive Models : Part II (time discretization, hazard rate integration and calculation of hazard ratios)

May 2, 2015

In the second part of the series we will consider the time discretization that makes the Poisson GAM approach to survival analysis possible.

Consider a set of $M$ individual observations at times $\mathcal{F}=\left\{ F_{i}\right\} _{i=1}^{M}$, with censoring indicators $\mathcal{D}=\left\{ \delta_{i}\right\} _{i=1}^{M}$ assuming the value of 0 if the corresponding observation was censored and 1 otherwise. Under the assumption of non-informative censoring, the likelihood of the sample is given by:

$L=\prod_{i=1}^{M}f(F_{i})^{\delta_{i}}S(F_{i})^{1-\delta_{i}}= \prod_{i=1}^{M}h(F_{i})^{\delta_{i}}\exp\left(-\int_{0}^{F_{i}}h(t)\mathrm{\, d}t\right)$

where $h(t)$ is the hazard function. By using an interpolatory quadrature rule, one may substitute the integral with a weighted sum evaluated at a distinct number of nodes.

$L=\prod_{i=1}^{M}\prod_{j=1}^{N_{i}}h(t_{i,j})^{d_{i,j}}\exp\left(-w_{i,j}h(t_{i,j})\right)$

where $t_{i,j}$ and $w_{i,j}$ are the nodes and weights of the integration rule and $d_{i,j}$ is an indicator variable equal to 1 if the corresponding node corresponds to an event time and zero otherwise. By including additional "pseudo-observations" at the nodes of the quadrature rule, we convert the survival likelihood to the kernel of a Poisson regression with variable exposures (weights). Conditional on the adoption of an efficient quadrature rule, this is a highly accurate approximation:

Bounds of the Gauss Lobatto (GL) approximation error for the integration of survival data (MST=Mean Survival Time).

In order for the construct to work one has to ensure that the corresponding lifetimes are mapped to a node of the integration scheme. In our paper, this was accomplished by the adoption of the Gauss-Lobatto rule. The nodes and weights of the Gauss-Lobatto rule (which is defined on the interval $[-1,1]$) depend on the Legendre polynomials in a complex way. The following R function will calculate the nodes and weights for the N-th order Gauss-Lobatto rule:

GaussLobatto<-function(N)
{
N1<-N
N<-N-1
x<-cos(pi*(0:N)/N)
P<-matrix(0,N1,N1)
xold<-2
while (max(abs(x-xold))>2.22044604925031e-16) {
xold<-x
P[,1]<-1
P[,2]<-x

for (k in 2:N) {
P[,k+1]=( (2*k-1)*x*P[,k]-(k-1)*P[,k-1] )/k;
}

x<-xold-( x*P[,N1]-P[,N] )/( N1*P[,N1] )

}

w<-2./(N*N1*P[,N1]^2);
ret<-list(x=rev(x),w=w)
attr(ret,"order")<-N
ret
}

which can be called to return a list of the nodes and their weights:

> GaussLobatto(5)
$x
[1] -1.0000000 -0.6546537  0.0000000  0.6546537  1.0000000

$w
[1] 0.1000000 0.5444444 0.7111111 0.5444444 0.1000000

attr(,"order")
[1] 4
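Before using the rule on survival data, it is worth verifying its accuracy on a known integral. An N-point Gauss-Lobatto rule integrates polynomials up to degree 2N-3 exactly; the sketch below hard-codes the 5-point nodes and weights printed above and checks that $\int_{0}^{2}t^{3}\,\mathrm{d}t=4$ is recovered after mapping the rule from $[-1,1]$ to $[0,2]$:

```r
## Verify the 5-point Gauss-Lobatto rule on a known integral.
## Nodes/weights on [-1,1], to the printed precision:
x <- c(-1, -0.6546537, 0, 0.6546537, 1)
w <- c(0.1, 0.5444444, 0.7111111, 0.5444444, 0.1)
a <- 0; b <- 2
t.nodes  <- 0.5 * (x * (b - a) + (b + a))  ## map nodes to [a,b]
w.mapped <- 0.5 * (b - a) * w              ## rescale weights
approx.int <- sum(w.mapped * t.nodes^3)    ## should equal 2^4/4 = 4
approx.int
```

This is the same node/weight mapping that the dataset expansion function below applies to each subject's follow-up interval.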

To prepare a survival dataset for GAM fitting, one needs to call this function to obtain a Gauss Lobatto rule of the required order. Once this has been obtained, the following R function will expand the (right-censored) dataset to include the pseudo-observations at the nodes of the quadrature rule:


GAMSurvdataset<-function(GL,data,fu,d)
## GL : Gauss Lobatto rule
## data: survival data
## fu: column number containing fu info
## d: column number with event indicator
{
## append artificial ID in the set
data$id<-1:nrow(data)
Gllx<-data.frame(stop=rep(GL$x,length(data$id)),
gam.dur=rep(GL$w,length(data$id)),
t=rep(data[,fu],each=length(GL$x)),
ev=rep(data[,d],each=length(GL$x)),
id=rep(data$id,each=length(GL$x)),
gam.ev=0,start=0)
## Change the final indicator to what
## was observed, map node positions,
## weights from [-1,1] back to the
## study time
Gllx<-transform(Gllx,
gam.ev=as.numeric((gam.ev | ev)*I(stop==1)),
gam.dur=0.5*gam.dur*(t-start),
stop=0.5*(stop*(t-start)+(t+start)))
## now merge the remaining covariate info
Gllx<-merge(Gllx,data[,-c(fu,d)])
Gllx
}

We illustrate the use of these functions on the Primary Biliary Cirrhosis dataset that comes with R:

> data(pbc)
> ## Change transplant to alive
> pbc$status[pbc$status==1]<-0
> ## Change event code of death(=2) to 1
> pbc$status[pbc$status==2]<-1
>
> head(pbc)
  id time status trt      age sex ascites hepato spiders edema bili chol albumin copper
1  1  400      1   1 58.76523   f       1      1       1   1.0 14.5  261    2.60    156
2  2 4500      0   1 56.44627   f       0      1       1   0.0  1.1  302    4.14     54
3  3 1012      1   1 70.07255   m       0      0       0   0.5  1.4  176    3.48    210
4  4 1925      1   1 54.74059   f       0      1       1   0.5  1.8  244    2.54     64
5  5 1504      0   2 38.10541   f       0      1       1   0.0  3.4  279    3.53    143
6  6 2503      1   2 66.25873   f       0      1       0   0.0  0.8  248    3.98     50
  alk.phos    ast trig platelet protime stage
1   1718.0 137.95  172      190    12.2     4
2   7394.8 113.52   88      221    10.6     3
3    516.0  96.10   55      151    12.0     4
4   6121.8  60.63   92      183    10.3     4
5    671.0 113.15   72      136    10.9     3
6    944.0  93.00   63       NA    11.0     3
>
> GL<-GaussLobatto(5)
> pbcGAM<-GAMSurvdataset(GL,pbc,2,3)
> head(pbcGAM)
  id      stop  gam.dur    t ev gam.ev start trt      age sex ascites hepato spiders
1  1   0.00000  20.0000  400  1      0     0   1 58.76523   f       1      1       1
2  1  69.06927 108.8889  400  1      0     0   1 58.76523   f       1      1       1
3  1 200.00000 142.2222  400  1      0     0   1 58.76523   f       1      1       1
4  1 330.93073 108.8889  400  1      0     0   1 58.76523   f       1      1       1
5  1 400.00000  20.0000  400  1      1     0   1 58.76523   f       1      1       1
6  2   0.00000 225.0000 4500  0      0     0   1 56.44627   f       0      1       1
  edema bili chol albumin copper alk.phos    ast trig platelet protime stage
1     1 14.5  261    2.60    156   1718.0 137.95  172      190    12.2     4
2     1 14.5  261    2.60    156   1718.0 137.95  172      190    12.2     4
3     1 14.5  261    2.60    156   1718.0 137.95  172      190    12.2     4
4     1 14.5  261    2.60    156   1718.0 137.95  172      190    12.2     4
5     1 14.5  261    2.60    156   1718.0 137.95  172      190    12.2     4
6     0  1.1  302    4.14     54   7394.8 113.52   88      221    10.6     3
>
> dim(pbc)
[1] 418  20
> dim(pbcGAM)
[1] 2090  24

The original (pbc) dataset has been expanded to include the pseudo-observations at the nodes of the Lobatto rule. There are multiple records (5 per individual in this particular case), as can be seen by examining the data for the first patient (id=1). The corresponding times are found in the variable stop, their associated weights in the variable gam.dur and the event indicators in the column gam.ev. Note that nodes and weights are expressed on the scale of the survival dataset, not on the scale of the Lobatto rule ($[-1,1]$).

To fit the survival dataset one needs to load the mgcv package and fit a Poisson GAM, using a flexible (penalized spline) for the log-hazard rate function. The following code will obtain an adjusted (for age and sex) hazard ratio using the PGAM or the Cox model:

> library(survival) ## for coxph
> library(mgcv)     ## for mgcv
>
> ## Prop Hazards Modeling with PGAM
> fGAM<-gam(gam.ev~s(stop,bs="cr")+trt+age+sex+offset(log(gam.dur)),
+ data=pbcGAM,family="poisson",scale=1,method="REML")
>
> ## Your Cox Model here
> f<-coxph(Surv(time,status)~trt+age+sex,data=pbc)
>
> summary(fGAM)

Family: poisson
Link function: log

Formula:
gam.ev ~ s(stop, bs = "cr") + trt + age + sex + offset(log(gam.dur))

Parametric coefficients:
              Estimate Std. Error z value Pr(>|z|)
(Intercept) -10.345236   0.655176 -15.790  < 2e-16 ***
trt           0.069546   0.181779   0.383    0.702
age           0.038488   0.008968   4.292 1.77e-05 ***
sexf         -0.370260   0.237726  -1.558    0.119
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Approximate significance of smooth terms:
          edf Ref.df Chi.sq p-value
s(stop) 1.008  1.015  4.186  0.0417 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

R-sq.(adj) = -0.249   Deviance explained = 2.25%
-REML = 693.66   Scale est. = 1   n = 1560

> f
Call:
coxph(formula = Surv(time, status) ~ trt + age + sex, data = pbc)

        coef exp(coef) se(coef)      z       p
trt   0.0626     1.065    0.182  0.344 7.3e-01
age   0.0388     1.040    0.009  4.316 1.6e-05
sexf -0.3377     0.713    0.239 -1.414 1.6e-01

Likelihood ratio test=22.5 on 3 df, p=5.05e-05
n= 312, number of events= 125 (106 observations deleted due to missingness)

The estimates for the log-hazard ratios of the three covariates (trt, age, and female gender) are numerically very close. Any numerical differences reflect the different assumptions made about the baseline hazard: flexible spline (PGAM) v.s. piecewise exponential (Cox). Increasing the number of nodes of the Lobatto rule does not materially affect the estimates of the PGAM:

> GL<-GaussLobatto(10)
> pbcGAM2<-GAMSurvdataset(GL,pbc,2,3)
> fGAM2<-gam(gam.ev~s(stop,bs="cr")+trt+age+sex+offset(log(gam.dur)),
+ data=pbcGAM2,family="poisson",scale=1,method="REML")
>
> summary(fGAM2)

Family: poisson
Link function: log

Formula:
gam.ev ~ s(stop, bs = "cr") + trt + age + sex + offset(log(gam.dur))

Parametric coefficients:
              Estimate Std. Error z value Pr(>|z|)
(Intercept) -10.345288   0.655177 -15.790  < 2e-16 ***
trt           0.069553   0.181780   0.383    0.702
age           0.038487   0.008968   4.292 1.77e-05 ***
sexf         -0.370340   0.237723  -1.558    0.119
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Approximate significance of smooth terms:
          edf Ref.df Chi.sq p-value
s(stop) 1.003  1.005  4.163  0.0416 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

R-sq.(adj) = -0.124   Deviance explained = 1.7%
-REML = 881.67   Scale est. = 1   n = 3120

Nevertheless, the estimates of the "baseline log-hazard" become more accurate (decreased standard errors and increased significance of the smooth term) as the number of nodes increases. In simulations (see Fig 3) we show that the estimates of the hazard ratio generated by the GAM are comparable in bias, variance and coverage to those obtained by the Cox model.
Even though this is an important benchmark for the proposed method, it does not provide a compelling reason for replacing the Cox model with the PGAM. In fact, the advantages of the PGAM will only become apparent once we consider contexts which depend on the baseline hazard function, or problems in which the proportionality of hazards assumption is violated. So stay tuned.

### Survival Analysis With Generalized Additive Models : Part I (background and rationale)

May 1, 2015

After a really long break, I will resume my blogging activity. It is actually a full circle for me, since one of the first posts that kick-started this blog matured enough to be published in a peer-reviewed journal last week. In the next few posts I will use the R code included to demonstrate the survival fitting capabilities of Generalized Additive Models (GAMs) in real world datasets. The first post in this series will summarize the background, rationale and expected benefits to be realized by adopting GAMs for survival analysis.

In a nutshell, the basic ideas of the GAM approach to survival analysis are the following:

1. One approximates the integral defining the survival function as a discrete sum using a quadrature rule
2. One evaluates the likelihood at the nodes of the aforementioned quadrature rule
3. A regression model is postulated for the log-hazard rate function
4. As a result of 1-3 the survival regression problem is transformed into a Poisson regression one
5. If penalized regression is used to fit the regression model, then GAM fitting software may be used for survival analysis

Ideas along the lines of 1-4 have been re-surfacing in the literature ever since the Proportional Hazards Model was described. The mathematical derivations justifying Steps 1-4 are straightforward to follow and are detailed in the PLoS paper. The corresponding derivations for the Cox model are also described in a previous post.
Developments such as 1-4 were important in the first 10 years of the Cox model, since there were no off-the-shelf implementations of the partial (profile) likelihood approach. This limited the practical scope of proportional hazards modeling and set off a parallel computational line of research into how one could use other statistical software libraries to fit the Cox model. In fact, the first application (known to the author) of a proportional hazards model to the analysis of a National Institutes of Health (NIH) randomized controlled trial used software implementing a Poisson regression to calculate the hazard ratio. The trial was the NCDS trial, which examined adequacy indices for the hemodialysis prescription (the description of the software was published 6 months prior to the clinical paper). Many of these efforts were computationally demanding and died off as the Cox model was implemented in the various statistical packages after the late 80s, and as semi-parametric theory took off and provided a post-hoc justification for many of the nuances implicit in the Cox model. Nevertheless, one can argue that in the era of the modern computer, no one really needs the Cox model. This technical report and the author's work on a real world, complex dataset provide the personal background for my research on GAM approaches for survival data.

The GAM (or Poisson GAM, PGAM as it is called in the paper) is an extension of these old ideas (see the literature survey here and here). In particular, the PGAM models the quantities that are modeled semi-parametrically (e.g. the baseline hazard) in the Cox model with parametric, flexible functions that are estimated by penalized regression. One of the first applications of penalized regression for survival analysis is the Fine and Gray spline model, which is however not a PGAM.
There are specific benefits to be realized from penalizing the Poisson regression and adopting GAMs in the context of survival analysis:

• Parsimony: degrees of freedom are not wasted, as penalization will seek the most parsimonious representation (fewest degrees of freedom) among the many possible functions that may fit the data
• Reduction of analyst degrees of freedom: the shapes of the functional relationships between survival and covariates are learned from the data. This limits the potential for someone to posit a specific form for the relationship (e.g. linear vs. quadratic) and run away with the most convenient p-value
• Multiple time-scale modelling: one can model more than one time scale in a dataset (i.e. not just study time). This is useful when adjusting for secular trends in an observational dataset or even in a randomized trial; in particular, cluster randomized trials at the community level may not be immune to secular trends
• Non-proportional hazards modeling: when the hazards are not proportional, the Cox model is not applicable. Many oncology datasets will demonstrate a deviation from proportionality (in fact, we re-analyzed such a trial in the PLoS paper). For a particular dataset, one would like to know whether the proportionality assumption is violated and, if so, to “adjust” for it. Such an adjustment takes the form of a time-varying hazard ratio function, and these may be estimated with PGAMs. In such a case, one can even extract an “average” hazard ratio while still estimating a time-varying deviation around it using the PGAM. However, non-proportionality should shift the analyst to:
  • Alternative measures of treatment effect: these may include relative risks, absolute risk differences or even differences in the (Restricted) Mean Survival Time. Such measures are calculated from the time-varying hazard ratios using statistical simulation techniques
• Handling of correlated outcomes: correlated outcomes may arise from center effects, multiple events in the same individual or even cluster randomization designs. The analysis of such outcomes is facilitated by the interpretation of the PGAM as a generalized linear mixed model and the introduction of the corresponding random effects and their correlation structure into the regression
• Center effects: a variety of modeling options are available, including stratified hazards and fixed or random effects
• Subgroup analyses
• Time-varying external covariates
• Unconditional/conditional/population-averaged effects: the unconditional estimate is obtained by indexing individuals with the group they are assigned to (e.g. placebo or drug in an interventional trial). The conditional estimate is obtained by introducing covariates (e.g. gender, age) into the regression to calculate effects for individuals who share these characteristics. The population effect averages the conditional effects over all the individuals in the trial; in epidemiology it is known as the corrected group prognosis method, introduced in a JAMA paper almost 15 years ago as a way to generate adjusted survival probability curves
• Handling of right-censored/left-truncated/uncensored data

These benefits follow directly from the mixed-model equivalence between semi-parametric, penalized regression and generalized linear mixed models. An excellent survey may be found here, while Simon Wood’s book on the GAM implementation in the mgcv package in R contains a concise presentation of these ideas. As it stands, the method presented has no software implementation similar to the survival package in R. Even though we provide R code to run the examples in the paper, the need for the various functions may not be entirely clear.
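To give a concrete flavor of what such a fit looks like in code, here is an illustrative mgcv sketch (a hypothetical setup on the `lung` data, not the paper's actual code): after the usual event-time split, the interval-specific intercepts are replaced by a penalized smooth of time for the baseline log-hazard, and a random-effect smooth absorbs center effects, exploiting the GAM/mixed-model equivalence mentioned above.

```r
library(survival)
library(mgcv)

## Hypothetical PGAM sketch: piecewise-exponential split, then a Poisson GAM
lung$dead   <- as.numeric(lung$status == 2)
lung$center <- factor(lung$inst)   # treat institution as a "center" effect
d <- survSplit(Surv(time, dead) ~ age + center, data = lung,
               cut = sort(unique(lung$time[lung$dead == 1])),
               episode = "interval")
pgam <- gam(dead ~ s(time, bs = "cr")        # penalized baseline log-hazard
                 + age                       # proportional covariate effect
                 + s(center, bs = "re")      # random center effect (GLMM view)
                 + offset(log(time - tstart)),
            family = poisson, data = d, method = "REML")
summary(pgam)
```

The `s(center, bs = "re")` term is literally a Gaussian random intercept estimated by REML, which is the mixed-model equivalence at work.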
Hence the next series of posts will go over the code and the steps required to fit the PGAM using the R programming language.

### The little non-informative prior that could (be informative)

November 26, 2013

Christian Robert reviewed online a paper that was critical of non-informative priors. Among the points discussed by him and other contributors (e.g. Keith O’Rourke) was the issue of induced priors, i.e. priors that arise from a transformation of the original parameters, or of observables. I found this exchange interesting because I did something similar when revisiting an old research project that had been collecting digital dust on my hard disk.

The specific problem had to do with the analysis of a biomarker that was measured with a qualitative technique, yielding a binary classification of measurements as present or absent in two experimental conditions (call them A and B). Ignoring some technical aspects of the study design, the goal was to calculate the odds ratio of the biomarker being expressed in condition B vs. A (the reference state, signifying absence of disease). When writing the programs for the analysis, I defaulted to the N(0.0,1.0E-6) prior that epitomizes non-informativeness in BUGS. However, one of my co-authors asked the “What the @#$%& does this prior mean?” question. And then we stopped … and reflected on what we were about to do. You see, before the experiment started we had absolutely no prior information about the behaviour of the biomarker in either experimental state, so we did not want to commit one way or another. In other words, Laplace’s original uniform (or Beta(1,1)) prior would have been reasonable if the expression data for A and B were to be analyzed separately. However, we wanted to analyze the data with a logistic regression model, so was the ubiquitous N(0.0,1.0E-6) the prior we were after?

The answer is a loud NO! According to Wikipedia, the mother of all knowledge, the logit transformation of a uniform variate follows the logistic distribution with location zero and scale one. Hence, the induced prior on the intercept of the logistic regression (interpretable as the log-odds of the biomarker being expressed in state A) had to be a Logistic(0,1).
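A quick simulation check of this claim (separate from the plotting code further down, which looks at densities): the empirical quantiles of the logit of a large uniform sample line up with `qlogis`.

```r
set.seed(42)
u <- runif(1e6)                    # Laplace's uniform prior on a probability
induced <- log(u / (1 - u))        # logit: the induced prior on the intercept
probs <- c(0.05, 0.25, 0.5, 0.75, 0.95)
## empirical quantiles vs. theoretical Logistic(0,1) quantiles
round(cbind(empirical   = quantile(induced, probs),
            theoretical = qlogis(probs)), 2)
```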

Surprisingly, the induced prior on the (log) odds ratio of B vs. A was found (after trial and error and method-of-moments considerations) to be very well approximated by a 1:1 mixture of a logistic and a Gaussian, which clearly departs from the N(0.0,1.0E-6) prior we (almost) used:

Bottom line: even non-informative (in the BUGS sense!) priors can be pretty informative in some intuitively appropriate parameterization. Conversely, one could start with a non-informative prior in a parameterization that is easier to reason about, and look for the induced prior (using analytic considerations or even simulations) to convert it to the parameterization that is more appropriate for the analytic plan at hand.

(R code for the plots and simulations is given below)

## approximating uniforms
logit<-function(x) log(x/(1-x))
set.seed(1234)
N<-10000000
s<-runif(N,0,1);
s2<-runif(N,0,1);
y<-logit(s)
y2<-logit(s2)
m<-mean(y)
s_y<-sd(y)  ## do not overwrite the uniform sample s
x<-seq(-10,10,.1)
## logistic is logit of a uniform
hist(y,prob=TRUE,breaks=50,main="intercept",
xlab="logit(A)")
lines(x,dnorm(x,m,s_y),col="red")  ## moment-matched normal
lines(x,dlogis(x,0,1),col="blue")  ## exact induced prior
legend(-15,0.20,legend=c(expression(paste("Normal(0,",pi/sqrt(3),")")),
"Logistic(0,1)"),lty=1,col=c("red","blue") )

## approximating the difference of two uniforms
hist(y-y2,prob=TRUE,ylim=c(0,.25),breaks=200,
xlim=c(-10,10),main="OR between two U(0,1)",
xlab="logit(B)-logit(A)")
## logistic approximation
lines(x,dlogis(x,0,sqrt(2)),col="blue",lwd=2)
## normal
lines(x,dnorm(x,0,(pi)*sqrt(2/3)),col="red",lwd=2)
## mixture of a logistic and a normal approximation
lines(x,0.5*(dlogis(x,0,sqrt(2))+
dnorm(x,0,(pi)*sqrt(2/3))),col="green",lwd=2)
## legends
NL<-expression(paste("Normal(0,",pi*sqrt(2/3),")"))
LL<-expression(paste("Logistic(0,",sqrt(2),")"))
ML<-expression(paste("0.5 Normal(0,",pi*sqrt(2/3),")+0.5 Logistic(0,",sqrt(2),")"))
legend(-6.5,0.25,legend=c(NL,LL,ML),
lty=1,col=c("red","blue","green") )

## does it extend to more general cases?
m1<--2;m2<-2;s1<-1;s2<-2.5;
l1<-rlogis(N,m1,s1)
l2<-rlogis(N,m2,s2)
d<-l1-l2
hist(d,prob=TRUE,ylim=c(0,0.25),breaks=200,
xlim=c(-10,10),main="Difference of two logistics",xlab="d")
## logistic approximation (matched variance)
lines(x,dlogis(x,m1-m2,sqrt(s1^2+s2^2)),col="blue",lwd=2)
## normal approximation (matched variance)
lines(x,dnorm(x,m1-m2,pi*sqrt((s1^2+s2^2)/3)),col="red",lwd=2)
## 1:1 mixture of the two
lines(x,0.5*(dnorm(x,m1-m2,pi*sqrt((s1^2+s2^2)/3))+
dlogis(x,m1-m2,sqrt(s1^2+s2^2))),col="green",lwd=2)


Edit (29/11/2013):
Updated the first image due to an accidental reversal of the distribution labels