## Archive for the ‘Uncategorized’ Category

### My 2015 blogging report

December 29, 2015

### The Weibull distribution is useful but its parameterization is confusing

June 5, 2015

The Weibull distribution is a very useful generalization of the exponential distribution that frequently appears in the analysis of survival times and extreme events. Nevertheless, it is a confusing distribution to use because of the different parameterizations one finds in the literature:

http://sites.stat.psu.edu/~dhunter/525/weekly/weibull.pdf

http://psfaculty.ucdavis.edu/bsjjones/slide3_parm.pdf

http://stats.stackexchange.com/questions/18550/how-do-i-parameterize-a-weibull-distribution-in-jags-bugs
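As a concrete illustration (my own sketch, not taken from the notes above), here is how R's shape/scale parameterization lines up with the shape/lambda parameterization used by BUGS and JAGS:

```r
a <- 2.5; b <- 1.7; x <- 0.9   ## shape, scale and an evaluation point

## R's parameterization: dweibull(x, shape, scale)
f_R <- dweibull(x, shape = a, scale = b)

## BUGS/JAGS parameterization dweib(v, lambda): shape v = a and a
## "rate-like" lambda, related to R's scale by lambda = b^(-a)
lambda <- b^(-a)
f_BUGS <- a * lambda * x^(a - 1) * exp(-lambda * x^a)

all.equal(f_R, f_BUGS)   ## TRUE: same density, different parameterizations
```

Forgetting the `lambda = scale^(-shape)` conversion when moving a model between R and BUGS/JAGS is exactly the kind of mistake these parameterizations invite.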

So be aware 🙂

### Machine Learning Cheat Sheet

May 24, 2015

Simply excellent – includes a section on Bayesian vs. frequentist analyses.

### Kicking ass with Bayesian Statistics in R

November 22, 2013

Some excellent R posts regarding Bayesian statistics:

1. How to program the Laplace approximation in R:

http://www.r-bloggers.com/easy-laplace-approximation-of-bayesian-models-in-r/

Though Bayesian computation is heavily dominated by Monte Carlo methods, the Laplace approximation is a nice tool to deploy in cases where your MCMC fails to converge. Plus, it makes one appreciate Laplace’s genius.

2. A bird’s eye view of R’s Bayesian analysis facilities:

http://blog.revolutionanalytics.com/2013/11/r-and-bayesian-statistics.html

Watch this blog for a series of posts about Bayesian survival analysis with R, BUGS and Stan.
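To give a flavour of the Laplace-approximation idea from the first post above (a minimal sketch with made-up numbers, not the post’s own code): maximize the log-posterior numerically and use the curvature at the mode as the variance of a normal approximation. Here the posterior for a binomial proportion with a uniform prior is a Beta(y + 1, n − y + 1), so the answer can be checked analytically:

```r
y <- 12; n <- 20                       ## hypothetical successes / trials
## log-posterior under a uniform prior: Beta(y + 1, n - y + 1)
log_post <- function(p) dbeta(p, y + 1, n - y + 1, log = TRUE)

## maximize the log-posterior; hessian = TRUE returns the curvature
fit <- optim(0.5, log_post, method = "L-BFGS-B",
             lower = 1e-6, upper = 1 - 1e-6,
             control = list(fnscale = -1), hessian = TRUE)

mode_hat <- fit$par                     ## posterior mode (here 0.6)
sd_hat <- sqrt(-1 / fit$hessian[1, 1])  ## curvature -> approximate sd
c(mode = mode_hat, sd = sd_hat)
```

The normal approximation N(`mode_hat`, `sd_hat`²) then stands in for the exact Beta posterior; the same `optim`-plus-Hessian recipe carries over to multiparameter models.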

### Failed Randomization In A Randomized Trial?

November 5, 2013

We will continue the saga of the three-arm clinical trial that is giving the editors of the prestigious journal The Spleen a run for their money. While the polls are gathering digital dust, let’s see if we can steer this discussion onto a more quantitative path. To do so, we will ask (and answer) the question from a frequentist point of view: according to this approach, we raise the red flag if the event under examination is rare, assuming that a hypothesis about the state of the world (the null hypothesis $H_0$) is true.

In this case the null hypothesis is that the investigators at Grand Fenwick Memorial did run a randomized controlled trial under a simple randomization scheme, in which each patient had an equal chance of being given one of the three interventions: GML, SL or MBL. To calculate the rarity of the observed pattern, we need to define an appropriate event and then figure out its rarity (“long-term frequency”) over many repetitions of the randomization allocation scheme used in the trial.

Considering the number of patients in the three arms of the trial, 105/70/65, vs. the expectation of 80/80/80, it would appear that the most influential factor in determining the “rarity” of the observed pattern is the difference in size between the largest and the smallest arm of the trial. On the other hand, a difference of 5 between the second-largest and the smallest arms would not appear to be worthy of consideration, at least as a first approximation. To determine the long-term frequency of the event in a trial with 240 patients, we will use the R language to carry out a large number of these hypothetical allocations and count those in which the difference in size between the largest and smallest arms is at least 40:

```
event <- c(105, 70, 65)  ## observed pattern
## computes the difference in size between the largest and smallest arms
frequentist2 <- function(x, l1 = 40) {
  x <- sort(x, decreasing = TRUE)
  I((x[1] - x[3]) >= l1)
}
set.seed(4567)  ## for reproducibility
## hypothetical trials: 500,000 simple randomizations of 240 patients
g <- t(rmultinom(500000, sum(event), c(1, 1, 1)))
## flags the repetitions of the studies in which a rare
## event was observed and calculates the frequency (in %)
res3 <- apply(g, 1, frequentist2)
mean(res3) * 100
```

This number comes out to be 0.5%. In other words, 1 out of 200 randomized trials that assign patients with equal probability to three arms will generate an imbalance of this magnitude.
But is this the answer we are trying to obtain? The situation that the editors of The Spleen face is to evaluate the likelihood that patients were not randomly assigned to the three interventions. This evaluation is only indirectly related to the rarity of observing a large size difference between the arms of a trial that did not cheat. By not considering directly the hypothesis of foul play (unequal allocation probabilities in the three arms), both the investigators and their accusers will find themselves in an endless quarrel about the interpretation of rarity as a chance finding vs. an improbable one indicative of fraud.
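One way to consider the foul-play hypothesis directly (an illustrative sketch of my own, with an assumed uniform Dirichlet(1, 1, 1) prior, not an analysis from the trial) is a Bayes factor comparing equal allocation against unknown, possibly unequal, allocation probabilities:

```r
event <- c(105, 70, 65)  ## observed arm sizes
n <- sum(event)          ## 240 patients

## log marginal likelihood under H0: multinomial with fixed p = (1/3, 1/3, 1/3)
logm0 <- lgamma(n + 1) - sum(lgamma(event + 1)) + n * log(1/3)

## log marginal likelihood under H1: Dirichlet-multinomial with a uniform
## Dirichlet(1, 1, 1) prior; the factorial terms cancel, leaving
logm1 <- lgamma(n + 1) + lgamma(3) - lgamma(n + 3)

## Bayes factor in favour of unequal allocation (H1 over H0)
bf <- exp(logm1 - logm0)
bf
```

Whatever prior one picks, this framing at least makes the two parties argue about the evidence for unequal allocation rather than about the interpretation of a tail probability.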

### The probability of stacked mass murders is not so small

November 4, 2013

Christian Robert estimates the probability of the observed pattern of 4 mass murders in 4 days and finds it to be 2.8% per year, or 18% in any seven-year period!

http://xianblog.wordpress.com/2013/11/04/unusual-timing-shows-how-random-mass-murder-can-be-or-not/

### Diagrams for hierarchical models

November 2, 2013

A nice alternative to BUGS-like diagrams.
They can communicate distributional assumptions and the grouping of variables, and seem a good alternative to including code in long online supplements!

http://t.co/wOyu1mSSaY

### R Booklets for biostatistics, bioinformatics and time series analysis

October 28, 2013

Useful booklets by the Sanger Institute on:

- biomedical statistics,
- bioinformatics and
- time series analysis.

These can get one started in R (the biomedical statistics one can be used for a medical student’s/resident’s/fellow’s research project).

### Probability density functions in R and Bugs

September 30, 2013

Kudos to @ResearchProcess for finding and tweeting this: https://twitter.com/ResearchProcess/status/384527302106693634

### Data as stories, models as narratives

June 21, 2013

Ever since the first humans gathered around their first fires (or even before that!), we have absolutely loved to listen to (and tell) stories, parables and narratives about real or fictional events. Notwithstanding the important roles these activities played in facilitating social organization, there are reasons to pay particular attention to modes of storytelling if we are to understand the way science works. By that I do not mean the particular mechanics of a given scientific theory (e.g. how atoms are structured, whether a group of medications works in a disease, or even whether stimulus packages or austerity work), but rather how theories come about, how they flourish and are then abandoned for something else. The sociology of the scientific process has been described by Thomas Kuhn in his Structure of Scientific Revolutions, but drawing an analogy with more familiar territory may be of some value.
(more…)