Archive for October, 2013

Is this evidence of scientific fraud?

October 30, 2013

(the names of countries, journals, interventions and sample sizes have been changed to protect the potentially innocent and to avoid cognitive biases, invocations of stereotypes and accusations that could lead to World War Z)

Two years ago the prestigious medical journal The Spleen published a randomized controlled trial that evaluated three therapies: genetically modified leeches (GML), standard leeches (SL) and mechanical blood-letting (MBL, standard of care) for acute complicated absentmindedness (ACA). This single-center study randomized, in a 1:1:1 ratio, 240 patients who presented to the Emergency Department of the Grand Fenwick Memorial Hospital and concluded that GML was associated with a 90% improvement in outcomes relative to SL and MBL. The lead author, a prominent absentmindedneologist and President of the corresponding Society of the Duchy of Grand Fenwick, concluded that GML should become the new standard of care and that SL and MBL should not be offered except as second-line treatment for patients failing GML.



Table as an image in R

October 28, 2013

http://www.r-bloggers.com/table-as-an-image-in-r/

Useful when cramming data into multipanel images and you do not feel like toiling away in LaTeX.
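For instance, here is a minimal sketch of one way to do this in R, using gridExtra::tableGrob (an assumption on my part; the linked post may take a different route):

# Render a small data frame as an image panel (one possible approach,
# not necessarily the one used in the linked post).
library(gridExtra)   # tableGrob()
library(grid)        # grid.newpage(), grid.draw()

tab <- head(mtcars[, 1:4])     # any small data frame will do
tg  <- tableGrob(tab)          # turn the table into a graphical object (a grob)

png("table_panel.png", width = 480, height = 240)
grid.newpage()
grid.draw(tg)                  # the table is now just another plot panel
dev.off()

Once it is a grob, the table can be combined with ordinary plots in a multipanel figure, e.g. with gridExtra::grid.arrange().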

R Booklets for biostats, bioinfo and time series analysis

October 28, 2013

Useful booklets by the Sanger Institute on:
biomedical statistics,
bioinformatics and
time series

These can get one started in R (the biomedical statistics one can be used for a medical student's/resident's/fellow's research project).

Robert Heinlein and the distinction between a scientist and an academician

October 27, 2013

Robert Heinlein, the author of Starship Troopers (no relation to the movie, by the way), wrote an interesting paragraph in his 1939 short story Life-Line:

One can judge from experiment, or one can blindly accept authority. To the scientific mind, experimental proof is all important and theory is merely a convenience in description, to be junked when it no longer fits. To the academic mind, authority is everything and facts are junked when they do not fit theory laid down by authority.

This short paragraph summarises the essence of the difference between Bayesian (scientific mind) and frequentist (academic mind) inference, or at least their application in scientific discourse.

For objective Bayesians, models are only convenience instruments to summarise and describe possibly multi-dimensional data without having to carry around the weight of paper, disks, USB sticks etc. containing the raw points. Parameters do the heavy lifting of models, and the parametric form of a given model may be derived in a rigorous manner using a number of mathematical procedures (e.g. maximum entropy). Given such a specification, one can use an empirical body of data D to calculate P(M|D), sequentially rejecting models that do not fit (a nice example is given in the second section of Jaynes's entropy concentration paper).
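As a toy numerical illustration of the P(M|D) calculation (my own R sketch, not taken from the paper): two candidate models for the same binomial data are compared through their marginal likelihoods, and the worse-fitting one is discounted accordingly.

# Toy sketch of posterior model probabilities P(M | D); assumptions are mine.
# M1: success probability fixed at 0.5
# M2: success probability unknown, with a flat Beta(1, 1) prior
k <- 7; n <- 20                                   # the data D: 7 successes in 20 trials

marg_M1 <- dbinom(k, n, 0.5)                      # P(D | M1)
marg_M2 <- choose(n, k) * beta(k + 1, n - k + 1)  # P(D | M2), integrating over the Beta(1, 1) prior

prior <- c(M1 = 0.5, M2 = 0.5)                    # equal prior model probabilities
post  <- prior * c(marg_M1, marg_M2)
post  <- post / sum(post)                         # P(M | D)
round(post, 3)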

Now consider the situation of the frequentist mind: even though one can (and most certainly will!) use a hypothesis test (aka reject the null) to falsify the model, the authoritarian can (and most certainly will!) hide behind the frequentist modus operandi and claim that only an unlikely body of data was obtained, not that an inadequate model was utilized.
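For contrast, the frequentist treatment of the same toy data (again my own sketch): the test reports how unlikely data at least this extreme would be if the null model were true, but assigns no probability to any model.

# Same toy data, frequentist style: the p-value speaks about the data under
# the null, not about the plausibility of competing models.
binom.test(x = 7, n = 20, p = 0.5, alternative = "two.sided")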

This is seen (and in fact enforced) in the discussion section of every single scientific paper, in the standardized second-to-last paragraph about ‘alternative explanations’. This section is all about bowing down to authority, offering (often convoluted and mystical) explanations that the data obtained are at fault and that the model falsified in the results section is in fact true. Depending on the weight of the authoritative figures who write the commentary about the aforementioned paper, we can (and most certainly will!) end up in the highly undesirable situation of falsifying data rather than models.

Compare this with the hypothetical structure of a Bayesian paper, in which the alternative hypotheses would be built as alternative models (or values of parameters), systematically compared, and the ill-fitting ones (even those held to be true by academic figures of the highest authority) triaged to oblivion.

As a concluding statement, note that our systematic failure to respond to the financial crisis, or even to advance science in the last 3-4 decades, can be traced to the dominant influence of academicians over scientists. Rather than systematically evaluating evidence for or against particular models in specific domains, we seem to judge models/explanations only by the authority/status of their proponents, a situation not unlike the one in the 1930s when Heinlein wrote the aforementioned piece.