Archive for September, 2013

Probability density functions in R and Bugs

September 30, 2013

Kudos to @ResearchProcess for finding and tweeting this:


The agnostic approach to effectiveness

September 3, 2013
To the agnostic, real-world experience reigns supreme when it comes to evaluating effectiveness. Randomized Controlled Trial (RCT) results are relevant only to the extent that they demonstrate that both success and failure are indeed possible when using a new therapy. The numerical (efficacy) estimates, however, are not relevant to the agnostic and should be discounted when real-world effectiveness is to be assessed. The latter can only be ascertained by looking at the outcomes of actual patients, so in order to acknowledge the RCT results, they too need to be cast in this format. The extreme agnostic attitude would do so in a manner that minimizes to the greatest possible extent the impact of the reported RCT efficacy on inferences, e.g. by appraising it as worthy of one success and one failure in real-world patients. These “pseudocases” are added to the corresponding numbers obtained in the real world, and the agnostic proceeds to apply Bayes’ theorem. Mathematically, the agnostic assumes a prior in which the prior probability of success can be any number between 0 and 1 (the Laplace prior), representing the two extreme viewpoints of the therapy, i.e. that it is either rat poison or holy water.
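A minimal sketch of this update in Python (the post contains no code; all counts below are hypothetical illustrations, not figures from the post). With binomial outcomes, the Laplace prior Beta(1, 1) is exactly the one-success/one-failure pseudocase credit, and Bayes’ theorem reduces to adding counts:

```python
from scipy import stats

# The RCT is credited with one success and one failure "pseudocase",
# which is exactly the Laplace (uniform) prior Beta(1, 1).
prior_a, prior_b = 1, 1

# Hypothetical real-world record: 8 successes and 12 failures in 20 patients.
successes, failures = 8, 12

# Beta-binomial conjugacy: the posterior is Beta(a + s, b + f).
posterior = stats.beta(prior_a + successes, prior_b + failures)
print(f"Posterior mean effectiveness: {posterior.mean():.3f}")
lo, hi = posterior.interval(0.95)
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```

The posterior mean is (1 + 8)/(2 + 20) ≈ 0.409, sitting close to the raw real-world success rate, as the agnostic intends.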
The agnostic attitude will not mislead the clinician, even for small numbers of real-world cases, but it will severely impair the clinician’s ability to make precise statements, until the time that an extremely large body of evidence has been analyzed.
Hence, the agnostic does pay a premium by not fully acknowledging the trial results. This premium, which can be described as being too uncertain about one’s uncertainty, may even have practical implications, e.g. if a patient decides against the new therapy because of the clinician’s uncertainty regarding the therapy’s effectiveness. It is thus important for the agnostic to keep in mind that lack of evidence is not evidence of lack!
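This premium can be made concrete by comparing the widths of the two posteriors’ credible intervals as real-world evidence accumulates. The sketch below assumes a hypothetical RCT of 60/100 successes and real-world outcomes at a 50% success rate; none of these numbers come from the post.

```python
from scipy import stats

def width95(a, b):
    """Width of the central 95% credible interval of a Beta(a, b) posterior."""
    lo, hi = stats.beta(a, b).interval(0.95)
    return hi - lo

# Hypothetical RCT (not from the post): 60 successes in 100 patients.
rct_s, rct_f = 60, 40

for n in (0, 20, 200, 2000):
    s, f = n // 2, n - n // 2  # real-world outcomes at a 50% success rate
    agnostic = width95(1 + s, 1 + f)                  # RCT worth 1+1 pseudocases only
    believer = width95(1 + rct_s + s, 1 + rct_f + f)  # RCT pooled at face value
    print(f"n={n:4d}  agnostic width={agnostic:.3f}  believer width={believer:.3f}")
```

With no real-world data the agnostic’s interval spans nearly the whole unit interval while the believer’s is already narrow; only after thousands of patients do the two widths converge, which is the premium paid for discounting the trial.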

The agnostic approach to effectiveness

September 3, 2013

The second attitude towards effectiveness is an agnostic one; such a clinician considers previous real-world experience more relevant than efficacy figures when it comes to assessing effectiveness. Randomized Controlled Trial (RCT) results are not dismissed, but they are nonetheless discounted at a very high rate. To the agnostic, reading about the efficacy of an intervention in an RCT only implies that both success and failure are possible outcomes; the reported figures themselves are not relevant. If the outcomes in real-world patients reign supreme, then the trial results should be quantified in such a way as to correspond to such an experience. An extreme version of agnosticism will mathematically translate such an assessment to have the minimum possible influence, or equivalently the least number of “additional” patients that should be added to the real-world record: one success and one failure “pseudocase”.

Illusion of Effectiveness in the ‘definitive’ clinical trial

September 2, 2013

The believer’s attitude is one of unconditional trust in the results of the randomized clinical trial (RCT). The latter not only provides “unbiased” estimates of the relative efficacy of two or more therapies, but also furnishes numerical estimates of the absolute efficacy that translate more or less into the outcomes of real-world clinical practice. The believer thus views the results obtained in the clinic as interchangeable with the ones observed in the RCT, so that the mathematically consistent way to jointly examine them is to simply add together the corresponding successes and failures. This approach will work just fine if the underlying premise of equivalence between effectiveness and efficacy is true, yet it will backfire otherwise.
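The believer’s bookkeeping can be sketched in a few lines of Python; all counts below are hypothetical illustrations, not figures from the post.

```python
from scipy import stats

# Hypothetical counts: efficacy and effectiveness treated as interchangeable.
rct_s, rct_f = 120, 80      # RCT: 120/200 successes (60% efficacy)
clinic_s, clinic_f = 4, 6   # clinic record: 4/10 successes (40% effectiveness)

# The believer simply adds successes and failures together
# (a uniform Beta(1, 1) prior is assumed on top).
posterior = stats.beta(1 + rct_s + clinic_s, 1 + rct_f + clinic_f)
print(f"Believer's pooled estimate: {posterior.mean():.3f}")
```

Note that the ten clinic patients barely move the pooled estimate away from the trial’s 60%, even though the clinic results run at 40%: the trial’s larger counts dominate the sum.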

To see why, consider what happens in the hypothetical thought experiment previously outlined:
So for real-world experience reflective of a single individual, or even a single practice (i.e. 20-100 patients), the magnitude of effectiveness will likely be overstated (since most therapies don’t work as well as advertised in papers). It will take a considerable number of patients (>1,000 and likely 10,000) to align the believer’s expectations with real-world results.
Such large numbers of patients with a single condition are unlikely to be encountered in a single individual’s professional lifetime (especially if the condition is rare), so the believer is stuck in an “evidential black hole”. Trapped by the large number of patients (the gravity) of the definitive clinical trial, he or she is forced to discount personal experience for results that are only partially relevant to the patients they actually treat!
Furthermore, the believer will substantially overstate the precision of the estimate; when asked to produce a bound below which the effectiveness is expected to be found with only a small probability, e.g. 5%, the following figure can be obtained:
Hence, even if that physician sounds confident that the therapy works in between 45-50% of patients, this is a gross overestimate and does not even bracket the “true” effectiveness unless the outcomes in a large number of real-world patients are examined.
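The kind of figure described above can be reproduced numerically. The sketch below assumes a hypothetical trial reporting 60% efficacy (120/200) while the “true” real-world effectiveness is only 40%; these numbers are illustrative, not taken from the post.

```python
from scipy import stats

# Hypothetical setup: trial efficacy 60% (120/200), true effectiveness 40%.
rct_s, rct_f = 120, 80
true_eff = 0.40

for n in (10, 100, 1000, 10000):
    s = int(true_eff * n)            # real-world successes at the true rate
    posterior = stats.beta(1 + rct_s + s, 1 + rct_f + (n - s))
    lower5 = posterior.ppf(0.05)     # bound below which effectiveness should
                                     # fall with only 5% probability
    print(f"n={n:6d}  5% lower bound={lower5:.3f}")
```

For small real-world samples the believer’s 5% lower bound sits well above the true 40%, i.e. the interval does not even bracket the truth; only after thousands of real-world patients does the bound finally drop below it.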

Four basic attitudes towards efficacy and its relation to effectiveness

September 1, 2013

I will continue this series of posts regarding the appraisal of efficacy (“how well a therapeutic intervention worked in a randomized experiment”) and its translation into statements about effectiveness (“how well the intervention worked in the real world”) by considering the attitudes that one may adopt towards these issues. The aim is to develop a sophisticated approach, or rather a vantage point, that one would almost always want to adopt when considering the implications of having data about both efficacy (the results of a trial) and effectiveness (the success rate in real-world practice). However, the vantage point will only become evident by considering a basic set of attitudes, which are described here: (more…)