Archive for August, 2013

How to be uncertain about a therapy’s effects without being inconsistent

August 22, 2013

A clinician contemplating the use of a therapy faces one of the most difficult mental tasks: the weighting of external evidence (in the form of published scientific literature, product labels and testimonies from colleagues) against one’s personal opinion (whether this is grounded in previous experience, specific scientific understanding or other poorly differentiated factors).

The external evidence can take many forms, but in the case of therapies backed by hard outcomes data it will usually be summarized as a frequency (percentage of successes, f) over a large number of patients (N) who participated in one or more systematic evaluations (e.g. Randomized Controlled Trials, registries or industry-sponsored post-authorization studies).

On the other hand, personal opinion is more likely to be undifferentiated, with the possible exception of previous experience, from which an astute clinician may draw a personal series of successes and failures. For a new therapy, such experience may not even be available, leaving the clinician entirely uncertain about the success rate of what is about to be prescribed.

So how can one combine these diverse sources into an internally consistent evaluation that is practical and quantitative so that the expectations of success can be made explicit to patients?

As I have said in a previous post, adoption of a Bayesian perspective is all that’s required to simultaneously achieve:

  • integration of diverse sources of evidence with prior opinion
  • avoidance of inconsistencies
  • quantitative practicality

To illustrate Bayesian reasoning we will assume the simplest case: a clinician who has just read the great paper mentioned at the beginning of this post, but who has no previous experience in which to ground a prior opinion.
Striving to maintain maximum impartiality, favoring neither optimism nor pessimism about the therapy, what prior probability should the clinician use? An assignment that is often used is for the clinician to augment the actual number of successfully treated patients in the trial, S, and the number of failures, F, with fictional pseudo-observations that express such maximum impartiality (or ignorance).

The usual conjugate assignment is the Beta distribution with s pseudo-successes and f pseudo-failures. The three uninformative priors that can be used to express maximum impartiality under different states of ignorance are:

  • The Haldane prior with f=s=0, when one is not even certain whether both success and failure are possible (e.g. the therapy might save or kill everyone)
  • The Jeffreys prior with f=s=1/2, which expresses ignorance in a way that does not depend on whether one works on the probability or the odds scale
  • The rule-of-succession prior f=s=1 advocated by Laplace

Irrespective of which prior is used, after obtaining the trial data one updates one’s belief about the probability of success to:
\frac{S+s}{S+s+F+f}

This formula shows the overwhelming impact the trial data can have over the prior; for a trial with 40 successes and 60 failures, the three priors yield expected success rates of 40%, 40.10% and 40.20% respectively.
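
To make the update concrete, here is a minimal sketch (mine, not part of the original post) that reproduces these numbers:

```python
# Minimal sketch (not from the original post): posterior mean of the success
# rate under the three uninformative Beta priors, for a hypothetical trial
# with 40 successes and 60 failures.

def posterior_mean(S, F, s, f):
    """Expected success rate of a Beta(S + s, F + f) posterior."""
    return (S + s) / (S + s + F + f)

S, F = 40, 60  # trial successes and failures
priors = {"Haldane (s=f=0)": 0.0, "Jeffreys (s=f=1/2)": 0.5, "Laplace (s=f=1)": 1.0}

for name, pseudo in priors.items():
    print(f"{name}: {posterior_mean(S, F, pseudo, pseudo):.2%}")
# Prints roughly 40.00%, 40.10% and 40.20%
```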

With such conjugate prior assignments, one can take into account one’s personal experience by simply adding one’s successes, o_s, and failures, o_f, to yield the following expression for the expectation of the success rate:
\frac{S+s+o_s}{S+s+o_s+F+f+o_f}

The aforementioned formula immediately suggests one reason for trusting the result of a large trial over our own experience, and why we would not be swayed much by a small-to-moderate trial in an area in which we have extensive personal experience. In the former case the fraction is determined by the number of successes and failures in the trial, whereas in the second scenario our own experience dominates the data from the study.
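
The same formula also makes this large-trial versus large-experience intuition easy to check numerically; the following sketch (my own illustration, with made-up counts and a Laplace prior) shows both scenarios:

```python
# Minimal sketch (my own illustration, not from the post): how personal
# experience (o_s successes, o_f failures) shifts the expected success rate
# relative to trial data, under a Laplace prior (s = f = 1).

def combined_expectation(S, F, o_s, o_f, s=1, f=1):
    """Expected success rate after pooling trial data, pseudo-counts and personal experience."""
    return (S + s + o_s) / (S + s + o_s + F + f + o_f)

# Large trial, little personal experience: the trial dominates.
print(combined_expectation(S=400, F=600, o_s=8, o_f=2))   # ~0.404

# Small trial, large personal experience: experience dominates.
print(combined_expectation(S=4, F=6, o_s=80, o_f=20))     # ~0.759
```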


Reflections on Effectiveness = Efficacy / 2

August 20, 2013

My recent post on the initial appraisal of a therapy’s effectiveness, based on a randomized trial reporting specific data about its efficacy, generated some interesting comments on Twitter. In particular, David Juurlink (@DavidJuurlink) commented:

@ChristosArgyrop @medskep meant on E=E/2, which I’d say the post-RALES experience with spironolactone invalidates

The point that David made seems to be that the formula Effectiveness = Efficacy/2 is far too conservative and that the post-RALES experience illustrates this. This is a great objection, one that lies at the heart of inductive reasoning, which is what we essentially do when we speak about either effectiveness or efficacy. To answer this objection (both in its specific post-RALES form and in its more general form) I will need a couple of posts, but first I believe a little bit of background is called for.

RALES was a landmark trial, published almost 15 years ago, about a novel approach (a drug called spironolactone) to treating heart failure, a condition with very high mortality and hospitalization rates. RALES showed an almost 30% reduction in the risk of death and was a paradigm-shifting study: immediately after its publication, prescriptions of spironolactone increased worldwide, and in 2013 many heart failure patients are on spironolactone or spironolactone-like drugs.
It is the personal opinion of many cardiologists (and mine) that spironolactone saved the lives of their real-world patients (i.e. the drug is effective), yet the published track record is not that clear, with partially mixed evaluations of outcomes, at least in the elderly, and safety concerns (in my opinion, also held by others, almost entirely due to wild extrapolation of study results, inappropriate use and inadequate monitoring by prescribing physicians). It is precisely such considerations that called for further evaluation of the efficacy, effectiveness and safety of the drug almost immediately (on the time scale of clinical research) after the publication of RALES.

So the Effectiveness = Efficacy/2 shorthand formula seems to be vindicated by the track record of spironolactone since RALES. However, I would go even further and claim that the drug would not have missed out on its potential to save lives in the real world, and would be more widely used today, had this viewpoint been adopted from the outset.

To see why, note that the rule has a companion concerning what I call the ‘sail-through rate’: the proportion of real-world patients who will not experience an adverse event while taking the therapy. The healthy skeptic who is using the trial data and nothing else should also expect the sail-through rate to be half the one reported in the trial publication.
Hence, the combination of these two evaluations (which are really flip sides of the same mathematical coin) might have led to a more cautious adoption of spironolactone, as physicians would have halved their initial expectations about both the benefit and the lack of trouble. What happened instead is that prescriptions increased by 700% and complications (mainly hyperkalemia and renal failure/insufficiency/injury) skyrocketed as physicians handed out the scripts in expectation of benefit for their patients. The skeptical approach might have led one to become more familiar with the drug’s pharmacological and safety profile; this could have taken the form, for example, of spending some quality time with a clinical pharmacology textbook to refresh one’s memory. Even better, one could adopt the dosing/monitoring protocol used by the randomized trial, trying to reproduce the results in his or her practice. That physician can probably be much less conservative in his or her assessment of effectiveness: rather than saying that the effectiveness (proportion of responders) can be any number between 0 and the trial efficacy (a heuristic with which to understand the mathematics behind the rule, as sketched below), this physician could even expect that the trial data will reflect the real-world experience.
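
For completeness, here is a one-line version of the reasoning behind that heuristic (my own paraphrase, not a quote from the original post): if, based on the trial alone, the real-world effectiveness e is judged equally likely to lie anywhere between 0 and the trial efficacy E, its expectation is
\mathrm{E}[e] = \int_0^{E} x \, \frac{1}{E} \, dx = \frac{E}{2}
that is, half the efficacy reported in the trial.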

This is a crucial point and one that is rarely made: one of the causes of the apparent failure of increased efficacy to translate into increased effectiveness has to do with contextual elements of the trial that are not adopted in the real world. For spironolactone these include pre-treatment screening and frequent monitoring of renal function and potassium levels, as the post-RALES publication record reveals.

In summary, I feel that the mathematics of skepticism were in fact validated by RALES. However, one would like to do better and answer David’s objection in the general case: at what point, and how, do we dispense with the skepticism? When is initial skepticism justified? How do we combine real-world experience with expectations based on trial data and a priori beliefs about a given therapy’s benefits to inform our knowledge and advise our patients?

These will be covered in follow up posts.

In the absence of real world data the effectiveness of a clinical intervention is half its efficacy (in a randomized trial)

August 14, 2013

Suppose one is approached by one’s partner with the results of a new intervention that helped X% of N carefully chosen participants in a Randomized Controlled Trial (RCT), with only minimal adverse events (seen in Y% of the N patients). The colleague, a TRUE BELIEVER – champion of innovation and defender of progress against the medical Luddites of this world – wants to convince you to implement this new therapy as part of the standard protocol in your common practice. He is thinking that it should be offered to all newcomers, including patients who would not have been eligible to participate in the aforementioned trial. What would a healthy skeptic do? Champion innovation and adopt the new therapy on the spot, or defend tradition and wait? Is it possible to ground the answer in the cold, objective language of math and warm up to/cool down your partner accordingly? (more…)

The probability that one random variable is smaller or larger than a Beta random variable

August 14, 2013

This super wonkish post will serve as a convenient basket case for all the inglorious math that will be required for a series of more Evidence Based Medicine oriented posts. A result that will be repeatedly required in these posts is an expression for the probability that one random variable is smaller (or larger) than a Beta random variable. The necessity for this result is due to the ability of the Beta distribution to quantitate beliefs about the percentages or proportions of dichotomous outcomes, having observed \alpha  “successes” and \beta  “failures”. So if one had just read about the efficacy (E) of an intervention in a Randomized Controlled Trial (RCT), the Beta distribution would be a readily available candidate to summarize the uncertainty about the efficacy as B(E|p\,N,(1-p)\,N), where p is the proportion of responders and N the number of study participants. (more…)
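
As a crude illustration of the kind of quantity the post is after, here is a Monte Carlo sketch (my own, with hypothetical numbers; the post itself derives an expression) of the probability that one random variable is smaller than a Beta random variable:

```python
# Minimal sketch (my own illustration, not the post's derivation): a Monte
# Carlo estimate of P(X < B), where B ~ Beta(p*N, (1-p)*N) summarizes the
# uncertainty about a trial's efficacy and X is some other random variable
# (here, for concreteness, another Beta describing a hypothetical comparator).
import numpy as np

rng = np.random.default_rng(42)

p, N = 0.40, 100                      # hypothetical trial: 40% responders out of 100
B = rng.beta(p * N, (1 - p) * N, size=1_000_000)
X = rng.beta(30, 70, size=1_000_000)  # hypothetical comparator: 30 successes, 70 failures

print("Monte Carlo estimate of P(X < B):", np.mean(X < B))
```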

The expectation of the ratio of two random variables

August 4, 2013

I was recently revising a paper concerning statistical simulations of hemodialysis trials, in which I examine the effects of different technical aspects of the dialysis prescription at the population level. I had used the reported figures from a number of recent high-profile papers when I noticed that, while the results were right on average, there was a substantial number of outliers, i.e. “digital patients” who would actually not be among the living if they were to be dialyzed with these parameters in the real world. (more…)
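
The title’s point can be illustrated with a small sketch (my own, using made-up distributions): the expectation of a ratio of two random variables is not, in general, the ratio of their expectations.

```python
# Minimal sketch (my own illustration, with made-up distributions): the
# expectation of a ratio of two random variables is not, in general, the
# ratio of their expectations.
import numpy as np

rng = np.random.default_rng(0)

X = rng.normal(loc=10.0, scale=2.0, size=1_000_000)   # hypothetical numerator
Y = rng.uniform(low=0.5, high=1.5, size=1_000_000)    # hypothetical denominator

print("E[X] / E[Y] =", X.mean() / Y.mean())           # close to 10
print("E[X / Y]    =", np.mean(X / Y))                # noticeably larger than 10
```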