The FDA and international regulators have long struggled with how to assess risk/benefit issues. For example, in 2007 the FDA's Robert Temple wrote a thoughtful essay on the disadvantages and difficulties of quantitative risk/benefit analysis (article online; pubmed), in response to an article by Hughes et al. promoting the quantitative risk/benefit approach.
In making risk/benefit analyses in drug regulation, two immediate problems surface. First, benefits are primary endpoints with carefully constructed statistical frameworks (six months' increased survival, p<.01, plus or minus one month). Risks are gnarly - irregularly appearing, unpredictable side effects and adverse events, subject to multiple-comparisons statistical problems and shifting with the winds of chance. Second, benefits and risks are incommensurate - ten fewer migraine headaches versus twenty bouts of diarrhea plus fatigue. So...is the risk/benefit favorable? [*] [**]
If Nothing Else, Strive for Clarity
If nothing else, we can start by communicating risks and benefits as lucidly as possible. There are numerous articles on this -
- The European Medicines Agency (EMA) published a 13-page reflection paper in 2008 on benefit-risk assessment methods.
- EMA also published a 4-page highlights paper on the topic in 2012, by Zafiropoulos & Phillips: Evaluating Benefit-Risk: An Agency Perspective.
- There is a 2010 book, Benefit-Risk Appraisal of Medicines: A systematic approach to decision-making. F. Mussen et al., Wiley.
- There are special topic articles, e.g. Veenstra et al (2010) A formal risk benefit framework for genomic tests, Genet Med 12:686-93.
FDA Publications and Workshops on Clarity, Structure, and Uncertainty of Risk/Benefit
FDA has been stimulated to push further into the world of structured risk-benefit decision-making by specific language urging it to do so in the FD&C Act (Section 505(d); here). For a blog discussion of the FDA's five-year plan, see here; for an FDA powerpoint, here.
Of particular interest are two publications from the FDA. The first is a 242-page book by Fischhoff et al., available on the FDA website: Communicating Risks and Benefits: An Evidence-Based User's Guide. The second is a more concise, rubber-meets-the-road report that appeared last year, on "structured benefit-risk" reporting in clinical trials. The FDA webpage for this project is here, and the 2013 FDA PDUFA V 15-page report is here.
Payer Decisions Would Benefit from Clarity and Structure, Too
My theory has been that when developers present data and value claims to payers, the payer often says, "You haven't demonstrated enough clinical utility." I think this is a misphrasing, and a critical one. In fact, the payer understands the "efficacy" or "ideal" claim - for example, that when our new genomic test is used, 30% of patients will avoid unnecessary surgery. What the payer means is, "I'm not convinced of that." Call it the "c-statistic" (in quotation marks, since the real c-statistic is already a proper term) - here, the "c-statistic" is "convincingness."
Something is not convincing if there is too much uncertainty. A lot of my recent policy work has been in the field of genomic tests, where tests are expected to demonstrate analytical validity (the chemistry is precise), clinical validity (the chemistry's measurements correlate to something worthwhile, like diagnosing diabetes or choosing a correct drug), and clinical utility (health outcomes are improved). Now, I've criticized this framework as OK in the simplest sense, but not really a useful problem-solving tool.
Here's the problem. You want to know if the test will be clinically useful, so you answer the question, "Does it have clinical utility?" This is a bit like being told to interview four candidates for a job in your department, where your decision-making tool will be, "Who is the best candidate?" Simply restating the problem you already have, in the same words, doesn't make you worse off...but it doesn't leverage you forward, either.
"The Fundamental Theorem of Clinical Utility" includes Uncertainty
I've shown slides for six months or more asserting, "the fundamental theorem of clinical utility is comparative. You have to ask whether it is better: against what comparator, in what units, and with what uncertainty." This isn't an original idea - it applies to any comparison, and it's particularly acute in pharmacoeconomics, where there are elaborate guidelines and protocols for how you evaluate, "against what comparator, in what units, and with what uncertainty."
This actually leads to a lot of structured information.
- Against what comparator?
- Typically, the developer wants the most impressive and easiest comparator that's available.
- Payers (or HTAs) want the most realistic comparators - often multiple comparison scenarios - with the most extensive data (population, duration)
- In what units?
- Comparisons always have direct or implied units, even qualitative ones
- Clinical utility units are diverse: survival, less pain, fewer adverse events, faster diagnosis, the same impact at lower cost
- Some units invoke controversy: greater sensitivity (finding tumors that needn't be found?)
- With what uncertainty? I think there are always three different classes of uncertainty to discuss:
- Statistical uncertainty. 50% longer survival, plus or minus 10%.
- Pragmatic uncertainty. Will it work outside the clinical trial? In rural areas? With family MDs? (external validity)
- Conceptual uncertainty. Smith thinks that the Smiedelheim statistical model indicates a predictive test, but Jones thinks the Smiedelheim statistical model only indicates a prognostic test. Smith thinks that appropriate mathematical controls for statistical overfitting are adequate, but Jones is not sure.
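Of these three classes, only statistical uncertainty reduces neatly to a number. As a minimal sketch - with entirely hypothetical trial numbers, not drawn from any cited study - here is what stating a comparison "against a comparator, in explicit units, with uncertainty" looks like as an absolute risk difference with a Wald 95% confidence interval:

```python
import math

def risk_difference_ci(events_new, n_new, events_cmp, n_cmp, z=1.96):
    """Absolute risk difference (new intervention vs. comparator)
    with a Wald 95% confidence interval.

    Units are explicit: the result is a difference in event proportions.
    """
    p1 = events_new / n_new    # event rate on the new intervention
    p2 = events_cmp / n_cmp    # event rate on the comparator
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n_new + p2 * (1 - p2) / n_cmp)
    return diff, (diff - z * se, diff + z * se)

# Hypothetical trial: 30/200 adverse outcomes vs. 50/200 on the comparator
diff, (lo, hi) = risk_difference_ci(30, 200, 50, 200)
print(f"risk difference {diff:+.3f}, 95% CI ({lo:+.3f}, {hi:+.3f})")
```

Pragmatic and conceptual uncertainty, by contrast, have to be argued in prose; no confidence interval captures whether a test generalizes to rural family practice, or whether Smith or Jones is right about the model.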
Since I've been giving presentations in which I explain and highlight the underappreciated role of uncertainty (through its impact on convincingness), I was really intrigued that the FDA is holding a workshop on the communication of uncertainty.
FDA Builds Uncertainty Explanations Into the Structured Model
In their 2013 PDUFA V white paper, Structured Approach to Benefit-Risk Assessment, they include a table requiring the developer to elaborate on their viewpoint of the uncertainties in the evidence:
They recognize that the developer, and the evaluator, will reach "conclusions that must be made about each decision factor...these conclusions are a subjective interpretation of the evidence...present the facts, uncertainties, and any assumptions made to address these uncertainties that contribute to the assessment of benefit and risk."
What I'm trying to say about uncertainty, for diagnostic test evaluations and negotiations between developers and payers, is very close to what the FDA seems to be getting at. I'll close with a three-paragraph extract from page 3 of the FDA report:
Update (August 27, 2014)
An interesting 2012 paper looks at different ways of presenting the SAME risk-benefit information, which produce different results in readers, as assessed on three metrics: understanding, perception, and persuasiveness. Here. [***]
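The framing effect studied in that literature is easy to reproduce with arithmetic: one and the same treatment effect can be reported as a relative risk reduction, an absolute risk reduction, or a number needed to treat, and each format reads very differently. A minimal sketch, using invented numbers rather than data from the paper:

```python
def risk_formats(control_rate, treated_rate):
    """Express a single treatment effect in three standard formats."""
    arr = control_rate - treated_rate   # absolute risk reduction
    rrr = arr / control_rate            # relative risk reduction
    nnt = 1.0 / arr                     # number needed to treat
    return arr, rrr, nnt

# Hypothetical: adverse events fall from 2% to 1% of patients
arr, rrr, nnt = risk_formats(0.02, 0.01)
print(f"RRR {rrr:.0%}  |  ARR {arr:.1%}  |  NNT {nnt:.0f}")
# "a 50% reduction" and "treat 100 patients to prevent one event"
# describe the identical result, yet land very differently on a reader.
```

The relative format tends to sound most impressive, which is exactly why structured reporting asks for the absolute figures alongside it.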
Update (October 8, 2014)
In September 2014, the Institute of Medicine released an eBook based on recent IOM/FDA conferences on risk/benefit: here.
Update: In May 2015, Congress proposed a minor redline of the FDA's statute regarding use of structured risk/benefit decision-making, here.
[*] In making approval decisions, the FDA looks to a wide variety of external factors in addition to the risk/benefit data in the application. For an interesting (subscription-only) article on this, see Mary Jo Laffler's "FDA often looks outside the application, review documents show" (Pink Sheet, 3/17/2014).
[**] See also U Washington's health economist Lou Garrison on structured risk-benefit in Health Affairs, 2007, here.
[***] Akl EA et al., 2012, Using alternative statistical formats for presenting risks and risk reductions (Cochrane Collaboration review).