The white paper emphasizes that price cuts, price ceilings, and reimbursement systems for drugs from Singapore to Germany to England to the US all depend on health technology assessments. (They also note, astutely, that in some countries, like Germany, the result is painful price ceilings, while in the US it is painful copays or other barriers to access.)
In over a decade of working full time on healthcare policy, I have seen some pretty dumb logical errors and lame conclusions in health technology assessments. I have also seen some well-written ones. So I was tickled to run across two authors who take a pretty acute "the Emperor has no clothes" approach to the health technology assessment industry: Robert E. Ware and Rodney J. Hicks, writing in the Journal of Nuclear Medicine (2011, 52:64S-73S; pubmed).[*] (See also some letters to the editor in the same journal; pubmed.)
In their abstract, Ware & Hicks state the following:
- Health technology assessment (HTA) has the objective of providing individual patients, clinicians, and funding bodies with the highest-quality information on the net patient benefits and cost effectiveness of medical interventions. Founded on systematic reviews of the available evidence, HTA aims to reduce bias and thereby provide a more valid evaluation of the benefits of new medical interventions than the primary studies themselves. Competing with the traditional role of medical experts, HTA agencies have gained considerable influence over public opinion and policy. The fundamental tenets of evidence-based medicine mandate that this influence should be used first and foremost for the benefit of patients. Over nearly 2 decades, multiple HTA systematic reviews in many countries have discredited most or all of the evidence pertaining to the ability of PET to improve patient-important outcomes. These determinations have delayed, restricted, and, in many cases, prevented access to this technology, especially by cancer patients. HTA systematic review findings are very much at variance with the opinion of clinicians. Our scrutiny of these reviews, benchmarking them against the core values of science and evidence-based medicine, has revealed errors of fact, inappropriate exclusion of pertinent data, and injudicious appraisal of the clinical relevance of evidence, potentially introducing bias into these reviews and compromising the validity of their conclusions about the net patient benefits of PET. We believe that our findings mandate that the molecular imaging community actively engage institutionalized HTA agencies to ensure appropriate representation of our primary data and adherence to the highest principles of evidence-based medicine.
I was reminded of several scathing and wickedly funny pages in the wonderful book "Therapeutics, Evidence, and Decision-Making," the concise 2011 monograph on evidence-based medicine by Sir Michael Rawlins, until recently the head of NICE in the UK and currently president of the Royal Society of Medicine. Very early in the text - page 4 forward - he writes about those who would make "levels of evidence" hierarchies the alpha and the omega of assessment: "[Thomas Kuhn] rejected the idea that there is some algorithm, based on the principles of scientific method, that scientists can use in choosing between one theory and another. Hierarchies of evidence are such algorithms. They attempt to replace judgement with oversimplistic, pseudo-quantitative assessment of the quality of the available evidence." He goes on to say that (real) decision makers have to incorporate judgments, including whether an evidence base is "fit for purpose." I highly recommend Rawlins's book as a great read,[**] and the article by Ware and Hicks is just as worthwhile.
____
[*] The Ware & Hicks article was part of a 2011 special supplement to J Nucl Med on a range of policy topics.
[**] Amazon here. See also his Harveian Oration, which appeared open-access in the Lancet and, in a longer printed form, via the Royal College of Physicians; I can't currently find the latter online.