One of the factors that has plagued precision medicine is claims of big benefits that are often not clearly justified or quantified, or that seem to defy the reader's own "back of the envelope" calculations of benefit. (For example, sweeping promises of future clinical utility that may or may not materialize at that scale, for a range of reasons.)
Recently, I've seen a trend toward impressively designed papers with quantitative benefit claims rather than loosely forecast ones. Here are a few examples.
Swen et al. (2023) Lancet: 12-gene PGx panel in RCT
Swen et al. report in the Lancet a multi-institution, cluster-randomized study of a 12-gene pharmacogenetic panel. Among some 1,500 patients with actionable results, adverse events occurred in 28% of the control group versus 21% of the PGx arm. The study has a reasonable panel (12 genes), a rigorous design (cluster-randomized, meaning randomized by clinic location rather than by individual patient), and good outcomes measurement. The result is not a vague claim of some massive benefit but demonstrated data for a reasonable, solid benefit (28% down to 21%).
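For readers who like the back-of-the-envelope check mentioned at the top, here is a minimal sketch (mine, not the paper's) that turns the rounded 28% vs 21% event rates quoted above into the usual absolute risk reduction, relative risk reduction, and number-needed-to-genotype figures; because the inputs are rounded, the outputs are approximate.

```python
# Back-of-the-envelope effect-size check for the Swen et al. result.
# The 0.28 and 0.21 event rates are the rounded figures quoted above,
# so the derived numbers are approximate.

control_rate = 0.28   # adverse-event rate, control arm
pgx_rate = 0.21       # adverse-event rate, PGx-guided arm

arr = control_rate - pgx_rate   # absolute risk reduction
rrr = arr / control_rate        # relative risk reduction
nnt = 1 / arr                   # number needed to genotype

print(f"Absolute risk reduction:  {arr:.0%}")   # ~7 percentage points
print(f"Relative risk reduction:  {rrr:.0%}")   # ~25%
print(f"Number needed to genotype: {nnt:.0f}")  # ~14 patients with actionable results
</antml>```

Roughly 14 patients with actionable results genotyped per adverse event avoided is exactly the kind of concrete, checkable number that the field's vaguer claims rarely supply.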
Tafazzoli et al. (2022) Pharmacoeconomics: Projected Outcomes of Multi-Cancer Screening
In a rigorous pharmacoeconomic model, Tafazzoli et al. project outcomes for 19 cancers based on today's real-world stage at detection, projected stage shifts (some larger, some smaller) from stage III/IV to stage I/II, and the average expected impact on costs and survival for patients in a commercial health plan (e.g., per 100,000 members). A wide range of factors is considered, including the costs of false positives, cases missed between screening intervals, and cases missed due to limited sensitivity for stage I/II disease. Of note, there are exceptionally valuable and detailed per-cancer tables in the Supplementary Materials. The authors report realistic stage shifts (e.g., 20% of esophageal cancers presenting at stage IV today become 10% stage IV with population screening; Figure 2). Overall, the costs of stage III/IV cancers fall by about half, while on a population-wide basis this is offset by the costs of annual plasma screening over many years. The model becomes more favorable as the costs of cancer care rise and if the per-annum cost of molecular screening falls with testing at scale and new technology.
- Besides the invaluable supplementary tables in Tafazzoli, see the supplementary "CHEERS" checklist for writing excellence, which I mentioned in an earlier blog post.
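To make the trade-off in the Tafazzoli discussion concrete, here is a toy sketch of the basic structure: a stage shift moves some late-stage diagnoses to early stage and cuts treatment spend, while annual screening of the whole covered population pushes costs the other way. This is emphatically not the authors' model; every number below is a made-up placeholder, and the real model also handles false positives, interval cancers, sensitivity, and survival per cancer type.

```python
# Toy sketch of the stage-shift vs screening-cost trade-off.
# NOT the Tafazzoli et al. model; all values are made-up placeholders
# chosen only to show the shape of the calculation
# (per 100,000 commercially insured members).

POPULATION = 100_000          # members in the hypothetical plan
ANNUAL_INCIDENCE = 600        # hypothetical new cancer cases per year
SCREEN_COST = 900             # hypothetical cost of one annual plasma screen ($)

# Hypothetical late-stage (III/IV) share of new diagnoses and
# hypothetical average treatment costs by stage ($).
LATE_SHARE_BASELINE = 0.45
LATE_SHARE_SCREENED = 0.30    # stage shift: fewer late diagnoses
COST_EARLY = 80_000
COST_LATE = 250_000

def annual_treatment_cost(late_share: float) -> float:
    """Expected treatment spend given the late-stage share of new cases."""
    late = ANNUAL_INCIDENCE * late_share
    early = ANNUAL_INCIDENCE - late
    return late * COST_LATE + early * COST_EARLY

baseline = annual_treatment_cost(LATE_SHARE_BASELINE)
screened = annual_treatment_cost(LATE_SHARE_SCREENED)
screening_spend = POPULATION * SCREEN_COST   # everyone screened yearly (a simplification)

savings = baseline - screened
net = savings - screening_spend

print(f"Treatment savings from stage shift:  ${savings:,.0f}")
print(f"Annual screening spend:              ${screening_spend:,.0f}")
print(f"Net impact (positive = saves money): ${net:,.0f}")
</antml>```

With these placeholder inputs, the screening spend dominates the treatment savings, which is exactly why the paper's conclusions hinge on the per-test price falling with scale and new technology, and on the rising cost of late-stage cancer care.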
Wong et al. (2022) Personalized Medicine: Health Plan Coverage vs Major Guidelines
Rather than hand-waving about difficulties with payors, Wong et al. rigorously surveyed major health plans' coverage policies for multi-gene tumor panel testing and compared them one-to-one with major guidelines. Of the 38 plans reviewed, 71% were "more restrictive" than the major guidelines current as of each policy's effective date. The authors recommend better guideline clarity and raise the possibility of state-level policies to support guideline-endorsed access.
Bonus Citation:
Lennerz et al. (2023) Clin Chem Lab Med: Diagnostic quality model (includes AI/ML)
I mention this one only briefly, since I'm still working through this new article, but it provides an excellent example of rigorous systems thinking about quality management in a lab field that increasingly depends on software, machine learning, AI, and integration with the clinical sphere via EMRs and other feedback loops. The paper supports a gap analysis as well as pointing the way toward quality metrics. This matters because a recent Genomeweb survey found that many institutions are aware of substantial precision oncology process gaps, yet very few had any kind of problem-reduction plan [!].
All four of the papers (Swen, Tafazzoli, Wong, Lennerz) are open access.