This essay is triggered by a new GAO report that Medicaid programs have trouble defining and finishing evaluations of their demo programs. I add some context that CMMI has the same issues, and there's never been a critical A-to-Z systematic assessment of the long list of "demo" programs under Coverage with Evidence Development either.
Part One: CMMI
CMMI Has Had Difficulty Proving Its Trials Meet Cost & Outcome Metrics
CMMI is both authorized and required to run demos that aim to reduce cost and improve quality. Yet of the numerous demos conducted, very few have been able to meet this dual metric. Why?
In part, it's hard to get the data, outcomes, comparisons, and statistics right. Comparisons to historical control groups based on administrative records can be dicey for obvious reasons. (For disease management, see Al Lewis's book and blogs, "Why Nobody Believes the Numbers.") CMMI has published huge collections of project summaries of its demos without having much hard data on results (example). In a video interview, I once heard former CMMI director Patrick Conway say that CMMI staff were stumped by the hurdles of designing and reporting in a way that would meet the CMS Actuary's accounting standards for cost estimates. He makes my point. Maybe a missing link was the lack of enough in-service learning sessions between CMMI design staff and CMS Actuary staff to write the rulebook for outcomes and evaluations ...ahh...before starting the $1B annual investments.
In February 2018, a Health Affairs article on innovation acceleration, focused on CMMI, largely sidestepped these problems. See Perla et al., but wear your happy-hat. Here.
Part Two: Coverage with Evidence Development
Similarly, many of the evidence demo projects by the CMS coverage group under "Coverage with Evidence Development" have a mixed history at best. Here. But it's hard to know: there is NO comprehensive evaluation and lessons-learned written about all the years of CED studies together. Ideally by a fair but critical third party.
Part Three: The New GAO Report on Medicaid Demos
In February 2018, GAO released a report finding that demo follow-up was also badly done in Medicaid. Find the report here, Healthcare Dive here, MedCityNews here.
- Author Phil Galewitz noted that even a big Indiana Medicaid demo's reports were tardy - and the demo was designed by Seema Verma, who now heads CMS.
Conclusion
Do you see what I see?
It seems like whether CMS does innovation demos via (A) CMMI, (B) Coverage "CED," or (C) Medicaid, it's really, really hard to pull off the follow-through and execution - probably, in some cases, because the up-front planning at CMMI or Medicaid just didn't have the potential for a successful story arc. Designing good trials is very hard, whether you're a senior expert at NIH, a famous expert at Harvard, or on staff at CMMI or a Medicaid plan.
___
Kaiser Family Foundation provides a FAQ and review of CMMI on February 27, 2018; here.