Update: The paginated version appeared in the Federal Register on Tuesday, December 10, 2024, as 89 Fed. Reg. 99340-99579. The word "confusion" appears 13 times, "misunderstanding" 4 times.
https://www.govinfo.gov/content/pkg/FR-2024-12-10/pdf/2024-27939.pdf
Update: Stat Plus has an article on providers using AI to fight back against denials.
#####
On November 26, 2024, CMS released its Part D and Part C (Medicare Advantage) annual rulemaking proposals. In an earlier blog, I discussed the proposed new coverage of anti-obesity medications (AOM) like Wegovy.
- In the same rulemaking, CMS has a section called, "Ensuring Equitable Access to Medicare Advantage Services - Guardrails for Artificial Intelligence."
- It's Section 16, at pages 13-14 of the inspection copy, and also pages 178-185, 362-372, and 724 (creating regulation 422.2).
- Section 16 points directly to the Biden Executive Order of October 30, 2023 (EO fact sheet).
- (See also how HHS interprets Section 1557 of the ACA, in the context of AI - here.)
- See Healthcare Dive here.
Here's how CMS summarizes its own MA/AI policymaking, on page 13 of the inspection copy:
Given the growing use of AI within the healthcare sector, such as, but not limited to, AI-based patient care decision support tools, vision transformer-based AI methods for lung cancer imaging applications, and AI and machine learning based decision support systems in mental health care settings, we believe it is necessary to ensure that the use of AI does not result in inequitable treatment, bias, or both, within the healthcare system, and instead is used to promote equitable access to care and culturally competent care for all enrollees.
As such, we propose to revise § 422.112(a)(8) to ensure services are provided equitably irrespective of delivery method or origin, whether from human or automated systems.
We also clarify that in the event that an MA plan uses AI or automated systems, it must comply with section 1852(b) of the Act and § 422.110(a) and other applicable regulations and requirements and provide equitable access to services and not discriminate on the basis of any factor that is related to the enrollee’s health status.
###
Perhaps the most interesting thing is something that's not said. The rulemaking is all about AI in Medicare Advantage, and the idea that AI might improve and coordinate health care claims over in Fee for Service Medicare - at the MACs - seems to be on the far side of the moon as far as CMS is concerned. There's reference to following historic legacy Part A & B LCDs, but no reference to following groundbreaking innovative AI claims processing from Part A & B.
###
The rules regarding AI in medical review are part of a somewhat broader section on transparency in Medicare Advantage medical review (Section U), with potentially burdensome regulations. The Biden administration, which proposed these rules, knew it would not be around next spring when they are due to be finalized.
###
AI Corner
- I fed the pages cited above into ChatGPT 4o and asked for an analysis.
- I then asked for the likely impression made on an MA plan that uses AI a lot.
- I then asked for a comment letter to CMS from such a health plan.
Medicare Advantage (MA) Guardrails for Artificial Intelligence (AI) and Automated Systems: Policy Analysis
Overview of the Policy
The Centers for Medicare & Medicaid Services (CMS) has proposed significant updates to Medicare Advantage (MA) policies to regulate the use of Artificial Intelligence (AI) and automated systems. These revisions stem from concerns over potential inequities and discrimination introduced by algorithmic decision-making in healthcare. CMS aims to ensure that AI systems used by MA organizations promote equitable access to care, comply with anti-discrimination laws, and deliver culturally competent services.
Under the proposed rule, MA organizations must:
- Ensure equitable delivery of services, irrespective of whether decisions are made by humans or automated systems.
- Maintain compliance with existing statutes, such as section 1852(b) of the Social Security Act, which prohibits discrimination based on health status.
- Regularly audit and review AI and automated systems to identify and eliminate biases.
This policy applies to all systems defined as "automated systems," which CMS describes as any computational processes—machine learning, statistical modeling, or other AI techniques—capable of influencing decisions, outcomes, or data collection.
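The rule requires bias audits but, as noted later, gives no benchmarks for what an audit entails. Purely as an illustration (hypothetical data, an arbitrary 5% threshold, and invented function names, none of which come from the rule), a first-pass disparity check might compare approval rates across enrollee groups:

```python
from collections import defaultdict

def approval_rates(decisions):
    """Per-group approval rates from (group, approved) pairs."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Demographic-parity gap: max minus min group approval rate."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

def flag_for_review(decisions, threshold=0.05):
    """Flag the system for human review if the gap exceeds a chosen threshold.

    The 5% default is arbitrary, for illustration only; the proposed rule
    sets no such benchmark.
    """
    return parity_gap(decisions) > threshold

# Hypothetical audit sample: group A approved 80% of the time, group B 60%.
sample = [("A", True)] * 80 + [("A", False)] * 20 \
       + [("B", True)] * 60 + [("B", False)] * 40
print(parity_gap(sample))        # 0.2
print(flag_for_review(sample))   # True
```

A real audit program would go far beyond this single metric (e.g., conditioning on clinical appropriateness, testing multiple fairness definitions), which is exactly the specificity commenters are likely to ask CMS to supply.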
Definition of "Automated Systems"
The proposed rule adopts a broad definition of "automated systems" to ensure comprehensive oversight:
- Definition: Any system, software, or process that uses computational methods (in whole or part) to determine outcomes, assist decisions, inform policies, collect data, or interact with individuals or communities. This includes AI tools for patient care, risk prediction, diagnosis, and resource allocation.
- Exclusions: Passive computing infrastructure (e.g., web hosting, domain registration, data storage) that does not influence decisions or actions.
This definition emphasizes the potential societal impact of such systems, particularly on vulnerable populations, and aims to mitigate risks of biased or inequitable care delivery.
Key Legal and Policy Implications
As a law professor specializing in healthcare policy, I see this rulemaking as a landmark regulatory effort with both strengths and weaknesses.
Strengths
Addressing Algorithmic Discrimination:
CMS acknowledges evidence showing that AI systems can perpetuate bias. For example, studies indicate that incomplete or biased datasets disproportionately impact marginalized groups. By requiring regular audits, CMS aims to prevent such disparities.
Alignment with Broader Equity Goals:
This policy aligns with the Biden administration's Executive Orders on racial equity and the "AI Bill of Rights." It ensures that technological advancements in healthcare support, rather than hinder, equity and access.
Accountability for Third-Party Vendors:
MA organizations are held responsible for compliance, even when using AI systems developed by third-party vendors. This avoids loopholes where organizations could disclaim responsibility for biases embedded in outsourced tools.
Transparency:
The proposed rules include public posting requirements for internal decision-making tools and coverage criteria, enabling beneficiaries, researchers, and policymakers to scrutinize these systems.
Balancing Innovation with Regulation:
By focusing on compliance rather than banning AI outright, CMS fosters innovation while safeguarding patient rights.
Weaknesses or Concerns
Broad Scope of Automated Systems:
The definition of "automated systems" is intentionally broad, but it may result in overregulation. For example, simpler tools like statistical models could face the same scrutiny as complex AI algorithms, potentially stifling innovation.
Lack of Specificity in Standards:
While the policy emphasizes equity and non-discrimination, it lacks detailed benchmarks for auditing or evaluating AI systems. Without clear standards, enforcement may vary widely, leading to inconsistent compliance.
Resource Burden:
MA organizations, particularly smaller ones, may struggle to meet the rigorous compliance requirements, such as regular audits and transparency mandates. These requirements may disproportionately favor larger organizations with more resources.
Unclear Enforcement Mechanisms:
While CMS outlines penalties for non-compliance, it does not provide detailed enforcement strategies or timelines. This could weaken the rule's efficacy.
Potential for Data Misuse:
Publicly posting internal coverage criteria and algorithms could inadvertently reveal proprietary information, raising concerns about data misuse or intellectual property theft.
Lag in Implementation:
The policy is set to take effect in 2026. Given the rapid evolution of AI technologies, this timeline may leave gaps in regulation, especially as new, unregulated AI tools emerge.
Broader Implications for the Healthcare System
The proposed rule reflects a pivotal moment in the intersection of healthcare, technology, and civil rights. While it tackles urgent issues related to equity and AI, it also signals a shift in the regulatory landscape:
- Precedent-Setting: The rule could influence other federal agencies to adopt similar guardrails for AI in sectors like insurance and public health.
- Global Impact: By aligning with international frameworks like the EU's AI Act, this policy could position the U.S. as a leader in ethical AI governance.
- Challenging the Private Sector: The rule pushes private insurers to prioritize equity, which may require significant restructuring of existing AI systems.
Conclusion
CMS’s proposed guardrails for AI and automated systems in Medicare Advantage plans mark a critical step toward ensuring fairness in healthcare. By addressing algorithmic discrimination and emphasizing equitable care, CMS seeks to align technological innovation with civil rights principles. However, the policy’s broad scope, lack of specific enforcement mechanisms, and resource burdens present challenges that will require further refinement.
As legal scholars, it is our responsibility to critically assess these developments, ensuring that regulatory frameworks strike the right balance between innovation, equity, and accountability. These policies will likely shape the future of AI governance in the U.S., making this a case study for both healthcare and administrative law.
##
How would a Medicare Advantage plan that is ahead of the curve on using AI in medical review, and health policy, react to the new proposed rule?
This health plan is excited about the accuracy, cost savings, and efficiency of AI in medical benefit coordination and adjudication. It doesn't want to turn the clock backward.
A Medicare Advantage (MA) plan that is advanced in using AI for medical review and health policy might have a mixed reaction to CMS's proposed rule. While the plan would appreciate the rule's focus on equitable care and transparency, it would likely express concerns about the additional compliance burdens and potential limitations on innovation. Below is a detailed analysis of how such a plan might react and adapt:
Positive Reactions
Validation of AI's Role in Healthcare:
- The proposed rule formally acknowledges the growing importance of AI in healthcare decision-making. By regulating rather than banning AI, CMS legitimizes the use of AI tools in Medicare Advantage, which aligns with the plan's forward-thinking adoption of technology.
Opportunity to Demonstrate Leadership:
- An advanced MA plan would see this as a chance to showcase its AI systems as models of compliance, equity, and efficiency.
- This could differentiate the plan in the marketplace, attracting beneficiaries who value cutting-edge technology and transparency.
Support for Equity Goals:
- The plan may support CMS’s emphasis on reducing bias and promoting equitable access. This aligns with broader healthcare goals and could improve patient outcomes, especially for underserved populations.
- Additionally, plans that are already committed to addressing algorithmic bias might welcome this push as a way to level the playing field with their competitors.
Potential to Reduce Long-Term Risk:
- By addressing biases and complying with regulatory requirements now, the plan could avoid future penalties or reputational damage associated with inequitable or discriminatory AI use.
Concerns and Challenges
Compliance and Auditing Burdens:
- The plan might find the required audits and reviews of AI systems resource-intensive. Regularly assessing algorithms for biases and publishing internal coverage criteria could divert resources from other priorities, such as expanding services or enhancing patient care.
Transparency Requirements:
- Publicly posting decision-making criteria may reveal proprietary information about the plan’s AI tools, potentially exposing trade secrets or intellectual property to competitors.
Balancing Cost Savings with Equity:
- While the plan values AI for its efficiency and cost savings, addressing biases in algorithms may reduce some of those efficiencies. For example, eliminating biased data inputs could require retraining AI systems with more diverse, higher-quality datasets, which is expensive and time-consuming.
Increased Oversight of Vendor Systems:
- If the plan uses third-party AI vendors, it would now be responsible for ensuring those systems comply with CMS regulations.
- This adds a layer of complexity, as the plan must audit vendor algorithms and ensure their compliance. [See also my November 14 blog re: other HHS regulations re hospitals rather than health plans.]
Risk of Overregulation:
- The broad definition of "automated systems" might result in overregulation, creating unnecessary barriers for relatively simple AI tools. The plan might argue for clearer distinctions between different types of AI systems to reduce unnecessary oversight.
Strategic Adjustments
To adapt to the proposed rule, the MA plan would likely undertake the following steps:
Strengthening Compliance Infrastructure:
- The plan would establish robust systems to audit AI algorithms, document decision-making processes, and ensure compliance with anti-discrimination rules. This could include hiring equity-focused data scientists or compliance experts.
Proactive Transparency:
- Rather than resisting transparency requirements, the plan might develop user-friendly ways to communicate how AI tools support equitable and efficient care. This could involve creating public-facing materials that explain the algorithms in plain language.
Collaborating with CMS:
- The plan could actively engage in the public comment process to suggest practical modifications to the rule. For example, it might advocate for clearer standards for identifying and mitigating bias or propose tiered requirements based on the complexity of AI systems.
Enhancing Vendor Oversight:
- If the plan relies on third-party AI tools, it would establish stricter vendor management protocols, including regular audits and requirements for vendors to provide bias assessment reports.
Marketing the Benefits of AI:
- The plan could leverage its compliance with the new rules as a marketing point, emphasizing that its AI systems not only save costs but also promote equitable, culturally competent care.
Piloting New AI Models:
- The plan might pilot alternative AI models or datasets designed to meet CMS equity goals. For instance, using more diverse training datasets or implementing explainable AI tools that provide clear, actionable insights without sacrificing accuracy.
Balanced Position
While the plan may view some aspects of the rule as burdensome, it would likely recognize the long-term benefits of aligning with CMS’s equity and transparency goals. By proactively addressing potential challenges and positioning itself as a leader in responsible AI use, the plan could turn compliance into a competitive advantage.
The key for the MA plan will be to strike a balance between innovation and regulation—leveraging AI’s accuracy and efficiency while meeting CMS’s expectations for equitable, non-discriminatory care delivery.
##
Provide a comment letter on this proposal, written by the SVP for Government Affairs of a large national health plan that is ahead of the curve in using AI to improve efficiency, care coordination, and claims processing.
Administrator Chiquita Brooks-LaSure
Centers for Medicare & Medicaid Services (CMS)
Department of Health and Human Services
7500 Security Boulevard
Baltimore, MD 21244-1850
Subject: Comments on Proposed Rule CMS-4208-P – Guardrails for Artificial Intelligence and Automated Systems in Medicare Advantage Plans
Dear Administrator Brooks-LaSure,
On behalf of [Health Plan Name], one of the nation’s largest Medicare Advantage (MA) organizations, I am writing to provide comments on the proposed rule CMS-4208-P, which includes new requirements and guardrails for the use of Artificial Intelligence (AI) and automated systems in Medicare Advantage plans.
[Health Plan Name] has been at the forefront of integrating AI into care coordination, medical review, and claims processing. We are committed to using AI responsibly to enhance efficiency, improve member experiences, reduce healthcare costs, and promote equitable care. While we appreciate CMS’s focus on equity and non-discrimination, we are deeply concerned that the proposed policies are overly broad, impose significant compliance burdens, and may inadvertently hinder innovation in ways that are ultimately detrimental to beneficiaries.
General Support for Equity Goals
We share CMS’s commitment to ensuring that all MA beneficiaries receive equitable, culturally competent care. AI has tremendous potential to address disparities by analyzing complex data patterns, identifying gaps in care, and enabling earlier interventions. For example, our AI-powered care coordination tools have flagged undiagnosed chronic conditions among vulnerable populations, connecting members with critical services.
We also support CMS’s emphasis on transparency and accountability. Ensuring beneficiaries and providers understand how decisions are made is essential for building trust in AI tools. However, we believe that the current proposal risks undermining the very goals of efficiency and equity that AI is uniquely equipped to advance.
Key Concerns with the Proposed Rule
Overly Broad Definition of "Automated Systems"
The proposed definition of "automated systems" as “any system, software, or process that uses computation...to determine outcomes or assist decisions” is too expansive. This definition includes not only advanced AI models but also simpler tools, such as statistical algorithms or basic decision-support systems.
Recommendation: CMS should refine the definition to distinguish between high-impact AI systems (e.g., machine learning algorithms used for predictive analytics) and low-impact tools (e.g., rule-based decision engines). Tailoring regulatory requirements to the complexity and risk level of the system would prevent unnecessary compliance burdens while maintaining safeguards for equity.
Transparency Requirements Create Intellectual Property Risks
Requiring plans to publicly disclose internal coverage criteria and the rationale behind AI-based decisions risks exposing proprietary information. Many AI tools rely on advanced, trade-secret algorithms developed at great expense. Public disclosure could inadvertently allow competitors or third parties to reverse-engineer these tools, discouraging future investment in innovative technologies.
Recommendation: CMS should implement transparency requirements that protect proprietary information while still enabling beneficiaries and stakeholders to understand decision-making. For example, plans could disclose the general principles and objectives of their AI systems without revealing detailed algorithms or training data.
Compliance and Auditing Burdens
The proposed requirement for ongoing audits of AI systems and data inputs would impose significant administrative and financial burdens on MA organizations, particularly those that already adhere to best practices. These requirements may deter smaller plans from adopting AI tools, reducing competition and innovation.
Recommendation: CMS should adopt a risk-based approach to audits, focusing compliance efforts on systems with a higher likelihood of bias or inequity. Low-risk systems that have demonstrated accuracy and fairness through initial validation should not require frequent re-auditing.
Stifling Innovation
By imposing sweeping and ambiguous restrictions on AI, the proposed rule risks discouraging plans from pursuing innovative uses of technology. AI has delivered measurable benefits to our members, including faster claims adjudication, reduced administrative burdens, and proactive care interventions. Over-regulation may slow progress and harm beneficiaries by delaying these advancements.
Recommendation: CMS should establish a framework that encourages innovation while ensuring accountability. This could include pilot programs or regulatory sandboxes for testing new AI tools under CMS oversight before broader implementation.
Lack of Clarity on Bias Mitigation Standards
While the rule emphasizes the need to address bias in AI systems, it does not specify how plans should measure or mitigate bias. This creates uncertainty and could lead to inconsistent interpretations of compliance requirements.
Recommendation: CMS should collaborate with stakeholders to develop clear, evidence-based standards for bias detection and mitigation in AI systems. These standards should include practical guidelines for collecting diverse datasets and conducting fairness testing.
Proposed Path Forward
Rather than treating AI as a potential liability, CMS should embrace its transformative potential. We urge CMS to:
- Promote Responsible Innovation: Incentivize plans to adopt AI tools that have demonstrated value in improving care delivery and addressing disparities.
- Encourage Industry Collaboration: Work with MA organizations, technology developers, and researchers to create practical guidelines for AI governance.
- Adopt Flexible Regulations: Recognize the diversity of AI systems and tailor oversight to their complexity and impact on beneficiaries.
We also recommend that CMS include a phased implementation timeline to allow plans to adapt their AI systems and compliance processes to meet these new requirements.
Conclusion
[Health Plan Name] believes that AI is a critical tool for advancing the goals of the Medicare Advantage program. With the right regulatory framework, AI can drive efficiency, reduce costs, and improve equity for millions of beneficiaries. We urge CMS to refine its proposal to strike a better balance between safeguarding equity and fostering innovation.
We appreciate the opportunity to comment on this important issue and look forward to working with CMS to shape a forward-thinking policy that benefits all stakeholders.
Sincerely,
[Name]
Senior Vice President, Government Affairs
[Health Plan Name]
##
Minor notes. Above, I abstracted pages 362-372 on AI in medical review; the whole section on that topic, and matching coverage in LCDs/NCDs, is pages 354-372 (Section U, Enhancing Rules for Internal Coverage Criteria).