Monday, September 30, 2024

Amazing: Automatic Podcast with 2 Voices, on UCSF JAMA Alzheimer Paper

See an earlier blog about an important new UCSF paper on Alzheimer's tests and payers.

Here, I highlight an automatic 12-minute podcast (with two AI voices) created by Google NotebookLM.

NotebookLM lets you make an online folder, upload one or more documents, then generate podcasts, briefing guides, study guides (lots of Q&A), and other resources.

Here is the 12-minute podcast, with a quick intro by me:

https://youtu.be/vAlgMt0HgU8



New UCSF Paper: Payer Plans for Blood-Based ALZ Testing Lag Guidelines and Science

Header:  My reading of a new UCSF paper: I would say, "Payer Plans for Blood-Based ALZ Testing Lag Guidelines and Science."

Find the JAMA article by Deverka et al. here and the UCSF press release here.

##

AI Corner (Chat GPT 4o)


Expert Analysis: Payer Coverage Lags Behind Scientific Advancements in Alzheimer’s Disease Blood Tests

Recent publications from UCSF TRANSPERS researchers in JAMA highlight critical coverage gaps for novel blood-based biomarker tests (BBMTs) that diagnose Alzheimer’s disease (AD). These tests, seen as a promising alternative to more invasive and costly amyloid PET scans and cerebrospinal fluid (CSF) analysis, remain unavailable to most patients due to the lack of public and private insurance coverage. This leaves patients facing out-of-pocket costs exceeding $1,000, limiting access to those who can afford it.

Promising New Tests Without Coverage

The UCSF study outlines that BBMTs can detect Alzheimer’s at earlier stages, potentially increasing the chances of accessing newly approved disease-modifying therapies (DMTs). However, the absence of payer coverage raises concerns about equitable access. According to the researchers, evidence of clinical utility, beyond just diagnostic accuracy, is required to shift the insurance landscape. Payers seek proof that these tests not only identify Alzheimer’s but also lead to meaningful changes in patient care.

Moreover, coverage challenges with current biomarker tools, such as amyloid PET scans and CSF testing, have already shown the complexities faced by clinicians and patients. Without consistent payer policies, regions may experience different levels of access, exacerbating health inequities. UCSF's researchers argue that preemptively addressing these issues for BBMTs is crucial to avoid repeating history.

The Need for Clear Guidelines

The UCSF team, led by Dr. Patricia Deverka, stresses the importance of developing evidence-based payer guidelines for BBMTs. This includes specific clinical indications, such as the use of these tests for symptomatic patients over 60 years old, or as triage tools to determine if further amyloid PET or CSF testing is necessary. Without such guidelines, payers may hesitate to cover BBMTs, leading to further disparities in Alzheimer's diagnosis, particularly among underserved communities.

Equity Implications and Access Challenges

One significant equity concern is that current restrictions on specialized dementia care disproportionately affect minority populations, who already face lower rates of diagnosis. The availability of BBMTs, particularly in primary care settings, could bridge this gap. Yet, without coverage, these tools may remain inaccessible to those who need them most.

Additionally, the JAMA article points out the potential for overuse of BBMTs if introduced without clear evidence or coverage guidelines, which could strain healthcare systems. To avoid misuse, payers might consider covering BBMTs only in cases with validated clinical utility, initially limiting their use to symptomatic patients rather than for widespread screening.

Conclusion

The disconnect between the emerging scientific advancements in Alzheimer’s diagnostics and payer coverage is evident. BBMTs could revolutionize early diagnosis and care, but without payer adoption, they remain out of reach for many. The UCSF team’s call for proactive coverage policies that reflect the evolving landscape of AD diagnosis is timely. As the evidence base grows, so too must payer policies that promote equitable access to these critical diagnostic tools.


##

See a Google NotebookLM Automatic Podcast (two voices), 13 minutes:

https://youtu.be/vAlgMt0HgU8

##

(Back to Chat GPT) 

Can you confirm that the science article, plus the press release, do in fact support the point we are emphasizing (the disconnect between science and coverage)?  Will the authors feel you (or we) draw fair conclusions, albeit as our own appraisal of their work?

##

Yes, the points emphasized in your article—specifically the disconnect between scientific advancements and payer coverage—are well-supported by both the JAMA article and the UCSF press release. Here's how:

  1. Press Release Support:
    The UCSF press release clearly underscores the potential of blood-based biomarker tests (BBMTs) for Alzheimer’s disease to offer a simpler and more accessible alternative to current diagnostic methods. It explicitly states that these tests are not yet covered by insurers, leaving patients with high out-of-pocket costs, which impedes equitable access, especially for underserved populations. This indicates a direct disconnect between the availability of scientific advancements and the lack of corresponding insurance coverage.

  2. Science Article Support:
    The JAMA article elaborates on the criteria payers typically require for coverage—analytic validity, clinical validity, and clinical utility. While it acknowledges the promise of BBMTs, it emphasizes that payers are currently hesitant to cover these tests due to the need for more evidence, particularly around clinical utility. The article also highlights how the lack of coverage for existing biomarker tests (amyloid PET scans, CSF tests) has led to inequities in diagnosis and access. This further reinforces the gap between scientific progress and payer adoption.

Author Reactions to the Conclusions

The conclusions you are drawing—that there is a disconnect between science and payer coverage—are consistent with the data and concerns raised by the authors. Both the press release and the JAMA article highlight payer hesitation due to a lack of proof of clinical utility, despite the promising science behind BBMTs. By pointing out this gap in your appraisal, you are not overstating their conclusions but rather expanding on the implications of their work from a policy perspective.

The authors are likely to view this assessment as a fair extrapolation of their work. You are aligning with their emphasis on the need for clear coverage policies, evidence development, and equitable access, while making the case that current payer policies lag behind the advancing science.

Thus, your analysis remains well within the bounds of their findings and provides a reasonable interpretation of the implications regarding payer coverage gaps.




Sunday, September 29, 2024

Amazing: Google Notebook Turns My Crosswalk Blog into AI Two-Voice Podcast

...with some errors.

I've been hearing that Google Notebook now has a feature where it can generate an automatic, two-voice audio podcast by AI, from any article or documents you feed it.  (Here)

A friend used my CMS crosswalk/gapfill article - quite a long one - and produced a two-voice podcast from it.  I've loaded a second version (made from the final article version) onto YouTube.


Is It Accurate?  Is It Worth 30 Seconds?

Is it completely accurate, and does it get all the emphases correct?  NO.   

Is it worth listening to, for 30 or 60 seconds, to get the look and feel?   For me, DEFINITELY.


Find It

I loaded the Google AI Podcast up into YouTube, with a brief intro by me.

https://youtu.be/DPq5QBNBne4

You can also find the shorter and longer podcasts as a Transcript.   (The first is styled as Adam and Brenda, the second as Charles and Deanna).

TRANSCRIPT (#2, LONGER) https://docs.google.com/document/d/1ZOcgKZXzZByOqNBLY01aFZbaPe6wyuVV/edit?usp=sharing&ouid=110053226805181888143&rtpof=true&sd=true


TRANSCRIPT (#1, SHORTER) https://docs.google.com/document/d/1UGA4rhE-Ym4D91dEKjjXT0I6wXNVFlbh/edit?usp=sharing&ouid=110053226805181888143&rtpof=true&sd=true




Thursday, September 26, 2024

CMS New Lab Prices: Proposals for 2025, Finals for the 2024 Gapfill

Three news items this morning.

  • Congress passed a delay in PAMA repricing until 2027 - with reporting of 2019 prices in 1Q2026. (Also here).
  • CMS posted preliminary crosswalk/gapfill pricing for 2025 codes - public comment until October 25.
  • CMS posted final gapfill pricing for this year's cohort of gapfill codes.
Find the preliminary prices and the final MAC Gapfill prices at the CMS lab reimbursement page:


Scroll down for the Meeting Notice, Agenda, and Other Important Materials (the latter of which holds the new prices and final gapfill prices).

New Code Proposed Pricing 

CMS Proposed Pricing runs from line 3 to line 134, so 132 codes.  One code, 0473U, has been removed from the pricing process because it is an ADLT code now.   This is the Tempus CDx code, priced at $4500 effective July 1.

How CMS Thinks

Of the remaining 131 codes, I manually tallied 37 for gapfill, 95 for crosswalk.  Of the 95 crosswalk codes, 88 are simple (to a single crosswalk x 1).  Six are complex crosswalks (to added-up codes or a fractional code).  Of the six complex crosswalks, only one involves a fraction, which is 0468U.  The panel unanimously voted for 0003M x 0.5, and CMS agreed, making this the only code to receive a "fractional" crosswalk.

How Often CMS Disagrees

CMS disagreed with the advisory panel majority recommendation 49 of 131 times, or 37% of the time.  The comments in these cases reflect that CMS staff made a close review of coding details and methods.
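The disagreement rate cited above is easy to reproduce; a trivial sketch (the counts are my own manual tally, as noted below):

```python
# Disagreement rate: CMS vs. the advisory panel's majority recommendation.
disagreements = 49   # cases where CMS diverged from the panel majority
total_codes = 131    # codes priced after 0473U was removed as an ADLT

rate = disagreements / total_codes
print(f"{rate:.0%}")  # prints "37%"
```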

FOIA: Because it can be cryptic why and how CMS disagrees with the majority panel recommendations, I requested FOIA information in February 2023 on the November 2022 final decisions.   Today I got a FOIA response (020320237062) that 3 spreadsheets exist and contain more information on CMS's search for crosswalks and development of rationales, but CMS declined to release them to me.  

(These CLFSology counts, shown above, are based on my personal manual review of the large spreadsheet.  I'm sure they're close but don't guarantee perfection.)

Alzheimer Tests.  CPT proposed a number of new 8xxxx-series Alzheimer codes, such as beta-amyloid 40, beta-amyloid 42, total Tau, etc.   CMS generally proposed the $17 crosswalk 83520, "other immunoassay."  A PLA code for tau, 0479U, also landed there.   The expert panel and public comments generally recommended around $130 (which is 0358U/2, Fujirebio CSF).   There are burdensome challenges to achieving Alzheimer proteomics, and people have tried for 20 years, so the newly accurate tests, especially from plasma, represent substantial proteomics advances and breakthrough changes for Alzheimer management (Jack et al. 2024).

Turning to Final Gapfill Prices

CMS also released final Gapfill prices for 28 codes, based on the Spring proposed prices and subsequent public comment.   Usually, few gapfill prices change between spring and fall.

The "Rationales" page has 5 columns (B through F), of which a remark reveals that Column D is the Novitas/FCSO comment column.    You quickly see that columns B, C, and F are identical, and likely reflect the MolDx system rationales (Palmetto, Noridian, WPS).   That leaves Column E, which likely represents the NGS MAC comments.
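The column-matching step (spotting that B, C, and F are identical) can be sketched in a few lines of Python. The data here is a hypothetical stand-in for the real "Rationales" sheet, which you would load from the CMS Excel file:

```python
from itertools import combinations

# Hypothetical stand-in for the "Rationales" sheet, columns B-F,
# each as a list of rationale strings (one entry per code row).
cols = {
    "B": ["MolDx rationale 1", "MolDx rationale 2"],
    "C": ["MolDx rationale 1", "MolDx rationale 2"],
    "D": ["Novitas/FCSO comment", "No comments received"],
    "E": ["NGS comment A", "NGS comment B"],
    "F": ["MolDx rationale 1", "MolDx rationale 2"],
}

# Flag every pair of columns whose contents match row-for-row.
identical = [(a, b) for a, b in combinations(cols, 2)
             if cols[a] == cols[b]]
print(identical)  # prints [('B', 'C'), ('B', 'F'), ('C', 'F')]
```

With the real spreadsheet, the same comparison would immediately confirm (or refute) the B = C = F pattern across all rows.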

Gapfill: How Novitas Thinks

Novitas remarks for 16 codes, "No comments received," suggesting that the other codes did receive comments.

A bit scarily, for codes 81457, 81458, Novitas writes that, "Based on discussions/feedback/comments from Illumina, CAP, AMP, ACLA and AdvaMed; we determined that a reduction from our original payment determination of $2919.60 to $1759.60 is warranted. This revision is based on codes 81455 and 81277 and is in line with the current best practices and industry standards."  (For 81458 they write "reduction from $3211.56 to $2919.60").  For new LBx codes 81462, 81463, 81464, Novitas acknowledges that LBx requires more resources but "the cost of this series of cfDNA codes is not well characterized, therefore, current pricing is maintained."

Gapfill: How MolDx Thinks

Assuming that columns B, C, and F, which are identical, reflect the comments of the MolDx MACs, their extensive comments should provide some insight into how they think across diverse situations; but then again, there are >100 rows, so it will take some time to extract any consistent principles.

At the bottom of this blog, under AI Corner, I give an interim Chat GPT analysis of the rationales provided by MolDx and CMS.

Gapfill: How MAC Gapfills Changed

Five prices changed, with absolute increases ranging from $98 to $2662.  Percent increases, using the proposed price as the base, ranged from 10% to 573%.    Figure below; click to enlarge.

ADLT: The code with the huge change, 0356U, is NavDx from Naveris, a ddPCR test for HPV-related cancers (minimal residual disease).  CMS records the new ADLT price here, but changed the MAC gapfill rationale to "NA" since the code is now priced by ADLT policy: $827 to $3500.  The ADLT price was set on 3/18, effective 4/1, so it's an artifact that the code appears on the gapfill list at all.

Alzheimer:  The Precivity AD blood test 0412U got a healthy bump from $339 to $750, with the rationale based on similarity to 0063U, 0247U, 0322U, and 0384U.
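The price changes for the two examples above can be reproduced with a quick sketch (code-to-price pairs taken from the text; the percent figures use the proposed price as the base, and may differ from the figure's own calculation method):

```python
# Proposed (spring) vs. final prices for two of the changed codes.
changes = {
    "0356U": (827, 3500),   # NavDx; repriced under ADLT policy
    "0412U": (339, 750),    # Precivity AD
}

for code, (proposed, final) in changes.items():
    pct = (final - proposed) / proposed * 100
    print(f"{code}: +${final - proposed}, {pct:.0f}% over proposed")
# prints:
# 0356U: +$2673, 323% over proposed
# 0412U: +$411, 121% over proposed
```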

click to enlarge

###
This week, I had a lot of fun with AI Corner.  Here's what you can find below.

AI Corner:  How MolDx Thinks
AI Corner:  How CMS Thinks, When It Disagrees
AI Corner:  How to Write a Pricing Comment Letter to CMS
AI Corner:  How CMS Thinks, When it Disagrees on "Gapfill"
AI Corner:  How a CMS Gapfill Expert Describes His Job
AI Corner:  How a Gapfill Expert Describes His Headaches & Pitfalls
AI Corner:  How CMS Staff Prepare for Meeting with Angry Stakeholders 


###
AI Corner
How MolDx Thinks

I gave Chat GPT 4o all of the several dozen rationales that MolDx used to explain its price-setting.  Here is the response from Chat GPT:
____

MolDx rationales. The most frequent terms in the rationale descriptions include "price," "based," "service," "similar," "comparable," and "existing," suggesting a strong emphasis on the following key patterns in how Medicare MolDx sets prices for molecular lab tests:

  1. Comparison to Existing Services: The pricing often references similarity to existing services, suggesting that new tests are frequently evaluated in relation to established codes and methodologies. This ensures that new services are aligned with already established pricing structures (e.g., "comparable services").

  2. Price Justification Based on Service Complexity: Many rationales are based on the complexity of the service. Tests described as offering additional or more complex features compared to current offerings are given appropriate adjustments in price, but only if justified by the scope and complexity of the service.

  3. Genomic and DNA Testing: The frequent mention of "DNA" and "genomic" suggests a focus on tests related to genetic or molecular diagnostics, where pricing might be informed by the complexity of the analysis and the amount of genomic information provided.

  4. Use of Existing Laboratory Fee Schedule (CLFS): The term "CLFS" (Clinical Laboratory Fee Schedule) indicates that the Medicare MolDx program often refers to existing price frameworks in making determinations for new tests, likely aiming for consistency across similar diagnostic services.

These patterns indicate that the MolDx program prioritizes comparability with existing tests, complexity of the new test, and uses established frameworks like the CLFS when setting prices for new molecular lab tests.


AI Corner: 
How CMS Thinks When It Disagrees

I gave the 49 CMS disagreements with its expert panel to Chat GPT for discussion.  I don't claim the results are surprising, but here's the result.

From the frequency analysis of the rationale descriptions where CMS disagreed with the expert panel’s recommendations, several key patterns emerge:

  1. Crosswalk Disagreements: The term "crosswalk" appears frequently, suggesting that CMS often disagrees with the panel on how new tests should be priced relative to existing tests. Crosswalks are used to establish equivalency between new and existing tests for pricing, and CMS seems to place a different emphasis on how closely a new test aligns with a specific code than the panel does.

  2. Emphasis on CMS Judgment: The term "CMS" is frequently mentioned, indicating that CMS is explicit in asserting its judgment over the panel’s recommendations. CMS consistently provides its rationale when it diverges from expert advice, signaling transparency in decision-making.

  3. Existing Codes and Test Comparability: CMS's rationale often revolves around the existence of a comparable test or code. When CMS disagrees, it frequently states that there is a more appropriate or similar code that better fits the test in question, suggesting CMS prioritizes consistency with existing pricing models.

  4. Precision in Test Descriptions: CMS often disagrees with recommendations when the test description does not explicitly match the proposed crosswalk analysis or when key details like risk scores are missing. This highlights a strong focus on detailed descriptors and clinical content as critical factors in pricing decisions.

In presenting its disagreement, CMS appears to focus on maintaining price consistency with existing codes and requiring a strong match between the test’s description and the proposed pricing mechanism. These disagreements are framed as corrections to ensure appropriate alignment with established Medicare pricing principles, which reflects CMS’s priority for precise and comparable test categorization.

How should a lab frame its comments to CMS?

If a lab prefers the expert panel’s recommendation or other public recommendations over CMS’s decision and seeks to challenge CMS’s rationale, the lab should carefully structure its comment letter to maximize its effectiveness. Here are steps to prioritize and frame the letter:

1. Use Clear, Evidence-Based Arguments

  • Directly Address CMS's Rationale: Begin by acknowledging CMS’s stated rationale for disagreement, and then systematically counter the reasoning with evidence. For example, if CMS disagrees on crosswalk selection due to lack of comparability, provide detailed data that demonstrates how the proposed test aligns with the existing test or code in question.
  • Present Data and Clinical Evidence: Use published studies, clinical guidelines, and comparative performance data that show the test's complexity, efficacy, or uniqueness compared to existing codes. Ensure that your arguments are backed by solid, peer-reviewed scientific evidence to substantiate the panel's recommendation.

2. Highlight Support from the Expert Panel and Public Comments

  • Reference Expert Consensus: Emphasize that the expert panel, comprised of individuals with deep knowledge of molecular diagnostics, voted in favor of a different pricing strategy. Highlight their expertise and support their recommendation, arguing that CMS should align with the majority view of experts.
  • Leverage Other Public Comments: If other stakeholders, including physicians, patient advocates, or industry leaders, also support the panel’s recommendation, mention this. Public consensus can underscore the broader impact of CMS’s decision and indicate alignment with real-world needs.

3. Address CMS Priorities

  • Align with CMS Goals: Frame your arguments within the context of CMS’s stated goals, such as ensuring value-based care, affordability, and access to important tests. If CMS's pricing decision hinders access to important molecular diagnostics or makes it unaffordable for patients, clearly articulate this impact and align the lab’s position with Medicare's mission to promote patient access to high-quality care.
  • Demonstrate Economic Impact: If CMS’s pricing decision negatively affects the economics of providing the test (e.g., by making it unsustainable for labs to offer the test at the CMS-set price), include financial analyses or business models that show the decision’s long-term adverse impact on access to the test.

4. Focus on Test Differentiation

  • Clarify the Test’s Unique Value: Explain why the test in question is different from similar codes that CMS may have suggested. Highlight any distinctive features (e.g., precision, scope of genomic analysis, or clinical utility) that justify the higher price or alternative pricing mechanism.
  • Argue the Clinical Necessity: Emphasize the test’s clinical value, including its potential to influence critical treatment decisions. If the test offers unique benefits (e.g., improved diagnosis, precision medicine capabilities), make sure these are prominent in your argument.

5. Follow CMS’s Formal Comment Process

  • Use the Appropriate Channels: Submit the letter within the formal comment period, ensuring it adheres to CMS's submission guidelines. Include references, attachments, and supporting documentation where applicable.
  • Engage Influential Stakeholders: If possible, encourage others who benefit from or use the test (such as clinicians or advocacy groups) to submit comments that align with your letter to provide additional weight.

6. Propose a Compromise or Adjustment

  • Offer Alternatives: If feasible, propose a compromise or an alternative crosswalk that still aligns more closely with the expert panel’s recommendation while addressing some of CMS’s concerns. For example, suggest an alternative price point or model that balances access and cost-effectiveness.

Sample Structure for the Comment Letter:

  1. Introduction: Briefly state the purpose of the letter (e.g., the lab’s position on CMS’s decision) and acknowledge the importance of the issue.
  2. Acknowledgment of CMS’s Rationale: Summarize CMS’s reasoning and then clearly state the lab’s disagreement.
  3. Evidence Supporting the Lab’s Position:
    • Clinical/technical data showing the test’s value and differentiation.
    • Expert panel consensus and other supporting public comments.
    • Financial analysis showing the economic impact of CMS’s decision.
  4. Alignment with CMS’s Broader Goals: Connect the lab’s preferred position to CMS’s mission (e.g., improving patient care access, supporting innovation).
  5. Conclusion and Call to Action: Request that CMS reconsider its decision and adopt the expert panel’s or public recommendation.

By presenting an evidence-based, well-reasoned argument that aligns with CMS’s broader goals, the lab can increase the likelihood that its comment will be seriously considered.


##

Another Look at Disagreements:
The Gapfill Disagreements 

Earlier, we asked AI to assess 49 cases of all types where CMS disagreed with the expert panel.

Here, we turn to 17 cases where CMS recommended gapfill.  In about 13, CMS did so in concurrence with earlier panel recommendations for gapfill.  In these cases, the rationale was boilerplate ("We agree with the panel...") except in one case (where CMS mentioned that "aptamer proteomics" had no CLFS precedent for pricing).

Therefore, I filtered the CMS gapfill recommendations to only those 14 cases where CMS DISAGREED with the panel AND recommended gapfill.  Here:

Based on the pasted rationales where CMS disagreed with the expert panel and recommended gapfill, we can draw several insights about CMS's decision-making process when it diverges from the expert panel recommendations.

Key Themes in CMS Disagreements:

  1. Focus on Resource Estimation by MACs:

    • One of the most frequently cited reasons for opting to gapfill is the ability for Medicare Administrative Contractors (MACs) to more accurately estimate the resources needed for a given test. CMS emphasizes this point repeatedly, suggesting that when there is uncertainty or complexity in determining a crosswalk, gapfill allows MACs to assess the actual costs associated with the test in their jurisdiction.
    • Example: "By gapfilling this code the resources used in this code can be better estimated by a Medicare Administrative Contractor (MAC)."
  2. Lack of Public Input:

    • In several cases, CMS highlighted the absence of public input as a reason for maintaining a gapfill decision rather than following the panel's recommendation. This suggests that CMS weighs stakeholder input heavily, and in its absence, it defaults to a conservative gapfill approach, allowing MACs to assess pricing based on regional realities.
    • Example: "CMS did not receive any public input on this code and has decided to maintain the gapfill decision made last year."
  3. Specificity of Test Methods:

    • CMS frequently disagrees with crosswalk recommendations when they believe the proposed crosswalk doesn’t adequately capture the specific methods or technologies used in a new test. This indicates that CMS prioritizes a detailed understanding of the test's methodology and prefers gapfilling when there are significant differences between the proposed crosswalk and the test in question.
    • Example: "The proposed crosswalk does not analyze cell-free DNA."
  4. Minority Panel Recommendations:

    • In several cases, CMS agrees with the minority opinion within the expert panel, indicating that while it values majority consensus, it is willing to deviate when it believes the minority position is more aligned with CMS’s goals of cost containment and resource estimation.
    • Example: "CMS agrees with the minority panel recommendation to gapfill this code."
  5. Avoidance of Redundancy:

    • CMS sometimes points out redundancy in the expert panel's crosswalk suggestions. When it sees that suggested codes overlap or create inefficiencies in pricing (e.g., combining multiple codes), CMS opts to gapfill, allowing for a more tailored and efficient approach.
    • Example: "CMS believes there is too much redundancy across the suggested Crosswalk to codes to simply be able to add them up."
  6. Historical Precedents and Consistency:

    • CMS often refers to its own past decisions as a basis for disagreement, indicating a focus on maintaining consistency across tests that use similar technologies or methodologies. This helps ensure uniformity in pricing and resource estimation for similar tests over time.
    • Example: "CMS has crosswalked tests involving DNA methylation differently from mRNA analysis."

Assessment:

When CMS disagrees with the expert panel, its rationales seem to hinge on several core principles:

  • Accurate Resource Estimation: CMS believes that the gapfill method allows for a more flexible and accurate determination of resources, especially when a test involves complex methodologies or when crosswalks don’t align well with the test description.
  • Consistency and Precedent: CMS often seeks to maintain consistency with previous decisions, using past rulings as a guideline for current choices.
  • Preference for Gapfill in the Absence of Clear Data or Consensus: When there is uncertainty—whether due to a lack of public input or ambiguities in the panel's recommendations—CMS tends to fall back on gapfill to allow for further analysis and regional adjustments.

Discussion:

CMS’s preference for gapfill over expert panel crosswalk recommendations in certain cases seems to stem from its desire to avoid blanket pricing decisions when it feels that MACs are better suited to determine the local costs associated with a test. This approach allows for flexibility and adjustment based on real-world data. Moreover, CMS’s frequent reference to past decisions and consistency suggests that it values long-term stability in pricing over potentially more aggressive changes suggested by the panel.

By frequently citing resource estimation by MACs, CMS emphasizes the importance of regionalized cost analysis, which may reflect the variability in lab testing across different regions of the country. This approach also allows CMS to remain cautious and conservative in pricing, especially when faced with incomplete information or uncertainty.


AI Version: How a CMS Gapfill 
Policymaker Describes His Job

You are a senior policy official at CMS and you’ve been working on these gapfill pricing analyses and issues for 10 years. You need to train Maria, who is new to this assignment and will be taking over for you in a few months. Give her an initial talk about the priorities of the gapfill process, and how CMS approaches it with new codes and with the recommendations of various public parties.

Hi Maria,

As you're preparing to take over the gapfill pricing process, I want to give you a solid foundation on how CMS approaches these issues and what our priorities are. The gapfill process is a critical part of CMS’s role in setting fair and sustainable pricing for new molecular diagnostic tests, especially in a constantly evolving space like laboratory diagnostics.

1. Purpose of the Gapfill Process

The gapfill method allows us to price new lab tests based on the resources needed to perform the test, as assessed by the local Medicare Administrative Contractors (MACs). When we don't have a clear equivalent test or “crosswalk” to base pricing on, gapfill lets us gather data over time to arrive at a fair and accurate price.

The gapfill process is different from crosswalking, where we match a new test to an existing code with similar complexity or use, and then adopt its price. Gapfill is used when no such direct match is available, allowing us to start fresh.

2. CMS Priorities in the Gapfill Process

CMS approaches gapfill pricing with several key priorities in mind:

  • Accuracy and Fairness: We want to make sure that the price reflects the actual resources needed to perform the test. This includes technical factors, such as the type of analysis involved (e.g., DNA vs. RNA), the labor required, and the technology used.

  • Cost Containment: While we aim for accuracy, we also have to ensure that prices are sustainable. We are balancing innovation in molecular diagnostics with the responsibility to prevent runaway costs in the healthcare system.

  • Consistency Across the Board: We aim to keep our pricing consistent, not just with past decisions but also across similar tests. This ensures fairness and stability for laboratories, payers, and patients alike. For example, if we’ve already gapfilled or crosswalked a similar test in the past, we aim to stick close to those decisions unless something has significantly changed.

3. Approaching New Codes

When a new code comes in, the process starts with evaluating whether a crosswalk exists. This is when we ask: Does this test have a near equivalent that is already priced? If yes, we lean toward crosswalking. But when there’s no clear match—either because of new technology or complexity—we turn to the gapfill process.

Why Gapfill? Gapfill lets the MACs get involved, because they are closer to the realities of running these tests in their regions. They can assess what labs are charging and what the real resource needs are. This ensures that the pricing reflects what's happening on the ground, not just in theory.

4. Public Input and Advisory Panels

We value public input and the recommendations from our expert advisory panel. Their expertise and stakeholder perspectives are invaluable. However, you’ll notice that CMS doesn’t always agree with the majority opinion.

  • When We Agree with the Panel: These are usually cases where the panel’s recommendation aligns well with the existing framework, past pricing decisions, and the principles we follow. In these cases, you’ll see that the rationales are relatively simple, as there’s a clear consensus.

  • When We Disagree with the Panel: This often happens when CMS has concerns about the sustainability of the panel’s recommendation or when we believe there’s a better way to determine pricing through gapfill. For example, if the panel suggests a crosswalk but CMS believes the test requires a unique approach, we might opt for gapfill. Here, our rationale tends to be more specific, citing factors like insufficient data or public input, or the need for MACs to provide a more localized resource estimate.

5. Engaging with Public Parties

Stakeholders, from laboratories to clinicians to advocacy groups, are part of the process, and we review their input carefully. While we don’t always follow their recommendations, their perspectives provide context on the test's value and usage in the real world. If we don’t receive much public input, as happens sometimes, it can push us to default to gapfill, as we’d prefer more data to make a final crosswalk decision.

6. Balancing Innovation and Affordability

One of the trickiest parts of the job is balancing the need to support innovation in diagnostics with the goal of keeping Medicare costs in check. Laboratories are constantly developing more sophisticated and personalized tests, but those tests come with costs. By using gapfill, CMS can take the time to evaluate the test properly, ensuring the price reflects both the clinical value and the actual resources required without prematurely inflating costs.

Conclusion

As you take on this role, keep these priorities in mind. We are here to ensure that pricing is fair, accurate, and sustainable, both for CMS and for the broader healthcare system. Always look for consistency with past decisions but be open to adjusting as technology advances and new data comes in. And finally, remember the importance of MACs and public input—they play a vital role in making sure our pricing reflects the real-world environment in which these tests are performed.

I’ll be here to help you with specific cases and guidance as you take over, but these principles should guide you through the bulk of the decisions you’ll need to make.

Good luck!

##

She asks: what are some of the most likely pitfalls and headaches in the job?

##

Great question! There are definitely some challenges and potential pitfalls that come with managing the gapfill pricing process. Here are some of the most common ones to watch out for:

1. Incomplete or Insufficient Data

A major pitfall is not having enough data to make informed decisions. Whether it’s the lack of public input, limited clinical data, or missing financial impact information from laboratories, this can put you in a position where decisions feel more speculative than evidence-based.

  • Headache: You might need to make a call to gapfill when it would be easier to crosswalk, or vice versa, but you just don’t have the data to confidently go one way or the other.

How to manage: Always advocate for more data collection and better stakeholder input. Push for MACs and public parties to provide detailed resource usage information.

2. Pressure from Stakeholders

There can be significant pressure from various stakeholders—laboratories, clinicians, patient advocacy groups, and even your own advisory panel. Each party has its own interests, and sometimes these conflict with CMS’s goals of cost containment or fairness.

  • Headache: You may face pushback from labs or advocacy groups who believe that gapfilling a test undervalues it. On the other hand, there can be pressure from cost-conscious entities like Medicare contractors who think a panel recommendation is too expensive.

How to manage: Stick to CMS principles and maintain transparency. Clearly communicate the reasons behind your decisions, whether you agree or disagree with a panel or stakeholder. Being clear about CMS’s broader goals can help smooth out tension.

3. Handling Disagreements with the Expert Panel

Disagreeing with the majority recommendation of the expert panel can sometimes be politically tricky. The panel members are experts in their fields, and openly diverging from their recommendations could lead to questions about CMS’s judgment.

  • Headache: It may feel uncomfortable to go against the experts, particularly if there’s public scrutiny or strong advocacy from the panel’s side.

How to manage: Be clear and specific in your rationale. CMS is focused on sustainability, consistency, and cost management—these priorities should drive your decision-making. When you disagree with the panel, it’s important to justify the decision based on solid, defendable principles, like resource estimation by MACs or inconsistency with past crosswalks.

4. Complexity of New Technologies

Molecular diagnostics and other lab tests are becoming more complex, especially with the rise of personalized medicine and next-gen sequencing technologies. The rapid pace of innovation can make it difficult to assess where a new test fits into the existing pricing framework.

  • Headache: New tests may not align easily with any existing code, making both crosswalk and gapfill difficult. You might need to rely on MACs more than expected, and this can slow down the process.

How to manage: Stay up to date on emerging technologies, and lean on expert opinions, even if CMS ultimately doesn’t follow the majority. Engaging with industry experts and MACs early on helps to get a clearer sense of the test’s complexity and where it might fit.

5. Balancing Fairness and Cost Containment

Finding a balance between pricing that encourages innovation and ensuring that Medicare doesn’t overpay for new diagnostics is always a challenge. Tests that are very innovative or groundbreaking often come with higher costs, and setting a price too low could discourage their use, but setting it too high can lead to excess costs.

  • Headache: If a test becomes too expensive, there could be pushback from payers and CMS, but if it's underpriced, labs might decide it’s not worth offering the test, reducing access for patients.

How to manage: Use gapfill to your advantage in these cases. It allows for a flexible, data-driven approach to set pricing over time, rather than rushing to a crosswalk decision that might not fully account for the test’s value.

6. Administrative Burden and Timeline Delays

The gapfill process can be slow and cumbersome, especially when relying on MACs to assess resources and costs over time. Gathering the data needed to justify a final price can take longer than anticipated, and delays can frustrate stakeholders.

  • Headache: If timelines stretch too far, there can be frustration from labs, clinicians, and even patients waiting for the test to be priced and available.

How to manage: Set clear expectations early on about the time frame for the gapfill process. Regular check-ins with MACs and stakeholders can help keep things moving along.

7. Dealing with Reconsiderations and Appeals

Some codes go through multiple rounds of reconsideration, especially if there’s disagreement between CMS and the expert panel or public stakeholders. Revisiting the same code multiple times can be frustrating and may lead to perceived indecision or inconsistency.

  • Headache: It can be frustrating to re-evaluate codes when there’s already been a lot of work done, particularly when the same arguments are brought up over and over.

How to manage: Document decisions thoroughly so that if a code comes back for reconsideration, you have a clear record of why CMS made the initial decision. This also helps justify CMS’s consistency in its approach.

Conclusion

This job requires a careful balance between clinical innovation, cost control, and fairness. You’ll likely find that the pressure from various parties and the complexity of new technologies are the biggest challenges, but by staying grounded in CMS’s core principles—accuracy, sustainability, and transparency—you’ll be able to navigate these effectively. Just keep your cool, rely on data, and make decisions based on the best information available, even when you face uncertainty or pushback.

We’ll go over specific cases and best practices over the next few months, but this overview should give you a sense of the common issues and how to handle them.

####

CMS Prepares for Meeting with Angry Stakeholders

You continue to be the CMS policymaker. Several large organizations are upset that you (CMS) make different choices than the Federal advisory panel (the lab expert advisory panel) around 40% of the time. Disconcertingly, they also have data showing that in the recent round of over 100 decisions, you agreed with the advisory panel 70% of the time on low-cost codes, but disagreed 70% of the time on high-cost codes (>$1000). Your boss, the senior director of the Center for Medicare, will attend the meeting but expects you (the director responsible for lab pricing) to lead it. Discuss your thinking and preparation for the challenging meeting.

Preparation for the Meeting: Acknowledging the Core Concern

As we prepare for this meeting, we need to acknowledge that one of the most critical concerns for stakeholders is why CMS disagrees with the advisory panel of experts about 40% of the time, particularly for high-cost codes. This issue, especially the trend of more disagreements for higher-priced tests, will likely be a central focus of the discussion. We need to approach this with transparency and an emphasis on CMS’s broader mission.

1. Review the Data and Decisions

  • Analyze the Data: Before the meeting, we need to thoroughly review the data on the recent round of over 100 decisions. Specifically, we need to address the fact that CMS agreed with the panel about 70% of the time on low-cost codes and disagreed 70% of the time on high-cost codes (>$1000). This discrepancy will be central to the conversation, and we need to be ready to explain it.
  • Break Down by Complexity: Often, the high-cost codes involve more complex and cutting-edge technologies. These technologies come with higher uncertainty regarding clinical utility and cost-effectiveness, which makes CMS more cautious about immediately adopting the advisory panel’s recommendations without further analysis.
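Before the meeting, it is worth reproducing the stakeholders' stratified rates from our own decision log to confirm their math. A minimal sketch — the decision records below are invented to reproduce the pattern in the scenario (70% agreement below $1000, 70% disagreement above), not actual CLFS data.

```python
def agreement_rate(decisions, cost_filter=lambda d: True):
    """Share of decisions where CMS agreed with the panel, within a cost stratum."""
    subset = [d for d in decisions if cost_filter(d)]
    if not subset:
        return None
    return sum(d["agreed"] for d in subset) / len(subset)

# Hypothetical decision log: panel-recommended price, and whether CMS agreed.
decisions = (
    [{"price": 80.0, "agreed": True}] * 7 + [{"price": 80.0, "agreed": False}] * 3 +
    [{"price": 1500.0, "agreed": True}] * 3 + [{"price": 1500.0, "agreed": False}] * 7
)

low = agreement_rate(decisions, lambda d: d["price"] <= 1000)   # 0.7 agreement
high = agreement_rate(decisions, lambda d: d["price"] > 1000)   # 0.3 agreement
```

Running the same stratification on the real log, and further breaking the high-cost stratum down by technology type, would let us show whether the disagreements cluster around genuinely novel tests rather than price per se.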

2. Acknowledge and Justify CMS’s Broader Mandate

  • Emphasize CMS’s Responsibility for Cost Containment: While we greatly value the expert panel’s input, CMS has a broader mandate to ensure the sustainability of the Medicare program. The fact that disagreements are more common on high-cost tests reflects this responsibility. High-cost tests, if priced too high, can significantly impact Medicare's overall spending.
  • Clinical Value vs. Fiscal Responsibility: The advisory panel focuses on clinical value, which is critical, but CMS must also factor in the downstream impact of these tests on the entire healthcare system. For high-cost tests, we are more likely to use gapfill because it allows Medicare Administrative Contractors (MACs) to assess the real-world resources involved, which is crucial when the stakes are higher.

3. Address the Discrepancy Between Low- and High-Cost Code Decisions

  • Why CMS Agrees More Often on Low-Cost Codes: For low-cost codes, the financial risk is lower, and CMS is more likely to follow the panel’s recommendation because the potential impact on the Medicare budget is relatively minor. In these cases, the panel’s expertise and CMS’s cost-containment goals are often more aligned.
  • Why CMS Disagrees More Often on High-Cost Codes: For high-cost tests, CMS takes a more cautious approach. We recognize that the advisory panel evaluates the clinical merits of these tests, but CMS also has to consider long-term budgetary impact, especially if these tests could be used broadly across the Medicare population. This is why CMS disagrees more often with the panel on higher-cost codes—we need to ensure we’re not inadvertently setting prices that could destabilize the system.

4. Explain CMS’s Approach to New Codes

  • Gapfill as a Tool for Managing Uncertainty: The gapfill method allows CMS and MACs to gather data on the real-world costs and clinical utility of new tests. For high-cost tests, especially those with emerging or complex technologies, there is often more uncertainty about the resources required. Using gapfill gives CMS the flexibility to reassess and adjust pricing once more data is available, ensuring we’re not over-committing to a price too early.
  • Crosswalk vs. Gapfill for High-Cost Tests: Crosswalking a high-cost test to an existing code can sometimes lead to inflated prices if the comparison isn’t perfect. Gapfill allows MACs to provide a more tailored assessment, which is crucial for expensive and innovative tests.

5. Discuss Public Input and Expert Recommendations

  • Acknowledging the Value of the Expert Panel: It’s important to affirm that we respect and value the input of the expert advisory panel. Their insights are critical to the process, and in the majority of cases, we agree with their recommendations. However, as CMS, we must also weigh other factors, including budget impact, consistency with prior decisions, and public input.
  • The Importance of Data-Driven Decisions: In some cases, the advisory panel may focus solely on the clinical merits of a test, while CMS needs to consider data from MACs and public stakeholders about the resource usage and potential adoption rates of these tests. When the data is incomplete, particularly for high-cost tests, CMS often opts for gapfill to allow more time for evaluation.

6. Balance Between Innovation and Fiscal Responsibility

  • Supporting Innovation, Managing Costs: It’s crucial to convey that CMS supports innovation and wants Medicare beneficiaries to have access to cutting-edge diagnostics. However, we also have a responsibility to ensure that Medicare’s resources are used efficiently. High-cost tests have the potential to strain the Medicare budget if not carefully priced, which is why CMS sometimes takes a more conservative approach, particularly when there is uncertainty about the test’s long-term clinical utility.
  • High-Cost Tests Require More Scrutiny: The fact that CMS disagrees more frequently on high-cost tests isn’t about stifling innovation. It’s about making sure that we get the pricing right before committing taxpayer dollars. High-cost tests can significantly affect Medicare spending, and we need to balance clinical value with financial sustainability.

7. Acknowledge Stakeholder Concerns and Offer Solutions

  • Transparency and Communication: It’s important to acknowledge the stakeholders’ frustration with CMS’s decisions. While disagreements with the expert panel are sometimes necessary, we can commit to improving transparency around how and why CMS makes its decisions. Offering more detailed rationales for our choices, especially on high-cost tests, could help build understanding and trust.
  • Collaborating to Improve Alignment: We should be open to exploring ways to improve alignment with the advisory panel and public stakeholders, especially on high-cost tests. This could involve setting up additional working groups or consultation processes to ensure that stakeholder input is considered alongside CMS’s broader goals.

8. Frame the Meeting Outcome

  • Focus on Collaboration and Shared Goals: Ultimately, the meeting should emphasize that CMS, the advisory panel, and public stakeholders all share the same goal: ensuring that Medicare beneficiaries have access to the best diagnostic tools at sustainable prices. The difference in perspectives arises from the fact that CMS must consider the financial impact on Medicare as a whole.
  • Commitment to Listening: CMS is committed to improving its processes and working with stakeholders to find solutions. We should reiterate that we value their input and that the gapfill process is designed to allow flexibility and adjustment based on data, which is especially important for high-cost tests.

Key Points to Emphasize in the Meeting:

  • Disagreement with the Panel on High-Cost Codes Is About Caution, Not Dismissal: CMS’s higher rate of disagreement on high-cost tests is due to the greater potential budgetary impact and the need for more thorough evaluation through gapfill.
  • Gapfill Provides Flexibility: Gapfill allows CMS to gather data on resource use and refine pricing, especially for high-cost, complex tests where there is greater uncertainty.
  • Commitment to Improving Alignment: While disagreements are sometimes necessary, CMS is committed to improving transparency and collaboration with the expert panel and other stakeholders.

Conclusion

The central issue of disagreement with the advisory panel, particularly for high-cost tests, needs to be handled with transparency and clear communication about CMS’s broader fiscal responsibilities. By preparing to explain the reasoning behind these decisions—focusing on sustainability, consistency, and the need for data-driven pricing—we can demonstrate that CMS is acting in the best interests of both Medicare beneficiaries and the program’s long-term viability.



##
Tidbit
Two codes being priced this fall, 0421U and 0464U (Geneoscopy ColoSense and Exact Sciences Cologuard Plus, respectively), are both preventive tests under the Colon Screening NCD. In the Hospital Outpatient Appendix B, updated quarterly, both have status E1, not payable for Medicare. I believe this flags that CMS needs to update the relevant CRC screening NCD to include not only original Cologuard but, specifically, ColoSense and Cologuard Plus. However, that's my guess based on Appendix B, and I'll leave the ultimate question (whether an update is required) to attorneys.


Tuesday, September 24, 2024

Highlights: From the Newsletter Politico Pulse: HHS & AI

From the free email newsletter Politico Pulse:

https://www.politico.com/newsletters/politico-pulse/2024/09/13/inside-hhs-ai-purchases-00178902

ARTIFICIAL INTELLIGENCE AT HHS — 

HHS, one of the federal government’s top users of artificial intelligence, has spent nearly $129 million on AI and AI-related purchases in the past five years, Chelsea and POLITICO’s Mohar Chatterjee report.

That’s according to a POLITICO analysis of government data, which found that the top spenders within HHS include:

— The Office of the Assistant Secretary for Management and Budget at $53 million

— The NIH at $26 million

— The FDA at $32 million

Why it matters: AI experts who spoke with POLITICO said the investments, which accelerated before President Joe Biden’s October executive order directing federal agencies to be more intentional about how they use and regulate AI, aim to sort through massive datasets and advance research at the FDA and the NIH.

“This is encouraging,” said Dr. Don Rucker, who was HHS’ National Coordinator for Health IT in the Trump administration. As the health care industry works out what the proliferation of AI means for the sector, “HHS is sort of working out a version of the same question,” Rucker added.

[Politico includes a bar graph showing that FDA and NIH are among HHS's biggest AI purchasers.]

Some ways HHS agencies use AI:

— To date, the OASMB has purchased more than $39 million in optical imaging devices from AI company Spectral MD; the devices use machine-learning algorithms to classify burns and can be used to handle wounds in mass-casualty events.

— In 2020 and 2021, the Centers for Medicare and Medicaid Services awarded tech consulting firm ESimplicity nearly $4 million to plan how to efficiently transition its data to cloud services that could support AI use.

— Last year, the National Institutes of Health awarded a nearly $3.5 million contract to AI company BelleTorus for the experimental development of telehealth AI tools that could digitally assess a child’s throat and airways.

According to a GAO report from December, HHS is one of the top federal agencies using AI — fourth, behind only NASA and the Commerce and Energy departments.

Just last month, HHS merged two tech offices and increased its focus on AI, posting job listings for three new roles related to creating AI policy: a chief AI officer, a chief technology officer, and a chief data officer.

Brief Blog: PDL1 Sparks Fly Ahead of FDA Hearing

Update: Panel supported FDA plan for narrower labeling (votes 10-2, 11-1).   Here.

Friends of Cancer Research summarizes.

##

I posted a blog a few weeks ago that FDA had a pending AdComm on PDL1 and immuno-oncology (blog here).

The meeting is now imminent and Endpoints covers briefing materials from FDA and from BMS.

Endpoints

https://endpts.com/fda-adcomm-will-re-examine-approval-of-checkpoint-inhibitors-in-two-cancers-based-on-key-biomarker/

FDA (41pp)

https://www.fda.gov/media/182138/download

BMS (50pp)

https://www.fda.gov/media/182140/download

As quoted by Endpoints, FDA writes,

The agency said that “approvals for all randomized patients may not be in the best interest of patients with tumors with low PD-L1 expression” and that giving the drugs to patients with low or no PD-L1 expression “has the potential for harm including serious immune related adverse events on top of a malignancy that can markedly affect a patient’s quality of life.”

Opposite Issues for HER2 Drugs

Note that in other quarters, we are looking at the value of HER2 drugs for patients with low and "ultra low" HER2 expression, e.g. this month in ASCO Post.

##

Note that debates about PDL1 confusion and harmonization go back for years; e.g. Karim 2017 and Adam 2018.   

One response is to focus on better-defined cutoffs or reading methods for PDL1.  Entirely different approaches would attempt to make algorithms combining PDL1, TMB, MSI, or other biomarkers relevant to the checkpoint inhibitor pathways.  

##

AI Corner

  • Discussion by Chat GPT 4o
  • Mini-article by Scite AI

##

I asked Chat GPT 4 to read both the FDA and BMS documents and discuss.

##

Summary of the Documents

FDA Document  

This document outlines the FDA's review and considerations regarding the use of immune checkpoint inhibitors (ICIs) like nivolumab (Opdivo) and pembrolizumab (Keytruda) for the first-line treatment of HER2-negative metastatic or unresectable gastric adenocarcinoma. The FDA focuses on the relevance of PD-L1 expression as a biomarker for treatment efficacy, particularly in the context of selecting patients for ICI treatment.

Key points include:

  1. PD-L1 Expression: The FDA acknowledges the predictive role of PD-L1 expression but highlights the challenges of standardizing it due to varying assays and scoring methods across trials. The document emphasizes that higher PD-L1 expression (≥10 CPS) is associated with more significant benefit, whereas patients with lower expression (<1 CPS) may not derive much benefit and could experience harm due to immune-related adverse events (AEs).
  2. Labeling: FDA approvals for nivolumab and pembrolizumab are agnostic of PD-L1 status for now, but the agency is considering whether to amend labeling to include PD-L1 expression as a requirement for patient selection.
  3. Efficacy Data: The document cites trials that show significant survival benefits for patients with higher PD-L1 expression, although these benefits are less clear for patients with low or no expression.

BMS Document  

Bristol-Myers Squibb (BMS) presents its position on nivolumab (Opdivo) combined with chemotherapy for treating gastroesophageal adenocarcinoma, emphasizing the flexibility needed in prescribing decisions. BMS advocates that existing labeling provides sufficient information for clinicians to make informed decisions, including the potential benefits for PD-L1-positive patients and those whose PD-L1 status is unknown.

Key points include:

  1. PD-L1 Cutoffs: BMS argues against setting higher PD-L1 cutoffs (such as CPS ≥10), as this would exclude a significant proportion of patients (~40-50%) who might still benefit from the treatment. Instead, BMS prefers the flexibility of the CPS ≥1 cutoff to maximize patient access.
  2. Real-World Data: BMS points to real-world data indicating that about 73% of patients receiving nivolumab are tested for PD-L1, and most test positive (CPS ≥1). This suggests that PD-L1 testing is being used effectively to inform treatment decisions.
  3. Safety: The safety profile of nivolumab plus chemotherapy is well-understood, with manageable immune-related adverse events (mostly low-grade), and no new safety concerns have emerged.
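For context on the numbers in this debate: as I understand the Dako 22C3 scoring convention, CPS (Combined Positive Score) is the number of PD-L1-staining cells (tumor cells, lymphocytes, and macrophages) divided by the total number of viable tumor cells, multiplied by 100, and reported up to a maximum of 100. A small sketch of the calculation and the two candidate cutoffs; the function and example numbers are mine, not from either briefing document.

```python
def cps(pdl1_positive_cells, viable_tumor_cells):
    """Combined Positive Score, per my reading of the Dako 22C3 convention:
    (PD-L1-staining cells: tumor cells + lymphocytes + macrophages)
    / (total viable tumor cells) * 100, reported up to a maximum of 100."""
    if viable_tumor_cells <= 0:
        raise ValueError("need a positive viable tumor cell count")
    return min(100.0, 100.0 * pdl1_positive_cells / viable_tumor_cells)

def meets_cutoff(score, cutoff):
    """True if the score meets the cutoff (e.g. 1 for CPS>=1, 10 for CPS>=10)."""
    return score >= cutoff

# A hypothetical tumor with 8 staining cells per 100 viable tumor cells: CPS 8.
# Eligible under the CPS>=1 labeling BMS defends, but not under the
# CPS>=10 threshold FDA discusses.
score = cps(8, 100)
```

The ~40-50% of patients BMS says would be excluded by a CPS ≥10 cutoff are exactly those, like this example, who land between 1 and 10.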

Differences Between FDA and BMS Viewpoints

  1. PD-L1 Testing and Cutoffs:

    • FDA: The FDA is leaning toward revising labeling to include PD-L1 status as a criterion for treatment, especially for patients with higher expression (≥10 CPS). It raises concerns about administering ICIs to patients with low or no PD-L1 expression due to the potential for harm without clear efficacy.
    • BMS: BMS advocates for maintaining the existing labeling, which allows nivolumab use based on a CPS ≥1 cutoff. BMS argues that higher cutoffs would exclude too many patients and that current labeling offers sufficient flexibility for clinicians to decide on treatment, even for patients with low PD-L1 expression.
  2. Risk-Benefit Analysis:

    • FDA: The FDA's focus is on ensuring that only patients who are likely to benefit from ICIs (based on PD-L1 levels) receive treatment, to avoid unnecessary exposure to immune-related adverse events.
    • BMS: BMS emphasizes the importance of broad access to nivolumab, noting that even patients with lower PD-L1 levels or unknown status may benefit. They highlight that real-world data support the efficacy of the treatment in a broader patient population.
  3. Regulatory Perspective:

    • FDA: The FDA is considering harmonizing PD-L1 cutoffs across different trials and drugs to streamline patient selection and ensure consistency in treatment. This may lead to more restrictive labeling.
    • BMS: BMS argues for keeping the flexibility in the current labeling to ensure that clinicians can make case-by-case decisions, particularly given the variability in PD-L1 testing in real-world practice.

Implications for Oncologists

For oncologists, interpreting both FDA and BMS viewpoints involves navigating complex clinical and regulatory landscapes:

  1. Patient Selection: Oncologists must weigh FDA's potential move toward more restrictive labeling based on PD-L1 expression against BMS's more flexible approach. In practice, this means that oncologists need to carefully evaluate each patient's pathology report and PD-L1 status to decide whether ICIs are appropriate.

  2. PD-L1 Testing: With differing assays and cutoffs used in clinical trials, oncologists will need to ensure that the PD-L1 testing method used is appropriate and reliable. Differences in assay performance may affect patient selection for treatment, complicating decision-making.

  3. Safety Considerations: Oncologists must also consider the risk of immune-related adverse events, particularly in patients with low or no PD-L1 expression. They may have to balance the potential for benefit against the risk of harm, especially as FDA highlights that patients with low PD-L1 expression may not see significant survival benefits.

  4. Literature and Label Interpretation: Given the evolving regulatory landscape, oncologists must stay informed about updates to drug labeling and clinical guidelines. They may need to consult both FDA guidance and manufacturer recommendations (like BMS's stance) to ensure they are making evidence-based decisions for their patients.

In conclusion, oncologists must navigate differing interpretations of trial data and regulatory guidance, considering both the potential benefits and risks of ICIs based on individual patient profiles, particularly PD-L1 expression levels.

#####

I asked SCITE.AI to write an automatic mini-essay based on its review of recent PubMed hits on the topic of PDL1 harmonization.

##

Recent literature on the harmonization of PD-L1 testing for checkpoint inhibitor therapy reveals significant challenges and inconsistencies that impact clinical decision-making. The expression of PD-L1 is a critical biomarker for predicting the efficacy of immune checkpoint inhibitors (ICIs), particularly in cancers such as non-small cell lung cancer (NSCLC) and melanoma. However, the lack of standardized cutoffs for PD-L1 expression complicates its utility as a reliable predictive biomarker. For instance, while pembrolizumab is approved for patients with NSCLC showing PD-L1 expression greater than 50%, studies indicate that even within this group, the response rate can be less than optimal, suggesting that PD-L1 expression alone may not be sufficient to predict treatment outcomes (Maung et al., 2020; Wang et al., 2020).

Moreover, the variability in PD-L1 testing methodologies further exacerbates the issue. Different assays and scoring systems can yield divergent results, leading to uncertainty regarding patient eligibility for ICI therapies. For example, Kaufman et al. emphasize the need for functional assays to assess PD-L1 binding capabilities, which may provide a more accurate prediction of therapeutic response than mere expression levels (Kaufman et al., 2023; Kaufman, 2023). This highlights the critical need for harmonization in PD-L1 testing protocols to ensure consistent and reliable results across different laboratories and clinical settings.

Additionally, recent studies have pointed out the unreliability of PD-L1 testing due to biological variability. Zou et al. found that PD-L1 expression levels can fluctuate based on tumor microenvironmental factors, which may not be captured in a single biopsy (Zou et al., 2021). This variability raises questions about the timing and method of PD-L1 assessment, suggesting that a more dynamic approach may be necessary to accurately gauge a tumor's immunogenicity and potential responsiveness to ICIs (Chen et al., 2021).

The implications of these findings are profound, as they suggest that reliance on PD-L1 expression as a sole criterion for treatment decisions may lead to suboptimal patient outcomes. For instance, the study by Kim et al. indicates that while PD-L1 expression correlates with certain clinicopathological features, it does not consistently predict therapeutic efficacy across all cancer types (Kim et al., 2020). This inconsistency underscores the necessity for integrating additional biomarkers and clinical parameters to refine patient selection for ICI therapies.

 In conclusion, while PD-L1 remains a pivotal biomarker in the context of immunotherapy, the current landscape reveals significant challenges related to its testing and interpretation. The absence of standardized cutoffs and the variability in testing methodologies necessitate a concerted effort towards harmonization to enhance the predictive power of PD-L1 as a biomarker for checkpoint inhibitor therapy.

__________ 

References:

Chen, S., Zhang, Z., Zheng, X., Tao, H., Zhang, S., Ma, J., … & Hu, Y. (2021). Response efficacy of pd-1 and pd-l1 inhibitors in clinical trials: a systematic review and meta-analysis. Frontiers in Oncology, 11. https://doi.org/10.3389/fonc.2021.562315

Kaufman, B. (2023). Functional binding of pd1 ligands predicts response to anti-pd1 treatment in patients with cancer. Science Advances, 9(21). https://doi.org/10.1126/sciadv.adg2809

Kaufman, B., Abramov, O., Yevko, A., Apple, D., Shlapobersky, M., Greenshpan, Y., … & Porgador, A. (2023). Functional binding of pd1 ligands predicts response to anti-pd1 treatment in cancer patients.. https://doi.org/10.1101/2023.02.09.527671

Kim, J., Kim, K., Kim, M., Kim, Y., Suh, J., Jeong, H., … & Choi, H. (2020). Programmed death-ligand 1 expression and its correlation with clinicopathological parameters in gallbladder cancer. Journal of Pathology and Translational Medicine, 54(2), 154-164. https://doi.org/10.4132/jptm.2019.11.13

Maung, T., Ergin, H., Javed, M., Inga, E., & Khan, S. (2020). Immune checkpoint inhibitors in lung cancer: role of biomarkers and combination therapies. Cureus. https://doi.org/10.7759/cureus.8095

Wang, M., Wang, S., Trapani, J., & Neeson, P. (2020). Challenges of pd-l1 testing in non-small cell lung cancer and beyond. Journal of Thoracic Disease, 12(8), 4541-4548. https://doi.org/10.21037/jtd-2019-itm-010

Zou, Y., Chen, Z., Han, H., Ruan, S., Jin, L., Zhang, Y., … & Jin, H. (2021). Risk signature related to immunotherapy reaction of hepatocellular carcinoma based on the immune-related genes associated with cd8+ t cell infiltration. Frontiers in Molecular Biosciences, 8. https://doi.org/10.3389/fmolb.2021.602227

SCITE.AI Prompt.  Summarize recent papers on PDL1 harmonization for making decisions to use checkpoint inhibitor drugs. Focus on 2020 and newer papers. Note papers that discuss the lack of clear cutpoints for PDL1 expression or unreliability of PDL1 testing.


FDA's September 24 Webinar on Its "Labeling" Regulations for LDTs

By 2026, FDA expects LDT labs to comply with a growing set of medical device manufacturing regulations, in particular those on "labeling."  (Note that labeling for complex PMA products, such as Illumina's new 500-gene TSO CDx test, can run over 100 pages, including instructions for use and other labeling.)

FDA held a webinar on the topic on September 24, 2024:  

Here's the webpage, where you can access streaming links as well as the 55-page deck.

https://www.fda.gov/medical-devices/medical-devices-news-and-events/webinar-labeling-requirements-in-vitro-diagnostic-products-ivd-including-ldts-under-21-cfr-80910b#materials

FDA will post the video and transcript in about a week. 

  • FYI, in the meantime, an instant autotranscript is here.
  • FYI, in the meantime, a crude screen grab video is here.

##

Here's coverage from Shannon Bennett and from Christine Bump.

##

I've always understood that labeling, including intended use, can involve a complex back-and-forth with FDA near the end of the product development cycle, with the FDA ready to assert at any moment that your intended use conflicts with your data in some way.  Here, labs are asked to generate the labeling unilaterally and submit it to the FDA before the FDA has seen or reviewed the LDT product.

##

Keep track of FDA's lengthy home page for LDT issues:

https://www.fda.gov/medical-devices/in-vitro-diagnostics/laboratory-developed-tests


##

I've provided an AI summary of the FDA webinar, in the whimsical format of a memo to your boss who's head of regulatory at your LDT laboratory.  

Below that, some additional AI commentaries on the just-completed webinar.

##

AI Corner

Chat GPT 4o: Memo to Your Boss

MEMO: FDA's New Labeling Requirements for In Vitro Diagnostic (IVD) Products, Including Laboratory Developed Tests (LDTs)

To: [Boss's Name], Head of Regulatory at Big_LDT_Lab
From: [Your Name, Director]
Date: September 25, 2024
Subject: FDA Webinar on Labeling Requirements for LDTs and IVDs under 21 CFR 809.10(b)

Background:

On September 24, 2024, the FDA hosted a webinar focusing on the new labeling requirements for In Vitro Diagnostic (IVD) products, including Laboratory Developed Tests (LDTs), under 21 CFR 809.10(b). These regulations are part of the FDA's ongoing phaseout of its enforcement discretion policy for LDTs; under that phaseout, most LDTs must comply with the labeling requirements of 21 CFR 809.10(b) by May 6, 2026.

The aim of these changes is to clarify labeling rules for LDTs, which have traditionally been less regulated than other IVDs. For a large diagnostic laboratory like ours, which develops and uses many proprietary LDTs, understanding and complying with these new regulations will be essential to maintaining FDA compliance by the 2026 deadline.

Key Labeling Requirements under 21 CFR 809.10(b):

The FDA outlined the mandatory information that must accompany any IVD, including those offered as LDTs. This labeling includes:

  1. Product Name – The proprietary and established name (common or usual name).
  2. Intended Use – A clear description of the test's purpose and its procedure, specifying whether it is qualitative or quantitative.
  3. Summary and Explanation – A brief history of the test's methodology, referencing the literature and outlining the test’s merits and limitations.
  4. Principles of the Procedure – Explanation of the test's chemical, physical, or biological principles.
  5. Reagents and Instruments – Comprehensive details on the reagents and instruments used, including storage conditions, preparation, and any hazards.
  6. Specimen Collection and Preparation – Instructions for collecting, handling, and storing specimens, including precautions to maintain integrity.
  7. Procedure – A step-by-step description of the test process, from specimen reception to result reporting.
  8. Results Interpretation – Instructions on how to calculate and interpret results, including sample calculations and expected values.
  9. Limitations – Statements regarding known extrinsic factors or substances that could interfere with the results.
  10. Performance Characteristics – Detailed data on the test’s accuracy, precision, specificity, and sensitivity.
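For a lab beginning a gap analysis, the ten elements above lend themselves to a simple per-test checklist. Here is a minimal Python sketch of that idea; the element names track the list above, while the function name and sample data are hypothetical, not anything prescribed by FDA:

```python
# Hypothetical gap-analysis sketch: compare each LDT's draft labeling
# against the ten 21 CFR 809.10(b) elements summarized above.

REQUIRED_ELEMENTS = [
    "product name",
    "intended use",
    "summary and explanation",
    "principles of the procedure",
    "reagents and instruments",
    "specimen collection and preparation",
    "procedure",
    "results interpretation",
    "limitations",
    "performance characteristics",
]

def labeling_gaps(documented_elements):
    """Return the required elements still missing from a test's labeling docs."""
    have = {e.lower() for e in documented_elements}
    return [e for e in REQUIRED_ELEMENTS if e not in have]

# Example: a test whose draft labeling so far covers only three elements.
draft = ["Product Name", "Intended Use", "Procedure"]
print(labeling_gaps(draft))  # the seven elements still to document
```

A lab could run a check like this across its whole test menu to prioritize which labeling documents need drafting first, though the real work, of course, is writing the content behind each element.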

Impact for Our Lab:

  1. Documentation Expansion: All our LDTs will require detailed documentation covering the above aspects. While some of this information may already exist in internal protocols (e.g., test protocols and instrument manuals), those documents will need to be revised to comply with 21 CFR 809.10(b) standards. For example, specimen collection instructions must now include details on potential interfering substances, special handling, and patient preparation.

  2. Compliance Timeline: As we approach the May 2026 compliance deadline, our regulatory team will need to prioritize tests that are frequently used and marketed. We may need to prepare labeling documents for submission as part of premarket reviews, particularly for high-risk or novel LDTs.

  3. Clarification of Exemptions: Certain LDTs are exempt from premarket review or labeling submission. This includes LDTs for unmet needs within integrated healthcare systems and some older LDTs not modified beyond their original scope. We must assess whether any of our tests fall under these exemptions and plan accordingly.

  4. Documenting Labeling Across Multiple Sources: Labeling for LDTs can be spread across multiple documents (e.g., test protocol, report template, test menu). This allows flexibility in how we structure compliance but also requires careful coordination to ensure completeness and clarity.

  5. Risk of Misbranding: Tests lacking appropriate labeling will be considered "misbranded" under FDA regulations. Misbranding can lead to enforcement actions, including fines or product recalls. Ensuring accurate, truthful, and non-misleading labeling is paramount to avoid regulatory issues.

Next Steps:

  1. Internal Review: We need to conduct a thorough internal audit of all existing LDTs to assess which ones require additional labeling or modification to meet FDA standards.

  2. Labeling Strategy: Develop a cross-departmental labeling compliance plan. This will include input from R&D, quality control, and regulatory teams to ensure all aspects of the 809.10(b) requirements are met across various LDTs.

  3. Training and Awareness: Train relevant teams on the new labeling requirements and the importance of consistent documentation across all platforms (test protocols, inserts, and manuals).

  4. Engagement with FDA: We should consider submitting questions to FDA and participating in the upcoming webinars on LDTs to stay updated on additional regulatory guidance, especially regarding labeling.

By staying proactive, we can ensure that our LDTs remain compliant with FDA regulations, avoiding any disruptions in service as the labeling requirements phase in.


If you need further details or assistance with specific compliance steps, please let me know.

[Your Name]
Regulatory Compliance Director


###

AI CORNER

Please summarize the webinar.

###

This FDA webinar, presented by Toby Lowe, Acting Deputy Director of the Office of Health Technology within the Center for Devices and Radiological Health (CDRH), focused on labeling requirements for in vitro diagnostic (IVD) products, including laboratory developed tests (LDTs), as specified under 21 CFR 809.10(b). The FDA is phasing out its enforcement discretion approach for LDTs, with full compliance required by May 6, 2026. The discussion emphasized the importance of accurate and comprehensive labeling to ensure the safety and effectiveness of IVDs, specifically test systems.

Overview of 21 CFR 809.10(b)

Labeling requirements under 21 CFR 809.10(b) are meant to ensure IVD manufacturers provide key information about their products, such as intended use, limitations, and performance characteristics. These requirements are a critical component of the regulatory framework for ensuring the safe and effective use of IVDs, which include LDTs. The webinar outlined how labeling encompasses more than just packaging inserts—it includes all written, printed, or graphic matter accompanying the device, including test protocols, test reports, and other documentation.

Key Labeling Components

The webinar outlined the core elements required in IVD labeling under 21 CFR 809.10(b):

  1. Proprietary and Established Names: The first requirement is to include both proprietary and established names of the product, which help identify the product and its manufacturer.
  2. Intended Use: The labeling must clearly state the purpose of the IVD, including the analyte being measured, the disease or condition it diagnoses, and the patient population. This ensures that users understand the scope of the product’s application.
  3. Summary and Explanation: Manufacturers must provide a brief description of the test’s methodology, as well as its merits and limitations. This helps users understand how the test fits within clinical contexts.
  4. Principles of Procedure: The underlying scientific mechanisms and the steps involved in conducting the test must be clearly explained.
  5. Specimen Handling: Proper collection, handling, and preparation of specimens are crucial for accurate test results. The labeling must specify storage and transportation requirements, as well as any additives or preservatives necessary to maintain sample integrity.
  6. Step-by-Step Procedures: Clear instructions on how to perform the test, from receiving the specimen to interpreting the results, must be provided to ensure the proper execution of the test.
  7. Performance Characteristics: The IVD’s accuracy, precision, specificity, and sensitivity must be outlined, along with any other relevant characteristics. This information allows users to assess the reliability of the test results.

Enforcement and Compliance

The FDA is particularly focused on the compliance of LDT manufacturers with these labeling requirements. Manufacturers of IVDs, including those offered as LDTs, are expected to submit labeling as part of their premarket submissions, where applicable. For LDTs not subject to premarket review, the FDA expects manufacturers to maintain compliant labeling and provide it during FDA inspections. Manufacturers using reagents or instruments not labeled for clinical diagnostic use must qualify those components under their own quality systems.

The FDA may take regulatory action if labeling is found to be false, misleading, or incomplete. This could include cases where an LDT lacks sufficient evidence to support its claimed performance characteristics. Additionally, the FDA emphasized that promotional materials, which fall under the definition of labeling, must also comply with these regulations.

Special Cases for LDTs

For LDTs approved under the New York State Clinical Laboratory Evaluation Program (CLEP) or those developed to meet unmet medical needs, the FDA will exercise enforcement discretion. However, manufacturers of these tests must still submit labeling information during the listing process, allowing the FDA to monitor the safety, effectiveness, and clinical validity of these products.

The labeling requirements outlined are intended to support consistent, safe, and effective use of IVDs in clinical settings. The FDA encourages manufacturers to review publicly available labeling examples on the FDA website, particularly those related to approved Premarket Approval (PMA) applications and humanitarian device exemptions, to guide compliance efforts.

Future Webinars

The webinar concluded with a reminder about an upcoming FDA session on the total product lifecycle approach to IVD regulation, scheduled for October 24, 2024. This session will delve deeper into how IVD products are regulated throughout their lifecycle, from development to post-market surveillance.


##

Please summarize the Q&A.

###

Here’s a summary of the Q&A section from the latter part of the FDA webinar:

Question 1:

What does it mean when the FDA requests labeling submission for certain IVDs offered as LDTs at the time of device listing?

  • Response: The FDA will request labeling information during the listing of IVDs offered as LDTs that fall within certain enforcement discretion policies, such as those approved by the New York State Clinical Laboratory Evaluation Program (CLEP) or for unmet needs. This request is intended to monitor the safety, analytical validity, and clinical validity of the IVDs, particularly for those marketed before May 6, 2024. The FDA will scrutinize any claims of superior performance and may take action if labeling is found misleading or if the test lacks safety and effectiveness.

Question 2:

Will enforcement discretion be applied to promotional materials?

  • Response: Promotional materials fall under the broader definition of labeling in the Federal Food, Drug, and Cosmetic Act. These materials must be truthful and non-misleading. The FDA will pay particular attention to performance claims and may take action if claims are not adequately substantiated. While there are some categories of LDTs (e.g., HLA tests for transplantation or forensic tests) that have broader enforcement discretion, promotional materials for most LDTs must comply with labeling requirements by May 6, 2026.

Question 3:

What are the FDA expectations regarding labeling for LDTs?

  • Response: FDA expects LDT manufacturers to comply with labeling requirements under 21 CFR 809.10(b) by May 6, 2026. This includes providing a complete and clear description of the intended use, limitations, and performance of the test. While a single document like a package insert is common for traditional IVDs, LDTs may use multiple documents (e.g., test protocols, report templates) to meet labeling requirements. The FDA requires a primary labeling document, often made public, while other detailed documents may remain confidential.

Question 4:

What are the labeling expectations for reagents and instruments used as part of an LDT but not manufactured by the laboratory?

  • Response: If a test system uses reagents or instruments already labeled for clinical diagnostic use, laboratories can reference this compliant labeling instead of duplicating it. However, if the reagents or instruments are not labeled for clinical use (e.g., marked as Research Use Only), the laboratory must qualify these components under their own quality system and ensure that the appropriate labeling is provided for the LDT. This might require multiple documents, including a primary labeling document and internal test protocols.

Question 5:

What are the labeling expectations around performance characteristics, and how does that differ from what is expected in a premarket submission?

  • Response: The labeling for IVDs, including LDTs, must include performance characteristics such as accuracy, precision, sensitivity, and specificity, per 21 CFR 809.10(b)(12). These characteristics must be summarized in the labeling. In contrast, premarket submissions require more detailed validation data to substantiate these performance claims. Labeling provides a high-level summary, while premarket submissions include the complete validation data necessary for FDA review.

Closing Remarks:

Toby Lowe emphasized that manufacturers should ensure their labeling aligns with FDA’s requirements, and additional resources, including publicly available labeling examples, are available to assist in compliance. The FDA's next webinar will cover the total product lifecycle approach to IVD regulation.

##

Will LDT stakeholders find some parts of this unsettling, since they are accustomed only to CLIA regulation?

##

Members of the lab-developed test (LDT) community who are primarily familiar with CLIA regulation are likely to find several parts of the webinar unsettling due to the additional burdens, costs, and uncertainties that FDA regulation introduces. The following aspects of the webinar may cause significant concern:

1. FDA Labeling Requirements for LDTs

  • What May Be Upsetting: The concept of "FDA labeling" is new and may seem strange to labs used to CLIA's focus on laboratory performance and validation rather than detailed labeling for external use. The FDA requires that LDTs comply with 21 CFR 809.10(b), which involves extensive requirements for labeling that go beyond internal documents. This includes defining intended use, performance characteristics, specimen handling instructions, and much more in a public-facing document, a shift from the internal-only protocols commonly followed under CLIA.
  • Impact: Labs are not used to developing public-facing materials that must meet specific FDA requirements, and creating, maintaining, and potentially revising this labeling will add complexity and cost. Furthermore, the multi-document approach for LDTs, which may involve both a primary labeling document and supporting confidential documents, adds a new layer of documentation management.

2. Premarket Submission and FDA Oversight of LDTs

  • What May Be Upsetting: The need for FDA premarket review for certain LDTs, especially when previously CLIA was the primary regulator, introduces significant uncertainty. LDTs that were exempt from FDA premarket review under the agency's historical enforcement discretion may now need to comply with FDA regulations, including labeling submissions, performance validation, and adherence to safety and effectiveness standards.
  • Impact: The submission of materials to the FDA for premarket review is an unfamiliar process for many LDT laboratories. This could result in significant costs, delays, and concerns over whether their LDTs will be approved or if they will need to modify their processes. Many LDTs, especially those developed for rare diseases or niche applications, may face regulatory challenges that they didn’t previously encounter under CLIA.

3. FDA Review of Promotional Materials

  • What May Be Upsetting: FDA's regulation of advertising and promotional materials for LDTs is likely to be seen as an onerous, unnecessary extension of oversight. Under CLIA, labs did not have to worry about how they marketed their tests or whether advertising claims were “misleading.” Now, the FDA expects compliance with labeling requirements that include promotional materials, meaning any claims in advertisements must be substantiated and truthful, subject to FDA review.
  • Impact: Many LDT labs have never had to navigate the complexities of advertising compliance under FDA standards, which are strict and can lead to enforcement actions if violated. This introduces additional legal and marketing costs, along with uncertainty over whether existing marketing materials will need to be revised. Laboratories may find themselves scrambling to adjust or completely overhaul their promotional approaches.

4. Costs of Compliance with FDA Regulations

  • What May Be Upsetting: Compliance with FDA regulations brings significant new costs that many labs are unaccustomed to under CLIA. These costs include not only the creation of FDA-compliant labeling and promotional materials but also the possibility of needing to conduct new validation studies for performance characteristics, hiring legal and regulatory experts, and managing premarket submissions.
  • Impact: LDTs are often developed by smaller labs with limited budgets. The costs associated with meeting FDA regulatory requirements may be prohibitive for some, potentially stifling innovation or even leading to the discontinuation of certain LDTs that are not deemed financially viable under the new regulatory framework. The uncertainty around how much these changes will cost and how long the approval processes might take adds to the stress.

5. Post-Market Surveillance and FDA Inspections

  • What May Be Upsetting: Under FDA regulation, LDT manufacturers are expected to maintain compliance with labeling and safety standards over time, including during FDA inspections. The prospect of FDA inspections is far more rigorous and intimidating than those from CLIA, and the FDA has the authority to enforce labeling compliance and post-market surveillance.
  • Impact: Laboratories accustomed to more lenient oversight may fear the potential for FDA audits and the threat of regulatory action if they fail to meet the FDA’s labeling, safety, and effectiveness standards. This adds a layer of ongoing regulatory pressure that many labs have not had to consider under CLIA.

6. Burden of Meeting FDA's Performance Characteristic Requirements

  • What May Be Upsetting: CLIA focuses on test accuracy, but the FDA goes further by requiring detailed information about test performance characteristics such as sensitivity, specificity, and precision to be included in the labeling. The FDA also reviews these characteristics during premarket submissions.
  • Impact: Laboratories may have to conduct additional testing or validation studies to meet FDA's stricter requirements. This could be costly and time-consuming, especially for smaller labs that do not have the infrastructure in place to handle these rigorous standards.

7. Impact on Innovation and Flexibility

  • What May Be Upsetting: The flexibility that labs currently enjoy under CLIA to quickly develop and modify tests to meet clinical needs is likely to be reduced under FDA regulation. With more stringent requirements for labeling, validation, and premarket approval, labs may find it more difficult to innovate and adapt quickly to emerging clinical challenges.
  • Impact: Many in the LDT community pride themselves on their ability to rapidly create custom tests for specialized clinical needs. The FDA's regulatory framework could slow down this process, requiring more formal procedures, reducing the agility of the laboratories, and increasing the lead time for new tests to reach the market.

Conclusion

The FDA’s regulatory framework, particularly the labeling requirements, review of promotional materials, and stricter oversight, represents a significant shift for the LDT community. Labs that have operated successfully under CLIA may be upset by the financial burden, the additional layers of compliance, and the uncertainty around how these new requirements will affect their operations and test offerings. The perception that FDA regulation complicates innovation, adds cost, and introduces regulatory uncertainty is likely to be a central concern for many LDT developers.