Friday, February 3, 2023

Lessons from OPPS Rule: Downgrading Payment for a Pathology Test

Here's a brief case study on how hospital outpatient payment is different from "physician fee schedule" payment, and how sudden changes can occur in the former.

Way back in 2011, AMA CPT created a new FISH code, 88121, specifically for urine FISH with 3-5 biomarkers.  (Prior to this, FISH from any source was paid in multiples of the number of markers, and CPT felt that urine FISH with 5 biomarkers should be priced as a single service.  This resulted in a significant price cut, something like 50%, for the applicable services.)

In the non-facility setting, a pathology code is priced by RVUs (relative value units for physician work, practice expense, and malpractice).  In the facility setting, the technical component of the code is priced by administrative assignment of the code to an "APC," an Ambulatory Payment Classification.  This results in one price for any of a basket of services in that APC.
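The contrast between the two mechanisms can be put in a toy sketch.  The RVU values and conversion factor below are invented for illustration; the two APC rates are the ones quoted in this post.

```python
# Toy contrast of the two pricing mechanisms.  RVUs and the conversion
# factor are made up for illustration; real values differ.

def pfs_price(work_rvu, pe_rvu, mp_rvu, conversion_factor):
    """Physician fee schedule: price is built up from the code's own RVUs."""
    return (work_rvu + pe_rvu + mp_rvu) * conversion_factor

# Hospital outpatient: price is whatever the code's assigned APC pays,
# regardless of the code's own resource inputs.
APC_RATES = {"5672_level_II_pathology": 162.00,
             "5673_level_III_pathology": 333.00}

def opps_price(apc):
    return APC_RATES[apc]

print(pfs_price(0.0, 8.5, 0.2, 33.89))         # non-facility: RVU build-up
print(opps_price("5673_level_III_pathology"))  # facility: APC basket rate
```

The point of the sketch: reassigning a code to a different APC changes its hospital price in one administrative step, with no change to the code itself.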

OK, so 88121 has been paid under APC 5673, "Level III Pathology," at $333.  CMS found that hospital claims for the code calculated to a cost of about $175.  So CMS re-assigned 88121 down to APC 5672, "Level II Pathology," at $162.

This was a proposed rule in July 2022 and finalized in November 2022.   

CMS received a number of protest letters, arguing that the APC Level II price was too low.  CMS responded patiently that it prices hospital services by taking the geometric mean of claims charges (deflated by a charge-to-cost ratio), which gave a hospital cost value of $175, fitting an APC whose payment is $162.
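The rate-setting step CMS describes can be sketched in a few lines.  The charges and cost-to-charge ratios below are invented; only the method (deflate each claim's charge to an estimated cost, then take the geometric mean) follows the description above.

```python
import math

# Invented claims data: billed charges and hospital-specific
# cost-to-charge ratios (CCRs) used to deflate charges to costs.
charges = [450.0, 620.0, 380.0, 900.0, 510.0]
ccrs    = [0.30, 0.25, 0.35, 0.22, 0.28]

# Step 1: deflate each charge to an estimated cost.
costs = [charge * ccr for charge, ccr in zip(charges, ccrs)]

# Step 2: geometric mean of the per-claim costs.
geo_mean_cost = math.exp(sum(math.log(c) for c in costs) / len(costs))
print(round(geo_mean_cost, 2))
```

Because the geometric mean dampens the influence of a few very high charges, it tends to sit below the arithmetic mean on skewed claims data.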

The moral is, a successful and high enough APC assignment for a new code can decay into a lower APC reassignment based on CMS's annual review of incoming hospital claims and charges. 

I've put the CMS discussion (87 FR 71871) in a cloud PDF here.


Thursday, February 2, 2023

Very Brief Blog: Dr. Karen Nakano Retires from CMS

Many in the lab community have appreciated the hard work of Dr. Karen Nakano as one of the senior CMS policy staff responsible for laboratory payment issues.  It has been announced that she retired from CMS at the end of January 2023.

Brief Blog: NGS MAC Updates Molecular Pathology LCD: Oddities and Errors

On February 2, 2023, the NGS MAC released a proposed update of its broad molecular services LCD, L35000.  Here.

First, the updates are narrow and involve a few codes for cytogenetics.  However, I noticed a few other aspects of the LCD.

Odd "Issue Description"

First, LCDs online at CMS are now required to post an ISSUE DESCRIPTION for each new LCD or revision.   However, for L35000, the NGS MAC has simply filled this box with old cut-and-paste boilerplate that "LCDs address circumstances under which an item may be reasonable and necessary."  Surely CMS intended this mandatory "Issue Description" box to orient the reader to the new LCD, not merely to be filled with unchanging boilerplate that LCDs define covered services.

Request Letter Posted

Second, the request came from Mayo Clinic in October 2022, so the lag to the draft update was just 3 1/2 months.   (I've seen other requests lag a year or more).

Third, the request letter is posted online; it's brief, at two pages, but readers can reference it for style and content.


In the "Proposed Process Information / Synopsis of Changes," the NGS MAC notes that the IGH and TP53 genes will be covered, and that this matches NCCN guidance.

In a "Document Note," the NGS MAC states that only IGH and TP53 gene coverage in CLL is open for comment.

LCD Structure is Unusual

The LCD has a lengthy section of typically one-sentence coverage statements, and an even longer section of non-covered genes, the latter showing the gene names only, with no remarks.

Although the LCD deals with a very large number of genes, the required "Summary of Evidence" and "Analysis of Evidence" sections are very brief and deal tersely with only a couple of topics.

LCD Has Major Errors

The LCD has a section stating that even 5-50 gene panels are not covered in cancer (except 5-50 gene panels in lung cancer).  The LCD then refers to LCD L36376.

However, L36376 was deleted four years ago (2019).  In fact, the NGS MAC has coverage for 5-50 gene and also 51+ gene panels in its LCD L37810.

I submitted a comment that this section of LCD L35000 was out of date, out of sync, and confusing.


Readers may note that the coverage here for PGx codes (CYP2C6, etc.) is exceedingly narrow, far narrower than the PGx LCDs at the MolDx and Novitas/FCSO MACs.

Wednesday, February 1, 2023

CMS Gears Up for Feb 13/14 MedCAC on CED: Download Background, Speakers, Presentations

Back in September 2022, CMS announced a MedCAC (public panel workshop) on Coverage with Evidence Development.  In November 2022, they pushed the date from December 2022 to February 2023.  Now, they've posted details: agenda, panelists, speakers, and background materials.

Although we don't read the word "TCET" on the page, the meeting is widely viewed as part of the Biden administration's effort to boot up a program or policy package called "Transitional Coverage for Emerging Technologies."   This has been proposed as a replacement for "MCIT," Medicare Coverage of Innovative Technology, under the Trump administration.  MCIT, which was never put into effect, linked coverage to FDA breakthrough status.

Find it all here:

See a wide range of materials:

  • Registration link (your email)
  • Roster
  • Questions
  • Agenda
  • Speakers
  • Written public comments
  • Background materials (incl. tables for AHRQ review)
  • Presentations [includes 58 p AHRQ deck]
  • AHRQ review document "Requirements for Coverage with CED"
  • Original Federal Register notice
Some of the above resources like "Agenda" are simply downward extensions of the webpage.

The agenda includes a summary of the AHRQ question review by Jodi Segal MD of Johns Hopkins.  Public comments run from 11:30 to 12:50.  There are 14 speakers, such as Medtronic, AdvaMed, American College of Cardiology, MITA, SNM, and others.  There are a few minutes for open mic public comments after lunch, then the panel discusses the key questions.

Panelists are asked to score the importance of the following aspects of CED:
  • Sponsor
  • Communication
  • Governance
  • Context ("Rationale for study is well supported," "Study is not unnecessarily duplicative," etc.)
  • Protocol ("registered with Clinical")
  • Population
  • Generalizable
  • Data quality
  • Data use
  • Design - Subpopulations ("Must discuss subpopulations...")
  • Reproducibility (e.g. analysis is robust to alternative analyses)
  • Reporting
  • Sharing
  • Legal (e.g. medical care, not simply testing toxicity in healthy individuals)
The panel, chaired by Joseph Ross MD of Yale, includes guest members Dr. Caños of CDRH, Dr. Umscheid of AHRQ, and Dr. Hodes of NIA/NIH.

Part of the AHRQ pre-work was to have 9 experts rate about 20 bullet points as "2" (essential), "1" (important), or "0."   See the webpage link "background materials" or click to enlarge the figure below.  Four items were rated "2" by all reviewers, and 3 more were rated "2" by almost all reviewers.

AHRQ / Click to enlarge / Background materials

Tuesday, January 31, 2023

Brief Humor: ChatGPT Explains How MedTech Can Work Better with Payers

The AI service ChatGPT answers some questions about medtech and payers.

In the United States, health insurers are often reluctant to pay for new technologies. Explain why this is the case, and please give a detailed answer. Then, explain some ways that medical technology companies can achieve more favorable decisions from the health insurers.

Health insurers are often reluctant to pay for new technologies because of a combination of factors, including:

  1. Cost concerns: New medical technologies tend to be expensive, and health insurers are concerned about the impact on their bottom line. They want to ensure that the technology is cost-effective and provides value for money before they agree to cover it.
  2. Uncertainty around efficacy: There is often a lack of long-term data on the effectiveness of new technologies, and health insurers want to be sure that the technology actually works before they agree to pay for it.
  3. Limited evidence of clinical effectiveness: New technologies may not have been fully tested or validated in a clinical setting, and health insurers want to be sure that the technology is safe and effective before they agree to cover it.

To achieve more favorable decisions from health insurers, medical technology companies can:

  1. Conduct rigorous clinical trials: By providing robust evidence of the safety and efficacy of the technology, medical technology companies can increase the likelihood of health insurers covering it.
  2. Demonstrate cost-effectiveness: By demonstrating that the technology provides value for money and is cost-effective, medical technology companies can make a stronger case for coverage.
  3. Build partnerships with health insurers: By working closely with health insurers, medical technology companies can better understand their concerns and make a stronger case for coverage.
  4. Engage with patient advocacy groups: By engaging with patient advocacy groups, medical technology companies can increase public awareness and understanding of the technology, which can increase pressure on health insurers to cover it.

Please explain more with details and specific examples how "build partnerships with insurers" can be achieved.

Building partnerships with health insurers can be a powerful way for medical technology companies to achieve more favorable decisions on coverage. Here are some specific ways that medical technology companies can build partnerships with health insurers:

  1. Engage in open and transparent communication: Medical technology companies should be open and transparent with health insurers about their technology, its capabilities, and any potential limitations. This can help build trust and understanding between the two parties, making it more likely that health insurers will consider covering the technology.
  2. Demonstrate the value of the technology: Medical technology companies should work to demonstrate the value of their technology to health insurers. This may include providing data on the clinical outcomes and cost savings associated with the technology. For example, a medical device company could provide data showing that their device results in fewer complications and shorter hospital stays compared to traditional treatments, reducing overall healthcare costs.
  3. Offer flexible payment models: Medical technology companies can work with health insurers to develop flexible payment models that are mutually beneficial. For example, a company might offer to provide the technology for a lower up-front cost in exchange for a share of the cost savings generated by the technology. This can help health insurers feel more comfortable covering the technology, as they are able to manage their costs more effectively.
  4. Participate in pilot programs: Medical technology companies can work with health insurers to participate in pilot programs. These programs allow the technology to be tested in a real-world setting, providing valuable data on its effectiveness and safety. This can help build confidence in the technology among health insurers and make it more likely that they will cover it.
  5. Collaborate on patient education and outreach: Medical technology companies can work with health insurers to educate patients and the general public about the benefits of the technology. This can help increase demand for the technology, making it more likely that health insurers will cover it.

Building partnerships with health insurers is a long-term process that requires patience, persistence, and strong communication skills. However, by taking these steps, medical technology companies can increase the likelihood that their technology will be covered and make a positive impact on patient care.

AI pic from Canva.

Health technology assessments, also known as HTAs, are often highly critical of new medical technologies.   Explain why this is so, and offer paths to improvement for the medical technology companies.  You [the AI] are an expert in the field, and will include common reasons, but also provide an emphasis on less common and more insightful aspects of the issues.

Health technology assessments (HTAs) are often critical of new medical technologies for several reasons:

  1. Lack of evidence: HTAs often require robust evidence of the safety and efficacy of new technologies. If a technology has not been extensively tested or has limited data on its effectiveness, HTAs may be more skeptical of its value.
  2. Cost concerns: HTAs are often concerned about the cost of new technologies and whether they provide value for money. They may require data on the cost-effectiveness of the technology and whether it is cost-competitive with existing treatments.
  3. Ethical concerns: HTAs may consider ethical issues related to new technologies, such as the impact on patient autonomy or the distribution of healthcare resources.
  4. Limited long-term data: HTAs may be concerned about the long-term effects of new technologies, especially if there is limited data on the technology's safety and efficacy beyond the short term.

To improve the chances of a more favorable outcome in HTA evaluations, medical technology companies can:

  1. Conduct rigorous clinical trials: By providing robust evidence of the safety and efficacy of the technology, medical technology companies can increase the likelihood of a positive HTA evaluation.
  2. Address cost concerns: Medical technology companies can work to demonstrate the cost-effectiveness of their technology and provide data on the cost savings it generates.
  3. Address ethical concerns: Medical technology companies can be proactive in addressing ethical concerns related to their technology, such as by providing data on patient autonomy or the equitable distribution of healthcare resources.
  4. Engage in open and transparent communication: Medical technology companies can be open and transparent with HTAs about their technology and its capabilities, addressing any concerns that HTAs may have.
  5. Collaborate with stakeholders: Medical technology companies can work with stakeholders, such as patient advocacy groups, healthcare providers, and payers, to build a broad coalition of support for their technology.

In addition to these common approaches, medical technology companies can also consider less common and more insightful paths to improvement. For example:

  1. Consider alternative evaluation methods: Medical technology companies can consider alternative evaluation methods, such as real-world evidence or patient-reported outcomes, that can provide a more comprehensive picture of the technology's impact.
  2. Engage with HTAs early in the development process: Medical technology companies can engage with HTAs early in the development process to understand the criteria that HTAs use to evaluate technologies and to ensure that their technology meets these criteria.
  3. Foster a culture of innovation: Medical technology companies can foster a culture of innovation that encourages continuous improvement and the development of new technologies that address unmet medical needs.

Overall, to improve the chances of a favorable outcome in HTA evaluations, medical technology companies should be proactive, engage in open and transparent communication, and work to build broad support for their technology.

Bonus:  ChatGPT promises more "factuality" on January 31, 2023.

Use of the word "factuality" 1800-2019:

Google NGram

Wednesday, January 25, 2023

MolDx Updates Several Coverage Assessment Checklists

The MolDx program uses a number of checklists and Excel documents to solicit and organize information for test review.   Some of the documents are of general applicability; others are tied to specific LCDs (e.g. pharmacogenetics).  See also an important FAQ overview for the tech assessment documents - here.  And there is also a "MolDx Manual," though not updated since 2019 (16 pp, here).

Find the listing here:

Updated documents include:

  • Tech Assessment Submission Checklist 1/18/2023
  • Tech Assessment Summary Form 1/18/2023
  • Organ Graft Rejection 1/18/2023

MolDx Publishes Transcripts from November Transplant Meetings (Heart/Lung and Kidney/Liver)

Last November, MolDx held full-day public meetings regarding evidence for the use of novel molecular tests in transplant rejection: one day for heart/lung and one day for kidney/liver.

Transcripts and video links are now posted.



Of note, on Jan 18, 2023, MolDx also updated its list of requirements for test review in this clinical area; see here (Form ALLO-CQ-003).

Friday, January 20, 2023

Brief Humor: ChatGPT Explains Strength of Evidence, and Specificity, Sensitivity

The currently open-access AI service, ChatGPT, was asked about the hierarchy of evidence, other factors in clinical study validity, and sensitivity/specificity.


In evidence-based medicine, what is the hierarchy of evidence?

In evidence-based medicine, the hierarchy of evidence is a framework used to rank the strength of different types of medical evidence. The ranking is based on the quality and design of the study, with the highest level of evidence being considered the most reliable. The general hierarchy of evidence is as follows:

  1. Systematic reviews and meta-analyses of randomized controlled trials (RCTs)
  2. Individual RCTs with a low risk of bias
  3. Non-randomized controlled trials, such as cohort and case-control studies
  4. Case-series and poor quality observational studies
  5. Expert opinion and anecdotal evidence.

It's important to note that this is a general guide and other factors such as study size, generalizability, and external validity may also play a role in determining the strength of the evidence.

Can you explain more about "other factors" that may impact the validity of clinical trial evidence?

There are several factors that can impact the validity of clinical trial evidence and should be considered when evaluating the strength of the evidence. These include:

  1. Study design: The design of the study can affect the validity of the evidence. For example, randomized controlled trials (RCTs) are considered to have a higher level of evidence than observational studies because they reduce the potential for bias.
  2. Sample size: Larger sample sizes increase the power of the study and reduce the potential for random error. This can increase the validity of the evidence.
  3. External validity: External validity refers to the extent to which the results of a study can be generalized to other populations. Studies that have a high degree of external validity are considered to have stronger evidence.
  4. Confounding: Confounding occurs when an unmeasured variable affects the relationship between the exposure and outcome being studied. This can bias the results of the study and reduce the validity of the evidence.
  5. Bias: Bias can occur in any study, and it can be introduced in various ways, such as through selection bias, measurement bias, or analysis bias. Bias can lead to inaccurate results and reduce the validity of the evidence.
  6. Quality of study: Quality of study is an overall assessment of the study design, conduct, and reporting, as well as whether the study has avoided or addressed potential sources of bias. A well-conducted study with a high quality of study can increase the validity of the evidence.

It's important to keep in mind that these factors should be considered in the context of the overall body of evidence, rather than in isolation, and the results of a single study should not be taken as definitive.


In clinical trials for laboratory tests, is specificity or sensitivity more important?

In clinical trials for laboratory tests, both specificity and sensitivity are important, but the relative importance of each can depend on the specific context and the disease or condition being tested for.

Sensitivity refers to the proportion of true positive results among all true positive and false negative results. In other words, it measures the ability of a test to correctly identify individuals who have the disease or condition. A test with high sensitivity means that there are few false negatives, and the test is able to detect the disease or condition in most individuals who have it.

Specificity, on the other hand, refers to the proportion of true negative results among all true negative and false positive results. It measures the ability of a test to correctly identify individuals who do not have the disease or condition. A test with high specificity means that there are few false positives, and the test is able to correctly identify most individuals who do not have the disease or condition.

In general, a test with high sensitivity is preferred when the disease or condition is serious or life-threatening, and when the cost of a false negative result is high. This is because a test with high sensitivity will detect most cases of the disease or condition, even if it also produces some false positives.

On the other hand, a test with high specificity is preferred when the disease or condition is relatively benign, and when the cost of a false positive result is high. This is because a test with high specificity will minimize the number of false positives, and only identify individuals who are truly positive for the disease or condition.

In practice, a test with a good balance of sensitivity and specificity is preferred as a trade-off between these two factors is often required. The cutoff for positivity may be adjusted to optimize the test performance and balance sensitivity and specificity.
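The definitions in the answer above reduce to simple ratios from a confusion matrix.  A minimal sketch, with invented counts for a hypothetical lab test:

```python
def sensitivity(tp, fn):
    """True positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

# Invented confusion-matrix counts for a hypothetical test of 1,000 patients
tp, fp, tn, fn = 90, 20, 880, 10
print(sensitivity(tp, fn))   # 0.9
print(specificity(tn, fp))   # ~0.978
```

Raising the positivity cutoff typically trades some sensitivity (more false negatives) for specificity (fewer false positives), which is exactly the trade-off the answer describes.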

Design a short course teaching the statistical methods for sensitivity and specificity in clinical laboratory tests.

Here is a short course on statistical methods for sensitivity and specificity in clinical laboratory tests:

  1. Introduction: Introduce the concepts of sensitivity and specificity and their importance in evaluating the performance of clinical laboratory tests.
  2. Sensitivity and Specificity: Define sensitivity and specificity, and explain how they are calculated using true positive, true negative, false positive, and false negative results.
  3. Receiver Operating Characteristic (ROC) Curves: Explain how ROC curves can be used to visualize the trade-off between sensitivity and specificity for a given test.
  4. ROC AUC: Explain how the area under the ROC curve (AUC) can be used to compare the performance of different tests.
  5. Positive Predictive Value (PPV) and Negative Predictive Value (NPV): Explain how PPV and NPV can be used to evaluate the performance of a test in a specific population.
  6. Likelihood Ratios (LRs): Explain how LRs can be used to evaluate the diagnostic performance of a test.
  7. Cutoff Points: Explain how cutoff points can be used to optimize the balance between sensitivity and specificity, and how different cutoff points can affect the test performance.
  8. Conclusion: Summarize the main points of the course and discuss the importance of understanding and evaluating the performance of clinical laboratory tests.

Throughout the course, it would be beneficial to provide examples and exercises to give students hands-on experience with the statistical methods discussed, and to encourage them to critically evaluate the performance of laboratory tests they encounter in their own practice.
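Point 4 of the outline (ROC AUC) has a neat pure-Python formulation: the AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative case, with ties counted half.  A sketch with invented test scores:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC via its rank interpretation: the probability that a random
    positive scores above a random negative (ties count one-half)."""
    total = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                total += 1.0
            elif p == n:
                total += 0.5
    return total / (len(scores_pos) * len(scores_neg))

# Invented scores from a hypothetical diagnostic test
pos = [0.9, 0.8, 0.7, 0.6]   # diseased patients
neg = [0.5, 0.4, 0.8, 0.2]   # healthy patients
print(roc_auc(pos, neg))     # 0.84375
```

An AUC of 0.5 corresponds to a useless test (coin flip); 1.0 is perfect separation.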

In statistics for clinical laboratory tests, are sensitivity and specificity still important, or have they been replaced by newer methods and statistics?

Sensitivity and specificity are still important statistics in evaluating the performance of clinical laboratory tests. They are basic measures of a test's accuracy and are used to determine the proportion of true positive and true negative results, respectively. These measures help to identify how well a test can detect the presence or absence of a disease or condition and are key characteristics of a diagnostic test.

In addition to sensitivity and specificity, other statistics have been developed to evaluate the performance of clinical laboratory tests. For example, positive predictive value (PPV) and negative predictive value (NPV) are used to evaluate the performance of a test in a specific population, while the likelihood ratios (LRs) are used to evaluate the diagnostic performance of a test. Receiver Operating Characteristic (ROC) curves and the area under the ROC curve (AUC) are also commonly used to visualize and compare the trade-off between sensitivity and specificity for a given test.

It's important to note that different tests and different clinical scenarios may require the use of different statistics, and that the choice of the appropriate statistical method should be based on the specific research question, the type of test, and the population under study. Additionally, sensitivity, specificity, PPV, and NPV are population dependent measures and might be affected by the prevalence of the disease or condition in the population.

In conclusion, sensitivity and specificity are still important statistics in evaluating the performance of clinical laboratory tests and are commonly used in combination with other statistics to provide a more comprehensive assessment of test performance.
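The prevalence dependence of PPV mentioned above is worth seeing numerically.  A sketch via Bayes' rule, with an invented test that is 95% sensitive and 95% specific:

```python
def ppv(sens, spec, prevalence):
    """Positive predictive value from Bayes' rule."""
    true_pos = sens * prevalence
    false_pos = (1 - spec) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# The same hypothetical test at two different prevalences:
print(ppv(0.95, 0.95, 0.50))   # ~0.95 in a high-prevalence clinic
print(ppv(0.95, 0.95, 0.01))   # ~0.16 in population screening
```

Identical sensitivity and specificity, yet in the screening setting most positives are false positives; this is why PPV and NPV must always be quoted with the population in mind.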

When evaluating clinical laboratory tests, when should you consider using nonparametric statistics?

Nonparametric statistics are a set of statistical methods that do not rely on assumptions about the underlying probability distributions of the data. These methods are particularly useful when the data do not meet the assumptions of parametric methods, such as normality. Nonparametric statistics can be used to evaluate clinical laboratory tests in situations where:

  1. The data is not normally distributed: Nonparametric methods can be used when the data is not normally distributed, such as when the data is skewed, has outliers, or is ordinal.
  2. Small sample sizes: Nonparametric methods are often more powerful and less sensitive to small sample sizes than parametric methods.
  3. The distribution of the test results is unknown or uncertain: Nonparametric methods can be used when it is not clear what the underlying distribution of the test results is, or when the distribution is not known.
  4. The data is continuous or ordinal: Nonparametric methods can be used when the data is continuous or ordinal, rather than categorical or discrete.
  5. The data is not interval-scaled: Nonparametric methods are suitable for ordinal, interval or ratio-scaled data, but not for nominal-scaled data.
  6. The data does not meet the assumptions of parametric methods: Nonparametric methods do not rely on assumptions about the underlying probability distributions of the data, so they can be used when the data does not meet the assumptions of parametric methods.

It's worth noting that nonparametric methods are not always the best choice, and the choice of statistical method should be based on the specific research question and the characteristics of the data. Additionally, it is important to consider the trade-offs between nonparametric and parametric methods, as nonparametric methods may not always be as powerful and may require larger sample sizes to achieve similar levels of precision.
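One simple nonparametric method of the kind described above is a permutation test, which makes no distributional assumption at all.  A sketch on invented, skewed lab values (the data and the choice of medians as the test statistic are illustrative assumptions):

```python
import random
import statistics

random.seed(0)

def perm_test(a, b, n_perm=2000):
    """Two-sample permutation test on the difference in medians:
    shuffle group labels repeatedly and ask how often the shuffled
    difference is at least as extreme as the observed one."""
    observed = statistics.median(a) - statistics.median(b)
    pooled = a + b
    count = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        diff = (statistics.median(pooled[:len(a)])
                - statistics.median(pooled[len(a):]))
        if abs(diff) >= abs(observed):
            count += 1
    return count / n_perm

# Invented skewed data: a control group with one extreme outlier
control = [1.1, 1.3, 0.9, 1.2, 1.0, 1.4, 9.5]
treated = [2.1, 2.4, 1.9, 2.6, 2.2, 2.0, 2.3]
print(perm_test(control, treated))
```

Because the test statistic is a median and the null distribution is built by shuffling, the single outlier in the control group does not distort the result the way it would distort a t-test.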


Nonmedical Bonus.  ChatGPT can also generate short movie scripts.

A New White Paper: Liquid Biopsy for Comprehensive Profiling, Reimbursement and Coding

NEW 2023 White Paper on Liquid Biopsy CLICK HERE.


Background - My Spring 2022 White paper

Last spring, in March 2022, I had the chance to write a white paper about comprehensive genomic profiling, and why it should be the standard of care for a range of cancer patients.  I focused on the added value of tumor mutation burden and MSI testing, both highly relevant to checkpoint inhibitor therapy decisions.

New Trends During 2022

Over the year since that publication, there has been more and more interest in plasma biopsy for comprehensive genomic profiling.   The PLA rapid coding system has expanded to include a number of codes for plasma-based CGP.   Earlier this winter, there was an important and detailed paper about the economics of plasma-based tumor genomics (Kramer et al.).   However, the coding system seems to be lagging.   New Category I codes have been introduced for RNA-specific CGP, but not yet for plasma-based CGP.


My new white paper provides an update with recent publications on plasma-based CGP, including health economics.  I also review the status of PLA coding and Category I coding in some detail.   I argue it's time to bring the coding system up to date.

Find the new liquid biopsy white paper in the cloud here.

Thursday, January 19, 2023

Very Brief Blog: CGS MAC Posts Revision of "Histochemistry" LCD

 In new-and-revised LCD news, the CGS MAC has posted a revised LCD for histochemistry, for public comment.  

The revision, posted January 19, 2023, is based on a reconsideration letter from CAP in January 2022, which gives a benchmark for timelines.   (Earlier, I posted an LCD revision request from Castle Biosciences that was dated about a year earlier than the corresponding revision.)

The five-page CAP letter is here.   The proposed LCD should be here, DL35986.  Comments are due by March 5, 2023.

The LCD revised in response to the CAP letter has no redline function, so you can't tell what they are, or are not, proposing to revise.  I made a redline of the current and proposed LCDs and posted it in the cloud here.

These multiple changes are briefly summarized in a "synopsis of changes," stating, "The policy was updated to reflect new evidence regarding PharmDx Ki-67 test, Lynch screening and prostate updates, Ki-67/MIB-1 for GI and thoracic sections, and IHC updates."


I believe this LCD was originally a MolDx work product, active at the four MolDx MACs.   However, the MolDx MACs now collaborate only on DNA/RNA tests, so the histochemical LCD would no longer need to be kept in sync among the MolDx MACs.

Medicare Updates its Deadly "Statistical Extrapolation" Rules for Recoupments

For many decades, Medicare has used a statistical sampling and extrapolation method to impose large recoupments on providers.  For a hypothetical example, if a provider had 10,000 claims for $1M, Medicare might sample 30 claims, find a 50% error rate, and seek a recoupment of 50% of that year's claims ($500,000).
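The hypothetical above reduces to a single line of arithmetic (the numbers are the hypothetical's, not from any real case):

```python
# Extrapolation in its simplest form: the sample error rate is projected
# onto the full universe of paid claims.
def extrapolate(universe_paid, claims_sampled, claims_in_error):
    error_rate = claims_in_error / claims_sampled
    return error_rate * universe_paid

# 15 of 30 sampled claims in error, against $1M paid -> $500,000 demand
print(extrapolate(1_000_000, 30, 15))
```

Real demands are computed on overpaid dollars per sampled claim, not a bare claim count, and use the lower bound of a confidence interval, but the leverage is the same: a 30-claim sample drives a universe-sized number.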

There is a line in statute that extrapolation can only be used when error rates are high.  However, I was once involved in a case where the initial error rate was high (say, 50%) and the final error rate was low (5%), yet CMS pursued extrapolation because the initial (however wrong) error rate was high.  I found this appalling.  The law is at SSA 1893(e)(3).

I was also appalled in a case where CMS calculated a 90% confidence band using the simple standard deviation method - but standard deviations are parametric statistics (based on a "normal" or "Gaussian" curve), and the data was not even remotely normal.  This update seems to at least take account of this problem, noting that in some data sets it may not be possible to use normal methods "due to the use of theoretical assumptions underlying standard statistical methods."
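To see the problem concretely, here is a sketch (with invented audit data, not from any real case) comparing the normal-theory lower bound of a two-sided 90% interval against a bootstrap percentile bound on a heavily skewed sample.  On data like this, the normal-theory bound can even go negative, which is nonsense for overpayment dollars:

```python
import random
import statistics

random.seed(1)

# Invented per-claim overpayments: mostly zero, a few large values --
# the heavily right-skewed shape typical of audit samples.
sample = [0] * 16 + [50, 120, 2400, 3100]

mean = statistics.mean(sample)
sd = statistics.stdev(sample)
n = len(sample)

# Normal-theory lower bound of a two-sided 90% interval (z = 1.645),
# i.e. the parametric method criticized above.
normal_lower = mean - 1.645 * sd / n ** 0.5

# Bootstrap percentile lower bound: resample the data itself,
# with no normality assumption.
boot_means = sorted(
    statistics.mean(random.choices(sample, k=n)) for _ in range(5000)
)
boot_lower = boot_means[int(0.05 * len(boot_means))]

print(normal_lower, boot_lower)
```

On this sample the normal-theory bound falls below zero, while the bootstrap bound cannot, since no resampled mean of non-negative values is negative.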

But the wording is funky - such principles are not "theoretical assumptions" (sic) but rather mathematical principles.  For example, pi = circumference / diameter is not a theoretical assumption about pi; it is the definition of pi.

CMS also admits that some methods could result in an "overpayment debt" that is more than originally paid - which makes visible the nonsense created by poor statistical methods.

Now, in January 2023, CMS has released a heavily updated version of its statistical investigation guide, "Program Integrity Manual, Chapter 8."   Find the PDF here:

I haven't done extra research, but I noted a law firm report of certain extrapolation changes made in 2018/2019 here.  Medicare Compliance (trade journal) has an open access issue on CMS changes also, here.

A 2020 appeals court case, Palm Valley v. Azar, upheld extrapolation (as many others had), here.  In the case, the Palmetto GBA MAC sampled 54 of 10,699 claims.  The extrapolation was initially based on 29 of the 54 claims, but some were won on appeal, leaving 25 of 54 claims.  The $81K sample debt was multiplied to $12.6M.
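A back-of-envelope check on the Palm Valley figures (my sketch, not the methodology actually used in the case) shows that the naive point estimate actually exceeds the $12.6M demand, which is consistent with CMS's usual practice of demanding the lower bound of a confidence interval rather than the point estimate:

```python
# Back-of-envelope on the Palm Valley figures (a sketch, not the actual
# case methodology).
universe = 10_699              # total claims
sample = 54                    # sampled claims
sampled_overpayment = 81_000   # approximate dollars denied within the sample

# Naive point estimate: mean overpayment per sampled claim, scaled up.
point_estimate = sampled_overpayment / sample * universe
print(round(point_estimate))   # 16048500

# The actual demand ($12.6M) was below this, as one would expect if a
# lower confidence bound, not the point estimate, was used.
actual_demand = 12_600_000
```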

See Q&A via AI at ChatGPT here.

Very Brief Blog: Gapfill Process for 2023 Starts at CMS

In December 2022, CMS released CR13023, annual update for lab fees and policies for 2023.  Find it here.

66 codes appear to be marked for "Gapfill" in this spreadsheet.  (For other codes, crosswalks are stated).

In the gapfill process, MACs are expected to gather information germane to pricing in the first quarter, and report to CMS in the second quarter.  CMS posts these prices typically around May or June, for 60 days public comment.  The public comments, if any, are relayed from CMS to the MACs which reflect final prices back to CMS in early fall.   The final gapfill prices are posted by CMS, typically around October.  

(Technically, there is an appeal process, but it is not well specified; it involves contacting CMS and discussing the grievance with the final MAC price, which is very rarely changed.)

Submitting Information

Generally, the MolDx program has taken the stance that if it wants information from outside the walls, it will contact you.   The Novitas and FCSO MACs generally post a web portal for the structured submission of answers to questions.   The NGS MAC in the past has posted a brief article about information gathering and submission.

Gapfill prices are set by the median of the MAC prices (based on a count of CLFS regions, of which there are 57 for dusty legacy reasons).  MolDx MACs have the majority of regions and therefore control the final median.  
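The median mechanics can be illustrated with a quick sketch (the region counts and prices below are hypothetical; only the 57-region structure is from the text above):

```python
import statistics

# Sketch: 57 CLFS pricing regions.  If MACs under one program (here,
# hypothetically, MolDx) administer a majority of regions and submit a
# single common price, that price is necessarily the national median.
moldx_regions = 29                  # hypothetical majority of the 57 regions
other_regions = 57 - moldx_regions  # 28 remaining regions

prices = [100.0] * moldx_regions + [250.0] * other_regions
print(statistics.median(prices))    # 100.0 -- the majority bloc's price
```

With an odd number of regions (57), the median is simply the 29th sorted value, so any bloc holding 29 or more regions controls it outright.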

FCSO, Novitas MAC Websites

The FCSO MAC has posted a website or portal through which it accepts information on gapfills.  It has a deadline of February 13.

Similarly for Novitas, also with a February 13 deadline:


AMA Posts Winter Quarter 2023 PLA Code Proposals

AMA takes quarterly applications for PLA codes (proprietary laboratory analyses) and rapidly shepherds them through a review committee and onward to a CPT Panel vote to endorse them.

The PLA home page is here.  The January 2023 submissions, which are racing through the approval process and towards the February CPT meeting in San Diego, are here.

Eight drug/metabolite panels are being deleted (0143U-0150U), as well as a prostate multi-marker FISH test, 0053U.

There are about 18 new codes.  Several codes have ownership or lab name revisions.


The latest update PDF for finalized codes is here.  

These run 0355U-0386U, and this PDF tallies up the PLA codes too new to have been published in the CY2023 CPT code book.   With the codes under consideration, clearly the PLA series will exceed 400 codes for the April 1 release.

Wednesday, January 18, 2023

Very Brief Blog: ChatGPT Discusses Lab Industry Challenges, Outlook for Clinical Genetics

In the selections below, I asked ChatGPT three questions about the U.S. lab industry.   1) Challenges facing the industry, 2) How AI might impact the lab industry, 3) Five-year outlook for clinical genetics.   The answers, with bullet point issues and paragraph summaries, are all computer generated in a few seconds.


ChatGPT website (portal often closed due to excess traffic) here.  Wikipedia here.  An article about ChatGPT here (there are thousands of articles).


Write a fifteen minute talk about challenges facing the U.S. clinical laboratory industry.

The U.S. clinical laboratory industry is facing a number of challenges in the current healthcare landscape. Some of the main challenges include:

  1. Reimbursement cuts: The industry has been facing cuts in reimbursement from Medicare and private payers, which has led to a decrease in revenue for laboratories.
  2. Increased competition: With the rise of diagnostic companies and the increase in the number of laboratories, the industry has become increasingly competitive. This has led to a decrease in prices and margins for laboratories.
  3. Regulatory challenges: The clinical laboratory industry is heavily regulated by the Centers for Medicare and Medicaid Services (CMS), the Food and Drug Administration (FDA), and other government agencies. Staying compliant with these regulations can be a challenge and can be costly.
  4. Technological advancements: New technologies such as next-generation sequencing, digital pathology, and artificial intelligence are emerging in the laboratory industry. While these technologies offer many benefits, they also require significant investment and can be difficult to implement.
  5. Shortage of skilled workforce: There is a shortage of skilled laboratory professionals in the U.S. This can make it difficult for laboratories to find and retain qualified staff.
  6. COVID-19: COVID-19 has had a significant impact on the clinical laboratory industry. The demand for testing has increased dramatically, and laboratories have had to adapt to the new challenges posed by the pandemic.

Despite these challenges, the clinical laboratory industry plays a vital role in the healthcare system and will continue to be important in the future. To address these challenges, the industry will need to focus on cost-effective solutions, invest in new technologies, and work to attract and retain a skilled workforce. Additionally, the industry will need to work closely with government agencies to ensure compliance with regulations while also advocating for fair reimbursement policies.

Continues after break.

Very Brief Blog: ChatGPT: Performs Well on U.S. Medical Board Exams

The suddenly, wildly popular application ChatGPT (an AI that answers nearly any question, with answers in paragraph form) performs well on U.S. medical board exams.

In many situations, ChatGPT achieved a passing grade of around 60%.   ChatGPT also provided justifications for its answers, including rationale for why the answer choices it rejected were incorrect.

The accuracy was higher on Step 3 (clinical care) than Step 1 (basic science), possibly due to undertraining on the Step 1 material.

The article by Kung et al. is online at MedRxiv:

Note that the third of twelve authors is the machine, ChatGPT.
