Tuesday, July 19, 2016

PAMA Lab Panel Member Makes Gross Error; Nobody Corrects Him

On July 18, 2016, CMS held its annual meeting on policy for the Clinical Laboratory Fee Schedule, paired with an annual meeting of the Clinical Laboratory Advisory Panel created by PAMA.  Sessions are archived on YouTube, here and here.



The agenda walked through new laboratory codes for 2017, so stakeholders, CMS staff, and the PAMA panelists could discuss crosswalk/gapfill recommendations.  This is not necessarily the most exciting day of the year.

However, I was amazed by statistical errors made repeatedly by one of the PAMA advisory board panelists - and even more amazed that none of the other dozen panelists corrected him.  More after the break.

One of the things I've learned as a policy consultant for medtech and biotech companies is that often, payer medical directors have an extremely shaky foundation in basic statistics.   You can't assume anything; and you can't explain statistical concepts too slowly.

It's worse when a reviewer thinks they know something and gets it flat wrong, and loudly.

This happened at the PAMA panel.   One of the new tests had a scientific presentation, showing the increased accuracy of the test over a legacy biomarker.  This was demonstrated by an increased area under the curve (AUC) and appropriate statistics showing a significant difference.

Remarkably, one of the panelists asserted several times that when the tips of the ±2 SD ranges of two populations overlap, the populations are statistically the same.

Of course, this is nonsensical.

Let's take two examples.   In the first, you have a choice of two bets.  Bet A is a ticket for a raffle where the average payout is $50, normally distributed with a two-SD range of plus or minus $50.   Bet B is a raffle ticket where the average payout is $150, also normally distributed with a two-SD range of plus or minus $50.   The tips of the two-SD distributions touch at $100.  Are the two bets of equal value?
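As a sanity check on that example, here is a minimal simulation sketch in Python (numpy and scipy); the sample sizes, random seed, and variable names are my own assumptions for illustration, not anything presented at the meeting.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Bet A: mean payout $50; Bet B: mean payout $150.
# "Two SD plus or minus $50" means SD = $25 for each raffle.
a = rng.normal(loc=50, scale=25, size=1_000_000)
b = rng.normal(loc=150, scale=25, size=1_000_000)

# The two-SD tips touch at $100, yet a Bet A ticket almost never
# pays as much as a Bet B ticket (the means are 4 SD apart).
print("P(A >= B) ~", np.mean(a >= b))          # roughly 0.002

# Even with modest samples, a two-sample t-test is overwhelmingly significant.
t, p = stats.ttest_ind(rng.normal(50, 25, 30), rng.normal(150, 25, 30))
print(f"t = {t:.1f}, p = {p:.2e}")
```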

In a biological example, suppose you have terminal cancer.  Drug A provides six months of increased survival, plus or minus two SD of six months.   Drug B provides 18 months of increased survival, also plus or minus two SD of six months.  Is it a toss-up which drug you would rather take?

The answer is obviously no.  In both the betting example and the drug example, populations A and B are highly statistically different.   Even on a very forgiving reading - treating each observed mean as uncertain by a full ±2 SD, rather than by the much smaller standard error of the mean (SEM = SD/√n) - the true means could only coincide if one estimate were about 2 SD too high and the other about 2 SD too low at the same time.  Each of those events has a probability of roughly 1 in 40, so the joint probability is, for illustration purposes only, on the order of 1 in 1,600.  Judged against the SEM, as a real significance test would be, the odds are far longer still.
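To make that arithmetic concrete: with a per-group sample size of n = 30 (a number I am assuming purely for illustration), the $100 gap in the betting example is roughly 15 standard errors wide.

```python
import math

sd = 25.0                     # raffle SD: "two SD plus or minus $50" implies SD = $25
n = 30                        # per-group sample size, assumed for illustration
sem = sd / math.sqrt(n)       # standard error of each sample mean, ~ $4.56
se_diff = math.sqrt(2) * sem  # standard error of the difference in means, ~ $6.45
z = (150 - 50) / se_diff      # the $100 gap is ~ 15.5 standard errors wide
print(f"SEM = {sem:.2f}, SE(diff) = {se_diff:.2f}, z = {z:.1f}")
```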



There is a special case where we do look at whether a confidence interval crosses zero: comparing an observation to the null hypothesis.  But the null hypothesis is not a distribution; it is a point value at 0 with no SD (or SEM) of its own.   So if, for example, there is a 10% chance that the distribution around the observed mean includes the zero point, then the p-value for a difference from zero is about .10, which is greater than .05, and no significant difference can be claimed.  That logic applies to overlap with a fixed point, not to the tips of two distributions overlapping each other.
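The contrast is easy to see numerically. The sketch below (again with assumed sample sizes and seed) shows a one-sample test against a point null of zero, where a confidence interval that includes zero really does mean "not significant," next to a two-sample test where the ±2 SD ranges touch at the tips and the difference is still overwhelmingly significant.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Case 1: one sample compared to a point null of 0.
# The sample is drawn around 0, so its 95% CI will almost always
# include 0, and the one-sample test is (correctly) not significant.
x = rng.normal(loc=0.0, scale=5.0, size=25)
print(stats.ttest_1samp(x, popmean=0.0))

# Case 2: two populations whose +/- 2 SD ranges touch at the tips.
# The test works on standard errors of the means, not raw SDs,
# so the difference is overwhelmingly significant.
a = rng.normal(loc=50, scale=25, size=100)
b = rng.normal(loc=150, scale=25, size=100)
print(stats.ttest_ind(a, b))
```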

The PAMA speaker (who was persistent and convinced of his point) maintained that if two normal distributions have overlapping SDs at their far tips, they cannot be considered statistically different.  While the details vary with the underlying math - sample size, kurtosis, skew, and other factors - in general, this is completely wrong.

The amazing thing was that the other dozen panelists listened to this, and not one of them, nor the CMS staff, pointed out the error.  Many in the audience were aware of the error, but the panel discussion was not open to audience comments.