Sunday, December 17, 2023

Excellent Article about Alz Diagnostics - And Some Additional Points

Mara Aspinall, a Partner at Illumina Ventures and a Professor in Biomedical Diagnostics at ASU, edits and writes "Sensitive and Specific: The Testing Newsletter," available on Substack.

[By the way, Substack has an interesting business model.]

Over at S&S, Aspinall and Ruark have just published, first, a special report on Alzheimer diagnostics and, second, a broader special report on neurologic diagnostics.  Find them here:

https://sensitiveandspecific.substack.com/p/special-report-alzheimers-disease

https://sensitiveandspecific.substack.com/p/neurologic-disease-diagnostics-special

Not mentioned: newly clinically (commercially) available diagnostics for alpha-synuclein by special protein aggregation assays - see Siderowf et al. in Lancet here, about 1,100 patients, and see the lab Amprion here.

On the side of Alzheimer diagnostics, they quote Mattke et al. that the vast majority of mild cognitive impairment (MCI) cases are undiagnosed:



The first Alzheimer tests, a decade ago (Amyvid), were PET scan tests not paid for by Medicare until very recently.  More recently, CSF tests have received FDA approval.   However, patients show a drop-off in willingness to undergo CSF testing (and CMS bundles the costs of proteomic CSF testing and of special PET tracers when they are performed in a hospital outpatient setting, where they often are).


Aspinall and Ruark remark, "The first potentially curative treatments for Alzheimer’s disease were approved this year. These developments have dramatically increased interest in developing diagnostics that are less invasive than a CSF spinal tap, cheaper than imaging, and more accurate than either."

The Accuracy of Alzheimer Tests - A Fierce Problem

Alzheimer tests are one of the worst examples of spectrum problems in diagnostics, a topic that's been discussed for decades.   Since the 1990s, researchers have regularly touted accurate new Alzheimer tests, but by testing the "tails" of the population.  This means comparing 25 perfectly healthy patients (75 years old, working at a bank, doing their own taxes, and playing tennis every weekend) against 25 severe Alzheimer patients (several years in a nursing home, profound dementia).  These two populations are easy to tell apart.   But they aren't the real test candidates, who would be the 72-year-old with mild memory concerns and a little confusion when driving.  Not only are those patients less studied in pilot and proof-of-concept tests, but there's no gold standard for their real diagnosis, either.

So the "spectrum effect" is a problem, and it often makes a test look better than it is.
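A toy calculation shows how this plays out.  Below, biomarker levels in the three groups are modeled as simple normal distributions; every number here is invented for illustration, not drawn from any real assay or study.  The same cutoff that looks excellent when comparing healthy controls to severe dementia performs poorly in the MCI patients who would actually be tested.

```python
from statistics import NormalDist

# Hypothetical biomarker distributions (all parameters invented):
healthy = NormalDist(mu=0.0, sigma=1.0)  # robust 75-year-old controls
severe = NormalDist(mu=3.0, sigma=1.0)   # profound, long-standing dementia
mci = NormalDist(mu=1.0, sigma=1.0)      # mild memory concerns - the real candidates

cutoff = 1.5  # call the test "positive" above this biomarker level

specificity = healthy.cdf(cutoff)     # healthy correctly called negative
sens_severe = 1 - severe.cdf(cutoff)  # severe cases correctly called positive
sens_mci = 1 - mci.cdf(cutoff)        # MCI cases correctly called positive

print(f"Specificity (healthy controls): {specificity:.2f}")
print(f"Sensitivity (severe tail):      {sens_severe:.2f}")
print(f"Sensitivity (MCI midrange):     {sens_mci:.2f}")
```

With these made-up numbers, the test is about 93% sensitive and 93% specific in the tails-only comparison, but sensitivity collapses to about 31% in the MCI group - the spectrum effect in miniature.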

The second problem is summary statistics.  If we're told Test X is 90% sensitive and 90% specific, that's nearly useless, and often misleading, unless you know the population tested, which you probably don't.
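Why does the population matter so much?  Because the number a patient actually cares about, the positive predictive value (PPV), depends on disease prevalence in the tested group, not just on sensitivity and specificity.  A quick sketch with Bayes' rule makes the point (the prevalence figures are illustrative, not taken from any study):

```python
def ppv(sens: float, spec: float, prevalence: float) -> float:
    """Positive predictive value: P(disease | positive test), by Bayes' rule."""
    true_pos = sens * prevalence
    false_pos = (1 - spec) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# The same "90%/90%" test in two different populations:
print(f"Memory clinic, 50% prevalence:     PPV = {ppv(0.9, 0.9, 0.50):.0%}")
print(f"General screening, 5% prevalence:  PPV = {ppv(0.9, 0.9, 0.05):.0%}")
```

With these assumed prevalences, the identical test yields a 90% PPV in a high-prevalence memory clinic but only about a 32% PPV in general screening - two out of three positives would be false alarms.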

The third problem is that a test is not a test.   Let's say you have a test for phospho-tau at position 217 (p217tau).   You can't say "such tests are 90% sensitive and 90% specific."  Different tests for p217tau could have quite different results, depending on the accuracy of the monoclonal antibody used in each test.  And an immunoassay probably uses two monoclonals, a capture antibody and a detection antibody, both of which may differ from one company to another.  These differences can push the data around quite a bit, so it's a fallacy to conclude something like "p200tau is better than p300tau by 8%."

So, in conclusion, if you hear that a particular Alzheimer biomarker is summarized in a review as "90% sensitive and 90% specific," you know nothing until you know something about the population tested and the particular monoclonals used.