On September 27, 2022, FDA released a number of guidances for the device and software industries, including an important guidance on when "Clinical Decision Support" software crosses the line between "non-device functions," not regulated by FDA, and "device functions," which ARE regulated by FDA.
- See the FDA's 26-page guidance here.
- See the story at STAT, including the headline:
- "FDA says AI tools to warn of sepsis should be regulated as devices."
- See a tweet on this by Tim Sweeney, CEO of Inflammatix.
- See a review of several of the guidances from FDA, at PharmaPhorum here.
- This also discusses the "Pre-Cert" program for software.
- Foley & Lardner viewpoint at JD Supra here. A longer article, also from Foley & Lardner, at JD Supra here.
- BioWorld reports that leading FDA expert Brad Thompson of Epstein Becker dislikes some features of the final guidance. See his opinion in more detail ("disaster!") at RAPS, open access, here.
- Thompson remarks: “FDA clearly wants to block what Congress has done, to the point where it is reaching nonsensical conclusions about what statutory language means.”
In overview, the guidance sets out four core criteria that serve as the major rules applied. These come directly from the 21st Century Cures Act of 2016 (Sec. 3060). In an informal summary, these are:
- The software does not analyze or process a medical image or a lab test pattern;
- Its purpose is [just] displaying or analyzing medical information about the patient, including clinical practice guidelines;
- It provides the health care professional (HCP) with [just] "support or recommendations" for diagnosis and treatment;
- The HCP independently reviews the recommendation and its basis, and does not "rely" on the software for diagnosis or treatment decisions.
Each of these rules is longer in the original FDA wording, and each gets a couple of pages of discussion. Of particular interest, Section V provides examples of software that is exempt because it has only "non-device functions," and examples of software under FDA review because the software includes "device functions."
Beyond the four rules and their discussion, FDA provides several pages of fictional examples of software that DOES, or DOES NOT, fit into the exemptions from review. FDA in particular elaborates on its requirement that the HCP "reviews the recommendation" and does not "rely" on it.
Among the examples, as in the STAT article cited above, FDA discusses how the rules could foreseeably interact with advanced alert functions (sepsis alerts, for example) embedded in EHR software such as Epic.
CDx Tests and MAAA Risk Tests
There's a rub in one of the examples of an allowed function, software that "recommends the HCP consider one or more legally marketed companion diagnostic tests," which leaves LDTs a little on edge (of course, most of us would say LDT tests are, quote unquote, currently legally marketed). Page 19.
Another highlight to think about: many LDT MAAA tests (and polygenic risk score, or PRS, tests) predict the risk of a specific disease or condition. FDA writes that "software that provides information that a specific patient 'may exhibit signs' of a disease or identifies a...risk score for a disease or condition provides a specific preventive, diagnostic, or treatment output. Therefore, such software would NOT satisfy Criterion 3" [i.e., it would NOT be exempt from this FDA software device review]. Pages 12-13.
The FDA guidance lists four core rules, then discusses at length what they mean, and then provides pages of examples of software that either does, or does not, fit within the rules. In a new book, "Rules: A Short History," Lorraine Daston explores exactly this interplay between written rules, their implementation, and examples (which she calls "paradigms": defining by example). Daston is an American historian of science based in Berlin. She is married to the noted cognitive psychologist Gerd Gigerenzer, an expert on human decision-making.
See a 2019 story at Fierce Healthcare, "1 in 3 misdiagnoses results in serious injury or death," here.