In pathology and laboratory medicine, 2021 brought FDA authorization of AI-assisted prostate cancer interpretation. Here, here. (For PaigeAI, see the September 2021 classification order on this index page; the FDA decision summary is not yet posted as of late January 2022.)
I tend to think that in this area, radiology is running a couple of years ahead of pathology. Quoting a January 24, 2022, article in AuntMinnie, a radiology news website:
Imaging services provider RadNet has acquired Dutch artificial intelligence (AI) software developers Aidence and Quantib in deals that the company hopes will enable wide-scale screening programs for lung and prostate cancer....RadNet's acquisitions of Quantib and Aidence reflect a strong trend toward the need to establish one platform for AI, informatics expert Dr. Erik Ranschaert, PhD, told AuntMinnie.com. "Also, U.S. companies appear to have a keen interest in buying European AI developers at the moment," said Ranschaert, a radiologist from St. Nikolaus Hospital in Eupen, Belgium, and a visiting professor at Ghent University.
See the full article (might require free registration):
Footnote - today 360Dx has an article on AI in Pathology (subscription):
Quoting in brief, "German artificial intelligence-based software company Mindpeak has developed a proprietary machine learning method to fully automate the process of digital pathology that can also be integrated with other laboratory software to help clinical laboratories streamline their processes. As its base technology, Hamburg-based Mindpeak uses a proprietary hybrid deep learning method and combines that with its AI pre-trained with pathology data that doesn't have annotations from pathologists."
2022 Pathology - AI Headlines
- Regarding FDA and pathology AI, there's the de novo authorization for PAIGE last September (DEN200080, for which the decision summary is not yet posted), and Verily is still percolating on its 2019 Breakthrough Device designation for pathology AI (here).
The PANDA challenge results are published in Nature Medicine as Bulten et al., open access:
Abstract of Bulten et al. here:
Artificial intelligence (AI) has shown promise for diagnosing prostate cancer in biopsies. However, results have been limited to individual studies, lacking validation in multinational settings. Competitions have been shown to be accelerators for medical imaging innovations, but their impact is hindered by lack of reproducibility and independent validation.
With this in mind, we organized the PANDA challenge (the largest histopathology competition to date, joined by 1,290 developers) to catalyze development of reproducible AI algorithms for Gleason grading using 10,616 digitized prostate biopsies. We validated that a diverse set of submitted algorithms reached pathologist-level performance on independent cross-continental cohorts, fully blinded to the algorithm developers. On United States and European external validation sets, the algorithms achieved agreements of 0.862 (quadratically weighted κ, 95% confidence interval (CI), 0.840-0.884) and 0.868 (95% CI, 0.835-0.900) with expert uropathologists.
Successful generalization across different patient populations, laboratories and reference standards, achieved by a variety of algorithmic approaches, warrants evaluating AI-based Gleason grading in prospective clinical trials.
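The agreement statistic in the abstract, quadratically weighted κ, is Cohen's kappa with quadratic weights: disagreements between ordinal grades are penalized by the square of their distance, so calling a grade group 4 a 5 costs far less than calling it a 1. A minimal sketch of the computation, using made-up grade vectors (not PANDA data; the variable names are illustrative, with 0 = benign and 1-5 = ISUP grade groups):

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, n_classes):
    """Cohen's kappa with quadratic weights for ordinal labels 0..n_classes-1."""
    rater_a = np.asarray(rater_a)
    rater_b = np.asarray(rater_b)
    # Observed confusion matrix, normalized to proportions
    observed = np.zeros((n_classes, n_classes))
    for a, b in zip(rater_a, rater_b):
        observed[a, b] += 1
    observed /= observed.sum()
    # Expected matrix under chance agreement (outer product of marginals)
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    # Quadratic disagreement weights: (i - j)^2, normalized
    i, j = np.indices((n_classes, n_classes))
    weights = (i - j) ** 2 / (n_classes - 1) ** 2
    # Kappa = 1 - (weighted observed disagreement / weighted expected disagreement)
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# Hypothetical reads of the same 10 biopsies by an algorithm and a pathologist
ai   = [0, 1, 2, 2, 3, 4, 5, 1, 0, 3]
path = [0, 1, 2, 3, 3, 4, 5, 2, 0, 3]
print(round(quadratic_weighted_kappa(ai, path, 6), 3))
```

The same statistic is available off the shelf as `sklearn.metrics.cohen_kappa_score(y1, y2, weights="quadratic")`; a κ of 0.862, as reported above, means the algorithms' remaining disagreement with the uropathologists was about 14% of what chance alone would produce under this weighting.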