With a LinkedIn posting and a new 2026 white paper, Friends of Cancer Research (FoCR) addresses how we should evaluate, and then use, AI-enabled tumor assessment tools. By which they mean radiology tools (like RECIST).
Find the posting here, with an embedded summary deck (6 pp.) as well as a link out to the full white paper (17 pp.).
Sidebar:
Relatedly, LinkedIn served me an article on "pathomics" (quantitative pathology) by Swarnagouri Naganathanhalli at Johns Hopkins. Here.
Sidebar:
See also my blog on new AMA codes: AMA approves a Cat III code for AI-assisted PET tumor sizing and longitudinal mapping. Here.
###
AI CORNER
###
AI Tumor Measurement in Cancer Trials: What This New White Paper Actually Says
Overview. A new 17-page white paper from Friends of Cancer Research (2026) explores how artificial intelligence might change the way tumors are measured in oncology clinical trials. The short version is that the paper focuses almost entirely on AI analysis of radiology images—CT and MRI scans—not histology or digital pathology. The goal is to explore whether AI analysis of scans could eventually supplement or replace the longstanding RECIST system used in drug trials to measure tumor response. The document does not propose immediate regulatory change or introduce a specific algorithm. Instead, it lays out a roadmap for how the oncology ecosystem might validate and adopt AI imaging tools as clinical-trial endpoints. For most readers, it is best understood as a conceptual framework paper rather than a technical breakthrough.
Scope of the paper. One important clarification is what the paper does not cover. The authors limit their discussion to radiologic tumor assessment. AI tools discussed in the document analyze imaging data from CT or MRI scans, detecting tumors, segmenting them, tracking them over time, and quantifying tumor burden. The paper does not address AI applied to pathology slides or digital histology, which is a separate and rapidly growing field. The focus on radiology is deliberate because the current standard system used in oncology trials—RECIST—is itself based on radiology measurements.
The RECIST system. RECIST, or Response Evaluation Criteria in Solid Tumors, has been the dominant framework for evaluating tumor response in clinical trials for more than twenty-five years. Under RECIST, a small number of representative tumors are selected as “target lesions,” and radiologists measure the diameter of those tumors on imaging scans over time. Tumors are then classified as shrinking, stable, or progressing. The approach was originally designed for simplicity and reproducibility across clinical trials, but it has obvious limitations. Only a handful of lesions are measured, the measurements are one-dimensional rather than volumetric, and the process relies on human interpretation, which introduces variability. Perhaps most importantly, RECIST endpoints such as objective response rate and progression-free survival often correlate only imperfectly with overall survival, the ultimate clinical outcome regulators care about.
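For readers who want the RECIST arithmetic made concrete, here is a minimal sketch of the RECIST 1.1 classification rules in Python. It deliberately ignores non-target lesions, new lesions, and confirmation requirements, and the function name is my own; the thresholds (30% decrease for partial response, 20% plus 5 mm increase for progression) are standard RECIST 1.1 values.

```python
def recist_response(baseline_sld, nadir_sld, current_sld):
    """Classify tumor response under simplified RECIST 1.1 rules.

    Inputs are sums of longest diameters (SLD, in mm) of the target
    lesions at baseline, at the smallest prior timepoint (nadir),
    and at the current scan.
    """
    if current_sld == 0:
        return "CR"  # complete response: all target lesions gone
    # Progressive disease: >=20% increase over the nadir AND an
    # absolute increase of at least 5 mm.
    increase = current_sld - nadir_sld
    if increase >= 0.20 * nadir_sld and increase >= 5:
        return "PD"
    # Partial response: >=30% decrease from baseline.
    if (baseline_sld - current_sld) >= 0.30 * baseline_sld:
        return "PR"
    return "SD"  # stable disease: neither PR nor PD
```

Note how coarse the categories are: a 29% shrinkage and a 10% growth can both land in "stable disease," which is part of the measurement-crudeness critique the white paper builds on.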
Why change RECIST. The white paper argues that advances in imaging and computing now make it possible to move beyond these constraints. Artificial intelligence can analyze scans in ways that were simply impractical when RECIST was developed in the 1990s. One relatively conservative application would be AI-assisted RECIST, in which algorithms help radiologists identify tumors and measure them more consistently. In this scenario, AI does not replace human interpretation but instead automates repetitive tasks and reduces measurement variability.
Volumetric measurement. More ambitious approaches involve abandoning one-dimensional tumor measurements entirely. AI systems can measure three-dimensional tumor volumes, allowing the entire tumor burden across the body to be quantified rather than focusing on a few selected lesions. In principle, this could provide a more accurate picture of disease progression and treatment response. Because tumor volume scales with the cube of diameter, a small change in diameter corresponds to a much larger change in volume, so volumetric measurements may detect treatment effects earlier than RECIST measurements based on diameter.
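To see why volume is the more sensitive quantity, model a lesion as a sphere (a simplification for illustration only): the cubic relationship means the RECIST partial-response threshold of a 30% diameter decrease corresponds to losing roughly two-thirds of the tumor's volume.

```python
import math

def sphere_volume(diameter_mm):
    # Volume of a sphere of a given diameter: (pi / 6) * d^3
    return math.pi / 6 * diameter_mm ** 3

# A 30% diameter decrease (10 mm -> 7 mm) leaves only
# 0.7**3 = 34.3% of the original volume.
v_ratio = sphere_volume(7.0) / sphere_volume(10.0)
```

Conversely, a volume change too small to register as a diameter change under RECIST may still be measurable by a volumetric algorithm, which is the earlier-signal argument in the paper.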
Radiomics. Another area discussed in the paper is radiomics, a technique in which AI extracts large numbers of quantitative features from medical images. These features can capture patterns such as tumor texture, vascular architecture, internal heterogeneity, and relationships with surrounding tissues. Researchers believe these imaging signatures may reflect underlying tumor biology, potentially revealing early signals of treatment response that simple size measurements cannot detect.
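As a toy illustration of the radiomics idea, the sketch below computes a few first-order features (mean intensity, spread, and histogram entropy as a crude heterogeneity measure) from a list of voxel intensities inside a segmented region. Real radiomics pipelines extract hundreds of features, including texture and shape descriptors; the function and feature names here are my own.

```python
import math
from collections import Counter
from statistics import mean, pstdev

def first_order_features(roi_intensities, n_bins=8):
    """Toy first-order 'radiomic' features for the voxel intensities
    inside a segmented region of interest (ROI)."""
    mu = mean(roi_intensities)
    sigma = pstdev(roi_intensities)
    # Bin intensities and compute Shannon entropy of the histogram:
    # a uniform, heterogeneous ROI has high entropy; a homogeneous
    # ROI has entropy near zero.
    lo, hi = min(roi_intensities), max(roi_intensities)
    width = (hi - lo) / n_bins or 1.0  # avoid zero width for flat ROIs
    counts = Counter(
        min(int((x - lo) / width), n_bins - 1) for x in roi_intensities
    )
    n = len(roi_intensities)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {"mean": mu, "stdev": sigma, "entropy": entropy}
```

The hypothesis in the literature is that such quantitative signatures track underlying biology (necrosis, vascularity, heterogeneity) that a single diameter cannot capture.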
Growth kinetics. The paper also highlights the potential of modeling tumor growth kinetics. Instead of treating each scan as a static snapshot, AI models can analyze how tumors grow or shrink over time, estimating growth rates and response trajectories. This type of analysis could provide a more dynamic understanding of tumor behavior and help distinguish meaningful treatment effects from normal variability in tumor measurements.
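One common kinetic model, assumed here purely for illustration (the paper discusses kinetics in general terms), is exponential growth, V(t) = V0 * exp(g * t). Two volumetric measurements are enough to estimate the growth rate g, and its sign and magnitude summarize trajectory rather than a single snapshot:

```python
import math

def growth_rate(v1_mm3, v2_mm3, days_between):
    """Per-day exponential growth rate g fitted from two volume
    measurements, assuming V(t) = V0 * exp(g * t).
    Negative g indicates shrinkage."""
    return math.log(v2_mm3 / v1_mm3) / days_between

# Example: a tumor shrinking from 1000 mm^3 to 800 mm^3 over 56 days.
g = growth_rate(1000, 800, 56)            # negative: tumor is shrinking
halving_time = math.log(2) / abs(g)       # days to halve at this rate
```

In trial analysis, a fitted rate like g can be compared across arms or over time, which is one way a "dynamic" endpoint could separate a real treatment effect from scan-to-scan measurement noise.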
Implications for drug development. The motivation behind these ideas is the possibility of improving clinical trial design. Early-phase cancer trials often involve relatively small patient populations, yet RECIST measurements are relatively crude indicators of response. Because RECIST endpoints correlate imperfectly with long-term survival, important decisions about drug development are sometimes made using weak signals. The authors suggest that AI-derived imaging measurements could potentially provide earlier and more biologically meaningful indicators of treatment benefit, enabling faster go-or-no-go decisions and more efficient clinical trials.
Regulatory pathway. A substantial portion of the paper focuses on the regulatory pathway required for such tools to become accepted endpoints in drug development. The authors outline a process similar to other biomarker qualification efforts. First, a specific context of use must be defined—for example, a particular cancer type or treatment setting where improved imaging endpoints could make a difference. Next comes analytical validation, demonstrating that the AI tool measures tumors consistently and reproducibly. This must be followed by clinical validation, showing that the measurement correlates with meaningful patient outcomes. Ultimately, large meta-analyses across multiple clinical trials would be needed to demonstrate that the new measurements predict survival or other key outcomes better than existing endpoints.
Historical precedents. The authors point to historical examples of biomarker qualification to illustrate the process. In breast cancer, pathologic complete response eventually became accepted as an early endpoint in certain neoadjuvant trials. In multiple myeloma, minimal residual disease measurements followed a similar trajectory. In both cases, years of collaborative research, standardization, and pooled analyses were required before regulators accepted these markers as credible indicators of treatment benefit. The paper suggests that AI imaging biomarkers may need a similar development pathway.
A regulatory nuance. An interesting nuance appears in the discussion of how these tools might initially be used. When AI measurements are used solely to analyze clinical trial endpoints—rather than to guide treatment decisions for individual patients—they may not require traditional FDA device clearance. In other words, AI tools could potentially be deployed in clinical trials before they are widely used in routine clinical care.
Tone of the paper. Despite the excitement surrounding artificial intelligence, the overall tone of the white paper is cautious. It does not claim that AI tumor measurement is ready to replace RECIST today. Instead, the document emphasizes the need for standardization, shared datasets, cross-tool validation, and collaboration among sponsors, regulators, imaging experts, and technology developers. Much of the paper is devoted to outlining methodological questions that must be resolved before AI-based imaging endpoints could become part of regulatory decision-making.
Bottom line. For readers deciding whether to tackle the entire document, the essential message can be summarized simply. First, the paper is about AI analysis of radiology images, not AI analysis of histology slides or digital pathology. Second, the current RECIST system for measuring tumor response is widely recognized as limited and somewhat outdated. Third, AI-based imaging analysis offers the possibility of richer and earlier indicators of treatment response, but substantial validation work will be required before regulators accept these measurements as formal endpoints in clinical trials.
Bigger picture. The broader significance of the paper is that it signals growing alignment across the oncology research ecosystem. Pharmaceutical companies, imaging specialists, technology firms, and regulators increasingly recognize that the traditional way of measuring tumors may not be adequate for the era of precision oncology. AI-enabled imaging tools could eventually modernize clinical trial endpoints, potentially making cancer drug development faster, more informative, and more efficient. For now, however, the field is still at the stage of defining standards and building the evidence needed to support that transition.