Monday, March 30, 2026

Horizons in Diagnostics Value: Case Study: Rethinking Value for Infection Diagnostics

Here's a paper that is worth discussing, and potentially applicable to many areas of diagnostics, not just infection.

In a 2025 paper in Open Forum Infectious Diseases, Claeys, Prinzi, and Timbrook create some excellent thought capital.  Here.



It's also a great example of a good abstract - I can't do better than quoting it.

  • Evaluating the clinical impact of in vitro diagnostic tests (IVDs) for infectious diseases is complex given their effectiveness depends on context, implementation, and provider behavior. 
  • Traditional methodologies for therapy interventions do not adequately capture this complexity, necessitating novel analytical approaches and study designs. 
  • This review highlights methodological considerations for improving evidence generation for infectious diseases IVDs. 
    • Design and analysis challenges leading to bias and related solutions are reviewed such as the target trial framework. 
    • Moreover, novel frameworks such as Benefit–Risk Evaluation of Diagnostics: A Framework, Desirability of Outcome Ranking Management of Antimicrobial Therapy, and Desirability of Outcome Ranking and study designs such as hybrid effectiveness–implementation designs are discussed which allow for holistic ways to assess real-world outcomes.
  •  By evaluating IVDs with practical, real-world evidence, tests can better inform clinical decision making, policy, and ultimately patient outcomes.

###

I had an amateur experiment in this type of thinking in 2025.  Over the years I'd heard on and off about "Shannon Information Theory," and how it is fundamental to modern communications.  I asked ChatGPT to think hard about Shannon Information Theory, and then explore how it might provide new insights into the value and valuation of Point of Care Tests.  I thought the results were, at least, thought-provoking, and an interesting use of AI to explore the horizon of ideas.

Here's some of what I wrote last fall:

Within one blog post, I asked ChatGPT to debug a Windows problem - it did.  And to think about Shannon-Meets-Dx - it did.  See the blog here and focus on the second half:

https://www.discoveriesinhealthpolicy.com/2025/12/two-fascinating-ai-tasks-windows-repair.html

More detail about the Shannon Project:

https://bqwebpage.blogspot.com/2025/11/ai-poct-and-shannon-info-theory-in.html

###

I asked ChatGPT to compare the two projects (Claeys and Shannon).

###

Both Claeys et al. and the November 2025 Shannon essay try to rethink diagnostics from first principles. 

Claeys argues that diagnostics should not be evaluated like drugs, because their value depends on implementation, stewardship, workflow, and provider behavior, not accuracy alone. 

The Shannon essay makes a parallel conceptual move, arguing that POCT changes the information architecture of care by reducing delay, noise, memory loss, and failed follow-up.

Together, the two pieces are synergistic: Claeys offers the modern methods for proving diagnostic value in real-world settings, while Shannon explains more deeply why rapid, well-embedded diagnostics can create more usable clinical value.

###

Claeys et al. move the diagnostics-value discussion beyond accuracy. Their central argument is that infectious disease diagnostics should not be judged the way drugs are judged. A drug acts directly; a diagnostic acts indirectly, through clinician interpretation, implementation, stewardship, workflow, and local practice patterns. That means a test with excellent analytical performance may still show weak or inconsistent clinical impact if the surrounding care system is poorly designed. In their framing, the real object of study is not just the assay, but the assay embedded in a care pathway.

That is highly relevant to readers focused on value. Claeys et al. are effectively saying that value is produced by a chain: test result, interpretation, treatment change, timing, downstream outcomes, and local implementation. They explicitly argue that accuracy alone is not enough, and that reimbursement, guideline adoption, and market access require evidence about patient outcomes and real-world use. They also emphasize diagnostic stewardship and implementation science as integral, not decorative, parts of the evidence package.

Methodologically, the paper is sophisticated and unusually practical. It urges baseline local data before launching outcomes studies, because a test cannot show much benefit if the clinical opportunity for improvement is already small. It recommends explicit PICOTS framing, avoiding subjective adjudicated primary outcomes when reliability is poor, and using causal tools such as DAGs rather than loose, stepwise model-building. It also stresses the target trial framework for observational studies, in part to reduce familiar biases like immortal time bias and conditioning on future events. That is a very modern message: diagnostics studies should stop being casual before-after exercises and start behaving like careful causal inference.

  • PICOTS: Population, Intervention, Comparison, Outcome, Timing, and Setting.  DAG: Directed Acyclic Graph, a causal diagram with boxes for variables and arrows showing which variables may cause which others.

Claeys also makes a subtle but important point about heterogeneity. A diagnostic RCT does not settle the matter once and for all, because the effect of the test varies by center, prescribing culture, epidemiology, business-hours coverage, stewardship maturity, and user trust. Their discussion of the ADEQUATE trial is revealing: overall benefit may appear modest, yet center-level effects can range from strong benefit to no benefit to paradoxical worsening. For diagnostics, that is not a nuisance variable. It is part of the biology of value creation.

The paper’s alternative frameworks are especially important for value-oriented readers. Claeys et al. discuss BED-FRAME, DOOR-MAT, and DOOR because conventional endpoints often miss what diagnostics actually do. A panel may have similar positive percent agreement to a comparator but produce materially different antimicrobial decisions; DOOR-MAT is meant to capture that downstream therapeutic desirability. DOOR then broadens to patient-level ranked outcomes. In other words, the field is trying to measure not just whether the test is “right,” but whether it drives better management in context.

Your November 2025 Shannon essay attacks the same problem from a different angle. It argues that POCT changes the information architecture of care. The classic central-lab pathway is described as delayed, noisy, and erasure-prone: the clinician’s memory of the original encounter degrades, the patient may no longer be reachable in a high-bandwidth way, and much of the potential value leaks out between result release and successful action. 

POCT, by contrast, turns testing into a real-time, feedback-enabled dialogue in which the result can immediately reshape questioning, examination, explanation, and next-step action.

This is where the two publications are genuinely synergistic. Claeys gives the methodological and evidentiary scaffolding; Shannon gives the deeper theory of why those methods matter. Claeys says clinical impact depends on context, implementation, and provider behavior. Shannon explains that this is because the diagnostic is part of a communication-and-control system, not a stand-alone object. The test is valuable insofar as it increases usable information at the right moment, reduces transmission loss, and changes decisions before biological thresholds are crossed. Your essay therefore supplies a conceptual physics for the empirical observations that Claeys catalogs.

One powerful overlap is the idea of stewardship as channel management. Claeys emphasizes diagnostic stewardship and antimicrobial stewardship because a result only matters if used by the right clinician, in the right patient, at the right point in the pathway. Shannon reframes this elegantly: stewardship is the design of an improved, lower-noise, lower-erasure channel from assay output to clinical action. That is a more fundamental statement than “stewardship improves adoption.” It says stewardship is part of the information yield of the test itself.

A second overlap is sequentiality. Claeys criticizes simplistic diagnostic studies and points toward designs that respect timing, care pathways, and real-world decisions. Your Shannon paper says POCT converts diagnosis from a one-pass process into an adaptive experiment, where one result prompts new questions, focused examination, or second-line testing. That suggests a next-generation value framework: diagnostics should sometimes be valued not only by the information content of the first result, but by how well the result orchestrates the next decision node. That is very close to Claeys’ broader concern with pathways and downstream management, but Shannon sharpens it by showing why same-result/same-accuracy can still mean different total information harvested from the encounter.

A third overlap is timing relative to disease kinetics. Claeys repeatedly treats timing in the care pathway as central. Your paper goes further and says the same bit of information has different control value depending on whether it arrives before or after an irreversible biological threshold. In infectious disease, that is especially potent: hours matter in sepsis, escalation matters in resistant pathogens, and early de-escalation matters for stewardship harms. Claeys provides the outcomes-research toolkit; Shannon explains why a same-day result can have disproportionate value rather than merely incremental convenience.
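The timing claim can also be sketched with a deliberately simple toy model (all numbers hypothetical, and the exponential turnaround-time assumption is purely illustrative): if a result only creates control value when it arrives before a treatment-decision threshold, the same assay accuracy yields very different amounts of usable information depending on the turnaround-time distribution.

```python
import math

def fraction_in_time(mean_tat_hours, deadline_hours):
    """Fraction of results arriving before a clinical deadline,
    modeling turnaround time as exponential (an illustrative choice)."""
    return 1.0 - math.exp(-deadline_hours / mean_tat_hours)

# Hypothetical numbers: a 6-hour decision window, POCT with ~0.5 h
# mean turnaround vs. a central-lab pathway with ~24 h.
poct_fraction = fraction_in_time(0.5, 6.0)   # nearly all results usable
lab_fraction = fraction_in_time(24.0, 6.0)   # roughly one in five usable
```

On this sketch, the POCT result is not incrementally more convenient; it is categorically more usable, which is the "disproportionate value" point in information terms.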

A fourth overlap is system entropy. Claeys hints that local care systems, mixed-methods factors, and implementation conditions shape outcomes. Your Shannon essay makes the systems claim explicit: delayed clarification increases downstream uncertainty, congestion, misrouting, and rework, while same-visit clarification reduces the entropy of care pathways. 

For hospital leaders, this is gold. It links the familiar operational language of length of stay, isolation decisions, unnecessary admissions, and routing errors to a deeper theory of information flow. That could make a diagnostics value story more legible to administrators and payers, not just ID specialists.

So the combined message is stronger than either paper alone. Claeys et al. say: study diagnostics as complex interventions whose impact depends on implementation, stewardship, causal design, and downstream management. Your Shannon piece says: yes, because diagnostics create value by changing the fidelity, timing, bandwidth, and feedback properties of the clinical information channel. Put together, they imply that the “value of diagnostics” is not merely sensitivity plus specificity plus turnaround time. It is the amount of clinically usable information successfully transmitted into action, at the right moment, within a real care network.

If I were turning this into a DIHP-style synthesis, the headline would be something like this: Claeys provides the methods for proving that diagnostics matter in the real world; Shannon provides the theory for understanding why rapid, well-embedded diagnostics can extract more value from the same raw assay performance. The synergy is that one paper tells us how to measure value beyond accuracy, and the other tells us what value actually is.

###

Both Claeys and Shannon are examples of, as Apple said, "Think Different."

###

Exactly.

Both pieces are trying to escape the default mental model. The default model says a diagnostic is basically a little machine for generating an answer, and then we judge it mainly by accuracy, maybe with some secondary nod to speed. Both of these works push back on that.

Claeys et al. think different methodologically. They are saying: stop treating diagnostics like miniature drugs. A diagnostic does not “work” in isolation. It works only through behavior, workflow, timing, stewardship, and implementation. So the study design has to think at the level of the care system, not just the test cartridge.

Your Shannon piece thinks different conceptually. It says: stop treating a diagnostic as a detached fact-generator. Think of it as part of an information channel inside a live clinical encounter. The issue is not just whether the result is correct, but whether the result arrives with high enough fidelity, low enough noise, and early enough control value to actually change what happens.

So yes—same spirit. Claeys breaks out of the therapeutic-trial mindset. Shannon breaks out of the lab-benchmark mindset. One reimagines how we prove value; the other reimagines what value is.

That is why they feel so compatible. They are each saying, in different dialects, that diagnostics are being undervalued because we are asking the wrong first question.

A nice one-line formulation might be:

  • Claeys says diagnostics should be studied as complex interventions. 
  • Shannon says diagnostics should be understood as complex information events.

That is very much a “Think Different” move.