Can AI read an article and project its possible applications into a different field? That's today's question.
Starting point: the WSJ runs an essay by Harvard economics professor and Manhattan Institute senior fellow Roland Fryer. Fryer here, essay here.
While his article was on "regulating AI," it clearly had applications in other policy domains. I asked ChatGPT 5 to read the essay and discuss how it projects onto health care policy, such as at CMS. I deliberately left my main initial request vague.
#### [Start ChatGPT 5]
- Fryer argues that regulation often fails because agencies can’t observe true risk or behavior, leading to paperwork and unintended harm. Instead of one-size-fits-all rules, regulators should offer structured choices that induce firms to reveal themselves.
- Applied to CMS, this suggests shifting from rigid mandates toward menu-based coverage, transparency, and risk-sharing frameworks that better distinguish high-quality actors from risky ones.
###
Fryer’s op-ed is really an argument about regulatory design under ignorance. His core point is not simply that regulators are clumsy. It is that in complicated systems, the regulator usually cannot directly observe true risk, true cost, or true behavior, while the regulated party can. When that happens, blunt rules, generic reporting mandates, and one-size-fits-all oversight often backfire. They produce paper compliance, defensive retreat, and less socially useful activity rather than safer or better activity.
Fryer's solution is to produce two or more policy options (which the regulator can implement simultaneously across different places or companies) and let each company choose its regime. He discusses this in the context of AI policy.
That logic applies very directly to the health sector, and especially to payors such as CMS, because CMS constantly regulates actors whose real capabilities and motives it can only partly see: hospitals, labs, insurers, physician groups, AI vendors, MA plans, and drug/device makers.
In health care, CMS often faces exactly the information problem Fryer describes. CMS may want better diagnostic accuracy, less overtreatment, less fraud, more equity, faster adoption of useful innovations, and lower spending. But it usually cannot fully observe, in real time, whether a lab test is clinically valuable, whether an AI triage tool is safe, whether a hospital admission was truly necessary, whether a prior authorization denial was medically justified, or whether a coding pattern reflects fraud, improved documentation, or simply sicker patients.
The result is a familiar pattern: CMS and its contractors rely on checklists, documentation rules, coding edits, audit triggers, coverage criteria, utilization thresholds, and penalty structures. Those tools are not irrational, but they often function as the health-policy equivalent of the Illinois AI notice Fryer mocks: they generate records, not truth.
The most obvious application is to coverage and payment policy for novel technologies, including AI-enabled diagnostics and clinical decision support. CMS often responds to uncertainty by demanding more evidence, narrowing indications, or delaying coverage until it feels more confident. That is understandable, but Fryer’s framework suggests that such caution can unintentionally discourage the very firms most capable of producing useful evidence. A company with a genuinely strong product may decide the reimbursement uncertainty, coding ambiguity, audit risk, and evidence-development burden are too high relative to expected payment. Meanwhile, a company with a weaker product may still enter because it has less reputational risk or is willing to exploit ambiguity. In other words, poorly designed regulation can create adverse selection: the better actors exit, the more aggressive actors remain.
This is especially relevant to CMS coverage of diagnostics. A flat regime that says, in effect, “everyone must produce extensive evidence before broad coverage, and everyone faces similar noncoverage risk if they fail,” does not distinguish well between a high-quality developer that is ready to be transparent and a marginal developer that prefers opacity. Fryer’s idea for "policy menus" suggests a different approach. CMS could create multiple regulatory/payment tracks. One track could offer earlier, broader, or faster coverage for firms willing to accept intensive transparency: real-world evidence reporting, registry participation, standardized outcome disclosure, external analytics access, and perhaps post-market utilization audits. Another track could allow a more limited path with less up-front transparency but tighter payment, narrower indications, or stronger clawback exposure if later evidence disappoints. The point would be to make a firm’s willingness to reveal data and accept measurement itself informative.
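To make the sorting logic concrete, here is a minimal numeric sketch. Every payoff number is hypothetical, invented only to illustrate the mechanism; nothing below comes from the op-ed or from any actual CMS payment rule. The transparency track is modeled as cheap for a firm with strong data and costly for a firm with weak data, which is what drives the separation.

```python
# Stylized self-selection sketch. All payoff numbers are hypothetical
# and chosen only to illustrate the sorting logic, not drawn from the
# op-ed or from any CMS program.

TRACKS = {
    # base revenue under each track's coverage breadth
    "transparency": 100,   # earlier, broader coverage
    "opacity":       60,   # narrower initial coverage
}

# Type-specific costs the regulator cannot observe directly:
# the cost of exposing real-world data (transparency track) and
# the expected audit/clawback loss (opacity track).
FIRM_TYPES = {
    "high_quality": {"disclosure_cost": 10, "expected_clawback": 5},
    "low_quality":  {"disclosure_cost": 60, "expected_clawback": 15},
}

def payoff(track: str, firm: dict) -> int:
    """Net expected payoff of a track for a firm of a given type."""
    if track == "transparency":
        return TRACKS[track] - firm["disclosure_cost"]
    return TRACKS[track] - firm["expected_clawback"]

for name, firm in FIRM_TYPES.items():
    choice = max(TRACKS, key=lambda t: payoff(t, firm))
    print(f"{name} chooses {choice} "
          f"(transparency={payoff('transparency', firm)}, "
          f"opacity={payoff('opacity', firm)})")

# Output:
# high_quality chooses transparency (transparency=90, opacity=55)
# low_quality chooses opacity (transparency=40, opacity=45)
```

The regulator never observes quality directly. It only sets the two payoff schedules, and the firm's choice of track does the classifying.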
That same logic applies to Medicare Advantage plans, where CMS struggles to observe true plan behavior. CMS can see denial rates, appeals outcomes, coding intensity, network breadth on paper, and quality scores. But it cannot perfectly observe whether the plan is using prior authorization to block necessary care, whether its risk coding reflects true morbidity, or whether its networks are practically usable. So CMS often responds with prescriptive requirements, audit programs, encounter data validation, and star-measure incentives. Yet these tools can be gamed or can induce defensive behavior. A plan may become excellent at documentation, grievance handling, and coding optimization without materially improving patient care. That is exactly Fryer’s complaint: regulation can end up rewarding the production of observable proxies rather than the underlying social goal.
A Fryer-style redesign for payors would not mean deregulation. It would mean structured self-selection. Imagine CMS offering MA plans a choice. One track could involve deep claims and prior-auth transparency, disclosure of internal denial algorithms, external benchmarking, and rapid review of overturned denials, in exchange for lighter ex post sanctions or more flexibility in benefit design. Another track could allow less operational transparency but impose harsher automatic financial consequences if subsequent audits show excessive inappropriate denials, coding inflation, or network failure. Plans that truly believe their internal systems are sound might choose the transparent track. Plans with more aggressive or fragile practices might avoid it, thereby revealing something important. That is much closer to the Baron-Myerson intuition than today’s usual approach of layering mandates on everyone equally.
The op-ed is also highly relevant to AI regulation in health care payment operations. CMS and other payors are increasingly confronting AI in claims review, utilization management, risk adjustment, fraud detection, and quality measurement. The temptation will be to regulate AI with broad definitions and blanket process requirements: notices, attestations, governance committees, annual bias reviews, and perhaps incident-reporting obligations. Some of that will be necessary. But Fryer would say these rules are likely to generate compliance theater unless they create incentives for truthful revelation of risk. In health care, the central issue is often not whether “AI” is used, but whether the system meaningfully changes decisions in ways that create patient harm, payment distortion, or discrimination. A statute or CMS rule that merely requires disclosure that “AI is used” in prior auth or coding review tells us little. A system that instead gives organizations a choice between audited transparency with reduced burden and operational opacity with stronger liability or payment risk may surface more real information.
There is also a strong application to fraud and abuse enforcement. CMS, OIG, and DOJ routinely operate under uncertainty about provider intent and actual clinical justification. They therefore rely on statistical outliers, utilization anomalies, diagnosis patterns, and documentation mismatches as enforcement triggers. Those tools are unavoidable, but Fryer’s argument warns that when regulated parties know that certain observable patterns trigger investigations, they may not stop bad conduct; they may instead reshape behavior to look safer. In health care this can mean underuse, reluctance to serve high-risk populations, excessive conservatism in documentation, or avoidance of borderline but appropriate cases. A hospital or physician group may pull back from innovative but legitimate practices because the downside risk of being misread by auditors is too high. That mirrors Fryer’s policing example: a regime meant to curb abuse can unintentionally suppress desirable effort.
For CMS, one concrete implication is that outcome-based penalties are dangerous when layered onto a poor classification system. If CMS cannot separate honest providers in hard clinical environments from aggressive providers in easy environments, then simple penalty rules may punish the wrong actors. Readmission penalties, coding audits, and utilization thresholds can all behave this way. A safety-net hospital with complex patients may look “worse” than a polished suburban system, just as a good cop in a hard environment can look worse than a bad cop in an easy one. Fryer’s message is not that outcomes should never matter. It is that outcomes work better after the regulator has first designed a system that encourages regulated entities to sort themselves honestly.
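A stylized calculation makes that classification problem visible. The numbers below are hypothetical, and the observed-over-expected ratio is used only in the spirit of CMS's excess-readmission measures, not as a reproduction of them.

```python
# Hypothetical illustration: raw readmission rates penalize the
# safety-net hospital, while comparing observed rates to the rates
# expected given patient mix points the other way.

hospitals = {
    # observed readmission rate, expected rate given patient mix
    "safety_net": {"observed": 0.18, "expected": 0.21},
    "suburban":   {"observed": 0.14, "expected": 0.12},
}

for name, h in hospitals.items():
    ratio = h["observed"] / h["expected"]
    flag = "penalize" if ratio > 1.0 else "ok"
    print(f"{name}: raw={h['observed']:.0%}, obs/exp={ratio:.2f} -> {flag}")

# Output:
# safety_net: raw=18%, obs/exp=0.86 -> ok
# suburban: raw=14%, obs/exp=1.17 -> penalize
```

On raw rates the safety-net hospital looks worse (18% versus 14%); measured against what its patient mix predicts, it is the one outperforming.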
This may be especially useful in value-based care and alternative payment models. CMS often wants providers to assume risk, coordinate care, manage populations, and invest in prevention. But provider organizations vary enormously in capital, analytics, patient mix, and operational sophistication. A rigid model can scare off capable but cautious participants while attracting actors skilled mainly at coding or benchmark arbitrage. A menu-based design is already partly present in Medicare payment policy—different tracks, glide paths, upside-only vs two-sided risk—but Fryer’s essay suggests going further. CMS could more deliberately make model choice informative. Greater transparency, benchmark disclosure, and real-time data sharing could buy lighter administrative burden or better upside. Less transparency could remain an option, but at the price of stricter downside exposure. The key is to make the choice itself reveal something.
There is also a lesson for local coverage determinations and national coverage decisions. CMS and MACs often use detailed coverage rules to control use when evidence is incomplete. But the more intricate the rulebook becomes, the more it may reward parties who are best at coding to the rule rather than those best at delivering the right care. Health care is full of actors responding strategically to what is measurable. Fryer would likely say CMS should be wary of building elaborate architectures that assume it knows enough to specify good behavior in advance. In fast-moving areas like molecular diagnostics, digital pathology, and AI, CMS probably does not. A better path may be provisional and tiered: more openness and post-market evidence in exchange for earlier payment access, rather than a binary covered/not-covered structure with endless documentation accretions.
That said, Fryer’s framework has limits in health care. CMS is not only maximizing efficiency. It also has statutory duties involving equity, beneficiary protection, budget control, and administrative feasibility. Menus can be elegant in theory but difficult in public administration. Firms may still game the choices. Small providers may lack the resources to enter the “transparent” track even when they are high quality. Moreover, health care harms can be delayed, diffuse, and hard to attribute. “Strict liability” is much easier to describe in an op-ed than to operationalize in Medicare payment. CMS cannot simply say, “Choose opacity, and you pay for all downstream harm,” because the causal chain is often contested. So the translation from theory to policy has to be modest and practical.
Still, the op-ed is valuable because it sharpens a real critique of health regulation: CMS too often substitutes documentation for information and process compliance for truth. In the payor context, that can mean notice requirements that do not protect patients, audit systems that do not identify true risk, evidence rules that repel the best innovators, and enforcement systems that induce underuse or defensive administration. Fryer’s proposed cure is not laissez-faire. It is smarter regulation that assumes the regulator is partly blind and designs around that fact.
If I were distilling the article into a lesson for CMS, it would be this: in areas like AI, utilization management, diagnostics, and value-based payment, CMS should move away from asking, “What single rule should everyone follow?” and toward asking, “What set of choices would cause honest actors and risky actors to sort themselves apart?” That question is much closer to the real economics of regulation in health care.
###
Is this like the old story where one child divides the cake and the second child gets first choice of piece? One person writes the regulatory choices and the other takes his favorite.
###
Yes—the familiar cake-cutting rule works because one child divides and the other chooses, giving the cutter an incentive to split the cake fairly.
The regulator’s version is similar but more sophisticated: one party designs a menu of regulatory options, and firms choose the one that suits them. Because firms know their own risk and quality, their choice reveals that information. Done well, the structure makes honest self-selection the rational move.
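In mechanism-design language (a standard textbook formulation, not something spelled out in the op-ed), the menu works when each type weakly prefers the option intended for it. Writing U(θ, m) for the payoff of a firm of type θ under menu option m, with θ_H and θ_L the high- and low-quality types and m_T and m_O the transparent and opaque options, the regulator wants both self-selection conditions to hold:

```latex
% Self-selection (incentive compatibility) conditions:
% each type weakly prefers the option designed for it.
U(\theta_H, m_T) \ge U(\theta_H, m_O)
\qquad \text{and} \qquad
U(\theta_L, m_O) \ge U(\theta_L, m_T)
```

When both inequalities hold, the choice itself carries information, just as the first child's cut carries information because the second child gets to choose.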
###
To be clear, the administrator has to apply Rule Set 1 to some companies (their choice) and Rule Set 2 to other companies (their choice). Transaction costs? Friction?
###
Yes—there is added friction and administrative cost, and that’s the real tradeoff. A menu system replaces one blunt rule with two (or more) tailored regimes, which means CMS (or any regulator) must track who chose what, apply different oversight pathways, and manage more complexity.
But the argument is that this front-end complexity can reduce back-end costs. Instead of spending enormous resources trying (and often failing) to detect hidden risk through audits, appeals, and enforcement, the system lets firms sort themselves, so oversight is better targeted. You audit the transparent-track firms less intensively because they have opted into disclosure, and you focus enforcement where risk is concentrated.
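A back-of-envelope comparison shows the tradeoff. The numbers are hypothetical and exist only to show how front-end sorting can lower total audit cost; they do not describe any actual CMS audit program.

```python
# Hypothetical audit-cost comparison: one uniform rule vs a menu
# that lets firms self-sort into transparent and opaque tracks.

N_FIRMS = 100
SHARE_RISKY = 0.3      # true share of risky firms (unobservable)
AUDIT_COST = 10        # cost per intensive audit

# Uniform regime: the regulator cannot tell types apart, so it
# audits everyone at a moderate intensity.
uniform_cost = N_FIRMS * 0.5 * AUDIT_COST              # 500

# Menu regime: the choice of track is informative, so intensive
# audits concentrate on the self-selected opaque track.
transparent = N_FIRMS * (1 - SHARE_RISKY)              # 70 firms
opaque = N_FIRMS * SHARE_RISKY                         # 30 firms
menu_cost = transparent * 0.1 * AUDIT_COST + opaque * 0.9 * AUDIT_COST
                                                       # 70 + 270 = 340

print(uniform_cost, menu_cost)                         # 500.0 340.0
```

Total spending falls even though the opaque track is audited far more intensively, because sorting tells the regulator where to look.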
In health care terms, CMS already accepts this kind of friction in places—different payment models, coverage with evidence development, Medicare Advantage vs FFS—because uniform rules don’t work well across heterogeneous actors. Fryer’s point is that a bit more structured complexity up front may actually lower total system cost and reduce gaming, even if it feels less tidy than a single national rule.
###
How might this apply to MolDx? What kind of options could MolDx offer (in its domains of safety, effectiveness, cost, competing methods, coverage, auditing)? Create 1 or 2 hypothetical applications of Fryer thinking. (I think each hypothetical scenario would have a menu of 2 policy options for a company to choose.) Take your time so you can think out good examples.
###
#### MolDx via Fryer Policy Menus
Yes—MolDx is actually a very plausible place to imagine Fryer-style regulation, because MolDx already operates in a world of asymmetric information. The lab knows far more than the contractor about the real performance of a test, the weaknesses in the data package, the degree of physician uptake, and whether the assay is truly better than available alternatives. MolDx, for its part, already asks for a technical assessment, a test dossier, and evidence of analytical validity, clinical validity, and clinical utility before coverage, while also using DEX Z-codes to identify specific tests. In some LCDs, MolDx also makes clear that better-performing new methods can make older methods noncompliant with policy. (Palmetto GBA) That makes MolDx a natural setting for a “menu of options” approach, because it is already trying to sort heterogeneous companies without being able to see everything directly.
One Fryer-style hypothetical would be a “Transparency Track” versus an “Opacity-with-Risk Track” for a new oncology or infectious-disease molecular test. Under Option 1, the Transparency Track, the company would get the possibility of faster provisional coverage or broader contractor comfort if it agrees to submit not just the ordinary TA dossier, but also richer operating data: assay failure rates, repeat-test rates, ordering patterns, site-level utilization, periodic outcomes updates, and perhaps benchmarking against standard-of-care comparators. In return, MolDx would offer a lighter ex post burden—fewer routine audits, faster reconsideration cycles, and perhaps more flexibility in coding or claims processing. Under Option 2, the Opacity-with-Risk Track, the company could choose to disclose only the minimum standard dossier, protecting more proprietary information, but then accept a narrower initial covered population, more conservative utilization assumptions, and stronger post-payment audit or recoupment exposure if real-world use turns out to exceed the supported evidence. The idea is that a company with a strong, well-understood assay might rationally choose Option 1, while a company less confident in its operating profile might choose Option 2. The choice itself becomes informative.
A second hypothetical could focus less on raw transparency and more on comparative performance versus existing methods. MolDx often has to decide not simply whether a test “works,” but whether it adds enough value over current practice to be reasonable and necessary. (Centers for Medicare & Medicaid Services) So imagine a menu with Option A: Comparator Track and Option B: Restricted-Niche Track. Under Option A, the lab would agree up front to a stronger evidentiary burden: direct comparison to current covered methods, clearer evidence on how the result changes clinical management, and ongoing monitoring showing that use remains in the intended population. In exchange, MolDx could allow broader coverage language and more favorable positioning relative to existing alternatives. Under Option B, the company could avoid head-to-head comparative commitments, but then coverage would be limited to a narrower niche—perhaps only where standard methods fail, where tissue is unavailable, or where the patient population is unusually defined. That would let firms self-sort. A company that truly believes it is superior would be drawn toward the comparator track; a company that mainly wants a foothold without proving broad superiority would likely choose the restricted-niche track.
A third variation, especially relevant to cost and auditing, would be a “Stable Payment / Audit Light” track versus a “Higher Flexibility / Audit Heavy” track. Under the first, a company would accept disciplined alignment between marketed intended use, published evidence, and billing behavior, plus ongoing claims transparency tied to its Z-code. In return, MolDx would offer more predictable coverage administration and a lower chance of disruptive retrospective review. Under the second, a company could preserve more freedom around rapid iteration, software updates, or evolving lab processes, but it would accept intensified claims scrutiny, shorter reassessment cycles, and a greater chance that payment would be revisited if use patterns drift. This would be especially relevant where the test platform evolves faster than the published literature. The point is not to accuse the company of wrongdoing. It is to make the company choose between certainty with disclosure and flexibility with oversight.
The logic across all of these examples is the same. MolDx would not be trying to force companies to behave altruistically. It would be designing options so that a confident, disciplined, evidence-ready company finds it profitable to choose the more transparent path, while a less mature, more speculative, or more strategically ambiguous company chooses the path with narrower coverage or greater audit exposure. In Fryer’s terms, MolDx would stop pretending it can perfectly identify quality from the outside and instead create a structure in which the firm’s own choice helps reveal its type.
Of course, there is a tradeoff. MolDx would then have to administer two policy pathways instead of one, which means more transaction cost, more tracking, and more contractor discretion. But that added complexity might be worthwhile if it reduced the bigger problem: a single blunt framework that treats high-quality and low-quality entrants alike, produces lots of paperwork, and still leaves MolDx guessing which companies truly stand behind their evidence. In that sense, Fryer’s idea would not replace MolDx’s current emphasis on AV, CV, CU, and claims identification; it would sit on top of it, as a smarter way to sort companies that already meet the threshold for entering the conversation.