Thursday, February 12, 2026

Podcast Scan: Owkin's Podcast with Jorge Reis-Filho

Owkin is a French company on the cutting edge of AI-driven genomic, cellular, and biomarker discovery. They have 63 videos on YouTube, many of them video podcasts.

In this episode, from February 2026, Owkin CEO Thomas Clozel talks with Jorge Reis-Filho, who heads the enterprise AI unit at AstraZeneca.

#####

ChatGPT 5.2

The episode’s core idea is that time should be treated as a first-class biomedical variable: models should learn from trajectories, not just snapshots. The guest argues that integrating multimodal data (omics, spatial/pathology, clinical) through a semantic layer, plus temporal embeddings, enables more informative predictions—especially for oncology regimens where sequence and timing matter. Spatial biology’s promise remains under-realized mainly due to insufficient scale and diversity of datasets, motivating consortia.

The conversation stays practical: AI won’t bypass validation, near-term gains are likely in clinical development, and long-term breakthroughs require trials, culture change, and shared infrastructure.

#####

Owkin’s Podcast, Episode Notes:
When “Time” Becomes a First-Class Biomarker

The episode opens the way a lot of real conversations now start: a jet-lagged hello in San Francisco, an offhand confession about fasting, and a quick detour into a very American object lesson—Function Health and its borderline-comic volume of blood draws. It’s an oddly perfect cold open for what follows, because the subtext is modern biomedicine in miniature: lots of measurements, imperfectly interpreted, chasing something that feels like truth.

From there, the conversation snaps into focus around a theme that’s both technical and strangely intuitive: we’ve spent decades making “static” predictions in biology, and we may be approaching the point where that looks as antique as measuring a movie by a single still frame. The guest’s core claim is that the next leap won’t come from “more data” in the generic sense, but from integrating modalities through a semantic layer and embedding time as a native dimension—so the models can reason not just about what is, but where it’s going.

Directionality beats the “right number”

One of the most grounded moments is the wearables example. The guest doesn’t pretend Whoop (or VO₂ max estimates) are perfectly accurate; he basically says the opposite. The point is that in wellness and (eventually) medicine, directionality and time series can matter more than absolute values. If your wearable’s VO₂ max is “wrong,” but the trajectory is consistently improving or deteriorating, that trend can still be meaningful—especially once models treat time as more than a footnote.

That framing is useful because it politely calls out a familiar failure mode in biopharma analytics: we love single-timepoint snapshots because they’re tidy and publishable, but living systems are rarely tidy. The episode argues—without hype—that temporal embeddings change what can be learned, because biology isn’t just state; it’s dynamics.
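
The episode stays at the conceptual level here, but the arithmetic behind the point is easy to make concrete. Below is a minimal Python sketch with entirely invented numbers (a constant 5-point sensor bias, a slow real improvement of +0.03 per day), showing that a fitted trend can recover the direction of change even when every absolute reading is wrong:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical wearable data: the device reports VO2 max with a constant
# bias and day-to-day noise, so every absolute value is "wrong".
days = np.arange(90)
true_vo2max = 42.0 + 0.03 * days                                # slow real improvement
estimate = true_vo2max - 5.0 + rng.normal(0.0, 1.5, days.size)  # biased, noisy readings

# A simple least-squares trend line still recovers the direction of change.
slope, intercept = np.polyfit(days, estimate, deg=1)
print(f"mean reading: {estimate.mean():.1f}  (true mean: {true_vo2max.mean():.1f})")
print(f"fitted trend: {slope:+.3f}/day      (true trend: +0.030/day)")
```

The absolute readings are off by five points throughout, but the slope comes out approximately right—which is the whole argument for treating the trajectory, not the snapshot, as the signal.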

Multimodal + temporal + semantic: the three-legged stool

The guest comes back repeatedly to a three-part architecture:

(1) Multimodal integration (omics, pathology/spatial, clinical, outcomes, etc.)
(2) Temporal embeddings (timepoints, sequences, directionality)
(3) A semantic layer that makes the modalities interoperable so the models can “read” across them

It’s a serious point disguised as a conversational riff: without a semantic layer, you can assemble an impressive warehouse of data and still end up with models that behave like tourists—lots of photos, little comprehension.

What’s notable here is the ambition level. They’re not describing a modest dashboard upgrade; they’re describing a shift in what counts as an “input” to biological inference. The “semantic layer” idea is really about translation—turning incompatible datasets into something a reasoning system can navigate, query, and generalize from.
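
The episode never gets into implementation, so the following is only a toy sketch of what that three-legged stool could look like in code. Every name here is hypothetical, and the "semantic layer" is reduced to a dictionary mapping source-specific field names to shared concept IDs; real systems would lean on ontologies and common data models (OMOP, SNOMED, and the like) rather than a dict:

```python
from dataclasses import dataclass, field
from datetime import date

# Toy "semantic layer": maps source-specific field names to shared
# concept IDs so values from different modalities become comparable.
SEMANTIC_LAYER = {
    "rnaseq.TP53_tpm": "concept:TP53_expression",
    "path.tumor_pct":  "concept:tumor_cellularity",
    "clinical.recist": "concept:tumor_response",
}

@dataclass
class Observation:
    modality: str        # "omics", "pathology", "clinical", ...
    source_field: str    # raw, source-specific name
    value: float | str
    observed_on: date    # temporal leg: every value carries a timestamp

    @property
    def concept(self) -> str:
        return SEMANTIC_LAYER[self.source_field]

@dataclass
class PatientTrajectory:
    patient_id: str
    observations: list[Observation] = field(default_factory=list)

    def timeline(self, concept: str) -> list[tuple[date, float | str]]:
        """All values for one shared concept, ordered in time."""
        hits = [o for o in self.observations if o.concept == concept]
        hits.sort(key=lambda o: o.observed_on)
        return [(o.observed_on, o.value) for o in hits]

# Usage: two omics measurements and a clinical read, queried by concept.
p = PatientTrajectory("pt-001", [
    Observation("omics",    "rnaseq.TP53_tpm", 14.2, date(2025, 1, 10)),
    Observation("omics",    "rnaseq.TP53_tpm",  9.8, date(2025, 6,  5)),
    Observation("clinical", "clinical.recist", "PR", date(2025, 3,  2)),
])
print(p.timeline("concept:TP53_expression"))
```

The design choice worth noticing: every observation carries both a timestamp and a shared concept ID, so "give me this patient’s trajectory for this biomarker" becomes a single query across modalities—which is roughly what the semantic-layer pitch amounts to.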

Static targets vs rational regimens

Where the conversation gets especially relevant for oncology strategy is the argument that future breakthroughs may be less about finding one perfect target and more about building combination regimens with rational sequencing.

The guest makes a practical observation: historically, the industry has been constrained by the need for large enough populations to justify development, and target selection has leaned on what was measurable or conveniently “expressed.” But if the future is combination therapy (and it already is, in many areas), then sequence becomes part of the biology. Two drugs need not be given simultaneously; the order and the timing may matter as much as the pairing. That’s where time series thinking stops being a modeling flourish and becomes a clinical development design principle.
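
To make the design-principle point concrete, here is a tiny hypothetical sketch (invented drug names and timings, nothing from the episode) of why a regimen has to be modeled as an ordered, timed sequence rather than a set of drugs:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dose:
    drug: str
    start_day: int   # days from treatment start

Regimen = tuple[Dose, ...]   # ordered and timed, not just a bag of drugs

# Same two (hypothetical) drugs, different schedules:
concurrent: Regimen = (Dose("anti-PD-1", 0), Dose("chemo", 0))
sequenced:  Regimen = (Dose("chemo", 0),     Dose("anti-PD-1", 21))

# As unordered drug sets they are indistinguishable...
assert {d.drug for d in concurrent} == {d.drug for d in sequenced}
# ...as timed sequences they are different treatment hypotheses.
assert concurrent != sequenced
```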

Spatial biology: “technology of the year”… still waiting for its moment

A particularly interesting stretch is the guest’s take on spatial biology. He acknowledges the field’s status—celebrated, funded, lionized—and then lands a critique that will resonate with anyone who has watched a hot platform plateau: the unrealized potential is partly a scale problem.

Spatial data are granular; granular data require robust feature extraction; robust feature extraction increasingly points toward foundation-model-like approaches. But foundation models demand volume and diversity, and (right now) spatial datasets often don’t have it—at least not at the “one or two orders of magnitude more” level the guest argues is needed.

That’s where the episode’s collaborative instinct shows up. Instead of “my platform will win,” the guest leans into consortia and shared infrastructure—including mention of a multi-center effort (“MOSAIC”) that’s already trying to push scale. The thesis is blunt: if we want models that generalize, we need data that generalize.

The near-term promise: not science fiction, but clinical development plumbing

The conversation also avoids a common trap: pretending the biggest value is always the most glamorous. The guest draws a line between:

  • Near-term (2–3 years): measurable impact in clinical development, especially patient selection and multimodal integration

  • Longer-term: novel target discovery and deeper translation, because—even if you automate everything—you still have to run trials and validate hypotheses

That distinction matters. It treats AI less like a magic wand and more like an engineering discipline, where certain bottlenecks yield first and others remain stubbornly physical, regulatory, and time-bound.

“No poetic license to bypass validation”

One of the cleanest, most quote-worthy assertions in the episode is that using cutting-edge AI does not grant permission to skip validation. The guest cites biomarker guidance (he mentions ESMO) to underline a pragmatic hierarchy: explainability is desirable; independent validation is essential.

That’s a subtle but important stance. It signals to a scientifically literate audience that the speaker isn’t selling vibes. He’s describing a world where models may detect nonlinear patterns “above and beyond” classic causality hunting—but where translation still lives or dies on reproducibility, benchmarking, and external datasets.

Pharma’s strategic tension: quarterly gravity vs foundation-building

The interviewer pushes on a real executive dilemma: public-company time horizons reward visible pipeline value quickly, while genuine AI infrastructure and data ecosystems take longer. The guest’s answer is essentially: do both—but be honest about which parts pay back when.

What’s refreshing is that the conversation names a missing piece in many pharma AI narratives: not “AI will change everything,” but AI-enabled innovation that creates new biology and new pipeline value in the next 2–4 years—not just speedups of existing workflows. That’s a higher bar than “we automated literature review,” and it implicitly challenges leaders to demand examples that unlock something biologically non-obvious.

Who wins in an era of “democratized intelligence”?

The guest frames the current moment as an inflection point: intelligence used to be scarce; now it’s increasingly democratized. From that premise, he proposes a winner’s formula that feels almost annoyingly sensible:

  • Domain knowledge

  • Technical expertise

  • Infrastructure (compute, models/agents)

  • And, most importantly: data

He also adds the part that’s easiest to say and hardest to do: success requires solving people, process, and culture, not just technology. That line will land with anyone who has watched “AI transformation” fail because the org chart didn’t transform.

A physician’s urgency is not a slogan

Late in the episode, the tone shifts in a way I appreciated. The interviewer asks whether being a physician changes how the guest leads. The guest answers that the urgency is different: patients don’t care if the drug was “developed by AI”; they care that it arrives faster, works better, and adds efficacy in combination without multiplying toxicity. He contrasts big-number biostatistics with the irreducible fact that each patient has a story.

It’s not sentimental, and it’s not performative. It’s a reminder that “outcomes” are not an abstraction—especially for clinicians who’ve had to look patients and families in the eye while medicine did what it could and then… didn’t.

Recruiting for the mission—and for “thinking through AI”

There’s also a practical leadership segment about hiring. The guest says, plainly, that their oncology ambition is to eliminate cancer as a cause of death (a huge statement, but presented as a recruiting filter rather than a marketing tagline). More interestingly, he distinguishes between people who “use AI” and people who can reimagine processes through AI—“thinking through AI, not with AI.”

That phrasing is a bit gnomic, but the intent is clear: AI isn’t a bolt-on tool; it’s a lens that changes what workflows should look like in the first place.

What he’s betting on

When pressed for “next breakthroughs,” the guest offers two bets:

  1. In oncology: data-driven rational regimens with markedly better response rates and durability—especially through better understanding of immune memory, fueled by spatial data plus temporal assessments (not just spatial snapshots).

  2. Outside oncology: lifestyle therapeutics (weight management and related agents) as genuinely transformational—impacting inflammation, disease prevalence curves, and what population health looks like by 2035.

The interviewer adds an important counterweight: the future may improve in some dimensions while worsening in others (environmental pressures, microplastics, shifting cancer epidemiology). The exchange avoids both techno-utopianism and doom; it’s more like: the prevalence table will look different—and we should plan accordingly.

The quiet takeaway

If I had to distill the episode into one unflashy, high-consequence point, it’s this: time is becoming a first-class citizen in biomedical AI, not an afterthought. Once you embed temporality—across modalities, across spatial context, across patient trajectories—you’re no longer asking “What is this?” but “What is this becoming?”

That’s a different kind of question. And it demands a different kind of data strategy, partnership strategy, and validation discipline than the industry has typically shown.

Owkin’s series is at its best when it holds two ideas at once: ambition about what’s possible and sobriety about what it takes—data scale, semantic interoperability, external validation, and the slow, stubborn reality check of clinical trials. This episode sits squarely in that lane.