
Full Autonomy Is a ‘No-Go Zone’: Setting Parameters for Agentic AI in Pharma
ArisGlobal’s Jason Bryant continues a video supplement to his series of articles on the adoption and implementation of AI agents.
In this second of a three-part series, Bryant explains why full autonomy is a no-go zone for pharmacovigilance and where agentic AI adds the most value.
Autonomy is not a binary option, Bryant said in the first part of this video interview with PharmTech®, and it can work in concert with accountability even though the two characteristics may seem to be at odds. In this second video, Bryant expands on that viewpoint and adds that autonomous agentic AI cannot and will not take over every process.
“I said it's not binary, and what I mean by that is that full autonomy is really a no-go zone for pharmacovigilance,” Bryant says. “And frankly, full autonomy is undesirable or it will even be prohibited where things matter, like choice or ethics or law. It is suitable in some domains. It's suitable in domains where outcomes are reversible; I would say that's the primary characteristic.”
Bryant also says these agents excel at discovery via inference as opposed to look-up.
“For us, that enhances safety teams, because you can now explore different possibilities, especially from weak signals,” he says. “But these weak signals might carry a world of insight. So we're moving now beyond that pattern matching, which, as I said before, is still incredibly valuable, incredibly powerful, but we're really moving now into true insight generation.”
The second part of Bryant’s interview can be viewed above. Watch the first part here.
The three articles written by Bryant are available here.
Transcript
Editor's note: This transcript is a lightly edited rendering of the original audio/video content. It may contain errors, informal language, or omissions as spoken in the original recording.
I said it's not binary, and what I mean by that is that full autonomy is really a no-go zone for pharmacovigilance. And frankly, full autonomy is undesirable or it will even be prohibited where things matter, like choice or ethics or law. It is suitable in some domains. It's suitable in domains where outcomes are reversible; I would say that's the primary characteristic. And there are rich examples of that; scientific discovery is an example where outcomes are reversible.
But the critical piece here is that there's coordination that is done by an orchestrator. The orchestrator has autonomy that coordinates the capabilities, that manages the context, decides when to invoke, escalate, or stop entirely and transition control to a human. And that bounded autonomy ensures that, yes, you can anticipate some variability, but for good reason, because you're going to get more value from it, but it's controlled, so it makes agentic behavior governable, and that's really important in drug safety. So autonomy and accountability: it does sound like a paradox, but for us, it can be resolved, and they can absolutely coexist.
It's happening quickly. These capabilities are not slowing down. We don't see a plateau forming at this point. And here we're really in the domain where machines are thinking, and they're not doing look-up anymore. You mentioned that they're reasoning; they're absolutely reasoning. But in reasoning, they're simulating, they're thinking ahead, they're exploring options, and they're generating pathways that I think are fundamentally very creative.
And these creative pathways are not encoded in instructions, and they're not explicit in the source material that's being discovered, and that's one of the critical pieces for discovery: it can't be done by look-up alone. So, machines that think. I think that might have sounded like science fiction quite a while ago, but here we have it in our everyday life. You can access the consumer tools and see long-chain reasoning. You can see this deeper reflection. You can see this exploration of scenarios and agents.
Ultimately, they test hypotheses as well. They can think through these pathways, and they can anticipate problems and weigh trade-offs. So it's very data-rich, but it's also a very creative process, and it's not just retrieval anymore. They're connecting sparse and cross-domain signals. This is what I mean by discovery: insight that you just can't get from a single source, because it can't be encoded within it directly.
So that is really discovery through inference, not look-up, because it's extending what's possible from finite inputs: inference, not look-up. And for us, that enhances safety teams, because you can now explore different possibilities, especially from weak signals. But these weak signals might carry a world of insight. So we're moving now beyond that pattern matching, which, as I said before, is still incredibly valuable, incredibly powerful, but we're really moving now into true insight generation.
Generative AI already excels at human-level writing. I mean, LLM stands for large language model; that shouldn't be a surprise, that it's good with language. And so they're excellent there. We don't see an agentic lift in that sense. And as I mentioned, they're narrow, narrow-scope agents. A major coding agent, for example, is perfect for just delivering huge advances in really expert-level reasoning, and that's going to be highly valuable.
But when you start thinking about enterprise processes that involve multiple roles and multiple goals and multiple decisions and that interconnectivity, this is where the multi-agent systems with orchestration are really going to unlock that potential, and it's the orchestrator that will ensure this continuity across the agents. I'm sure we'll come on to that later, but it's going to deliver context. It's going to ensure it understands goals, the rationale for achieving those things, and it's going to stay low across process steps.
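To make the "bounded autonomy" pattern Bryant describes above a little more concrete, here is a minimal, purely illustrative sketch in Python of an orchestrator that decides whether to invoke an agent, escalate to a human, or stop, keeping a person in the loop whenever an outcome is not reversible. All names, fields, and thresholds (Task, Decision, decide, the 0.8 confidence floor) are assumptions invented for illustration; they do not describe ArisGlobal's software or any real pharmacovigilance system.

```python
# Illustrative sketch only: a hypothetical bounded-autonomy orchestrator.
# Every name here is invented for illustration, not drawn from a real product.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable


class Decision(Enum):
    INVOKE = auto()    # let the agent act autonomously
    ESCALATE = auto()  # hand the step to a human reviewer
    STOP = auto()      # halt the workflow entirely


@dataclass
class Task:
    name: str
    reversible: bool            # can the outcome be undone?
    confidence: float           # agent's self-reported confidence, 0..1
    touches_regulated_data: bool


def decide(task: Task, confidence_floor: float = 0.8) -> Decision:
    """Bounded autonomy: agents act only where outcomes are reversible
    and confidence is high; everything else goes to a human.
    (A fuller policy might also return STOP, e.g., after repeated failures.)"""
    if not task.reversible or task.touches_regulated_data:
        return Decision.ESCALATE
    if task.confidence < confidence_floor:
        return Decision.ESCALATE
    return Decision.INVOKE


def run(tasks: list[Task],
        invoke_agent: Callable[[Task], None],
        escalate_to_human: Callable[[Task], None]) -> None:
    """Coordinate tasks, transitioning control to a human for risky steps."""
    for task in tasks:
        decision = decide(task)
        if decision is Decision.INVOKE:
            invoke_agent(task)
        elif decision is Decision.ESCALATE:
            escalate_to_human(task)
        else:
            break  # STOP: end the workflow entirely


if __name__ == "__main__":
    demo = [
        Task("draft literature summary", reversible=True,
             confidence=0.92, touches_regulated_data=False),
        Task("submit regulatory report", reversible=False,
             confidence=0.99, touches_regulated_data=True),
    ]
    run(demo,
        invoke_agent=lambda t: print(f"agent handles: {t.name}"),
        escalate_to_human=lambda t: print(f"human review: {t.name}"))
```

In this sketch the reversibility check dominates the confidence check, which mirrors the point above that reversible outcomes, not raw capability, are the primary criterion for granting autonomy.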