What’s Reasonable to Expect from Agentic AI in Pharma? Part Three: Tapping Agentic AI’s Potential

Key Takeaways

  • Agentic AI's autonomous reasoning and decision-making capabilities are well-suited for data-rich industries like life sciences, offering transformative potential in pharmaceutical R&D.
  • Successful deployment requires a strategic, holistic approach, avoiding fragmented AI use cases and fostering cross-discipline collaboration.

In the conclusion of this three-part series, the author explores the potential use of agentic AI in pharmaceutical R&D.

AI agent and agentic workflow concept | Image Credit: © Deemerwha studio - stock.adobe.com

Agentic artificial intelligence (AI)—the autonomous coordination of goal-driven AI “agents”—is arguably the most significant shift in AI since the advent of ChatGPT because of the potential to redefine the way that organizations operate. Its autonomy lies in the ability of AI agents and their orchestrators (or “super-agents”) to reason, synthesize knowledge, and adaptively determine and coordinate tasks and interactions needed to achieve a goal.

The ability to reason, anticipate, generate insight and knowledge, and make better decisions is perfectly matched to life sciences, an industry that is data-rich, process-heavy, and outcome-critical. As the pharmaceutical industry’s fascination with agentic AI grows, this three-part series (access parts one and two) explores the technology’s potential in pharma R&D, ending with this examination of the critical success factors for maximizing agentic AI’s benefits.

Critical success factors

Any intention to deploy agentic AI assumes that the organization has a strategic, rather than tactical, vision for AI: one that capitalizes on the technology’s cumulative benefits across more than one use case. This in turn demands a more embedded and systematic approach to deploying the technology.

The whole point of an agentic AI system is to deliver an end goal, with the system empowered to choose how best to do so, drawing on and extrapolating from everything available to it. Agentic AI offers the benefits of autonomous reasoning and decision making, as well as continuous adaptation, in reaching defined goals. The total benefits should multiply as respective agents continue to hone what they do, based on their own deductions or new insights.

It follows, then, that the technology will need an overarching plan and appropriate mechanisms to elicit optimal, trusted benefits as the joins between workflows blur and agents creatively collaborate to deliver optimally on their goals. But what might such provisions look like?

Holistic thinking

Part two warned against a “scattergun” approach to AI deployment, arguing that companies need to think beyond contained, niche use cases in order to deliver something positively disruptive on a broader scale. Persisting with a fragmented, case-by-case deployment of AI-powered tools will limit the available benefits. Forrester talks about the risk of trapping AI agents in “walled gardens” (1).

Only by breaking down those walls will companies be able to deliver step changes in the work they do, and transform the impact of that work, as a result of combined or newly surfaced knowledge built, for instance, from previously inaccessible insights.

Applying agentic AI for broader impact will require cross-discipline conversations between relevant function leads. It will also require the input of true AI practitioners with relevant domain expertise to help develop the vision and devise a plan.

Freedom vs. control: Striking a balance

In devising the wider agenda for agentic AI, there will also need to be a thorough exploration of current and future compliance considerations to ensure that the right provisions are factored in from the beginning, spanning ethical standards and evolving regulatory obligations. Since agentic AI harnesses advanced reasoning to decide how best to fulfill goals, having guardrails to mitigate the risk of rogue behavior will be critical. This is sometimes referred to as “bounded autonomy.”

Consider the amplified risk with specialized AI agents, each with its own permissions governing what it can do and what it can access. This heightens the need for robust data governance and privacy controls, particularly as boundaries blur and agents collaborate seamlessly and adaptively, potentially in ways that are hard to predict.
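To make the idea of per-agent permissions concrete, the minimal Python sketch below shows one way an agent’s scope might be declared and checked before it touches a tool or dataset. It is illustrative only: the agent, tool, and dataset names are hypothetical, and it does not represent the API of any particular agentic framework.

```python
# Illustrative sketch only: scoping what each agent may do and read,
# independent of any specific agentic framework. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    name: str
    allowed_tools: set[str] = field(default_factory=set)      # actions the agent may take
    allowed_datasets: set[str] = field(default_factory=set)   # data the agent may read

    def authorize(self, tool: str, dataset: str | None = None) -> bool:
        """Allow an action only if both the tool and the dataset fall inside this agent's scope."""
        if tool not in self.allowed_tools:
            return False
        if dataset is not None and dataset not in self.allowed_datasets:
            return False
        return True

# Example: a hypothetical literature-triage agent may search and summarize public sources,
# but has no access to patient-level safety data.
triage_agent = AgentPolicy(
    name="literature_triage",
    allowed_tools={"search_literature", "summarize"},
    allowed_datasets={"public_abstracts"},
)

assert triage_agent.authorize("summarize", "public_abstracts")
assert not triage_agent.authorize("summarize", "patient_safety_cases")  # denied: out of scope
```

The design point is that the authorization check sits outside the agent itself, so scope decisions remain explicit and auditable even as the agents adapt how they work.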

There is a further challenge if companies move too quickly to pin down governance, however. To leave room for future use cases, they must avoid being too prescriptive and limiting. To provide for responsible orchestration of AI agents, companies will need not only a fit-for-purpose multi-agent framework (the vehicle for coordinating multiple autonomous AI agents to achieve a common goal), but also the means to ensure this happens in a compliant, transparent, and trusted way.

A plethora of multi-agent frameworks already exist to support the creation of AI systems composed of multiple autonomous agents that collaborate to solve complex tasks. Examples include open-source frameworks like AutoGPT and LangChain’s multi-agent orchestration, which coordinate multiple AI agents to work together on complex tasks. However, these frameworks focus on task decomposition and coordination; they don’t inherently manage trust, context-sensitive decision making, or risk-aware governance. Those provisions need to be designed in from the start and must adapt as the technology evolves, new use cases emerge, and regulations change.
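As a rough illustration of where such provisions might sit, the sketch below pairs a simple orchestrator (the task decomposition and routing that existing frameworks already handle) with an audit trail recorded at every delegation. It is a hedged sketch with hypothetical, toy agents, not the API of AutoGPT, LangChain, or any other framework.

```python
# Illustrative sketch, not any specific framework's API: an orchestrator that
# routes sub-tasks to agents and records an audit trail so every delegated
# step stays reviewable. All agent names and tasks are hypothetical.
from typing import Callable

Agent = Callable[[str], str]  # an agent takes a sub-task description and returns a result

def orchestrate(goal: str, plan: list[tuple[str, Agent]], audit_log: list[dict]) -> list[str]:
    """Run each (sub_task, agent) pair in order, logging what was delegated and what came back."""
    results = []
    for sub_task, agent in plan:
        output = agent(sub_task)
        audit_log.append({"goal": goal, "sub_task": sub_task,
                          "agent": agent.__name__, "output": output})
        results.append(output)
    return results

# Toy stand-ins for model-backed agents.
def search_agent(task: str) -> str:
    return f"search results for: {task}"

def summarizer_agent(task: str) -> str:
    return f"summary of: {task}"

audit: list[dict] = []
orchestrate(
    goal="assess a new safety signal",
    plan=[("gather recent case reports", search_agent),
          ("summarize findings for review", summarizer_agent)],
    audit_log=audit,
)
print(audit)  # the trail a quality or compliance team could inspect
```

In practice, the governance layer would also enforce provisions such as the permission scoping sketched earlier; the point is that coordination logic and trust provisions are separate concerns, and both must be designed in.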

At the same time, it is also important that the pursuit of static, rigid compliance does not inhibit agentic AI’s potential. This is where design considerations become crucial and where governance needs to be a facilitator as well as a controller or mitigator of risk.

Human involvement will still be very much part of an agentic AI scenario, but now, given the adaptive nature of a coordinated, autonomous, multi-agent AI ecosystem, humans and AI must both be active participants in a workflow (with the human retaining ultimate control).

Until now, when AI has been applied to specified use cases (with an emphasis on automating a defined process), active human decision making and intervention have happened at designated points (human-in-the-loop decision making). In a more expansive, autonomous, and adaptive AI environment, the emphasis shifts toward overall supervision (human-on-the-loop quality control). Here, a human expert enters the picture only when certain conditions arise. This gives the AI agents sufficient freedom to find better ways of executing workloads to fulfill their goals, but without the risk of them overreaching when they encounter a complex scenario not previously seen. Determining what is appropriate is a matter for each industry and each organization.
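A minimal sketch of that human-on-the-loop pattern might look like the following, assuming a hypothetical agent step that reports its own confidence and flags unfamiliar inputs; the threshold and escalation criteria are placeholders that each organization would set for itself.

```python
# Illustrative sketch of human-on-the-loop supervision: routine results pass
# through automatically, while low-confidence or novel cases are escalated to
# a human reviewer. The confidence score and novelty flag are assumed inputs.
from dataclasses import dataclass

@dataclass
class StepResult:
    output: str
    confidence: float       # 0.0-1.0, as reported by the agent (assumption)
    novel_scenario: bool    # flagged when inputs fall outside known patterns (assumption)

CONFIDENCE_THRESHOLD = 0.8  # placeholder; set per organization and per risk level

def supervise(result: StepResult) -> str:
    """Accept routine results automatically; escalate edge cases for human review."""
    if result.novel_scenario or result.confidence < CONFIDENCE_THRESHOLD:
        return f"ESCALATE to human reviewer: {result.output}"
    return f"ACCEPT: {result.output}"

print(supervise(StepResult("routine classification", confidence=0.95, novel_scenario=False)))
print(supervise(StepResult("unfamiliar case pattern", confidence=0.60, novel_scenario=True)))
```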

Guiding principles

There is a lot to get right. If accounting for all of these parameters creates too much complexity, companies risk undermining any economic benefits. The ultimate aspiration would be for organizations to build or deploy at least 80% of the core capability globally, in a form that is compliant for all applications and in all geographies, both now and in the future (e.g., as technology and regulations continue to evolve). This is likely to involve taking a principles-based approach, rather than one that is tightly coupled to specifics.

Such an approach is currently being developed by a Council for International Organizations of Medical Sciences (CIOMS) working group in the context of AI in pharmacovigilance (2). Its comprehensive draft report, recently submitted for industry consultation, adopts an ethics-based risk management stance, designed to create a common foundation for regulators, industry, and technology providers that can keep up with the unprecedented pace of technological advancement underway in AI. The CIOMS report signals that the age of AI-driven pharmacovigilance has arrived.

The perspective being advocated expands from a risk management approach to cover human oversight, validity and robustness, transparency, data privacy, fairness and equity, governance and accountability, and future considerations. This is not about starting with governance and trying to pin it down as something static to be documented and left on a shelf. Rather, it encourages pharma companies to work up scenarios and goals that agentic AI can help address and then apply systemic thinking and service design principles. Ideally, this would start with developing journey maps, plotting out an interconnected system of who is triggering what, when, and why.

An “AI-first” perspective must also encompass considerations such as the degrees of freedom that agents should be afforded in pursuit of their respective goals. A human-centered design approach could prove invaluable here, returning the emphasis to what it will take for teams to trust an agent’s autonomous pursuit of a goal, for instance. Once fully understood, those considerations could be “baked into” the journey design’s provision for human involvement.

As pharma companies look more deeply into agentic AI, they will need to work with their technology providers or advisors to ensure that all these dimensions are addressed appropriately. Under those conditions, the prospect of unlocking tangible return on investment from multi-agent AI systems looks promising.

References

1. Joseph, L. and Curran, R. Interoperability Is Key to Unlocking Agentic AI’s Future. Forrester.com, March 25, 2025 (accessed Sept. 25, 2025).
2. CIOMS Working Group. Artificial Intelligence in Pharmacovigilance (Draft Report). CIOMS, May 2025.

About the author

Jason Bryant is vice-president, Product Management for AI & Data at ArisGlobal.
