What’s Reasonable to Expect from Agentic AI in Pharma? Part Two: Agentic AI’s Adoption in Life Sciences

Key Takeaways

  • Agentic AI enables autonomous, goal-driven coordination of AI agents, enhancing decision-making in data-rich industries like life sciences.
  • The technology allows AI agents to reason, adapt, and synthesize knowledge, improving outcomes in pharmaceuticals.

In this continuation of a three-part series, the author explores the potential use of agentic AI in pharmaceutical R&D.

AI agent and agentic workflows concept | Image Credit: © Deemerwha studio - stock.adobe.com

Agentic artificial intelligence (AI)—the autonomous coordination of goal-driven AI “agents”—is arguably the most significant shift in AI since the advent of ChatGPT because of the potential to redefine the way that organizations operate. Its autonomy lies in the ability of AI agents and their orchestrators (or “super-agents”) to reason, synthesize knowledge, and adaptively determine and coordinate the tasks and interactions needed to achieve a goal.

The ability to reason, anticipate, generate insight and knowledge, and make better decisions is perfectly matched to life sciences, an industry that is data-rich, process-heavy, and outcome-critical. As the pharmaceutical industry’s fascination with agentic AI grows, this three-part series (access part one) explores the technology’s potential in pharma R&D, with this middle installment assessing both the big-picture opportunity and emerging early use cases.

The potential for agentic AI in life sciences

One of the misconceptions about agentic AI is that the collaborative potential of multiple specialist AI agents is equivalent to having a pool of “digital teammates.” This misconception misses the technology’s bigger-picture scope, which has everything to do with each agent’s autonomy in achieving its task in the best way possible, and its propensity to keep adapting and honing this skill.

It is important to note that, although autonomy implies adaptability, there is critical nuance here. An agentic AI system doesn’t just choose a plan of action and execute; it can change its mind and pivot both its reasoning and actions. That could be as a result of encountering new data or evaluating new outcomes, or it could just be through thinking differently within a chain of thought.

Crucially, in agentic AI, each contributing part of the overall solution is goal-driven, rather than pre-programmed to complete a particular task in a defined way. All of the combined reasoning (as each agent fulfills its goal in the most effective and efficient way) means that the actual execution may vary from one occasion to the next. Within even a short time, this adaptation should lead to tangibly improved performance and better outcomes.
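To make the idea of adaptive, goal-driven execution more concrete, the following is a minimal, purely illustrative Python sketch of a plan-act-observe-replan loop. The goal, the plan steps, and the “new data” trigger are hypothetical placeholders; a real agentic system would rely on a large language model’s reasoning to generate and revise plans rather than hard-coded rules.

```python
import random


def propose_plan(goal: str, context: list[str]) -> list[str]:
    """Stand-in for an LLM planner: derive steps from the goal and current context."""
    steps = [
        f"Gather sources for: {goal}",
        f"Analyze findings for: {goal}",
        f"Draft output for: {goal}",
    ]
    if "new_safety_data" in context:
        # The agent changes its mind: new information reshapes the plan.
        steps.insert(1, "Re-assess scope against newly available safety data")
    return steps


def execute(step: str) -> str:
    """Stand-in for tool use; occasionally surfaces new data that forces a re-plan."""
    return "new_safety_data" if random.random() < 0.3 else "ok"


def run_agent(goal: str, max_passes: int = 3) -> list[str]:
    """Plan-act-observe loop: the agent revises its plan when outcomes change the context."""
    context: list[str] = []
    completed: list[str] = []
    for _ in range(max_passes):
        plan = propose_plan(goal, context)
        replanned = False
        for step in plan:
            if step in completed:
                continue
            outcome = execute(step)
            completed.append(step)
            if outcome != "ok" and outcome not in context:
                context.append(outcome)  # New information triggers a re-plan
                replanned = True
                break
        if not replanned:
            break
    return completed


if __name__ == "__main__":
    for step in run_agent("evaluate repurposing candidates for liver fibrosis"):
        print("-", step)
```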

Autonomy and adaptation in action

Google’s AI co-scientist has received significant attention in medical and medicines-related contexts (1). That attention reflects the system’s ability to demonstrate the higher potential of giving intelligent agents greater freedom: not only to find new answers to big questions within vast pools of existing research, but also to identify the scope for, and propose, further research goals. This ability stems from the fact that the underlying large language models are now strong at mathematical reasoning, meaning that advanced AI systems can extrapolate beyond existing sources.

Google’s multi-agent system harnesses AI’s ability to synthesize information and perform complex reasoning tasks to help scientists create novel hypotheses and research proposals using natural language. This capability promises to substantially accelerate and hone scientific discovery (2). Similar leaps are expected in chemical and biological reasoning. The system has wide potential in biomedical research and is being tested to identify new drug repurposing opportunities (e.g., in the context of human liver fibrosis) (3).

But the vision doesn’t have to be that ambitious to have an important impact. Agentic AI could also enable step changes in the everyday processes that contribute to bringing drugs to market, and to monitoring and improving their safety over time. Discrete AI-powered solutions have already highlighted the scope for greatly enhanced efficiency and accuracy in product change control/regulatory impact assessment and in adverse event case processing in pharmacovigilance. Agentic AI can multiply and amplify those gains, provided there is a strong multi-agent framework to weave together and orchestrate the respective agents in a coherent way for each goal-driven scenario (orchestration is discussed in more detail in the final article in this series).

Multiplying existing AI successes

It is also important to consider what has already been achieved with AI at a functional level in current pharma R&D processes, and how much further this could go.

One example is a pharmacovigilance scenario triggered by a safety signal that points to an unusual adverse event pattern. In an agentic AI context, a signal agent would detect and flag that anomaly, as happens already in AI-enabled systems. But now, that alert would immediately invoke the wider workflow, sparking next actions such as ensuring that the signal is validated and processed correctly, in line with the requirements and timescales of the relevant authorities. A longitudinal agent, meanwhile, could accumulate, match, and derive signal patterns over time.

The agentic system might further extend its insights and actions, too. For instance, it could instantly extrapolate the additional work needed to generate documents, which would fulfill the reporting needs linked to the safety event. This early calculation could inform preemptive adjustments needed to free up capacity, based on where and when the spike in demand is predicted to occur.
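As a purely conceptual illustration of this pharmacovigilance hand-off, the minimal Python sketch below wires a signal agent, a validation agent, and a longitudinal agent together under a simple orchestrator. All class names, thresholds, and actions are illustrative assumptions rather than any specific pharmacovigilance platform, and a production system would replace the fixed rules with LLM-backed reasoning and orchestration.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class SafetySignal:
    """A simplified adverse event signal (illustrative fields only)."""
    product: str
    event_term: str
    observed_cases: int
    expected_cases: float
    detected_on: date


class SignalAgent:
    """Flags disproportionate adverse event patterns (placeholder logic)."""

    def detect(self, signal: SafetySignal) -> bool:
        # Illustrative threshold: flag if observed cases are well above expected.
        return signal.observed_cases > 2 * signal.expected_cases


class ValidationAgent:
    """Plans validation and processing in line with authority timelines."""

    def plan(self, signal: SafetySignal) -> list[str]:
        return [
            f"Validate signal for {signal.product} / {signal.event_term}",
            "Confirm case completeness and coding",
            "Schedule health authority notification within the required timeline",
        ]


class LongitudinalAgent:
    """Accumulates signals over time to surface recurring patterns."""

    def __init__(self) -> None:
        self.history: list[SafetySignal] = []

    def record(self, signal: SafetySignal) -> int:
        self.history.append(signal)
        return sum(1 for s in self.history if s.event_term == signal.event_term)


class Orchestrator:
    """Coordinates the agents once a signal is flagged."""

    def __init__(self) -> None:
        self.signal_agent = SignalAgent()
        self.validation_agent = ValidationAgent()
        self.longitudinal_agent = LongitudinalAgent()

    def handle(self, signal: SafetySignal) -> list[str]:
        if not self.signal_agent.detect(signal):
            return ["No anomaly detected; continue routine monitoring"]
        actions = self.validation_agent.plan(signal)
        recurrences = self.longitudinal_agent.record(signal)
        # Early capacity estimate: anticipate where the reporting workload will land.
        actions.append(f"Forecast case-processing capacity (pattern seen {recurrences}x)")
        return actions


if __name__ == "__main__":
    orchestrator = Orchestrator()
    signal = SafetySignal("Product X", "hepatic enzyme increase", 14, 4.0, date.today())
    for action in orchestrator.handle(signal):
        print("-", action)
```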

Maximizing AI’s scope via agent interoperability and coordination

Rather than existing as a series of passive, standalone software tools awaiting individual instruction, an agentic, or multi-agent, AI system is a more dynamic setup in which interconnected agents are primed and ready to act as part of a wider ecosystem they each understand they belong to. Once its goal has been assigned, each agent would make its own assessment and autonomously deliver what is required of it, with appropriate leeway (freedom to reason) to determine the best way to do so. This reasoning should encompass any source referencing or cross-checking the agent may need to perform, as well as collaboration with other agents where appropriate.

Ultimately, this is about elevating AI from an application-specific executor of individual tasks, or solver of discrete problems, to a proactive deducer of what needs to happen and of how to address a wider scenario more holistically and effectively. In turn, this supports data-driven decision making: advanced exploration of “what if” scenarios, evaluation of trade-offs and consequences, testing of interventions, modeling of downstream implications, distillation of recommendations and their implications, and triggered actions to close the loop.

Extending AI’s reach across safety and regulatory scenarios

In a safety and regulatory context, there is potential for lateral extension from the current use of a specialist AI tool to streamline Medical Dictionary for Regulatory Activities (MedDRA) coding of adverse events. Already, AI has helped boost efficiency and accuracy in classifying adverse event data to conform to the internationally standardized terminology. But what if this AI-powered activity could become part of an extended workflow, coordinated by an AI orchestrator or overseeing “super-agent”? That step might be achieved simply by introducing additional reference cross-checks. Or it could mean delivering the broader, more seamless end-to-end pharmacovigilance scenario described earlier in this article, exploiting wider opportunities to streamline processes and expedite next actions.
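As a loose sketch of that lateral extension, the snippet below wraps an existing coding step as one agent among several under a coordinating “super-agent” that adds a reference cross-check and a downstream action. The keyword-matching “coder” and the function names are hypothetical; real MedDRA coding depends on the licensed terminology and validated tooling, not placeholder logic like this.

```python
from typing import Callable

# Each "agent" is modeled here as a simple callable that enriches a shared
# case record; a real system would use reasoning LLM agents, not plain functions.
Agent = Callable[[dict], dict]


def coding_agent(case: dict) -> dict:
    """Placeholder for an existing specialist coding step (illustrative only)."""
    term = case["reported_term"].lower()
    case["coded_term"] = "Hepatic enzyme increased" if "liver enzyme" in term else "Unclassified"
    return case


def cross_check_agent(case: dict) -> dict:
    """Added reference cross-check layered on top of the existing step."""
    case["cross_checked"] = case["coded_term"] != "Unclassified"
    return case


def reporting_agent(case: dict) -> dict:
    """Drafts the next action once coding and checks are complete."""
    case["next_action"] = (
        "Prepare expedited report" if case["cross_checked"] else "Route for manual review"
    )
    return case


def super_agent(case: dict, pipeline: list[Agent]) -> dict:
    """Coordinates the agents for one goal-driven scenario."""
    for agent in pipeline:
        case = agent(case)
    return case


if __name__ == "__main__":
    case = {"case_id": "HYPOTHETICAL-001", "reported_term": "Elevated liver enzymes"}
    result = super_agent(case, [coding_agent, cross_check_agent, reporting_agent])
    print(result)
```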

Similarly, in a regulatory context, the next level of AI-enabled process enhancement should be to join the dots and multiply the gains along wider workflows. In this way, sizable challenges, such as maintaining a product’s regulatory compliance around the world, become less friction-laden across all of the various sub-tasks. The broader scenario that agentic AI could help streamline might extend from product change control alerts and regulatory impact assessment to processing data and document updates and completing timely health authority notifications, all while optimizing international resource planning across that process.

It is with such aspirations in mind that pharma R&D function leads must collectively apply creative vision as they expand their ambitions for AI.

From discrete problem-solving to end-to-end process reinvention

The challenge now is for pharma organizations to think beyond contained, niche use cases to consider how use of a coordinated, autonomous, multi-agent AI ecosystem could deliver something positively disruptive at scale.

The aim should be to avoid a “scattergun” approach to AI deployment, in favor of something much more strategic and “transformational” in the true sense. The risk in the scattergun scenario is that macros (sequences of action) for each respective AI use case are developed in isolation, remaining disconnected and restricting the opportunity for additive benefits.

Delivering the fuller benefits of agentic AI will require vision and strategic intent. From a compliance and trust perspective, it will demand comprehensive consideration of ethical standards, and of regulatory obligations. But this is not just about adapting AI’s safeguards. Design choices will also dictate the extent to which the technology’s use can be scaled, unlocking its fuller potential. When it comes to human oversight, for instance, agentic AI’s autonomy and adaptability will require that this moves from a fixed-point review toward a more dynamic approach (e.g., trigger-based intervention).
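As an illustrative sketch only, the snippet below shows what trigger-based oversight could look like in practice: an action is escalated to a human reviewer only when a hypothetical risk score crosses a governance-defined threshold, rather than pausing at a fixed review point. The scoring and the threshold are assumptions for illustration.

```python
from dataclasses import dataclass


@dataclass
class AgentAction:
    description: str
    risk_score: float  # Hypothetical 0-1 score from the agent's self-assessment


RISK_THRESHOLD = 0.7  # Illustrative escalation threshold, set by governance policy


def execute_with_oversight(action: AgentAction) -> str:
    """Trigger-based oversight: escalate only when risk crosses the threshold."""
    if action.risk_score >= RISK_THRESHOLD:
        return f"ESCALATED to human reviewer: {action.description}"
    return f"Executed autonomously (logged for audit): {action.description}"


if __name__ == "__main__":
    actions = [
        AgentAction("Update safety narrative draft", risk_score=0.2),
        AgentAction("Submit expedited report to health authority", risk_score=0.9),
    ]
    for a in actions:
        print(execute_with_oversight(a))
```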

The final article in this series will consider the prerequisites for optimized agentic AI use across extended late-stage pharma R&D workflows. It will also look more closely at multi-agent frameworks and how to reframe governance in the light of agentic developments.

References

1. Gottweis, J.; Natarajan, V. Accelerating Scientific Breakthroughs with AI Co-Scientist. Research.google. Feb. 19, 2025 (accessed Sept. 25, 2025).
2. Matias, Y. How We're Using AI to Drive Scientific Research with Greater Real-World Benefit. Blog.google. May 8, 2025 (accessed Sept. 25, 2025).
3. Guan, Y.; Inchai, J.; Fang, Z.; et al. AI-Assisted Drug Re-Purposing for Human Liver Fibrosis. bioRxiv 2025, preprint. DOI: 10.1101/2025.04.29.651320

About the author

Jason Bryant is vice-president, Product Management for AI & Data, at ArisGlobal.
