News|Articles|October 29, 2025

Best Practices for AI Adoption in Pharmaceutical Research


Key Takeaways

  • AI and accelerated computing are transforming the bio/pharmaceutical sector by addressing organizational, technological, and cultural challenges.
  • Successful AI integration requires overcoming data silos, ensuring data quality, and maintaining regulatory compliance.

Eva-Maria Hempe of NVIDIA says AI platforms must integrate R&D data and overcome silos, and that adoption demands a centralized strategy and change management.

In an interview regarding the presentation “The State of AI in Next-Generation R&D” at CPHI Europe 2025, held Oct 28-30 in Frankfurt, Germany, Eva-Maria Hempe, head of Healthcare & Life Sciences, NVIDIA, discusses the pivotal role of artificial intelligence (AI) and accelerated computing in transforming the bio/pharmaceutical ecosystem. She explains that successful integration of AI within the bio/pharmaceutical sector requires addressing organizational, technological, and cultural challenges, and she highlights the strategic necessity of adopting AI while maintaining critical human oversight and addressing geopolitical concerns surrounding data sovereignty.

PharmTech: How are leading organizations overcoming the challenges posed by siloed and unstructured data to unlock the full potential of AI in R&D?

Eva-Maria Hempe: The management of siloed or proprietary data is a major competitive advantage in this field. Leading organizations are tackling this challenge by implementing advanced AI platforms designed to integrate, process, and validate data.

These platforms support the centralization and harmonization of disparate data sources without requiring everything to be repackaged. On top of these core platforms, specialized tools are offered, such as Parabricks for genome sequencing, MONAI for medical imaging, and BioNeMo for biological language models. Additionally, toolkits for agentic AI are employed. These tools accelerate data analysis, significantly reduce manual input, and enable researchers to gain rich insights from complex, unstructured datasets, which are an inherent and rich feature of this field.

What strategies are most effective in ensuring data quality, integrity, and regulatory compliance for AI-driven research?

Data quality and integrity are paramount, as the old proverb "garbage in, garbage out" holds true. Effective strategies center on enforcing compliance across the entire data lifecycle and establishing a robust data governance framework.

Organizations are rolling out concepts and frameworks like ALCOA (attributable, legible, contemporaneous, original, accurate). Best practices also include implementing the right processes for audits, training, and automated validation routines. AI platforms assist with this by allowing the tracing of data provenance and ensuring safe model development practices, which serve as the foundation for broader behavioral aspects of compliance.
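To make the ALCOA principles concrete, the sketch below shows what a single ALCOA-aligned data-capture record might look like in code. The class and field names are hypothetical, not taken from any real system; the point is that each ALCOA attribute maps to a captured field, and the raw-source fingerprint supports data provenance tracing.

```python
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AlcoaRecord:
    """Illustrative ALCOA-aligned capture record (hypothetical fields)."""
    operator_id: str   # Attributable: who generated the data
    value: str         # Legible: stored as readable text
    captured_at: str   # Contemporaneous: timestamped at capture time
    raw_sha256: str    # Original: fingerprint of the raw source data
    validated: bool    # Accurate: passed automated validation routines

def make_record(operator_id: str, value: str,
                raw_bytes: bytes, validated: bool) -> AlcoaRecord:
    """Build an immutable record; the hash ties it back to the raw source."""
    return AlcoaRecord(
        operator_id=operator_id,
        value=value,
        captured_at=datetime.now(timezone.utc).isoformat(),
        raw_sha256=hashlib.sha256(raw_bytes).hexdigest(),
        validated=validated,
    )

rec = make_record("analyst-007", "assay OD600 = 0.42",
                  b"raw instrument dump", validated=True)
```

Freezing the record (`frozen=True`) means it cannot be silently altered after capture, which mirrors the audit-trail intent of ALCOA.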

Where are the greatest opportunities for AI to enable more connected and adaptive R&D workflows?

The greatest opportunity lies in what is referred to as the "lab in the loop" or bridging the wet lab and the dry lab. This initiative presents AI challenges on both sides.

In the wet lab, the focus is on automation to ensure data quality. This involves capturing all process data in a harmonized form, including the necessary high-quality metadata, and establishing repeatable workflows, such as automating the transfer of plates between different machines. This high-quality, automated wet lab data then provides the basis for better predictions regarding which experiments should be run.

By closing the loop—wet lab validation providing good quality data which feeds models to predict the next experiments—the R&D process can be significantly accelerated. This systematic, data-driven approach is essential because the chemical space is immense—larger than the number of sand grains in the universe—and researchers previously only explored areas they were comfortable with.
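The closed loop described above can be sketched as a toy design-make-test cycle: a surrogate proposes the next experiment, a simulated "wet lab" oracle returns a measurement, and the result feeds back into the next proposal. The candidate space, oracle, and greedy proposal rule are all invented for illustration; real systems use far richer surrogate models.

```python
def oracle(x: float) -> float:
    """Stand-in for a wet-lab assay: the true (unknown) response surface."""
    return -(x - 3.0) ** 2

def propose(observed: dict, candidates: list) -> float:
    """Greedy surrogate: test the untried candidate closest to the best hit."""
    untested = [c for c in candidates if c not in observed]
    best_x = max(observed, key=observed.get)
    return min(untested, key=lambda c: abs(c - best_x))

candidates = [float(x) for x in range(7)]  # candidate "molecules" 0.0 .. 6.0
observed = {0.0: oracle(0.0)}              # one seed experiment
for _ in range(4):                         # four loop iterations
    nxt = propose(observed, candidates)
    observed[nxt] = oracle(nxt)            # "run" the experiment

best = max(observed, key=observed.get)     # best candidate found so far
```

Even this greedy toy converges on the optimum (x = 3.0) after a handful of rounds, illustrating why feeding validated wet-lab data back into prediction beats exploring only familiar regions.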

Another major opportunity involves foundation models, such as Evo 2. These models allow researchers to dive deeply into genetic mutations at scale, predict their functional impact, enable better candidate selection, and ultimately drive cross-disciplinary collaboration.

What key technical or organizational factors are holding back wider AI adoption among biopharma companies?

Beyond data silos, adoption is hampered primarily by organizational issues. Key barriers include:

• Organizational silos

• Legacy IT systems

• Fragmented AI strategies

• Underestimated change management

• GPU scarcity (a lack of sufficient compute resources)

Fragmented AI strategies contribute to GPU scarcity, as organizations lack a central overview of the infrastructure needed to run their most important AI projects. To address these barriers, professionals are advised to:

1. Establish a centralized AI strategy: AI must be positioned as a core business function that permeates the entire value chain, rather than just an add-on. Use cases are relevant across research, development, manufacturing, and commercial activities.

2. Balance projects: Organizations should balance ambitious long-term goals (like making drug discovery an intentional design rather than a serendipitous one) with projects that yield "quick wins". Quick wins, while perhaps less "sexy," provide immediate impact and help sustain momentum for more complex initiatives. Examples of quick wins include automated filing for regulatory processes, clinical writing, and improved recruitment for clinical trials.

3. Drive change management: This involves educating both staff and leadership. It is crucial to understand that AI is simultaneously incredibly powerful and incredibly non-powerful, meaning users must be able to judge the outputs effectively.

4. Fix processes before automating: If broken processes are digitized, standardized, or automated, the result is simply broken automated processes. Process improvements are a prerequisite for fully leveraging AI.

Addressing technical challenges like data interoperability, standardized ontologies, and compliance is also critical, and frameworks are emerging to help in these areas.

How do you envision the balance between AI-driven automation and human oversight evolving, especially for complex or customized experimentation in biologics development?

The future is seen as highly collaborative, where AI-driven automation handles routine tasks while human expertise remains essential. There will always be a human, or the experiment itself, in the loop, providing validation and feedback on the right direction.

AI-driven automation is ideal for:

• Routine parts of the process.

• Data mining and crawling through vast amounts of data.

• Early candidate selection and literature review.

• Finding signals across more papers than a human could manage.

However, human oversight and expertise will remain crucial for:

• Interpreting complex results.

• Coming up with experimental strategies.

• Ensuring ethical standards.

• Making bold bets or applying creativity.

Crucially, humans must be aware of AI's limitations, particularly in areas with scarce data. Since the goal is often to explore new biological concepts, these areas are, by definition, where less data exists, meaning AI cannot be relied upon in the same way as in well-researched domains.

What else should bio/pharma industry professionals consider about the adoption of AI?

Professionals should consider the element of sovereignty. Due to large geopolitical developments, organizations are increasingly concerned about interdependency and data control. For example, in Europe, efforts are underway to help local pharmaceutical companies build sovereign AI models. This allows them to secure sensitive data without compromising their ability to perform large-scale biological modeling.

Europe has the opportunity to lead in trusted, ethical, and sovereign AI for healthcare and life sciences, relying on strong regulatory frameworks and growing AI infrastructure. Although the share of AI compute in Europe is currently low (around 4%), this is changing due to various public and public-private initiatives aimed at creating AI factories and AI data factories to convert unstructured information into actionable insights at scale.

What should bio/pharma industry professionals know about AI agents?

"AI agents" is a current buzzword; the term refers to a reasoning model equipped with memory and tools. Agents are useful for tackling multi-step tasks.

For example, an AI agent given access to a literature database (like PubMed) can execute a sequence of actions:

1. Turn a given question into a literature search.

2. Analyze the results of the literature search.

3. Turn the findings into hypotheses.

4. Potentially link to a virtual screening workflow to create and explore variants of standard-of-care molecules after identifying the target protein.

5. Sum up all findings into a comprehensive report for the researcher.
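The steps above can be sketched as a minimal agent pipeline: each tool is a function, results accumulate in a shared memory, and the final step writes a report. The tool implementations here are stubs of my own invention; a real agent would call a literature API such as PubMed and a virtual screening service.

```python
def search_literature(question: str) -> list:
    """Step 1 (stub): turn a question into literature search results."""
    return [f"paper on {question} #{i}" for i in range(3)]

def analyze(papers: list) -> list:
    """Steps 2-3 (stub): analyze results and derive hypotheses."""
    return [f"hypothesis derived from {p}" for p in papers]

def write_report(memory: dict) -> str:
    """Step 5 (stub): summarize everything gathered in memory."""
    return f"{len(memory['papers'])} papers, {len(memory['hypotheses'])} hypotheses"

def run_agent(question: str) -> str:
    """Run the tool sequence, accumulating results in a memory dict."""
    memory = {"question": question}
    memory["papers"] = search_literature(question)
    memory["hypotheses"] = analyze(memory["papers"])
    # Step 4 (linking to a virtual screening workflow) is omitted in this stub.
    return write_report(memory)

report = run_agent("kinase inhibitor resistance")
```

The essential agentic pattern is the shared memory threading state between tool calls; swapping the stubs for real APIs changes the tools, not the loop.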

In the future, the vision is that AI agents could potentially link to an automated lab, running experiments autonomously, perhaps with a human performing a final oversight step to prevent the ordering of overly expensive reagents or compounds.
