PDA 2025: Strategic AI Adoption in Bio/Pharma Manufacturing

Key Takeaways

  • AI integration in biopharma requires strategic foundations, including continuous learning, robust governance, and risk management to maintain GxP compliance.
  • Comprehensive training and clear SOPs are crucial for qualifying and validating AI systems, ensuring staff can articulate AI processes during regulatory reviews.

AI can offer a strategic blueprint for GxP compliance, risk mitigation, and human-led operational excellence.

Hand touching telecommunication network and wireless mobile internet technology with 5G LTE data connection of global business, fintech, blockchain. | Image Credit: © greenbutterfly - stock.adobe.com

Editor's note: this story was originally published on BioPharmInternational.com.

With the bio/pharmaceutical industry undergoing a significant digital shift, artificial intelligence (AI) is playing an increasingly vital role in achieving operational excellence and bolstering quality maturity. A strategic and well-governed approach to AI adoption is not merely an advantage but a necessity for enhancing efficiency while maintaining rigorous good practice (GxP) compliance, said Vinny Browning, executive director of Quality Assurance, Amgen, who spoke at the Parenteral Drug Association (PDA) Regulatory Conference 2025, held Sept. 8–10 in Washington, DC.

In his talk, “Digitization (Artificial Intelligence) for In-House Operational Excellence,” Browning outlined a framework for integrating AI into bio/pharmaceutical manufacturing operations, focusing on strategic foundations, quality management system (QMS) integration, robust risk management, and practical applications (1).

What strategic foundations are necessary for AI adoption in regulated manufacturing?

A fundamental aspect of AI integration is a commitment to continuous learning and adaptation, treating evolving AI guidance like any critical company update, Browning stated in his talk. Companies must critically assess each AI application, distinguishing whether it serves as a job aid—assisting human tasks—or operates autonomously. This distinction profoundly influences the approach to system qualification and validation.

A manufacturer’s strategy for qualifying and validating AI systems must be meticulously documented within standard operating procedures (SOPs) to establish a clear, standard process, Browning emphasized. Crucially, comprehensive training is essential, not just for end-users to grasp basic functionalities, but also for staff who may need to articulate the company’s specific AI processes to auditors or inspectors during regulatory reviews. Browning noted that practicing these explanations will help ensure clarity and confidence in the industry’s digital endeavors.

Effective integration also demands consideration of AI's architectural fit and robust governance within a company’s QMS. Data integrity is paramount, and inconsistencies in data naming or definitions across systems, such as “time” versus “TM”, can undermine AI effectiveness, explained Browning. A clear gating process is necessary, acknowledging that an AI activity initially deemed non-good manufacturing practice (GMP) adherent might later become GMP-relevant, thus requiring specific controls based on identified risks. Continuous monitoring and assessment of AI tools are also essential, he added, with adjustments made if risks or guidelines change, mirroring the approach to computer system software assurance. The broader bio/pharma landscape already shows AI's promise, with initiatives exploring its use in gene writing, CAR-T preclinical development, vaccine research, and circular RNA therapeutics, highlighting its transformative potential across the industry (2,3).
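As an illustration only (this sketch is not drawn from Browning's presentation), a data-harmonization step of the kind implied above might normalize field aliases such as "time" and "TM" to a single canonical name before records reach an AI tool. The alias map and field names below are hypothetical:

```python
# A minimal sketch of harmonizing field names across systems before the data
# reaches an AI tool. The alias map and field names here are hypothetical.
CANONICAL_FIELDS = {
    "time": "timestamp",
    "tm": "timestamp",
    "batch_no": "batch_id",
    "lot": "batch_id",
}


def harmonize(record: dict) -> dict:
    """Rename known aliases to their canonical field names; leave others as-is."""
    return {CANONICAL_FIELDS.get(key.lower(), key): value for key, value in record.items()}


print(harmonize({"TM": "2025-09-08T10:00", "lot": "B123"}))
# {'timestamp': '2025-09-08T10:00', 'batch_id': 'B123'}
```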

What risk management and practical applications can be applied to ensure manufacturing quality?

Any new AI idea or system intake must undergo a structured risk management framework. This includes defining the idea, conducting a thorough system-level risk assessment that covers security (access control, third-party involvement) and architectural setup (on-site, cloud, remote), and selecting appropriate controls to mitigate identified risks. These decisions should be documented in a risk registry. Continuous monitoring of the deployed tool is critical; any change in scope or emergence of new risks necessitates re-evaluation and remediation. The ultimate goal is to ensure AI systems are safe, secure, resilient, expandable, integratable, privacy-enhanced, fair (with managed bias), accountable, and transparent, Browning said.
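To make the registry idea concrete, the purely illustrative sketch below (not from the talk) models one risk-registry entry as a simple record capturing the idea definition, the system-level risk assessment, the selected controls, and a trigger for re-evaluation. All class names, fields, and values are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class RiskRegistryEntry:
    """One entry in a hypothetical risk registry for a proposed AI tool."""
    idea_description: str                 # what the AI tool is meant to do
    gxp_relevant: bool                    # is the activity GMP/GxP relevant today?
    architecture: str                     # e.g., "on-site", "cloud", "remote"
    security_risks: dict[str, RiskLevel]  # e.g., access control, third-party involvement
    selected_controls: list[str] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)

    def needs_reevaluation(self, scope_changed: bool, new_risk_found: bool) -> bool:
        """Flag the entry for re-assessment if scope changes or a new risk emerges."""
        return scope_changed or new_risk_found


# Example: registering a supplier-monitoring assistant and checking whether a
# change in scope would trigger a fresh risk assessment.
entry = RiskRegistryEntry(
    idea_description="Draft supplier performance summaries from QMS data",
    gxp_relevant=False,
    architecture="cloud",
    security_risks={"access_control": RiskLevel.MEDIUM,
                    "third_party_involvement": RiskLevel.HIGH},
    selected_controls=["role-based access", "vendor qualification review"],
)
print(entry.needs_reevaluation(scope_changed=True, new_risk_found=False))  # True
```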

In addition, AI offers significant potential to pull insights and facilitate faster decision making by aggregating data from multiple systems, which drastically reduces the manual effort traditionally required. While specific examples include regulatory surveillance and supplier monitoring, the principle of automating analysis and identifying gaps can be directly applied to manufacturing operations. For instance, Browning explained, AI could streamline annual product reviews by compiling data on deviations, complaints, and change controls, quickly identifying critical issues. Similarly, for periodic reporting, AI can pull information from various systems to create first drafts of reports, such as those related to supplier performance.
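The hypothetical sketch below shows what such aggregation might look like: it compiles deviation, complaint, and change-control counts from separate data extracts into a first-draft annual product review summary. The data sources, column names, and figures are invented for illustration, and, as noted in the next paragraph, any such draft would still require human verification:

```python
import pandas as pd

# Hypothetical extracts from separate quality systems; in practice these would
# come from the deviation, complaint, and change-control databases.
deviations = pd.DataFrame({"product": ["A", "A", "B"], "critical": [True, False, False]})
complaints = pd.DataFrame({"product": ["A", "B", "B"], "critical": [False, False, True]})
changes = pd.DataFrame({"product": ["A", "B"], "critical": [False, False]})

sources = {"deviations": deviations, "complaints": complaints, "change_controls": changes}


def draft_apr_summary(product: str) -> dict:
    """Compile a first-draft annual product review summary for one product.

    The output is only a starting point: a person must verify every figure
    before it is used in any GxP record.
    """
    summary = {"product": product}
    for name, df in sources.items():
        records = df[df["product"] == product]
        summary[f"{name}_total"] = int(len(records))
        summary[f"{name}_critical"] = int(records["critical"].sum())
    return summary


print(draft_apr_summary("A"))
# {'product': 'A', 'deviations_total': 2, 'deviations_critical': 1,
#  'complaints_total': 1, 'complaints_critical': 0,
#  'change_controls_total': 1, 'change_controls_critical': 0}
```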

These applications create considerable efficiencies, saving numerous hours of manual work, Browning pointed out. He noted, however, that it is imperative that all AI-generated information be human-verified. AI can serve as a powerful tool to support individuals, but it is not meant to replace the essential human oversight for verification and compliance. This "human in the loop" rigor is critical to remain compliant and ensure quality, Browning said.

Browning concluded that there are three core pillars on which the strategic integration of AI into bio/pharma manufacturing quality hinges: training, governance, and oversight. He stressed that investing in staff training, establishing robust governance for data integrity and process assessment, and maintaining stringent human quality oversight are non-negotiable.

As the industry embraces digital transformation, Browning noted the prudence of piloting initiatives rather than attempting to implement AI all at once. Taking it slow allows for adaptability and the willingness to adjust strategies if initial trials are not successful. By meticulously following these principles, the industry can better harness AI to drive operational excellence, enhance quality, and ensure sustained compliance in drug manufacturing endeavors.

References

1. Browning, V. Digitization (Artificial Intelligence) for In-House Operational Excellence. Presentation at the PDA Regulatory Conference 2025, Washington, DC, Sept. 8–10, 2025. https://80a7ba3d04f8b71aa576-301909dc4570c350a1649a6d39e3ef3b.ssl.cf1.rackcdn.com//3180120-1718426-001.pdf
2. Dixit, S.; Kumar, A.; Srinivasan, K.; et al. Advancing Genome Editing with Artificial Intelligence: Opportunities, Challenges, and Future Directions. Front. Bioeng. Biotechnol. 2024, 11, 1335901. DOI: 10.3389/fbioe.2023.1335901
3. Bäckel, N.; Hort, S.; Kis, T.; et al. Elaborating the Potential of Artificial Intelligence in Automated CAR-T Cell Manufacturing. Front. Mol. Med. 2023, 3. DOI: 10.3389/fmmed.2023.1250508
