
Realizing the Potential of Enterprise AI within the Global Pharmacovigilance Domain
Beena Wood of Qinecsa saw 2025 as an AI superposition in which pharmacovigilance needed better data foundations and regulation.
PharmTech recently spoke with Beena Wood, chief product officer, Qinecsa Solutions, to get her perspective on trends that shaped pharmaceutical development and manufacturing in 2025 and where things are headed in 2026. In this part 1 of our three-part interview, Wood explores the complex landscape of pharmacovigilance (PV) in 2025, a year she characterizes as the "AI Superposition." Wood explains that the industry is currently navigating all stages of the Gartner Hype Cycle simultaneously.
The discussion highlights a sharp divide between aspirational goals and implementation reality. While some organizations have achieved concrete successes, many in the PV domain are facing a significant "reality check." Wood points out that even leading pharmaceutical companies often lack the foundational data infrastructure necessary to support sophisticated AI systems. She emphasizes the high stakes involved in these advancements, noting, "very impactful use cases would be faster signal detection, earlier signal detection, improving the safety profile, and earlier patient protection."
Wood also addresses the perceived "skills gap," suggesting it may be misdiagnosed; the real issue often lies in the lack of maturity and context awareness in enterprise AI compared with consumer AI, she says. Furthermore, Wood highlights the rapid acceleration of the regulatory landscape, noting that 2025 saw pivotal new frameworks from the FDA and the EU. Ultimately, she argues that successful AI adoption requires moving beyond hype to focus on the critical prerequisites—specifically data foundations—to bridge the gap between promise and practice.
Transcript
Editor's note: This transcript is a lightly edited rendering of the original audio/video content. It may contain errors, informal language, or omissions as spoken in the original recording.
I am Beena Wood, and currently I'm chief product officer at Qinecsa Solutions, a software company whose end-to-end products tie the pharmacovigilance [PV] landscape together.
2025 was all about the promise of AI, the implementation reality, and the gap in between, which became impossible to ignore. Essentially, we experienced every phase of the Gartner Hype Cycle all at once, simultaneously. What I mean by that is there were inflated expectations because of breakthrough announcements about what AI can do and the pace at which AI was moving.
For a long time, of course, there was no conversation about AI, especially in my domain, the pharmacovigilance domain. We were talking about cloud and SaaS even as recently as five years ago. Then we jumped into AI/ML use cases and NLP, and those dominated. Then the conversation very quickly shifted to agentic AI and its possibilities, with the inflated expectations that come with those breakthrough technology announcements. Simultaneously, we were living the trough of disillusionment because of failed pilots: organizations investing money into AI, and I'll get to some of those failures and why, from my perspective, but that disillusionment was happening. And, simultaneously, there was the plateau of productivity of that hype curve, if we remember it, and that came from two factors: organizations getting this right, but also the shadow AI economy we heard about in the MIT report, where people are plowing ahead whether the enterprise is ready or not. They are using AI for their personal productivity.
So, I think this is just an interesting confluence. Unless we open the box of a specific AI implementation within an organization, it is in a superposition of either transformative success or epic failure. What 2025 made us, forced us, to do is open these boxes and confront what is actually working, and I still think we are in the early stages of that. I still think we are working in a superposition of all of it, all these hype curves at once. But what did we really achieve in 2025? I would say some AI moved from concept to concrete use cases, for those who got the prerequisites right.
So, for example, clinical trial acceleration. I had been following the acquisition of Deep 6 AI by Tempus, partly because one of my friends worked there, and they showed use cases of clinical trial timelines actually being reduced by 50% using real-world data as well as AI capability. Those are very interesting use cases to consider. The other area of application where I think AI has been successful in this lifecycle is drug discovery and repurposing. Sanofi, for example, is working on repurposing unpursued assets, the assets they have shelved, for rare diseases, and reducing the in silico modeling timeframe because of AI.
And that kind of opens things up; those are not just aspirational, but things that have been coming to life faster because of AI. Now, if I focus on PV itself, my area, there's a reality check. Theoretically, very impactful use cases for AI would be faster signal detection, earlier signal detection, improving the safety profile, and earlier patient protection. That enablement, that possibility, exists. But frankly, what I have experienced is that most organizations, including leading pharma companies, haven't yet built the foundational layers necessary to make it successful.
So, for example, data foundation is an easy one; I think everybody will talk about it too. The data infrastructure really doesn't exist yet to make these AI systems, those aspirational systems, come to life. I think that's an important thing I experienced in 2025.
Another trend I saw was the perceived skills gap. I feel the skillset problem is maybe partially misdiagnosed. Look at why some of these consumer AIs are working really well: I would say it's because they are conversational and context aware, they are learning, or we can continue to help them learn, and they admit uncertainty if they're prompted and built correctly. In enterprise AI, I think that maturity is lacking, or there are pockets of maturity but other areas that are lacking. So another thing I saw is that the perceived skills gap needs reframing. Instead of saying employees need to learn AI, the real lesson is, "These teams are showing us what good AI looks like, and is that what we should expect from enterprise AI?"

And I would say one more thing is regulatory. I think that's really important. The evolution of regulatory guidances has accelerated. This year alone, we saw FDA, for example, coming up with its pivotal guidance on AI and its framework on risk assessments, and the CIOMS working group with its paper that just came out, which is pretty comprehensive. There's also FDA establishing the CCRI, the Center for Real World Evidence Innovation, the EU's DARWIN EU, etc.
So, the regulators, I think, started to accelerate this year, and it'll be really interesting to see how these trends take us into the next year.