Good Practices for Data Integrity

Published in: Pharmaceutical Technology, July 2023, Volume 47, Issue 7
Pages: 38-39

Automation can be balanced with operator oversight.

Data and digitalization have rapidly evolved, along with software and meta-analysis, to become the lifeblood of the pharmaceutical industry. Indeed, decades ago, I opened a biotech conference by stating that biology was becoming one of the computer industry’s fastest-growing customers, amassing huge storehouses of data. This article attempts to demystify the difference between knowledge and information, bearing in mind the aphorism of Peter Mere Latham that, “fortunate indeed is the man who takes exactly the right measure of himself, and holds a just balance between what he can acquire, and what he can use” (1). In terms of optimizing a manufacturing process or procedure, fewer data can often mean more.

A major new challenge in pharmaceutical data governance is how to balance automation with sufficient operator oversight, ensuring that hiccups and missteps are properly captured and reported for compliance purposes. Paige Kane, a member of the Pharmaceutical Regulatory Science Team (PRST) at TU Dublin, suggested it all starts with an “understanding of what our processes are, and to understand what data we are really trying to capture, to make sure that we have the right tool for that application, … to do that we need to take a risk-based approach to understand where the risks are and where the operators are intersecting with that. We don’t need to collect all the data [just] because we can, we need to understand what data are really required for the operation” (2).

Regulatory compliance

Garry Wright, European Laboratory Compliance Specialist at Agilent Technologies, has stated that “every regulated company is going to be using multiple software platforms across their production quality control environments, and they are going to generate huge volumes of GxP [good practice, e.g., manufacturing, laboratory, clinical] data, so implementing data integrity controls across all those individual software systems is very time consuming. It requires specialist skills, and it requires dedicated support. But you can protect the integrity of that data by keeping your software platforms up to date using the latest security and technical controls that automate workflows to try to minimize human contact. Due to ageing software platforms, companies tend to implement hybrid solutions where they develop and wrap workarounds around [them], but it still leaves them with an element of data integrity risk. This is more susceptible to human error and deliberate falsification, and that’s really where a lot of the recent data integrity warning letters come from” (2).
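To make the idea of a “technical control” concrete, consider least-privilege account configuration, a failure mode Wright returns to below. The following Python fragment is a minimal, purely illustrative sketch; the role names and permissions are hypothetical, not any vendor’s actual configuration. The point is that access is denied by default, and an account can perform only the actions its role explicitly grants.

```python
# Illustrative only: a deny-by-default, least-privilege check of the kind a
# validated system enforces. Roles and permissions here are hypothetical.
from enum import Enum, auto

class Permission(Enum):
    ACQUIRE_DATA = auto()
    REVIEW_DATA = auto()
    CHANGE_INTEGRATION = auto()
    DELETE_RESULTS = auto()   # typically reserved for no routine role

ROLE_PERMISSIONS = {
    "analyst": {Permission.ACQUIRE_DATA},
    "reviewer": {Permission.REVIEW_DATA},
    "admin": {Permission.CHANGE_INTEGRATION},
}

def authorize(role: str, action: Permission) -> bool:
    """Deny by default: only explicitly granted actions are allowed."""
    return action in ROLE_PERMISSIONS.get(role, set())

# An analyst can acquire data but cannot delete results or reprocess them.
assert authorize("analyst", Permission.ACQUIRE_DATA)
assert not authorize("analyst", Permission.DELETE_RESULTS)
assert not authorize("analyst", Permission.CHANGE_INTEGRATION)
```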

Wright went on to enumerate some of these common issues that appear to have played a role in the high-profile example of Laronde (3). Wright emphasized, “Data integrity is a key part of any regulatory inspection. I remember going through my first data integrity inspection in 2013, and it really revolutionized the way I look at the laboratory and the way I prepare for inspections… [problems are] deliberate use of generic logons to protect the identity of people who are falsifying data, scientists having additional privileges that they don’t need to perform their role, [and] performing screening analysis before an official analysis. One of the biggest ones though is manipulation and falsification by changing integration [parameters] or integration events to make failing batches pass. [Or] deleting or hiding official sample data to avoid stoppages with the need to document the failure, then perform an investigation, and then retest. The last but most important is audit trails. Some companies have deliberately chosen not to activate the audit functionality. Other companies give their users privileges so that they can manipulate their information within the audit trail so they can cover up cases of falsification” (2).
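Wright’s final point, audit trails, is the control that makes the other abuses detectable. As a purely illustrative sketch, assuming nothing about any particular chromatography data system, the fragment below shows one common design for a tamper-evident trail: each entry stores a hash of its predecessor, so a retroactive edit or deletion breaks the chain when the trail is verified.

```python
# Illustrative only: a minimal append-only audit trail in which each entry
# chains a SHA-256 hash of the previous entry, so any retroactive edit or
# deletion is detectable on verification. Real GxP systems add secured
# storage, authenticated accounts, and e-signatures (21 CFR Part 11).
import hashlib
import json
import time

class AuditTrail:
    def __init__(self):
        self.entries = []

    def record(self, user, action, detail):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "timestamp": time.time(),
            "user": user,
            "action": action,
            "detail": detail,
            "prev_hash": prev_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)

    def verify(self):
        """Return True only if no entry has been altered or removed."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev_hash:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

trail = AuditTrail()
trail.record("analyst01", "integration_change", "peak window widened, batch 42")
trail.record("analyst01", "reprocess", "batch 42 reprocessed")
assert trail.verify()

# Silently rewriting a recorded entry is caught on verification:
trail.entries[0]["detail"] = "no change"
assert not trail.verify()
```

In a validated system, this kind of chaining is combined with restricted storage and authenticated user accounts, so ordinary users cannot rewrite the trail in the first place.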

Wright concluded on a positive note. “So, these are some of the common problems that have been cited in recent warning letters, but they can all be avoided if you keep your software up to date, and if you use all those technical and security controls that are available which have been specifically designed into those software platforms to protect the data. So, if you use your software correctly, you can be data integrity compliant” (2).

While Wright may be optimistic, the human element will have the last word. Reporting from STAT and The Boston Globe brought to light a story of a kind that is surprisingly common but underacknowledged. In this case, the story made headlines partly because of the therapeutic promise surrounding the approach, but mostly because of the high market valuation involved.

Missing data


As reported, the company had difficulty reproducing its preclinical results for a pivotal anti-obesity therapy aimed at competing with Wegovy and Mounjaro, partly because of a lack of detailed record keeping. The program was central to the company’s finances: “It underpinned the enormous $440 million the biotech raised in a 2021 Series B funding round, according to four people with knowledge of its plans, as well as a slide from an internal slide deck, which valued the company at over $1 billion” (3).

A star researcher, Cifuentes-Rojas, was presenting data “‘that was perfect. Anybody who works in in vivo [animal research] knows that’s not possible,’ said Jane van Heteren, an ex-Laronde employee who worked on the same team as Cifuentes-Rojas… Cifuentes-Rojas’ chart showed the protein levels starting at zero—an impossibility, because unless the mice were seriously ill, they would naturally produce GLP-1 of their own, van Heteren said. In an attempt to pinpoint why, she sought out Cifuentes-Rojas’ raw data, first in the company’s electronic lab notebook system and then from Cifuentes-Rojas herself. But there was no data in the company’s digital records, and Cifuentes-Rojas declined to share any with her colleague” (3).

A bad apple and accountability

The emphasis here is that a sister company of the conspicuously successful Moderna fell prey to the “one bad apple” syndrome. The reporting goes on to say that “Initially, the manufacturing staff came under scrutiny…. [as] one batch might be 60% pure eRNA, another might be 80%. That was initially cited as the reason why scientists were getting different preclinical results, but three former manufacturing team members said it was a red herring, because none of the data produced at Laronde showed a clear relationship between how much eRNA was in a batch and how well it performed inside of cells or mice” (3).

The journey of data integrity depends fundamentally on the integrity of the people involved. A software platform can be made rigorously compliant with regulations, but only as long as operators’ intentions and personal integrity are in sync with that goal.

References

1. Spivey, C. Gene Quantification, Introspection and RNA Meditations. Cambridge Healthtech Institute Conference, February 2002.

2. Playter, G. Unpacking the Science Behind Data Integrity. Drug Digest Interview with Paige Kane and Garry Wright. PharmTech.com, March 3, 2023.

3. DeAngelis, A. and Cross, R. The Inside Story of How Data Integrity Issues Roiled a Biotech Seen as ‘Moderna 2.0’. STAT and The Boston Globe. June 12, 2023.


Citation

When referring to this article, please cite it as Spivey, C. Good Practices for Data Integrity. Pharmaceutical Technology 2023, 47 (7), 38–39.