The Challenges for Regulators in the Digital Age

Regulators are facing major challenges in dealing with the digital transformation under way in the healthcare and pharmaceutical sectors.

Editor’s Note: This article was published in Pharmaceutical Technology Europe’s December 2019 print issue.

Digitalization is spreading rapidly throughout Europe’s healthcare and pharmaceutical sectors, but the pace of change is leaving regulators struggling to keep up. Vast amounts of data from the development and production of medicines, and from patient treatments and outcomes, are being collected, processed, and analyzed to create algorithms for artificial intelligence (AI) applications such as machine learning.

However, few pieces of legislation in Europe, even at the national level, specifically tackle the problems emerging from this transformation in healthcare, such as poor data quality and a lack of standards.

Regulatory issues with digitalization

Regulatory issues with digitalization are mainly focused on three, often overlapping, areas. First, there is AI itself, an umbrella term covering technologies that enable machines to perform tasks such as recognizing specific images and patterns in datasets.

Algorithms, the step-by-step instructions given to AI systems to process data or conduct tests, pose regulatory challenges because they can be ‘trained’ to draw up their own instructions or make their own decisions. In pharmaceuticals and related European sectors, the use of AI and other digitalization tools is mainly confined to applications at the point of patient care. It has yet to penetrate deeply into medicines production, or even drug R&D. But this is expected to change over the next decade, mainly because of the growth of personalized medicine, which will lead to an overlap between drug manufacturing and point-of-care treatment in, for example, hospitals.
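
As a purely illustrative sketch of what such ‘training’ means, the Python example below (not drawn from the article; the dataset, feature names, and choice of scikit-learn are all invented for illustration) shows a model inferring its own decision rules from labelled examples rather than following rules written by a programmer.

```python
# A minimal, hypothetical sketch: an algorithm 'trained' to draw up its
# own decision rules from data. All values and feature names are invented.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy batch records: two process readings (e.g., pH and temperature)
# with pass/fail labels supplied by humans.
X = [[7.1, 36.5], [7.3, 37.0], [6.2, 39.1], [6.0, 38.8]]
y = ["pass", "pass", "fail", "fail"]

model = DecisionTreeClassifier().fit(X, y)

# The printed rules were never written by a programmer; the model
# derived them from the training data.
print(export_text(model, feature_names=["pH", "temperature"]))
```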

A 2017 survey of life-sciences executives, including those in pharma, by Camelot Management Consultants AG, a Germany-based management consultancy, found digitalization was having zero impact on manufacturing and only a 5% impact on drug development and clinical trials (1). By 2020, however, the executives expected the impact on manufacturing to rise to 21% and on R&D to 11%, and by the early 2030s to 35% and 29%, respectively.

During the next decade, regulation of AI will play a much bigger role in European pharma than it does now, with the biggest driver being the use of regulation to gain patients’ trust in the new digital technologies.

A 2018 report on AI by Future Advocacy, a London-based thinktank, in conjunction with the Wellcome Trust, a charitable foundation, found that the biggest social, political, and ethical questions around AI centred on consent, fairness, and rights (2). Rights also need to be considered when developing regulatory oversight of AI, the report said. With aspects of care increasingly being delivered autonomously by AI, it asked questions such as “do people have a right to know how much AI is used in their care?” and “do people have a right not to have AI involved in their care at all?”

The European Union’s General Data Protection Regulation (GDPR), introduced in 2018 to replace the 1995 Data Protection Directive (3), enables individuals to have more control over their personal data in all sectors, including healthcare, even when it is exported outside Europe. 

The legislation obliges controllers of personal data to take “appropriate technical and organizational measures” to implement the data protection principles. This and other provisions in GDPR have raised fears in the pharmaceutical industry, among others, that the regulation could stifle innovation.

One controversial protection in the legislation is the ‘right to explanation’, which entitles citizens to “fair and transparent processing” of their personal data as well as “meaningful information about the logic” used in automated decision-making systems (3). Lawyers and legal academics believe, however, that because of the way the ‘right to explanation’ is written into the regulation, it is not legally binding.
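
What such an ‘explanation’ might look like in practice remains open to debate. The sketch below is a hypothetical illustration only, with invented feature names and data, and is not an interpretation endorsed by the regulation; it shows one simple technical reading, in which an interpretable model reports the per-feature contributions behind a single automated decision.

```python
# A hypothetical sketch of 'meaningful information about the logic' of an
# automated decision: an interpretable model whose per-feature
# contributions can be reported back. All names and numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["age", "dose_mg", "prior_events"]
X = np.array([[54, 20, 0], [71, 40, 2], [63, 20, 1], [48, 10, 0]])
y = np.array([0, 1, 1, 0])  # invented outcomes

model = LogisticRegression().fit(X, y)

# Contribution of each input to one decision: weight * input value.
x_new = np.array([60, 30, 1])
for name, weight, value in zip(features, model.coef_[0], x_new):
    print(f"{name}: weight={weight:+.3f}, contribution={weight * value:+.3f}")
```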

Tailoring regulations required?

In a report on AI policy published in November 2019, Digital Europe, a Brussels-based trade association representing digital technology producers, recommended that AI regulation be drawn up to meet specific needs (4). It suggested that policymakers should endeavour to use current regulatory and legislative frameworks as much as possible, supplemented, where necessary, by additional guidelines to make them more effective.

For the application of good manufacturing practice (GMP) in the digital age, regulators in Europe are relying on Annex 11 of the EU’s GMP guide, which covers computerized systems (5). At the core of the annex is the application of risk management throughout the lifecycle of the computerized system, taking into account patient safety, data integrity, and product quality. “As part of a risk management system, decisions on the extent of validation and data integrity controls should be based on a justified and documented risk assessment of the computerized system,” the annex specifies. As a result, operators of plants using AI are considered to have the same capability to assess risks with AI as with traditional computerized systems.
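
To make that risk-based logic concrete, the sketch below is a hypothetical illustration only: the scoring scale, field names, and mapping from risk to extent of validation are invented, not taken from Annex 11, which prescribes no particular format.

```python
# A hypothetical sketch of a documented, justified risk assessment of a
# computerized system driving the extent of validation. The scale and
# mapping are invented for illustration.
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    system: str
    patient_safety_impact: int    # invented scale: 1 (low) to 3 (high)
    data_integrity_impact: int
    product_quality_impact: int
    justification: str

    def validation_extent(self) -> str:
        # Extent of validation follows the highest-risk dimension.
        score = max(self.patient_safety_impact,
                    self.data_integrity_impact,
                    self.product_quality_impact)
        return {1: "basic", 2: "standard", 3: "full"}[score]

assessment = RiskAssessment(
    system="AI-based visual inspection of filled vials",
    patient_safety_impact=3,
    data_integrity_impact=2,
    product_quality_impact=3,
    justification="Defective vials could reach patients if misclassified.",
)
print(assessment.validation_extent())  # -> full
```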

The Geneva-based Pharmaceutical Inspection Co-operation Scheme (PIC/S), whose members are regulatory authorities in the GMP field and whose GMP guide is virtually identical to the EU’s, has conceded that Annex 11 has become ‘outdated’.

Defining AI

One glaring gap in regulatory controls on AI is the lack of an internationally agreed definition of AI. A committee of the UK parliament’s House of Lords complained in a 2018 report on AI that participants in its evidence hearings had provided ‘dozens of different definitions’ (6).

The Paris-based Organization for Economic Co-operation and Development (OECD), which represents the world’s richer nations and which drew up an international version of the principles of good laboratory practice (GLP) in the 1990s, announced in February 2019 that it was working on a set of AI guidelines that would include an answer to the question, “What is an AI system?”

The OECD’s committee on digital economy policy has made recommendations identifying a number of principles on responsible AI stewardship. These include fairness, transparency, explainability, robustness, safety, and accountability. 

The Geneva-based International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) do provide standards related to AI, but in healthcare these mainly apply to medical devices or the hardware component of drug-device combinations (DDCs).

A joint big data taskforce of the European Medicines Agency (EMA) and the Heads of Medicines Agencies (HMA) complained in a summary report issued in February 2019 about the lack of standards, particularly on data quality (7). The taskforce had been split into six subgroups covering genomics and other ‘omics’ (such as proteomics), clinical trials, adverse drug reactions, and other aspects of healthcare data. “Almost without exception, each of the subgroups raised the need for standardization as a key prerequisite” to drive data digitalization forward, the report said. Data standardization is needed to “define and, where possible, improve data quality and progress to actions to promote data sharing, access, and enable robust big data processing and analysis.”

Many datasets in Europe, outside those for clinical trials, are not standardized because they have evolved over many years as different technologies were applied to them. Nor were the data generated to support regulatory decision-making, so they did not have to comply with strict quality guidelines.

Even in relatively new areas of science such as genomics, for which data on almost 250 million genomes is available and relatively well structured, access to the data is restricted by the lack of standardization, according to the report. Much of it is siloed by disease, institution, and country; generated with different methodologies; analyzed with non-standard software; and often stored in incompatible file formats.
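
As a rough illustration of what such standardization involves at the data level, the Python sketch below (all schemas, identifiers, and records are invented, and are not taken from the taskforce report) maps two incompatibly structured datasets onto a single common schema so that they can be pooled and analysed together.

```python
# A hypothetical sketch of data standardization: two siloed datasets with
# incompatible schemas are mapped onto one common format. All column
# names and records are invented.
import pandas as pd

# Two institutions recording the same information differently.
site_a = pd.DataFrame({"patient": ["A1"],
                       "variant": ["BRCA1:c.68_69del"],
                       "dx": ["breast ca"]})
site_b = pd.DataFrame({"subj_id": ["B7"],
                       "mutation": ["BRCA1 c.68_69delAG"],
                       "diagnosis": ["Breast cancer"]})

# A shared schema plus per-source column mappings.
COMMON = ["subject_id", "variant", "diagnosis"]
harmonized = pd.concat([
    site_a.rename(columns={"patient": "subject_id", "dx": "diagnosis"})[COMMON],
    site_b.rename(columns={"subj_id": "subject_id", "mutation": "variant"})[COMMON],
], ignore_index=True)

print(harmonized)
```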

Regulators must have “the capability and capacity to analyze, interpret, and profit from the data generated,” the summary report said (7). “In this way, we will improve our decision making and enhance our (evidence-based) standards.”

One barrier could be a lack of specialized data-science expertise among regulators. This sort of knowledge is needed to enable “informed and critical assessment of regulatory applications in future” for innovative products and processes, according to the report (7).

The knowledge gaps may be so large that a system for linking up experts with regulators may be necessary to ensure that the capacity of the regulatory system to make appropriate assessments is maintained. Standardization of data will be a major challenge, which from the regulatory perspective will require prioritization of what issues need to be tackled first.

References

1. Camelot Management Consultants AG, “Digitalization: Blessing or Curse for Compliance?” (Mannheim, 2017).
2. Future Advocacy and Wellcome Trust, “Ethical, social and political challenges of artificial intelligence in health” (London, April 2018).
3. EU, “Regulation (EU) 2016/679 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data,” General Data Protection Regulation (Brussels, 27 April 2016).
4. Digital Europe, “Recommendations on Artificial Intelligence Policy,” (Brussels, 13 November 2019).
5. EC, Rules Governing Medicinal Products in the European Union: Good Manufacturing Practice, Annex 11 Computerized Systems (Brussels, 2011).
6. House of Lords, “AI in the UK: Ready, Willing and Able?” Select Committee on Artificial Intelligence, Report on Sessions 2017–2019 (London, 16 April 2018).
7. HMA and EMA, “HMA-EMA Joint Big Data Taskforce: Summary Report” (Brussels, 13 February 2019).

Article Details

Pharmaceutical Technology Europe
Vol. 31, No. 12
December 2019
Pages: 6–8

Citation 

When referring to this article, please cite it as S. Milmo, “The Challenges for Regulators in the Digital Age,” Pharmaceutical Technology Europe 31 (12) 2019.