20th Anniversary Special Feature: Validation and qualification

Pharmaceutical Technology Europe, Volume 20, Issue 12 (1 December 2008)

How has pharmaceutical manufacturing validation influenced analytical instrument qualification during the last 20 years and what are the emerging trends for the future?

FDA proposed the concept of validation to guarantee critical processes in producing a drug substance or drug product and, ultimately, safeguard patients. Validation is intended to ensure the quality of a system or process through a quality methodology for the design, manufacture and use of that system or process that cannot be guaranteed by simple testing alone.1


Validation was derived from engineering practices for large pieces of equipment that would be tested following manufacture before being delivered against a contract,2 but its use soon spread to other areas of industry. This article examines how pharmaceutical manufacturing validation has influenced analytical instrument qualification during the last 20 years, and considers the emerging trends for the future.

Qualification perspectives

General guidelines regarding process validation for pharmaceutical manufacturing were first issued by FDA in May 1987. These guidelines introduced the terms 'installation qualification' and 'process performance qualification', and stated that equipment must be installed correctly and processes tested to provide "documented evidence which provides a high degree of assurance that a specific process will consistently produce a product meeting its pre-determined specifications and quality characteristics".3

With time, 'process performance qualification' evolved into the now-familiar terms 'operational qualification' (OQ) and 'performance qualification' (PQ), used alongside 'installation qualification' (IQ). Although intended for the validation of pharmaceutical manufacturing processes, the IQ/OQ/PQ approach has also been applied to the qualification of analytical instrumentation in quality control/assurance laboratories.

Unfortunately, FDA's original guidelines were open to misinterpretation, partly because of the language used in the documents. For analytical instrument qualification, this resulted in differences in IQ/OQ/PQ approaches between original equipment manufacturers, as well as differences in qualification policy within analytical laboratories and organizations. These differences are smaller for IQ, but can be more significant for OQ and PQ. As a significant proportion of IQ is almost 'generic', there is good agreement regarding what should be included in this stage of the qualification and who is responsible (e.g., checking the instrument against the order, confirming the laboratory environment's suitability, instrument installation, recording of configuration settings and diagnostic evidence/tests that demonstrate the instrument has been installed correctly). This is not the case for OQ and PQ where poor agreement regarding what these stages should contain is often found.

One such difference arose from the common practice of using three batches to validate a manufacturing process. Although FDA recognizes that validation of a manufacturing process, or of a process change, cannot be demonstrated 100% by the completion of three successful full-scale batches, the agency acknowledges that the idea of prenominating and successfully testing three validation batches has become prevalent. Aspects of this philosophy spilled over into some analytical instrument qualification, mirroring the interpretation of FDA process validation: testing a parameter once at the OQ stage and three times at PQ. There are many other examples of fundamental differences in qualification philosophy between organizations regarding what an OQ and a PQ should contain. Consequently, there is considerable variation in the content of OQ and PQ, and in when they should be performed and by whom. For laboratories that use a number of suppliers to perform analytical instrument qualification, this can result in a fragmented qualification rationale across the laboratory and conflicting approaches that the laboratory must then carefully defend in an audit.

GAMP

In the absence of a more definitive guide from regulators, the pharmaceutical industry looked to Good Automated Manufacturing Practice (GAMP) for a validation framework. Originally known as the Pharmaceutical Industry Computer Systems Validation Forum (PICSVF), GAMP was founded in 1991 in the UK and has published a series of good practice guides (GPGs), the best known being the Good Automated Manufacturing Practice Guide for Validation of Automated Systems in Pharmaceutical Manufacture. GAMP 4 was widely adopted within the pharmaceutical industry, and the latest major revision (GAMP 5) was released in January 2008.4 Historically, GAMP approached instrument qualification from a software-driven perspective that focused on documentation of the qualification evidence and, to some extent, moved away from more direct consideration of outcomes or the instrument's application.

In GAMP 5, equipment is classified according to four categories (Category 2 from earlier versions has since been dropped):

  • Category 1 — operating system/infrastructure software.

  • Category 3 — non-configurable commercial off-the-shelf (COTS).

  • Category 4 — configurable COTS.

  • Category 5 — custom software.

The removal of Category 2 is inconsistent with the approach contained within analytical instrument qualification (AIQ): instruments in USP <1058> Group B (see later) would typically have been classified as GAMP Category 2 under GAMP 4. GAMP 5 is more generic, serving as a project-driven guide for the qualification of computerized systems, and includes greater 'scalability' for systems in different GAMP categories. The rigorous approach contained within GAMP 5 is increasingly beneficial as equipment and software become more complex and bespoke.
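The scalable intent of the categories above can be pictured as a simple lookup from software type to qualification effort. The following sketch is illustrative only: the category names follow GAMP 5, but the effort descriptions are this author's paraphrase, not wording from the guide.

```python
# Illustrative sketch: GAMP 5 category names with paraphrased qualification effort.
# The effort notes are an assumption for illustration, not quoted from the guide.
GAMP5_CATEGORIES = {
    1: ("Operating system/infrastructure software",
        "Record the version; challenge indirectly via the applications running on it"),
    3: ("Non-configurable COTS",
        "Verify against user requirements, leveraging supplier testing"),
    4: ("Configurable COTS",
        "Additionally verify each configured business process"),
    5: ("Custom software",
        "Full life-cycle validation, including design and code review"),
}

def describe(category: int) -> str:
    """Return a one-line summary for a GAMP 5 category (Category 2 was dropped)."""
    if category == 2:
        raise ValueError("Category 2 was removed in GAMP 5")
    name, effort = GAMP5_CATEGORIES[category]
    return f"Category {category} ({name}): {effort}"
```

The mapping makes the scalability point concrete: the higher the category, the more bespoke the software and the greater the qualification effort, while the dropped Category 2 simply has no entry.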

USP <1058>

In March 2003, the American Association of Pharmaceutical Scientists (AAPS) held a milestone conference with the aim of easing the growing qualification burden for analytical instruments. This was achieved by redefining the IQ, OQ and PQ terms and presenting an easier-to-understand approach to qualification, which widened the general acceptance of qualification for analytical instruments and was simpler than GAMP 4. The resulting white paper was adopted by the US Pharmacopeia (USP) as a starting point for general chapter <1058> on AIQ (Figure 1).5

Figure 1: USP timeline.

USP <1058> came into effect in August 2008 and, as with GAMP 5, applies a risk-based approach through categorization.6 In USP <1058>, instruments are categorized into three groups. Typical examples are:

  • Group A — stirrer

  • Group B — pH meter

  • Group C — high performance liquid chromatography (HPLC) system.

Group A is for instruments with no measurement capability or need for calibration. Typically, Group B is for standard equipment providing measured values that require calibration, where the user requirements are the same as the manufacturer's specification. USP <1058> defines Group C as: "...instruments and computerized analytical systems, where user requirements for functionality, operational, and performance limits are specific for the analytical application."

Qualification of basic laboratory equipment is simplified because conformance for equipment in Group A is achieved through visual inspection. No matter how basic the equipment is, categorization is based on the complexity of the overall system and its application: for example, if a stirrer performs its function, no further qualification is necessary, but if a stirrer forms part of a more complex system, such as a dissolution system, it cannot be classified as Group A.

Equipment performance must be understood in the context of its use. USP <1058> states: "Conformance of Group B instruments or equipment to user requirements is determined according to the standard operating procedures for the instrument or equipment, and documented during IQ and OQ." Again, the documentation process may have been simplified, but it will always remain good qualification rationale to consider how the instrument will be used. For instruments in Group C, full qualification (IQ/OQ/PQ) is still required.
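The group-assignment reasoning above can be sketched as a small decision function. This is a hypothetical illustration of the logic described in the text; the class, field and function names are invented, and real categorization under USP <1058> is a documented, risk-based judgment rather than three boolean flags.

```python
# Hypothetical sketch of the USP <1058> grouping logic described in the text.
# Field and function names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Instrument:
    name: str
    measures: bool                # produces measured values needing calibration
    application_specific: bool    # user limits specific to the analytical application
    part_of_complex_system: bool  # e.g., a stirrer inside a dissolution system

def usp1058_group(inst: Instrument) -> str:
    """Assign a USP <1058> group per the simplified rules in the text."""
    if inst.application_specific or inst.part_of_complex_system:
        return "C"   # full IQ/OQ/PQ qualification
    if inst.measures:
        return "B"   # conformance per SOPs, documented during IQ and OQ
    return "A"       # visual inspection only
```

Under these rules a standalone stirrer falls into Group A, a pH meter into Group B, and the same stirrer, once it forms part of a dissolution system, is pulled up into the application-specific category, mirroring the dissolution example in the text.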

Both GAMP 5 and USP <1058> employ risk-based approaches and, increasingly, the general principles of risk-based thinking are being advocated by regulators, such as FDA, and developed through the application of ICH guidelines (ICH Q9).7 Other common principles between GAMP 5 and USP <1058> include clarification and definition of roles and responsibilities, scalable approaches to qualification (depending on the categorization) and making maximum use of supplier-derived information. Differences between GAMP 5 and USP <1058> have been discussed elsewhere.8

The future

USP <1058> was specifically written for analytical instruments, while the generic project approach included in GAMP 5 also supports its general use for instrumentation used within process analytical technology (PAT). FDA introduced PAT concepts to modernize pharmaceutical manufacturing. These concepts have evolved to their current level of use through a rationale initially based on migrating laboratory-based technology (e.g., near infrared and other spectroscopy applications) into processes and production environments.9

Spectroscopy is well suited to modelling and monitoring processing operations and a number of successful PAT projects have been implemented using this technology. However, the strengths of the technology, which have enabled successful applications such as drying and blending, also mean that it is susceptible to changes in process operation that can impact sensitive chemometric calibration models.

In parallel with these well-established applications, simpler PAT tools need to be developed that have greater applicability, are more robust and less susceptible to small changes in process operation. These should be designed to work with specific manufacturing and process operations, such as reaction monitoring, distillation, crystallization, blending and drying.

For maximum uptake, these sensors and process monitoring tools would need to be adaptable for use with existing manufacturing processes. However, if the design of the process plant is incompatible (e.g., no access point in the vessel), it may restrict uptake. Additionally, the need for access to the equipment for maintenance and repair will depend on the robustness and complexity of the technology used, and how replaceable or modular it is. Remote diagnostic capability will be essential and this capability will need inclusion in the validation. However, many pharmaceutical companies have expressed concern regarding external access over their secure network.

Known unknowns

The philosophy of PAT and Quality by Design (QbD) will have the greatest impact on pharmaceutical processes currently under development, where intrinsic knowledge of the processes is generated using PAT as they are developed, as advocated by ICH guidance.10 This has the potential to produce a paradigm shift in process registration. Historically, because manufacturing processes have essentially been 'fixed' during registration, variation in the chemical and physical properties of the input materials has translated into variation in the quality and properties of the drug product. In the future, application of PAT and QbD principles will support the registration of more flexible processes, where adjustments to the process are made to compensate for the variability of the input materials. In the near- to medium-term, as processes currently under development come online, it is anticipated that conventional testing approaches will continue to be used to develop product specifications in parallel with PAT applications. Processes will be validated using both approaches at the same time.

On the go...

One frequently asked question about PAT relates to process deviations. In an ideal world, process deviations will no longer occur as manufacturing variation is reduced and processes are monitored through PAT. In practice, however, depending on the level of automation, an 'unplanned event' (possibly caused by human error) may still occur. This potentially takes the process, and the batches affected, outside the normal validated operating range.

A key question is how this variation and, in particular, the batches affected should be dealt with. The same challenge applies to conventional testing — will the analytical specification tests measure any new impurities that might be generated because the process was operated outside its validated parameters?

With established processes, one approach used for unplanned process deviation is to screen the affected batches against 'normal' batches manufactured within the validated process parameters. A standalone experimental design is used where all samples are analysed on the same chromatographic run using generic technology, such as liquid chromatography-mass spectrometry (LC-MS) methodology. Ideally, a number of methods should be used to provide different selectivity. The principle is that any significant new impurities present in the deviation batches will be identified through the LC-MS analysis screening process, which is applied in addition to the normal specification tests. In the future, however, when processes are better controlled, it is possible that regulators will take a stronger position against the use of these out-of-specification batches, with respect to process validation, even if the material passes the product specification.

Longer term, as the technology used with PAT becomes more innovative, new qualification approaches will need to be integrated, including point-of-use verification of performance (the PAT equivalent of the 'system suitability' criteria used in HPLC, for example). In addition, forecasting ongoing maintenance requirements will become harder: accelerated stress testing of PAT components and systems before implementation may not adequately predict the failure modes seen in 'real' use, raising the possibility of more frequent failures. For systems implemented on new products, will the process be stopped for preventive maintenance of the PAT system, or only when the technology breaks down?

It is more likely that 'conventional' testing, developed in parallel with PAT, will be the fallback option. Until PAT technology evolves and greater expertise is gained in the development of processes and specifications using this approach, there remains a need for conventional specification testing using more traditional qualification requirements (AIQ or GAMP 5). The capacity of the laboratory to 'revert' to conventional testing will therefore remain important, and the level of analytical automation will influence the level and flexibility of the resources required to do this.

Global control

For existing manufacturing processes, potential restrictions of access points and process design may limit PAT applications. Additionally, global economic cost pressures may drive greater technology transfer as manufacture is moved to cheaper manufacturing locations. In the short term, it is anticipated that such transfer will be supported by conventional analytical technology transfer.

Figure 2: Harmonized qualification approach.

Irrespective of the robustness of the validated analytical method, fewer analytical problems occur when the same analytical equipment is used at each location and when the instrument's operational range is 'bound' by the qualification range. This rationale provides the lowest likelihood of analytical problems during transfer. Extending a consistent qualification rationale across laboratories further minimizes compliance risks when the transfer process is audited. Therefore, the trends towards multivendor capability and harmonized qualification seen in established instrument markets will increasingly be seen in these evolving markets (Figure 2). This will go some way to addressing the fragmentation of qualification experienced by laboratories reliant upon several service providers or original equipment manufacturers for the support of their analytical instrumentation.
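The 'bounding' rationale above amounts to a containment check: every method operating point must lie inside the range over which the instrument was qualified. A minimal sketch, with invented names and an illustrative flow-rate example (the specific numbers are assumptions, not from the text):

```python
# Minimal sketch of the range-bounding check described in the text.
# Function name and example values are illustrative assumptions.
def range_is_bounded(operational: tuple[float, float],
                     qualified: tuple[float, float]) -> bool:
    """True if the method's operational range lies within the qualified range."""
    op_lo, op_hi = operational
    q_lo, q_hi = qualified
    return q_lo <= op_lo and op_hi <= q_hi

# e.g., an HPLC method run at 0.8-1.5 mL/min on a pump qualified over 0.5-2.0 mL/min
# is bounded; a method dipping to 0.3 mL/min on the same pump is not.
```

Applying the same check, with the same qualified ranges, at both the sending and receiving sites is one simple way to operationalize the harmonized qualification rationale the text advocates.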

Paul Smith is European Validation Program Manager at PerkinElmer Life and Analytical Sciences (UK).

References

1. R.D. McDowall, Qual. Assur. J., 9(3), 196–227 (2005).

2. A. Hoffman et al., Pharmaceutica Acta Helvetiae, 72(6), 317–325 (1998).

3. FDA, Guideline on General Principles of Process Validation (May 1987). www.fda.gov

4. GAMP 5: A Risk-Based Approach to Compliant GxP Computerised Systems, (ISPE Publications, Florida, USA, January 2008).

5. S.K. Bansal et al., AAPS PharmSciTech, 5(1), Article 22 (2004).

6. US Pharmacopeia 31-NF26, First Supplement, General Chapter <1058>, 3587–3591 (2008).

7. ICH guidelines — Final Concept Paper: Q9: Quality Risk Management, November 2005. www.ich.org

8. P. Smith, BioProcess International, 5(9), 30–38 (2007).

9. FDA Draft Guidance PAT — A Framework for Innovative Pharmaceutical Manufacturing and Quality Assurance, August 2003. www.fda.gov

10. ICH Guidelines — Final Concept Paper: Q8: Pharmaceutical Development, September 2003. www.ich.org