20th Anniversary Special Feature: Validation and qualification

Dec 01, 2008
Volume 20, Issue 12

FDA proposed the concept of validation to guarantee critical processes in producing a drug substance or drug product and, ultimately, to safeguard patients. Validation is intended to ensure the quality of a system or process, through a quality methodology applied to its design, manufacture and use, in a way that simple testing alone cannot guarantee.1


Validation was derived from engineering practices for large pieces of equipment that would be tested following manufacture before being delivered against a contract,2 but its use soon spread to other areas of industry. This article examines how pharmaceutical manufacturing validation has influenced analytical instrument qualification during the last 20 years, and considers the emerging trends for the future.

Qualification perspectives

General guidelines regarding process validation for pharmaceutical manufacturing were first issued by FDA in May 1987. These guidelines introduced the terms 'installation qualification' and 'process performance qualification', and stated that equipment must be installed correctly and processes tested to provide "documented evidence which provides a high degree of assurance that a specific process will consistently produce a product meeting its pre-determined specifications and quality characteristics".3

With time, in addition to 'installation qualification' (IQ), process performance qualification evolved into the more familiar terms 'operational qualification' (OQ) and 'performance qualification' (PQ). Although intended for the validation of pharmaceutical manufacturing processes, the IQ/OQ/PQ approach has also been applied to the qualification of analytical instrumentation in quality control/assurance laboratories.

Unfortunately, FDA's original guidelines were open to misinterpretation, partly because of the language used in the documents. For analytical instrument qualification, this resulted in differences in IQ/OQ/PQ approaches between original equipment manufacturers, as well as differences in qualification policy within analytical laboratories and organizations. These differences are smaller for IQ, but can be more significant for OQ and PQ. Because a significant proportion of IQ is almost 'generic', there is good agreement on what this stage should include and who is responsible (e.g., checking the instrument against the order, confirming the laboratory environment's suitability, installing the instrument, recording configuration settings, and gathering diagnostic evidence/tests demonstrating that the instrument has been installed correctly). This is not the case for OQ and PQ, where there is often poor agreement on what these stages should contain.

One such difference arose from the common practice of using three batches to validate a manufacturing process. Although FDA recognizes that the successful completion of three full-scale batches cannot demonstrate 100% that a manufacturing process, or a process change, is valid, the agency acknowledges that the practice of pre-nominating and successfully testing three validation batches has become prevalent. Aspects of this philosophy spilled over into some analytical instrument qualification, mirroring the interpretation of FDA process validation — testing a parameter once at the OQ stage and three times at PQ. There are many other examples of fundamental differences in qualification philosophy between organizations over what an OQ and PQ should contain. Consequently, there is considerable variation in the content of OQ and PQ, and in when and by whom they should be performed. For laboratories that use a number of suppliers to perform analytical instrument qualification, this can result in a fragmented qualification rationale across the laboratory and conflicting approaches that the laboratory must then carefully defend in an audit.


In the absence of a more definitive guide from regulators, the pharmaceutical industry looked to Good Automated Manufacturing Practice (GAMP) for a validation framework. Originally known as the Pharmaceutical Industry Computer Systems Validation Forum (PICSVF), GAMP was founded in 1991 in the UK and has published a series of good practice guides (GPG); the best known being the Good Automated Manufacturing Practice Guide for Validation of Automated Systems in Pharmaceutical Manufacture. GAMP 4 was widely applied and adopted within the pharmaceutical industry, with the last major revision (GAMP 5) released in January 2008.4 Historically, GAMP approached instrument qualification from a software-driven perspective that focused on documentation of the qualification evidence and, to some extent, moved away from more direct consideration of the outcomes or instrument application.

In GAMP 5, equipment is classified according to four categories (Category 2 from earlier versions has since been dropped):

  • Category 1 — operating system/infrastructure software.
  • Category 3 — non-configurable commercial off the shelf (COTS).
  • Category 4 — configurable COTS.
  • Category 5 — custom software.

The removal of Category 2 is inconsistent with the approach contained within analytical instrument qualification (AIQ). Instruments in USP <1058> Group B (see later) would typically have been classified as GAMP Category 2 under GAMP 4. GAMP 5 is more generic as a project-driven guide for the qualification of computerized systems and includes greater 'scalability' for systems in different GAMP categories. The rigorous approach contained within GAMP 5 is increasingly beneficial as equipment and software become more complex and bespoke.
