Quality by Design for Analytical Methods: Implications for Method Validation and Transfer

Published in: Pharmaceutical Technology, Volume 36, Issue 10 (Oct. 2, 2012)

The authors describe how traditional approaches to analytical method validation and transfer may benefit from alignment with quality-by-design concepts.

Adoption of quality-by-design (QbD) concepts in pharmaceutical development and manufacture is becoming increasingly well established. QbD concepts aim to improve the robustness of manufacturing processes by adopting a systematic and scientific approach to development and by implementing a control strategy based on the enhanced process understanding this approach provides. Many pharmaceutical companies have also recognized that QbD concepts can be used to improve the reliability of analytical methods. The authors describe how traditional approaches to analytical method transfer and validation also may benefit from alignment with QbD concepts and propose a three-stage concept to ensure that methods are suitable for their intended purpose throughout the analytical lifecycle: method design, method qualification, and continued method verification. This paper represents a refinement and enhancement of the concepts originally proposed in an article written by P. Nethercote, T. Bennett, P. Borman, G. Martin, and P. McGregor (1).

To help implement the goals of FDA's Pharmaceutical cGMPs for the 21st Century–A Risk-Based Approach (2), FDA recently issued guidance for industry describing the general principles and practices of process validation, which seeks to align process-validation activities with product lifecycle concepts. This guidance (3) addresses some of the shortcomings of traditional approaches to process validation, in which a one-time, three-batch exercise, often performed by the best talent on the day shift using a single lot of raw material, does little to ensure that the manufacturing process is and will remain in a state of control. Because the product is approved and the process is validated, the traditional approach also encourages a "do not rock the boat" mindset and fails to foster continuous improvement in quality or efficiency (4).

These issues have parallels in analytical method validation. Analytical methods for pharmaceutical products are validated in accordance with the International Conference on Harmonization (ICH) Q2 guideline, Validation of Analytical Procedures: Text and Methodology, usually by the experts who have been involved in developing the method (5). Method validation is often treated as a one-time event, with no guidance on how to ensure a continuing focus on consistent method performance. There is also a lack of guidance on how to demonstrate in practice that a method is fit for purpose (i.e., what constitute suitable acceptance criteria). There is potential for the validation process to seem more focused on producing validation documentation that will withstand regulatory scrutiny than on ensuring that the method will actually perform well during routine application. There is a risk that both regulatory authorities and industry use ICH Q2 in a check-box manner rather than as intended, which is to provide guidance on the philosophical background to method validation.

After the method has been validated by the developing group, it may be transferred to another laboratory, which involves transferring the knowledge of how to operate the method to those who will use it routinely and documenting that both parties obtain comparable results. The routine operating environment, however, is not always considered during the method-development and validation exercise. The lack of an effective process for capturing and transferring the tacit knowledge of the development analysts can cause methods to fail to perform as intended in the receiving laboratory. Much effort is then expended on identifying the variables that are causing the performance issues, and the transfer exercise is repeated. As in the case of the initial method validation activity, the transfer exercise is typically performed as a one-off process. There is a risk that the exercise will focus more on producing the method-transfer report than on ensuring the ability of the receiving laboratory to run the method accurately and reliably and ensuring the continuity and integrity of analytical results.

The recognition that an analytical method can be considered a process that has an output of acceptable quality data led Borman et al. to take the QbD concepts designed for manufacturing processes and show how these could also be employed for analytical methods (6). It follows, therefore, that the concepts of lifecycle validation being developed for manufacturing processes might also be applicable to analytical methods. This concept aligns well with the lifecycle concept of equipment qualification in the United States Pharmacopeia (USP), consisting of equipment design, followed by operational and performance qualification, and with analytical method validation activities proposed by Ermer and Landy (7, 8).

A QbD framework for analytical lifecycle management

QbD is defined as "A systematic approach to development that begins with predefined objectives and emphasizes product and process understanding based on sound science and quality risk management" (9). FDA has proposed a definition for process validation as "the collection and evaluation of data, from the process design stage throughout production, which establishes scientific evidence that a process is capable of consistently delivering quality products" (3). When considering a lifecycle approach to method validation, a similar definition could be adopted: "the collection and evaluation of data and knowledge from the method design stage throughout its lifecycle of use, which establishes scientific evidence that a method is capable of consistently delivering quality data" (1). A method, as defined in this article, is a synonym for analytical procedure and includes all steps of the procedure (e.g., sample preparation, analytical methodology, calibration, definition of the reportable result, and specification limits). From these definitions, several factors emerge as key to a QbD lifecycle approach. These include:

  • The importance of having predefined objectives

  • The need to understand the method (i.e., having the ability to explain the method performance as a function of the method input variables)

  • The need to ensure that controls on method inputs are designed such that the method will deliver quality data consistently in all the intended environments in which it is used

  • The need to evaluate method performance from the method design stage throughout its lifecycle of use.

In alignment with the approach proposed in the FDA guidance for process validation, it is possible to envisage a three-stage approach to method validation:

  • Stage one: method design. The method requirements and conditions are defined according to the measurement requirements given in the analytical target profile and the potential critical controls are identified.

  • Stage two: method qualification. During this stage, the method is confirmed as being capable of meeting its design intent and the critical controls are established.

  • Stage three: continued method verification. Ongoing assurance is gained that the method remains in a state of control during routine use. This includes both continuous method-performance monitoring of the routine application of the method and method-performance verification following any changes.

Measurement requirements

Before commencing method validation, it is key to understand the product's critical quality attributes and process-control requirements. These requirements form the basis for the development of an analytical target profile (ATP) (10). While the paper in reference 10 introduced the concept of an ATP and described its potential as a tool to facilitate regulatory oversight of change, the ATP's principal role is to act as the focal point for all stages of the analytical lifecycle, including method validation, which is the focus of this paper.

To build the ATP, it is necessary to determine the characteristics that will be indicators of method performance. These should include all of the characteristics that will ensure the measurement produces fit-for-purpose data and are likely to be a subset of those described in ICH Q2 (e.g., accuracy, precision) (5).

Once the important method characteristics are identified, the next step is to define the target criteria for these (i.e., how accurate or precise the method needs to be). After ensuring safety and efficacy, a key factor in selecting appropriate criteria is the overall manufacturing process capability. Knowledge of the proposed specification limits and of the expected process mean and variation is helpful in setting meaningful criteria. To draw a parallel with equipment qualification, the ATP is analogous to the user-requirement specification produced to support qualification of new analytical equipment.
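As an illustration of how process knowledge might translate into an ATP precision criterion, the following minimal Python sketch subtracts the expected process variance from the largest total variance that would still achieve a target capability index, leaving an upper bound for the analytical standard deviation. The specification limits, process mean and variability, and capability target are hypothetical values assumed for illustration, not figures from this article.

```python
import math

# Hypothetical inputs (illustrative only, not taken from the article):
lsl, usl = 95.0, 105.0        # specification limits, % label claim
process_mean = 100.0          # expected process mean
sigma_process = 0.8           # expected process standard deviation (true content)
target_cpk = 1.33             # desired capability of the reported results

# Largest total standard deviation (process + analytical) that still
# achieves the target capability at the nearer specification limit.
margin = min(usl - process_mean, process_mean - lsl)
sigma_total_max = margin / (3 * target_cpk)

# Analytical variance adds to process variance, so the allowable
# analytical standard deviation follows by subtracting in quadrature.
sigma_analytical_max = math.sqrt(max(sigma_total_max**2 - sigma_process**2, 0.0))

print(f"Max total SD: {sigma_total_max:.2f}")
print(f"Max analytical SD (candidate ATP precision target): {sigma_analytical_max:.2f}")
```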

Stage one: method design

The method design stage involves selecting appropriate technologies and developing a method that will meet the ATP requirements. Appropriate studies are then performed to understand the critical method variables that need to be controlled to ensure the method is robust and rugged.

Method development. Once the ATP has been defined, an appropriate technique and method conditions are selected that will likely meet the requirements of the ATP as well as business needs. This step can range from developing a new method to making a change to an existing method. While method development is obviously a very important part of the method lifecycle, it is not necessary to elaborate here because it has been extensively addressed in the literature.

Method understanding. Based on an assessment of risk (i.e., the method complexity and potential for robustness or ruggedness issues), an exercise focused on understanding the method (i.e., understanding which key input variables impact the method's performance characteristics) may be performed. From this, a set of operational method controls is identified. Experiments can be undertaken to understand the functional relationship between method input variables and each of the method performance characteristics. Knowledge accumulated during method development provides input into a risk assessment. Tools, such as the fishbone diagram and failure mode effects analysis (FMEA), can be used to determine which variables need studying and which require controls. Robustness experiments are typically performed on method factors using design of experiments (DoE) to ensure that maximum understanding is gained from a minimum number of experiments. The output from the DoE should be used to ensure the method has well-designed system-suitability tests, which can be used to ensure that a method meets ATP requirements (i.e., is operating in the method design space).
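The following Python sketch shows how main effects might be estimated from a small two-level full-factorial robustness study of the kind described above. The factors (pH, temperature, flow rate), the resolution responses, and the design itself are hypothetical and chosen only to illustrate the calculation.

```python
import numpy as np
import pandas as pd

# Hypothetical 2^3 full-factorial robustness study (coded levels -1/+1).
# Factors and responses are illustrative only, not taken from the article.
design = pd.DataFrame({
    "pH":   [-1,  1, -1,  1, -1,  1, -1,  1],
    "temp": [-1, -1,  1,  1, -1, -1,  1,  1],
    "flow": [-1, -1, -1, -1,  1,  1,  1,  1],
})
# Measured response, e.g., resolution between the critical peak pair
resolution = np.array([2.1, 2.0, 2.3, 1.6, 2.2, 2.1, 2.4, 1.7])

# Main effect of each factor: mean response at +1 minus mean response at -1
effects = {
    factor: resolution[design[factor] == 1].mean()
            - resolution[design[factor] == -1].mean()
    for factor in design.columns
}
for factor, effect in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
    print(f"{factor:>5}: main effect on resolution = {effect:+.2f}")
```

Factors with the largest effects would be candidates for tighter operational controls or for inclusion in system-suitability criteria.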

When developing an understanding of the method's ruggedness, it is important that variables that the method is likely to encounter in routine use are considered (e.g., different analysts, reagents, instruments). Tools such as measurement system analysis (i.e., precision or ruggedness studies) can be useful in providing a structured experimental approach to examining such variables (11). Precision or ruggedness studies may instead be performed as part of Stage two, particularly if a developer has sufficient prior knowledge to choose appropriate method conditions and controls.
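As an illustration of such a measurement-systems-analysis calculation, the following Python sketch partitions the variability from a hypothetical ruggedness study (four analysts, three replicates each) into repeatability and between-analyst components using a one-way random-effects ANOVA. The data and study layout are assumptions for illustration only.

```python
import numpy as np

# Hypothetical ruggedness study: 4 analysts each measure the same sample
# 3 times (values in % label claim). Data are illustrative only.
data = {
    "analyst_1": [99.8, 100.1, 99.9],
    "analyst_2": [100.4, 100.6, 100.3],
    "analyst_3": [99.5, 99.7, 99.6],
    "analyst_4": [100.0, 100.2, 99.9],
}
groups = [np.asarray(v, dtype=float) for v in data.values()]
n = len(groups[0])                     # replicates per analyst
k = len(groups)                        # number of analysts
grand_mean = np.mean(np.concatenate(groups))

# One-way ANOVA mean squares
ms_within = np.mean([g.var(ddof=1) for g in groups])                      # repeatability
ms_between = n * np.sum([(g.mean() - grand_mean) ** 2 for g in groups]) / (k - 1)

# Variance components (random-effects model)
var_repeatability = ms_within
var_analyst = max((ms_between - ms_within) / n, 0.0)
var_intermediate = var_repeatability + var_analyst

print(f"Repeatability SD:          {np.sqrt(var_repeatability):.3f}")
print(f"Between-analyst SD:        {np.sqrt(var_analyst):.3f}")
print(f"Intermediate precision SD: {np.sqrt(var_intermediate):.3f}")
```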

Method design output. A set of method conditions and controls that is expected to meet the ATP should be developed and defined. These conditions should be optimized based on an understanding of their impact on method performance.

Stage two: method qualification

Having determined a set of operational method controls during the design phase, the next step is to qualify that the method will operate in its routine environment as intended, regardless of whether this is research and development or industrial quality control. Method qualification involves demonstrating that the defined method, including specified sample and standard replication levels and calibration approaches, will, under routine operating conditions, produce data that meet the precision and accuracy requirements defined in the ATP. This may involve performing a number of replicate measurements of the same sample to confirm that the precision of the method is adequate and to demonstrate that any potential interferences do not introduce an unacceptable bias by comparing results with a sample of known quality. If the respective experimental results have already been obtained during Stage one, they only need to be summarized for the final evaluation.
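One possible way to evaluate such qualification data against ATP criteria is sketched below in Python: a confidence interval for the bias and an upper confidence bound for the standard deviation are compared with hypothetical ATP limits. The data, acceptance criteria, and confidence levels are assumptions for illustration and are not prescribed by this article.

```python
import numpy as np
from scipy import stats

# Hypothetical qualification data: 6 independent preparations of a sample
# of known content (100.0 % label claim). Values are illustrative only.
results = np.array([99.6, 100.3, 99.9, 100.5, 99.8, 100.2])
true_value = 100.0

# Hypothetical ATP criteria (assumed for this sketch)
max_bias = 1.0        # acceptable absolute bias, % label claim
max_rsd = 1.5         # acceptable relative standard deviation, %

n = len(results)
mean, sd = results.mean(), results.std(ddof=1)

# 90 % two-sided confidence interval for the bias (t-interval)
t_crit = stats.t.ppf(0.95, df=n - 1)
bias_lo = (mean - true_value) - t_crit * sd / np.sqrt(n)
bias_hi = (mean - true_value) + t_crit * sd / np.sqrt(n)

# 95 % upper confidence bound for the standard deviation (chi-square)
sd_upper = sd * np.sqrt((n - 1) / stats.chi2.ppf(0.05, df=n - 1))

print(f"Bias 90% CI: [{bias_lo:+.2f}, {bias_hi:+.2f}]  (criterion: within +/-{max_bias})")
print(f"Upper bound on RSD: {100 * sd_upper / mean:.2f}%  (criterion: <= {max_rsd}%)")
```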

Stage three: continued method verification

The goal of this stage of the method lifecycle is to continually ensure that the method remains in a state of control during routine use. This includes both continuous method-performance monitoring of the routine application of the method as well as performance verification following any changes.

Continued method performance monitoring. This stage should include an ongoing program to collect and analyze data that relate to method performance, for example, by trending system-suitability data or results from replicated samples or standards, assessing precision from stability studies (12), or trending data from regular analysis of a reference lot. This activity aligns with the guidance in USP Chapter <1010> on system performance verification (13). Close attention should also be given to any out-of-specification (OOS) or out-of-trend (OOT) results generated by the method once it is operated in its routine environment. Ideally, by using a lifecycle approach to method validation, laboratories should encounter fewer analytically related OOS results, and when they do occur, it will be easier to determine or exclude a root cause. Monitoring performance parameters also serves to control method adjustments (i.e., changes within the method design space).
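A simple way to trend such data is an individuals control chart; the Python sketch below computes a centre line and control limits from the moving range for a hypothetical series of reference-lot results. The data and the charting rule are illustrative assumptions, not a prescribed monitoring scheme.

```python
import numpy as np

# Hypothetical trend of assay results for a reference lot analysed with
# each routine run (% label claim). Data are illustrative only.
reference_lot = np.array([100.1, 99.8, 100.0, 100.3, 99.9, 100.2,
                          99.7, 100.1, 100.4, 99.9, 100.0, 100.2])

# Individuals (I) chart: centre line from the data, control limits from
# the average moving range (the usual 2.66 * MR-bar approximation).
moving_ranges = np.abs(np.diff(reference_lot))
centre = reference_lot.mean()
ucl = centre + 2.66 * moving_ranges.mean()
lcl = centre - 2.66 * moving_ranges.mean()

out_of_control = np.where((reference_lot > ucl) | (reference_lot < lcl))[0]
print(f"Centre {centre:.2f}, limits [{lcl:.2f}, {ucl:.2f}]")
print("Out-of-control points:", out_of_control.tolist() or "none")
```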

Method performance verification. Method performance verification is undertaken to verify that a change in the method that is outside the method design space has no adverse impact on the method's performance. The activities performed as part of method performance verification are determined through a risk assessment of the impact of the change on the ability of the method to meet the requirements of the ATP. These activities may range from a review confirming that the post-change operation of the method continues to meet the system-suitability requirements, to equivalency studies demonstrating that the change has not adversely affected the method's accuracy or precision. (See Appendix 1 for examples of how a risk assessment could be performed.)

Change control

During the lifecycle of a product, both the manufacturing process and the method are likely to experience a number of changes through continuous improvement activities or the need to operate the method or process in a different environment. It is essential that all changes to the method's operating conditions are considered in light of the knowledge and understanding that exists on the method performance. For all changes, a risk assessment should be carried out and appropriate further validation activities performed. (See Appendix 2 for examples of actions for different types of changes.)

Method installation

If a change involves operation of the method in a new location, appropriate method-installation activities, including knowledge transfer, need to be performed in addition to a method-performance verification exercise. Method installation focuses on ensuring that the location at which the method is intended to be operated is adequately prepared to use the method. It includes ensuring that the analytical equipment is qualified and appropriate knowledge transfer and training of analysts has been performed. The method conditions and detailed operating controls along with all the knowledge and understanding generated during the design phase are conveyed to the location in which the method will be used. Performing a method-walkthrough exercise with the analysts in the original and new locations can be extremely valuable in ensuring all tacit knowledge about the method is communicated and understood. The extent of the method-installation activities should be based on an assessment of risk and should consider, for example, the level of preexisting knowledge of the analysts in the new location with the product, method, or technique. As part of the initial qualification of a method, a second laboratory may be involved in producing data to determine the method's reproducibility. In such a case, the second laboratory can be considered as being within the method design space, and any subsequent operation of the method in that laboratory would not be considered a change. Nevertheless, the described activities with respect to method installation would be performed before starting the reproducibility study.

Other scenarios

This approach to method qualification focuses on activities that would typically be performed for a method that is developed and used within a single company. Other scenarios exist in which a laboratory may need to use a method for which it has no access to the original method design or qualification information, such as in a contract-testing laboratory. In these situations, it is important that the performance requirements of the method are considered and an ATP is defined and documented. An appropriate qualification study is then performed to demonstrate that the method meets its ATP.

Implications

Adopting a QbD approach to analytical-method lifecycle management would have significant implications for analytical scientists in the pharmaceutical industry. Industry and regulatory authorities will need to modify the way they use ICH Q2, which, ideally, would prompt a revision of this guidance to align it with the lifecycle-validation concepts promoted by ICH Q8, Q9, and Q10 (9, 14, 15). The need for a revision of ICH Q2 as a consequence of increasing adoption of QbD concepts and use of process analytical technology (PAT) has also been identified by Ciurczak (16).

The activities that were previously defined as method transfer (i.e., knowledge transfer and confirmation of equivalence) would become intrinsic components of the lifecycle validation approach (i.e., they would be described as method installation and method performance verification activities) and would be traced back at all stages to the ATP requirements, rather than being treated as distinct from traditional method validation.

A key advantage of adopting the approach described in this article is the flexibility to perform all the validation stages against the specific ATP defined for the intended method use. This would eliminate the approach of creating a validation document against ICH Q2 in a check-box manner, which can lead to unnecessary and non-value-adding work. Because this approach could be adopted for all users of analytical methods, it also offers the potential to standardize industry terminology and create a harmonized method validation approach. This approach aligns terminology to that used for process validation and equipment qualification, supports a lifecycle approach, removes existing ambiguities in validation terms (e.g., method validation, revalidation, transfer and verification), and clarifies what is required for each part of the process. Table I summarizes this comparison of the traditional and lifecycle approaches to method validation.

Table I: Comparison of traditional and lifecycle approaches to analytical method validation.

Conclusion

The switch to a QbD approach to method development is already beginning to bring improvements to the performance of analytical methods. Opportunities also exist to modernize and standardize industry's approach to method validation and transfer. By aligning method validation concepts and terminology with those used for process validation as well as equipment qualification, there is an opportunity to ensure that efforts invested in method validation are truly value adding, rather than simply being a check-box exercise, and to reduce confusion and complexity for analytical scientists.

Glossary

Analytical Target Profile (ATP). The combination of all performance criteria required for the intended analytical application that direct the method development process. An ATP would be developed for each of the attributes defined in the control strategy. The ATP defines what the method has to measure (i.e., acceptance criteria) and to what level the measurement is required (i.e., performance level characteristics, such as precision, accuracy, working range, sensitivity, and the associated performance criterion). The ATP requirements are general ones and linked primarily to the intended purpose, not to a specific method. Any method conforming to the ATP is considered suitable. The ATP can be regarded as the focal point for all stages of the analytical life cycle.

Method Design. The collection of activities performed to define the intended purpose of the method, select the appropriate technology, and identify the critical method variables that need to be controlled to assure the method is robust and rugged.

Method Development. The collection of activities performed to select an appropriate technique and method conditions that should have the capability to meet the ATP requirements.

Method Design Space (MDS). The multidimensional combination and interaction of input variables (e.g., material attributes) and method parameters that have been demonstrated to provide assurance of data quality. In contrast to the ATP, the MDS is related to a specific method. The method design space is also known as the Method Operable Design Region (MODR).

Method Understanding. The knowledge gained from the collection of activities performed to understand the relationship between variation in method parameters and the method performance characteristics.

Method Installation. The collection of activities necessary to ensure that a method is properly installed in its selected environment. This will include the knowledge transfer activities, such that the laboratory understands the critical control requirements of the method, and the activities required to ensure that the laboratory can meet these requirements (e.g., purchasing of appropriate consumables, such as columns and reagents, and ensuring that reference standards are available, appropriate equipment is available, and appropriate training is given to personnel). Method Installation will only be a separate activity in cases where the method is not designed and developed in the routine laboratory.

Method Qualification (MQ). The collection of activities and/or results necessary to demonstrate that a method can meet its ATP. Experiments that are performed as part of the MQ exercise must be appropriate for the specific intended use as defined in the ATP. This determination may involve demonstrating the method has adequate precision and accuracy over the intended range of analyte concentrations for which it will be used.

Continued Method Verification (CMV). The activities that are performed to continually ensure that the method remains in a state of control during routine use. This includes both continuous method performance monitoring of the routine application of the method as well as a method performance verification following any changes.

Continuous Method Performance Monitoring (CMPM). The activities that demonstrate that a method continues to perform as intended when used with the actual samples, facilities, equipment, and personnel that will routinely operate the method.

Method Performance Verification (MPV). The activities that demonstrate that a method performs as intended following a change in the method operating conditions or operating environment. The need for and extent of method performance verification are determined through risk assessment.

References

1. P. Nethercote, et al., "QbD for Better Method Validation and Transfer," Pharmamanuf. online, www.pharmamanufacturing.com/articles/2010/060.html, accessed June 10, 2012.

2. FDA, Pharmaceutical cGMPs for the 21st Century—A Risk-Based Approach (Rockville, MD, 2004).

3. FDA, Guidance for Industry—Process Validation: General Principles and Practices (Rockville, MD, Jan. 2011).

4. M. Nasr, presentation at the AAPS Workshop (North Bethesda, MD, Oct. 5, 2005).

5. ICH, Q2(R1), Validation of Analytical Procedures: Text and Methodology, Step 4 version (2005).

6. P. Borman, et al., Pharm. Tech., 31 (10), 142–152 (2007).

7. USP General Chapter <1058>, "Analytical Instrument Qualification."

8. J. Ermer and J.S. Landy, "Validation of analytical procedures," in Encyclopedia of Pharmaceutical Technology, J. Swarbrick and J.C. Boylan, Eds. (Dekker, New York, 2nd ed., 2002), pp. 507–528.

9. ICH, Q8, Pharmaceutical Development (2009).

10. M. Schweitzer, et al., Pharm. Tech., 34(2), 52–59 (2010).

11. P.J. Borman, et al., Anal. Chim. Acta., 703(2), 101–113 (2011).

12. J. Ermer, et al., J. Pharm. Biomed. Anal., 38(4), 653–663 (2005).

13. USP General Chapter <1010>, "Analytical Data Interpretation & Treatment."

14. ICH, Q9, Quality Risk Management (2005).

15. ICH, Q10, Pharmaceutical Quality System (2008).

16. E. Ciurczak, "ICH Guidances (Already) Show Their Age," www.pharmamanufacturing.com/articles/2008/158.html, accessed June 10, 2012.

Appendix 1

The following examples use risk assessment to determine the actions to be taken as part of method performance verification. The aim of the method performance verification exercise is to provide confidence that the modified method will produce results that meet the criteria established in the analytical target profile (ATP). This can be assessed by considering each of the method's performance characteristics and rating the risk that the change in the method will affect each of them.

Risk assessment tools can be used to provide guidance on what actions are appropriate to verify the method is performing as required.

The following rating system was used for assessing the possibility of an impact in the two examples of risk assessment shown in A1, Tables I and II:

  • 0 - No possibility of an impact

  • 1 - Very low possibility of an impact

  • 3 - Slight possibility of an impact

  • 5 - Possible impact

  • 7 - Likely impact

  • 9 - Strong likelihood of impact

A1, Table I: Importance of a change in column manufacturer for content and impurities by a high-pressure liquid chromatography method.

A1, Table II: Importance of an increase in column equilibration time prior to the next injection for content and impurities by a high-pressure liquid chromatography method.

If a risk to a certain method performance characteristic has been mitigated through other controls (e.g., system suitability), the risk for that characteristic should be reduced accordingly. It should be noted that some methods do not need to be validated for all characteristics; for example, an API assay method may not require sensitivity. In these cases, put a 0 or "not applicable" in that category.

Based on the scoring, a decision is made on which characteristics need to be demonstrated (i.e., verified) as meeting the method performance criteria.

For the following examples, any individual score of 5 or above will require work to ensure the suitability of the modified method. Where a score of 5 or above for accuracy is recorded, an equivalence study will be required. Other characteristics that are scored equal to or greater than 5 will require appropriate verification.
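The decision rule above can be expressed compactly; the following Python sketch applies the 0/1/3/5/7/9 scale to a set of hypothetical scores and flags the characteristics that would require verification or an equivalence study. The scores shown are illustrative only.

```python
# Hypothetical risk scores (0/1/3/5/7/9 scale from A1) for a change in
# HPLC column manufacturer; the values are illustrative only.
scores = {
    "specificity/selectivity": 7,
    "accuracy": 5,
    "precision": 3,
    "linearity": 1,
    "sensitivity": 3,
}

THRESHOLD = 5  # scores at or above this level trigger verification work

for characteristic, score in scores.items():
    if score < THRESHOLD:
        action = "no additional work (covered by existing controls)"
    elif characteristic == "accuracy":
        action = "equivalence study against the registered method"
    else:
        action = "verify this characteristic on the modified method"
    print(f"{characteristic:<25} score {score}: {action}")
```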

A1, Table I shows a risk assessment for a change in the high-pressure liquid chromatography (HPLC) column manufacturer for a content and impurities method. In this example, the only aspect of the method that is changed is the manufacturer of the analytical column. The technique has not changed, nor has the basic chemistry driving the technique. The most likely impacts on the reported values are to the selectivity for impurities, which in turn has a potential effect on the accuracy of impurity quantitation. Therefore, for the situation presented in A1, Table I, it is necessary to assess equivalence between the registered method and the modified method for impurity identification and content. The modified method for impurities should also be verified for selectivity, because the rating for this parameter is greater than or equal to 5. Unless the peak width and/or tailing factor are significantly adversely affected by the change, the sensitivity, precision, and linearity of the method are not likely to be affected by a change in the chromatographic column manufacturer. Moreover, sensitivity and precision are controlled in most methods through system-suitability requirements and do not represent a significant risk to the method.
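One common statistical approach to such an equivalence assessment, although not prescribed by this article, is the two one-sided tests (TOST) procedure. The Python sketch below applies it to hypothetical paired impurity results obtained with the registered and modified methods, using an assumed equivalence margin; the data and margin are illustrative only.

```python
import numpy as np
from scipy import stats

# Hypothetical paired impurity results (% area) on the same batches,
# analysed with the registered column and the new-manufacturer column.
registered = np.array([0.21, 0.18, 0.25, 0.22, 0.19, 0.24, 0.20, 0.23])
modified   = np.array([0.22, 0.19, 0.24, 0.23, 0.20, 0.25, 0.21, 0.22])

# Assumed acceptance limit for the mean difference (equivalence margin)
margin = 0.05  # % area

diff = modified - registered
n = len(diff)
se = diff.std(ddof=1) / np.sqrt(n)

# Two one-sided t-tests (TOST): both null hypotheses of non-equivalence
# must be rejected for the methods to be declared equivalent.
t_lower = (diff.mean() + margin) / se
t_upper = (diff.mean() - margin) / se
p_lower = 1 - stats.t.cdf(t_lower, df=n - 1)   # H0: mean diff <= -margin
p_upper = stats.t.cdf(t_upper, df=n - 1)       # H0: mean diff >= +margin

p_tost = max(p_lower, p_upper)
print(f"Mean difference {diff.mean():+.3f}, TOST p-value {p_tost:.4f}")
print("Equivalent within margin" if p_tost < 0.05 else "Equivalence not demonstrated")
```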

Appendix 2

A2, Figure 1 shows examples of changes that may occur during a method's lifecycle and illustrates the actions that may be taken.

A2, Figure 1. Changes that may occur during a method’s lifecycle and illustration of the action that may be taken.

Example 1. A change in method operating conditions is made inside the method design space (e.g., changing an HPLC flow rate from 1 mL/min to 1.5 mL/min for a method where a range of 1 mL/min to 2 mL/min was proven during the method design stage). No action is required; implement the change. This change corresponds to an adjustment from the regulatory perspective.

Example 2. A change in method operating conditions occurs outside the method design space (e.g., changing a flow rate to 0.8 mL/min for the method used in the previous example, change to a method noise factor, change to a reagent source). The action is to perform a risk assessment to consider which characteristics in the ATP may be impacted by the change and then perform an appropriate method performance verification study to confirm the change does not impact the method's ability to meet the ATP.

Example 3. A change to location is made. The action is to perform a risk assessment to consider the risks created by the change. Take appropriate action to ensure method controls are adequately installed following the change (e.g., provide training, transfer knowledge, establish controls for new supplier of reagents). Perform an appropriate method performance verification study to confirm that the change does not adversely impact the method's performance.

Example 4. A change is made to a new method or technique (i.e., a change, made for improvement or business reasons, that requires a new MDS to be established). The action is to perform method development and understanding (Stage one) and method qualification (Stage two) to demonstrate conformance of the new method to the ATP.

Example 5. A change impacts the ATP (e.g., specification limit change, a need to apply the method to levels of analytes not considered in the original ATP). The action is to update the ATP, review the existing method qualification data, and confirm if the method will still meet the requirements of the existing ATP. If not, revisit the method design stage and requalify the updated method.

Phil Nethercote* is analytical head for global manufacturing and supply at GSK, Shewalton Road, Irvine, Ayrshire, Scotland KA11 5AP, Phil.W.Nethercote@gsk.com and Joachim Ermer, PhD, is head of quality control services chemistry at Sanofi in Frankfurt, Germany.

*To whom all correspondence should be addressed.