Dispelling Cleaning Validation Myths: Part II


Pharmaceutical Technology Europe, Volume 17, Issue 12 (1 December 2005)

In applying a visually clean standard, any residue related to the cleaning process that is visible on the surface should constitute a failure.

There are eight 'myths' surrounding cleaning validation for pharmaceutical process equipment and they revolve around issues associated with setting limits, selecting sampling procedures and using analytical methods to measure residues. These myths can be overly prescriptive or overly proscriptive in terms of what actions can actually be taken. Most add little value to a cleaning validation programme and can be ignored because they have no scientific or regulatory basis. I covered the first three of the myths listed below in Part I of this article and will discuss the remaining five this month.1 The eight popular myths are:

1. Regulatory authorities do not like rinse sampling.

2. You must correlate rinse sampling results with swab sampling results.

3. You cannot use nonspecific analytical methods.

4. If you use total organic carbon (TOC), you must correlate it with a specific method such as HPLC.

5. Any measured residue is unacceptable.

6. Dose-based calculations are unacceptable.

7. Recovery percentages at different spiked levels should be linear.

8. You cannot validate manual cleaning.

Myth 4


This myth, which is related to myth 3, says you must correlate your TOC method with an HPLC method (or another specific analytical method). Again, this depends on what is meant by 'correlate'. If it means performing full method validation on both the HPLC and the TOC methods, the question should be: "Why is this necessary?" If a full method validation on the HPLC method is required, why not just use the HPLC method for measuring residues?

After all, if I have gone that far with the HPLC method, it would be wise to use that HPLC method to measure the target residue because of the greater likelihood that I would meet my acceptance criteria (see myth 3 above for more on this). If correlation of TOC with an HPLC method simply means performing my TOC method validation on known standards (perhaps standardized by an HPLC analytical method) to determine parameters such as accuracy and precision, then this is not really a special requirement; I would have to do the same thing for an HPLC method.

I can only speculate that this myth comes from someone who thought that TOC was unacceptable, and tried to add some additional supporting data to demonstrate that it was acceptable. However, as discussed in myth 3, TOC is (or should be) acceptable to regulatory authorities.

Expecting correlation between TOC and HPLC results on samples obtained in cleaning validation protocols is also unreasonable. Yes, the HPLC method will tell you how much of the target residue (for example, the active) is present. However, TOC is subject to a variety of 'interferences' (excipients and cleaning agent) that raise the measured carbon value, and the ratio of organic carbon from the active to the total measured carbon cannot be expected to be constant in every sample (which is what a correlation would require). The bottom line is that TOC, appropriately validated, is sufficient without any correlation to a specific method.
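That bottom line rests on TOC being a conservative measure. As a minimal sketch of one common, conservative way a TOC result is interpreted (not a method prescribed by the guidances or by this article), all measured carbon can be attributed to the target active and converted using the active's carbon content; the function name and figures below are hypothetical.

```python
# Minimal sketch: worst-case interpretation of a TOC result, attributing all
# measured organic carbon to the target active. Values are hypothetical.

def worst_case_active_ppm(toc_ppm_carbon: float, carbon_fraction: float) -> float:
    """Convert a TOC result (ppm carbon) to an equivalent ppm of active.

    carbon_fraction is the mass fraction of carbon in the active molecule,
    e.g. 0.60 for an active that is 60% carbon by mass.
    """
    return toc_ppm_carbon / carbon_fraction

# Hypothetical rinse sample: 0.9 ppm carbon; active assumed to be 60% carbon by mass
print(worst_case_active_ppm(0.9, 0.60))  # 1.5 ppm active, worst case
```

Because excipients and cleaning agents also contribute carbon, this conversion can only overstate the amount of active present, which is precisely why a correlation with HPLC adds nothing to the conservatism of the result.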

Myth 5

This is a really hard one to swallow. As analytical methods measure residues at ever lower levels, it is inevitable that some samples will contain a measurable amount of certain residues. If TOC is used as the analytical method, the various 'interfering' substances also make it unlikely that every sample will fall below the limit of detection.

Before we discuss this, it is probably necessary to state that there are some conditions in which any residue is unacceptable. In applying a visually clean standard, any residue related to the cleaning process (active, excipient, cleaning agent, degradant or cleaning by-product) that is visible on the surface should constitute a failure.2 Furthermore, for certain potent actives, allergenic actives or actives with reproductive hazards manufactured on nondedicated equipment, a reasonable standard is that the target residue is below the limit of detection of the best available analytical procedure.3 Apart from those two exceptions, however, measured residue is acceptable in most cases provided it is below the acceptance limit.

The only possible basis for this myth is the statements in both FDA and Pharmaceutical Inspection Co-operation Scheme (PIC/S) guidance that 'ideally' there should be no measured residues of detergent.3,4 FDA guidance qualifies this by stating that for 'ultrasensitive' analytical methods, 'very low' residues should be expected. I believe this myth may also stem in part from confusion between a 'residue' and a 'contaminant'. For the purpose of this article, a 'contaminant' is an unacceptable level of a residue.5

This is much like the maxim in toxicology that 'the dose makes the poison'. The acceptability of some measured residues is supported by FDA in its Human Drug cGMP Notes for the second quarter of 2001.6 In answer to the question "Should equipment be cleaned to the best possible method of residue detection or quantification?", FDA gives an emphatic "No". In that same answer, FDA goes further to state that the real issues are whether the residue level is 'medically safe' and whether it 'affects product quality'.

Of course, this assumes that you are setting residue limits appropriately (such that the residues are medically safe and do not affect product quality). It also assumes that you have appropriately validated the analytical method. One possible objection is that some of the methods used for calculating limits give very high limits. That objection will be discussed in myth 6. The bottom line is that measured residues must be at levels that are medically safe and do not affect the product quality.

Myth 6

This relates to the fact that the traditional 0.001 dose calculation sometimes results in an extremely high limit. For example, in 2001 this traditional method of calculating limits for finished drug products was challenged with an example calculation in which the allowable carryover to the next product was greater than the batch size of that next product.7

Unfortunately, the dose-based calculation was misapplied in that paper: the maximum dose of the next product was taken as the maximum dose of the next drug active, rather than the maximum dose of the next drug product (the correct value for finished drug products). A correct calculation would have lowered the calculated limit by a factor of more than 1000.

Furthermore, there are some constraints on the conventional dose-based calculation. One is that a 'default' limit of 10 ppm of the active in the next product is commonly used. That is, if the 0.001 dose calculation gives a value greater than 10 ppm of the cleaned active in the next product, a default value of 10 ppm is used for subsequent calculations.

This makes sense because if 24 ppm is the calculated safe level based on the dose calculation, then 10 ppm should also be safe. A second constraint is that no residue of the active (or any combination of cleaning process residues) may remain at a level that leaves the equipment visually dirty. Visually dirty can be subjective depending on such things as the nature of the residue, the distance and angle of viewing, and the lighting level.2,8 However, such criteria provide a further constraint on unusual situations where the dose-based calculation might otherwise allow what seem to be unreasonably high levels of residue.
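To make the arithmetic concrete, the minimal sketch below shows how the conventional 0.001 dose calculation and the 10 ppm default interact. The doses, batch size and function name are hypothetical; this is only an illustration of the approach described above, not a prescribed method.

```python
# Minimal sketch of the conventional 0.001 dose-based limit with a 10 ppm
# default cap. All numbers are hypothetical.

def dose_based_limit_ppm(min_daily_dose_active_a_mg: float,
                         max_daily_dose_product_b_mg: float,
                         safety_factor: float = 0.001,
                         default_cap_ppm: float = 10.0) -> float:
    """Allowable level (ppm) of the cleaned active A in the next product B.

    Note that the denominator is the maximum daily dose of the next drug
    *product*, not of its active (the misapplication discussed above).
    """
    calculated_ppm = (safety_factor * min_daily_dose_active_a_mg
                      / max_daily_dose_product_b_mg) * 1e6
    # If the dose calculation gives more than 10 ppm, fall back to the default
    return min(calculated_ppm, default_cap_ppm)

# Hypothetical case: active A minimum daily dose 50 mg; product B maximum
# daily dose 2000 mg of finished product
limit_ppm = dose_based_limit_ppm(50.0, 2000.0)   # 25 ppm calculated -> capped at 10 ppm
print(limit_ppm)

# Allowable carryover into a hypothetical 200 kg batch of product B
# (1 ppm corresponds to 1 mg of active per kg of product)
print(limit_ppm * 200.0, "mg of active A allowed in the whole batch")
```

Whatever value such a calculation returns, the visually clean criterion described above still applies on top of it.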

This myth seems to have appeared and then disappeared just as quickly. It should, however, be clear from both FDA and PIC/S guidance that this approach is acceptable. FDA guidance does not explicitly say the dose-based calculation is acceptable, but it does footnote the 1993 article by scientists at Lilly who described this dose-based calculation.4,9 The PIC/S guidance is a little more explicit in that it calls for setting limits based on the most stringent of the 0.001 dose calculation, 10 ppm and visually clean.3 Ultimately, dose-based calculations are the current standard for most situations in which limits are calculated.

Myth 7

This myth essentially takes what is a common requirement for analytical method validation (determination of a linear response over a certain range) and applies it to swab recovery studies. If three or more spiked amounts (concentrations) are used for recovery studies, sometimes analytical groups expect that the different amounts (concentrations) should exhibit a linear relationship. Now it is a reasonable assumption that, in general, the higher the spiked amount in a swab recovery, the lower the percentage recovery.

However, for recovery studies it is not reasonable to expect the data to be linear in any sense. The reason is that swabbing, like manual cleaning, is highly variable. One may spike at a fixed level, and one operator might get a recovery of 65% while another gets a recovery of 73%. There is not a significant difference between those two values (at least for swab recoveries). The variability in an individual swab recovery study is so wide that trying to establish linear relationships adds neither confidence nor value. Some might object that the difference between 65% and 73% recovery is significant, in that it might be the difference between passing and failing in an executed cleaning validation protocol. The response to that concern is that if the difference between 65% and 73% is the difference between passing and failing, your cleaning process is not robust enough.
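As an illustration of that last point, the minimal sketch below applies the two recovery figures to a measured swab result in one common way (dividing the measured value by the recovery fraction). The acceptance limit and the measured values are hypothetical.

```python
# Minimal sketch: a 65% versus 73% swab recovery only changes the outcome when
# a result already sits close to the acceptance limit. Numbers are hypothetical.

def corrected_result(measured_ug_per_swab: float, recovery_fraction: float) -> float:
    """Adjust a measured swab result upward to account for incomplete recovery."""
    return measured_ug_per_swab / recovery_fraction

LIMIT_UG = 10.0  # hypothetical acceptance limit per swab

for measured in (4.0, 7.0):  # a comfortable result and a borderline one
    low = corrected_result(measured, 0.73)
    high = corrected_result(measured, 0.65)
    verdict = ("passes either way" if high <= LIMIT_UG
               else "pass/fail depends on the recovery used")
    print(f"measured {measured} ug: corrected {low:.1f}-{high:.1f} ug -> {verdict}")
```

A cleaning process whose results routinely fall into the second category is, as argued above, simply not robust enough.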

My guess is that this myth originates from an analytical group that misapplied a reasonable requirement for analytical methods (linearity over a certain range) to different spiked levels in swab recovery studies, where linearity is not a reasonable expectation. The issue is probably best avoided by performing recovery studies with (at most) two spiked levels. The first spiked level should be at the acceptance limit. If a second spiked level is desired (it certainly is not necessary), it should be at approximately 20% of the acceptance limit. Having only two data points avoids the issue of determining linearity (or, if you are perverse, it assures linearity). All in all: "Don't expect different spiked levels in swab recovery studies to be linear." There is just too much variability in any individual measurement because swabbing is similar to manual cleaning.

Myth 8

After talking about the variability of manual cleaning in myth 7, it might seem strange that I am now talking about demonstrating its consistency. However, the issue in cleaning validation is not that the residue data have to be consistent (although that is something we would clearly like), but that the residue data have to be consistently below the acceptance criterion. Because of the variability of manual cleaning, it is necessary to build much more robustness into the process through more detailed instructions in the written cleaning procedure, as well as more training and retraining of the cleaning operators. While the level of control possible with automated cleaning is generally not achievable with manual cleaning, a reasonable effort must be made to achieve more control than was generally achieved before 1990.

This myth developed very early in cleaning validation, perhaps as an attempt to persuade regulatory authorities that cleaning validation cannot and should not be done for manual cleaning processes. That appeal soon lost out, and it is now a regulatory expectation that manual processes be validated. FDA guidance says nothing directly about manual methods except for a comment relating to the need for "extensive documentation... and training" where operator performance is a problem.4 The PIC/S guidance states that "Manual methods should be reassessed at more frequent intervals than clean-in-place (CIP) systems."3 Interpretations of what this means may vary, but it is probably addressing the same issues of operator consistency that FDA guidance addresses (without specifying what the 'reassessment' might be).

In any case, while there is definitely a trend toward automated cleaning systems (whether CIP systems or automated clean-out-of-place (COP) parts washers), it is expected that manual cleaning processes will be validated. Yes, they are more variable than automated systems. But this just means paying more attention to detail in the standard operating procedure, and to more frequent training and retraining of operators, to help assure consistency so that actual residues are consistently below the acceptance limits. The bottom line is: "You will validate manual cleaning processes."

The eight myths covered in Part I and Part II of this article are not necessarily the only cleaning validation myths. Recognizing these myths, and their lack of both scientific and written regulatory justification, can help companies avoid unnecessary work that adds little or no value to a cleaning validation programme.

Acknowledgment

This paper is based on a presentation to the Capital Chapter of the PDA in Gaithersburg, MD, USA on 30 October 2002.

References

1. D.A. LeBlanc, Pharm. Technol. Eur. 17(11), 30–34 (2005).

2. D.A. LeBlanc, J. Pharm. Sci. Technol. 56(1), 31–36 (2002).

3. PIC/S, Document PI 006-2, www.picscheme.org

4. FDA, www.fda.gov/ora/inspect_ref/igs/valid.html

5. D.A. LeBlanc, Cleaning Memos 2, Cleaning Validation Technologies (March 2002).

6. FDA, Human Drug cGMP Notes, 2nd Quarter 2001, PDA Letter 38(7), 12–13 (2002).

7. A.L. deMarco and S. Jansen-Varnum, "A Perspective on Equipment Cleaning — Industry Practice But Not cGMP", presented at the Central Atlantic States Association of Food and Drug Officials Conference, King of Prussia, PA, USA (15 May 2001).

8. R.J. Forsyth et al., Pharm. Technol. 28(10), 58–72 (2004).

9. G.I. Fourman and M.V. Mullen, Pharm. Technol. 17(4), 54–60 (1993).

Destin A. LeBlanc is a consultant at Cleaning Validation Technologies, San Antonio, TX, USA.