Reportable Values: Where is the Variation Coming From?

Published on: May 2, 2018
Pharmaceutical Technology, Volume 42, Issue 5, pp. 58–60

This article looks at a simple structured approach to assigning variance contributions and to assuring that the analytical procedure is fit for purpose.

The reportable value from an analytical procedure has three elements in its error structure: the error of the procedure itself, the error associated with the sampling of the batch, and the error associated with the manufacturing process. The first two errors need to be controlled and kept sufficiently small to ensure that any out-of-specification (OOS) result is unlikely to arise from these causes. To do this, the developed procedure must enable the target measurement uncertainty (TMU), as defined in the analytical target profile, to be achieved on a routine basis via the analytical control strategy. The United States Pharmacopeial Convention (USP) Validation and Verification Expert Panel has been working on a new general chapter regarding analytical procedure lifecycle management (1–5).

For many existing procedures, however, the knowledge base regarding these three components is lacking. Therefore, one must resort to performing designed experiments to separate these variance components.
Consider the analysis of a drug product for an analyte, X, from an established manufacturing process and procedure that has a registered specification of 95-105% of claim. The USP analytical procedure uses a single sample from a batch and a singlet determination. This product has a history of frequent OOS results.
Examination of the last 10 batches, which are without adverse trend, reveals a mean of 100.0% and a standard deviation of 2.98 (2.977 before rounding). The process capability report based on these values is shown in Figure 1. Ten batches are used for illustrative purposes; for reliable values of Cpk, Ppk, etc., many more batches would be needed.
Based on the available information, however, it would be expected that 9.3% of batches would be OOS. 
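
As a quick illustration of where that figure comes from, the following sketch (assuming normally distributed reportable values with the mean and standard deviation given above, against the 95–105% specification) reproduces the expected OOS rate:

```python
from scipy.stats import norm

# Quick check of the expected OOS rate, assuming the reportable values are
# normally distributed with the mean and standard deviation quoted above.
mean, sd = 100.0, 2.98        # % of claim
lsl, usl = 95.0, 105.0        # registered specification limits

p_oos = norm.cdf(lsl, mean, sd) + norm.sf(usl, mean, sd)
print(f"Expected OOS rate: {p_oos:.1%}")   # approximately 9.3%
```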
The issue now becomes one of identifying the size of the contributory sources of variance. In this instance, there are three: the analytical testing procedure itself, VT; the sampling process, VS; and the production process, VP. Because these sources of variation are independent, the total variance, V, is simply the sum of the individual variances.

One can design a simple experiment to estimate the various contributions by:

Sampling and testing each of 10 batches once; V = VT + VS + VP
Taking 10 samples from one batch and testing each sample once; V = VT + VS
Testing one sample 10 times; V = VT.

For this example process, the results are shown in Table I.


Contributory variances can be estimated using simple arithmetic (Table II).
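
A minimal sketch of that arithmetic is shown below. It assumes the three designed experiments above yield observed variances for the 10 batches, the 10 samples, and the 10 repeat tests; the function and its argument names are illustrative, not from the article.

```python
def variance_components(v_batches, v_samples, v_tests):
    """Estimate variance contributions by simple subtraction.

    v_batches: variance of 10 batches, each sampled and tested once (VT + VS + VP)
    v_samples: variance of 10 samples from one batch, each tested once (VT + VS)
    v_tests:   variance of 10 tests on one sample (VT)
    """
    v_t = v_tests
    v_s = v_samples - v_tests
    v_p = v_batches - v_samples
    total = v_t + v_s + v_p                     # equals v_batches
    return {name: (value, 100.0 * value / total)
            for name, value in [("testing", v_t), ("sampling", v_s), ("process", v_p)]}
```

Each entry of the returned dictionary gives a variance and its percentage contribution to the total, which corresponds to the simple arithmetic behind Table II.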


It is immediately apparent that the majority of the variation (59.4%) is coming from the process itself and about one-third from the analytical testing procedure. The sampling variance contributes only a little. One can calculate the individual standard deviations and, with the means, draw the overall picture of the variability (Figure 2).

From an analytical perspective, the ideal but unobtainable situation would be a target measurement uncertainty of zero. Therefore, the question becomes one of how small a TMU for the procedure would be needed. In addition, would replication help to avoid OOS results due to the testing variance?

First, however, one should confirm that this simple approach gives consistent results by using a different approach.

Confirmation of the error structure using Monte Carlo simulation

There are three components to the error structure, coming from the testing, the sampling, and the process itself. A value can be selected at random from each of the three distributions shown in Figure 2, and the error associated with each value can then be calculated.

The analytical testing error, eT, would be the mean (99.3) minus the value selected, and similarly for sampling, eS, and manufacturing, eP, using the appropriate mean values of 100.6 and 100.0, respectively. Note that these errors may be either positive or negative.

Then the observed reportable value would be (100 + eT + eS + eP). Using Monte Carlo simulation (6), the results from 1 million iterations should give data that closely match those found in Figure 1, which they do. The result is shown in Figure 3.
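
For readers who prefer an open implementation, a minimal sketch of the same simulation is given below (the article itself used the software in reference 6). The component standard deviations are placeholders chosen to be consistent with the contributions described above; the actual values would come from Table II.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000                          # iterations, as in the text

# Placeholder standard deviations for testing, sampling, and process errors;
# the actual values would come from Table II.
sd_t, sd_s, sd_p = 1.73, 0.77, 2.30

e_t = rng.normal(0.0, sd_t, n)         # testing error
e_s = rng.normal(0.0, sd_s, n)         # sampling error
e_p = rng.normal(0.0, sd_p, n)         # process error

reportable = 100.0 + e_t + e_s + e_p   # simulated reportable values
oos = np.mean((reportable < 95.0) | (reportable > 105.0))
print(f"Simulated OOS rate: {oos:.1%}")
```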


As the sampling and production variations are not controllable during testing, an attempt might be made to increase replication and/or to improve the target measurement uncertainty. However, Table III clearly demonstrates that increased replication does not give a major reduction in the percentage of OOS reportable values. Even if there were zero testing error, the predicted percentage of OOS reportable values would still be 4% (Figure 4).
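
The effect of replication can also be checked directly: only the testing variance is divided by the number of replicates, so the reportable-value variance quickly reaches the floor set by sampling and production. The sketch below uses placeholder component variances consistent with the contributions described earlier, not the Table II values themselves.

```python
from math import sqrt
from scipy.stats import norm

# Placeholder component variances (testing, sampling, process); the actual
# values would come from Table II.
v_t, v_s, v_p = 3.0, 0.6, 5.3
lsl, usl, mean = 95.0, 105.0, 100.0

for n in (1, 2, 3, 6, None):           # None = hypothetical zero testing error
    v = v_s + v_p + (0.0 if n is None else v_t / n)
    sd = sqrt(v)
    p_oos = norm.cdf(lsl, mean, sd) + norm.sf(usl, mean, sd)
    label = "no testing error" if n is None else f"{n} replicate(s)"
    print(f"{label:>18}: predicted OOS = {p_oos:.1%}")
```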


Conclusion

It is critical to estimate the size of the variance contributions of the analytical testing procedure, the sampling process, and the production process. In this simple approach, the analytical testing variance is estimated from the repeatability, which is smaller than the intermediate precision; the intermediate precision would be a better estimate. A more complex design can be used to estimate the intermediate precision.

In this example, the combined production and sampling variance is so large that the process will always produce more than 4% OOS reportable values in the long term, irrespective of the testing variance. The production standard deviation needs to be reduced to approximately 1%, the testing standard deviation to the same level, and the sampling standard deviation to 0.6% before a Ppk of 1.1 is obtained, predicting about 0.1% OOS reportable values. Control strategies need to be in place to achieve these values.
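
As a rough check of those figures, assuming a centred process and independent, normally distributed errors:

```python
from math import sqrt
from scipy.stats import norm

# Targets quoted above: production and testing SDs of about 1% of claim and a
# sampling SD of about 0.6% of claim, with the process centred at 100%.
sd_total = sqrt(1.0**2 + 1.0**2 + 0.6**2)
lsl, usl, mean = 95.0, 105.0, 100.0

ppk = min(usl - mean, mean - lsl) / (3 * sd_total)
p_oos = norm.cdf(lsl, mean, sd_total) + norm.sf(usl, mean, sd_total)
print(f"Ppk = {ppk:.2f}, predicted OOS = {p_oos:.2%}")   # about 1.1 and 0.1%
```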

References 

1. G. P. Martin et al., “Proposed New USP General Chapter: The Analytical Procedure Lifecycle <1220>,” Pharmacopeial Forum 43(1) 2017.
2. G. P. Martin et al., “Lifecycle Management of Analytical Procedures: Method Development, Procedure Performance Qualification and Procedure Performance Verification,” Pharmacopeial Forum 39(5) 2013.
3. K.L. Barnett et al., “Analytical Target Profile: Structure and Application Throughout the Analytical Lifecycle,” Pharmacopeial Forum 42(5) 2016.
4. C. Burgess et al., “Fitness for Use: Decision Rules and Target Measurement Uncertainty,” Pharmacopeial Forum 42(2) 2016.
5. E. Kovacs et al., “Analytical Control Strategy,” Pharmacopeial Forum 42(5) 2016.
6. Companion by Minitab, v5.1.1.0.

Citation

When referring to this article, please cite it as C. Burgess, "Reportable Values: Where is the Variation Coming From?," Pharmaceutical Technology 42 (5) 2018.