Overcoming Disincentives to Process Understanding in the Pharmaceutical CMC Environment

Larger, more strategic sampling and testing plans can improve process understanding and characterization.


Recent initiatives such as Pharmaceutical CGMPs for the 21st Century and Quality by Design encourage the application of statistical control procedures and enhanced science and engineering knowledge to manufacturing processes (1, 2). Increased or strategic sampling and batch testing, possibly using nontraditional approaches such as process analytical technology (PAT), help companies to better understand and characterize their processes. The benefits of these initiatives cannot be fully realized, however, without the flexibility to adopt better data-collection schemes and acceptance criteria that are appropriately calibrated to them. Systems should not discourage the gathering of greater amounts of data, and appropriate statistical techniques must be used when needed.

Variability reduction through process understanding is a common element shared by quality and productivity improvement objectives. Many companies are not obtaining the benefits of a variability-reduction system because of real or perceived regulatory hurdles. Several current practices of regulatory quality systems penalize a company in its attempt to gain better process understanding through the collection of more data. These include the use of individual results instead of averages as batch estimates, the misuse of 3σ control chart limits as acceptance criteria, the misinterpretation of compendial tests, and fixed acceptance criteria regardless of sample size (the first of these is illustrated in the sketch following the quotations below). Some of these concerns have been published:

USP compendial limits are a go/no-go. The product either passes or fails. If a product sample fails, anywhere, any time, an investigation and corrective action may be required by the company or by FDA, or by both....These procedures should not be confused with statistical sampling plans ... extrapolations of results to larger populations are neither specified nor proscribed by the compendia (3).

and

Quality control procedures, including any final sampling and acceptance plans, need to be designed to provide a reasonable degree of assurance that any specimen from the batch in question, if tested by the USP procedure would comply (4).
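
To make the first practice listed above concrete, the following sketch contrasts a single individual result with a 10-unit average as an estimate of a batch mean. The numbers used (a true batch mean of 100% of label claim, a 2% standard deviation, and 10 units per batch) are illustrative assumptions, not values drawn from this article or from any compendial method.

```python
import random
import statistics

# Hypothetical batch: true mean assay 100.0% of label claim, with combined
# unit-to-unit and analytical variability of 2.0% (1 sigma). These values
# are illustrative assumptions only.
random.seed(0)
TRUE_MEAN, SIGMA, N_BATCHES, N_PER_BATCH = 100.0, 2.0, 10_000, 10

single_results = []   # one individual result used as the batch estimate
batch_averages = []   # mean of N_PER_BATCH results used as the batch estimate
for _ in range(N_BATCHES):
    results = [random.gauss(TRUE_MEAN, SIGMA) for _ in range(N_PER_BATCH)]
    single_results.append(results[0])
    batch_averages.append(statistics.fmean(results))

print(f"SD of individual results: {statistics.stdev(single_results):.2f}")
print(f"SD of {N_PER_BATCH}-unit averages:    {statistics.stdev(batch_averages):.2f}")
# The averages scatter about the true batch mean by roughly
# sigma / sqrt(n) = 2.0 / sqrt(10) ≈ 0.63, a far more precise batch
# estimate than any single result.
```

Because an n-unit average scatters about the true batch mean by roughly σ/√n, judging a batch on individual results treats ordinary unit-to-unit and analytical noise as though it were a property of the batch itself.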

Today's reality is a compliance-driven focus rather than a focus on process understanding. For example, there is no established procedure to modify release standards to account for increased sample size. Additional data collected to ensure and improve process and product quality may therefore increase the probability of rejecting an acceptable batch. This can delay a clinical study or product approval, or it can threaten continuity of supply for an ongoing clinical study or a marketed product. All of these outcomes can adversely affect a patient's access to a vital medicine. This reality appears to be strongly at odds with recent initiatives. The current system favors minimal testing strategies and presents a barrier to creating greater process understanding.
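
A rough calculation shows why more testing against a fixed individual-result criterion raises the chance of rejecting an acceptable batch. The sketch below assumes a batch with a true mean of 100% of label claim and a 2% standard deviation, judged against hypothetical 95-105% limits applied to every individual result; all of these values are illustrative assumptions, not regulatory limits.

```python
from statistics import NormalDist

# Hypothetical acceptable batch: true mean 100.0% of label claim, combined
# unit and analytical variability of 2.0% (1 sigma), judged against a fixed
# go/no-go criterion of 95.0-105.0% applied to each result.
batch = NormalDist(mu=100.0, sigma=2.0)
p_single_out = 1.0 - (batch.cdf(105.0) - batch.cdf(95.0))  # one result OOS

print(f"P(single result out of limits) = {p_single_out:.4f}")
for n in (1, 3, 10, 30, 100):
    # With a fixed criterion applied to every individual result, the chance
    # that at least one of n results falls outside the limits grows with n,
    # even though the batch itself is unchanged.
    p_at_least_one = 1.0 - (1.0 - p_single_out) ** n
    print(f"n = {n:3d}:  P(at least one failing result) = {p_at_least_one:.4f}")
```

Even though the batch never changes, the probability of observing at least one out-of-limits result climbs from roughly 1% at n = 1 to about 70% at n = 100. This is the testing disincentive at the heart of this article.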

This article discusses some issues surrounding the disconnect between specifications and sample sizes and suggests defining acceptable quality in terms of the true batch parameter (e.g., average, relative standard deviation). Moreover, the statistical phenomenon of multiplicity is defined and its effects in the chemistry, manufacturing, and controls (CMC) environment are explored. Situations are described in which this phenomenon can act as a testing disincentive and thus a barrier to the implementation of new and promising US Food and Drug Administration initiatives. Finally, remedies in both thinking and procedures are suggested to avoid these undesirable effects.

Multiplicity and error in decision-making

Many statistical decisions are based upon a counterproof of sorts: one makes an assumption, collects data, and determines the probability of obtaining the results actually observed, given the assumption. If this probability is very low, then either a rare outcome has occurred or the assumption is wrong. Because one has confidence in the experiment, one concludes that the assumption was incorrect (because an "unlikely" outcome was obtained).
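
As a minimal numerical sketch of this counterproof, assume the true batch mean equals a 100% target (the assumption under test) with a known measurement standard deviation, and compute the probability of an average at least as far from the target as the one observed. The assay data and parameter values below are invented for illustration.

```python
from statistics import NormalDist, fmean

# Counterproof sketch with made-up assay data (% of label claim). The
# assumption under test: the true batch mean equals the 100.0% target,
# with a known measurement sigma of 1.0% (both values are illustrative).
data = [101.2, 100.8, 101.5, 100.9, 101.7, 101.1]
ASSUMED_MEAN, SIGMA = 100.0, 1.0

# Probability, under the assumption, of an average at least this far from
# the target (two-sided): the "probability of the results obtained."
z = (fmean(data) - ASSUMED_MEAN) / (SIGMA / len(data) ** 0.5)
p_value = 2.0 * (1.0 - NormalDist().cdf(abs(z)))

print(f"observed mean = {fmean(data):.2f}, z = {z:.2f}, p = {p_value:.4f}")
# A very small p-value means either a rare outcome occurred or the
# assumption is wrong; trusting the data, one rejects the assumption.
```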