Statistical Considerations in Design Space Development (Part III of III)
Parts I and II of this article appeared in the July and August 2010 issues, respectively, of Pharmaceutical Technology and discussed experimental design planning and design and analysis in statistical design of experiments (DoE) (1, 2). This article, Part III, covers how to present and evaluate a design space. Design space is part of the US Food and Drug Administration's quality initiative for the 21st century, which seeks to move toward a new paradigm for pharmaceutical assessment as outlined in the International Conference on Harmonization's quality guidelines Q8, Q9, and Q10. The statistics required for design-space development play an important role in ensuring the robustness of this approach.

This article provides concise answers to frequently asked questions (FAQs) related to the statistical aspects of determining a design space as part of quality-by-design (QbD) initiatives. These FAQs reflect the experiences of a diverse group of statisticians who have worked closely with process engineers and scientists in the chemical and pharmaceutical development disciplines, grappling with issues related to the establishment of a design space from a scientific, engineering, and risk-based perspective. Questions 1–7 appeared in Part I of this series (1). Questions 8–22 appeared in Part II (2). The answers provided herein, to Questions 23–29, constitute basic information regarding statistical considerations and concepts to consider when finalizing a design space and will be beneficial to a scientist working to develop and implement a design space in collaboration with a statistician.

Presenting a design space

This section reviews the presentation of the design space, including tabular display of summary information and graphical displays such as contour plots, three-dimensional surface plots, overlay plots, and desirability plots.
The authors discuss presenting the design space either as a system of multifactor equations or as a multidimensional rectangular region. Traditionally, one would evaluate the design space before finalizing the presentation. In this article, however, the presentation of the design space is discussed first in order to explain the graphics used in the evaluation stage.

Q23: How can a design space be presented?
A: Some effective graphical displays include contour plots, three-dimensional surface plots, overlay plots, and desirability plots. Each graph has strengths and weaknesses, and multiple graphs or graph types may be needed to clearly display the design space.
When there is more than one quality characteristic in the design space, the use of overlay plots is helpful. Question 21 and Figure 8 in Part II of this article series provide an example of an overlay plot (2). Multiple response optimization techniques can also be used to construct a design space for multiple independent or nearly independent responses. Each response is modeled separately and the predictions from the models are used to create an index (called a desirability function) that indicates whether the responses are within their required bounds. This index is formed by creating functions for each response that indicate whether the response should be maximized, minimized, or be near a target value. The individual response functions are combined into an overall index usually using the geometric mean of the individual response functions.
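The geometric-mean desirability index described above can be sketched in a few lines of code. The piecewise-linear individual functions and the targets and limits below are illustrative assumptions, not the fitted models or specifications from this article's example:

```python
import math

def d_smaller_is_better(y, target, upper):
    """Desirability for a response to minimize: 1 at/below target, 0 at/above upper."""
    if y <= target:
        return 1.0
    if y >= upper:
        return 0.0
    return (upper - y) / (upper - target)

def d_larger_is_better(y, lower, target):
    """Desirability for a response to maximize: 0 at/below lower, 1 at/above target."""
    if y >= target:
        return 1.0
    if y <= lower:
        return 0.0
    return (y - lower) / (target - lower)

def overall_desirability(ds):
    """Geometric mean of individual desirabilities; zero if any response is unacceptable."""
    if any(d == 0.0 for d in ds):
        return 0.0
    return math.exp(sum(math.log(d) for d in ds) / len(ds))

# Hypothetical case: degradate 0.6% (minimize; target 0.3%, limit 1.0%)
# and yield 92% (maximize; lower bound 85%, target 95%).
d1 = d_smaller_is_better(0.6, 0.3, 1.0)
d2 = d_larger_is_better(92.0, 85.0, 95.0)
print(f"individual: {d1:.3f}, {d2:.3f}; overall: {overall_desirability([d1, d2]):.3f}")
```

Because the geometric mean is zero whenever any individual desirability is zero, a single out-of-bounds response rules a point out of the design space regardless of how well the other responses perform.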
Q24: Why are some design spaces multidimensional rectangles and others not?
A: In most cases, some feasible areas of operation will be excluded if the design space is specified by several ranges for individual factors. This approach results in a multidimensional rectangle that will not be equal to the nonrectangular design space. Defining the design space as functional equations with restrictions, as shown in Table III, enables one to take advantage of the largest possible design-space region. The yellow region in Figure 12 is the space defined by equations with restrictions or specifications on the quality characteristics. The blue and black rectangles can be used as design-space representations, but neither one provides the largest design space possible. For ease of use in manufacturing, it may be practical to use a multidimensional rectangle within the design space as an operating region.

Evaluating a design space

This section addresses variability within the design space and its implications for the probability of passing specifications, in particular when the process operates toward the edge of the design space. Alternative methods to specify the design space to account for this variability are discussed using the example data provided in previous sections of this article (1, 2). Suggestions on the placement of the normal operating region (NOR) and confirmation of the design space are provided as well.

Q25: How do I take into account uncertainty in defining the design space for one factor and one response?
A: Figure 13 illustrates an approach that makes use of a statistical interval to protect against uncertainty. In the region described by the striped rectangle in Figure 13, the probability of passing the specification increases from 50%, at B = 0.49, to a higher probability, at B = –0.08. Thus, a range for Factor B that protects against the uncertainty and provides higher assurance of the degradate being less than or equal to 1.00% is –1 to –0.08. This range corresponds to the solid green region in Figure 13. The width of the interval and the increased probability will change based on the interval that is selected. There are multiple ways to establish intervals that protect against uncertainty. Question 26b provides more details on this increased assurance of quality.

Q26a: How much confidence do I have that a batch will meet specification acceptance criteria?

A: A design space is determined based on the knowledge gained throughout the development process. The goal of defining a design space is to demonstrate a region for the relevant parameters such that all specification acceptance limits are met. The size of the design space is represented by a region that is defined by parameter boundaries. Those parameter boundaries are determined by the results of multifactor experiments that demonstrate where a process can operate to generate a drug product of acceptable quality. Parameter boundaries of a design space do not necessarily represent the edge of failure (i.e., failure to meet specification limits). Frequently, those boundaries simply reflect the region that has been systematically evaluated. Different approaches may be used to define the size of the design space. The approaches depend on the type of statistical design that was used; the accuracy of the model used to define the operating ranges; whether the boundaries represent limits or edges of failure; and the magnitude of other sources of variability, such as analytical variance.
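The idea that a boundary placed where the mean prediction equals the specification limit gives only about a 50% chance of passing can be sketched numerically. This assumes, purely for illustration, a normal predictive distribution at the operating point; the predicted means, standard deviation, and 1.00% limit below are hypothetical, not the article's fitted values:

```python
from statistics import NormalDist

def prob_within_spec(pred_mean, pred_sd, upper_spec):
    """P(response <= upper_spec) under an assumed normal predictive distribution."""
    return NormalDist(mu=pred_mean, sigma=pred_sd).cdf(upper_spec)

# When the predicted mean sits exactly on the 1.00% limit, half of future
# batches would be expected to fail the specification.
print(round(prob_within_spec(1.00, 0.10, 1.00), 2))  # 0.5

# Moving the boundary inward so the prediction is 0.85% raises the
# probability of passing markedly (here to about 93%).
print(round(prob_within_spec(0.85, 0.10, 1.00), 2))
```

The same calculation, run along a candidate boundary, shows how quickly assurance of quality improves as the boundary is pulled away from the edge of failure.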
Each approach provides a certain level of confidence that future batches will achieve acceptable quality or, conversely, a certain level of risk that they will not. The approaches include using a statistical model based on regression, using an interval-based approach, or using mechanistic models.

Q26b: Which statistical approaches ensure higher confidence?
A: These intervals can be thought of as providing some "buffer" around the region that is based on mean results. Although an interval will decrease the size of the acceptable mean-response region, there is greater confidence that future batches within the reduced region will meet the specifications (providing the assurance required for a design space), especially if the mean response is close to the specification. Use of intervals may not reduce the acceptable mean-response region on all boundaries. There may be cases within the experimental region, for example, when the responses are not close to their specifications. In such cases, the predicted response at the boundary of the region where the experiments were conducted may be well within specification, and using the predicted response may be appropriate. For example, if the degradate in the example was never greater than 0.3% and the specification was 1.0%, then the use of intervals is unlikely to reduce the region. The use of an interval approach is a risk-based decision. Proper specification setting and use of control strategies should also be used to increase confidence in the design space, and the strategy employed should fit the entire quality system.

Q26c: Are there any other considerations when using an interval approach?

A: The interval approach incorporates the uncertainty in the measurements and the number of batches used in the experiments. As discussed in Question 7 (see Part I of this article series (1)), increasing the number of batches through additional design points or replicating the same design point increases the power of the analysis. If a small experiment is used to define the acceptable mean-response region, then the predicted values will not reflect the small sample size. The interval approach, however, does reflect the small sample size, resulting in a smaller region. In a similar way, if the variation in the data is large, then the interval approach will reduce the size of the region.
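How an interval reflects sample size and residual variation can be sketched with a one-sided prediction interval from a simple linear regression. The degradate values, the coded Factor B settings, and the supplied t critical value below are illustrative assumptions, not the article's data:

```python
import math

def fit_and_upper_pi(xs, ys, x0, t_crit):
    """Fit y = b0 + b1*x by least squares and return (prediction, upper
    prediction bound) at x0: yhat + t * s * sqrt(1 + 1/n + (x0-xbar)^2/Sxx)."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    b1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    b0 = ybar - b1 * xbar
    sse = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
    s = math.sqrt(sse / (n - 2))  # residual standard deviation
    yhat = b0 + b1 * x0
    half = t_crit * s * math.sqrt(1 + 1 / n + (x0 - xbar) ** 2 / sxx)
    return yhat, yhat + half

# Hypothetical degradate (%) at five coded settings of Factor B;
# t_crit = 2.353 is the one-sided 95% t critical value for 3 df.
xs = [-1.0, -0.5, 0.0, 0.5, 1.0]
ys = [0.45, 0.62, 0.70, 0.86, 0.97]
yhat, upper = fit_and_upper_pi(xs, ys, x0=1.0, t_crit=2.353)
print(f"predicted {yhat:.3f}%, upper prediction bound {upper:.3f}%")
```

Here the mean prediction at B = 1.0 falls below a 1.00% limit, but the upper prediction bound does not, so an interval-based design space would exclude that setting even though the mean-response region includes it. With more batches (larger n) or smaller residual variation, the bound tightens toward the prediction and the excluded band shrinks.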
A large difference between a region based on the acceptable mean response and one based on intervals indicates uncertainty in the mean-response region. Often, an assumption in the statistical analysis is that the variability of the response is similar throughout the experimental region. Where this assumption does not hold, replication of batches near the boundary of the design space may be needed so that the variability can be modeled explicitly and confidence increased that future batches will meet specifications. Prior knowledge from other products or other scales may be incorporated into the estimates to increase confidence.

Q27: Where should the NOR be inside the design space? How close can the NOR be to the edge of the design space?

A: Once the design space is established, there is often a desire to map out the area where one would operate routinely. Typically, the NOR is based on target settings that take into account variability in process parameters. The target settings could be based on the optimality of quality, yield, throughput, cycle time, cost, and so forth. The NOR around this target could be set as a function of equipment and control-system variability. However, how close the NOR can be to the edge of the design space depends on how the design space was developed. For example:
Q28: I didn't run experiments along my design-space boundary. How do I demonstrate that the boundary is acceptable?

A: The design space will only be as good as the mathematical or scientific models used to develop it. These models can be used to produce predictions, with uncertainty bands, at points of interest along the edge of the design space, which is contained within the experimental region. If these values are well within the specifications and there is significant process understanding in the models, then the prediction may be sufficient.

Q29: If I use production-size batches to confirm my design space, how should I choose the number of batches to run, and what strategy should I apply to select the best points?

A: There is no single recipe for choosing the points to run in order to verify a design space developed at subscale. Several options are provided in the answer to Question 17 (see Part II of this article series (2)). Briefly, using either mechanistic or empirical models along with performing replicates could provide some idea of the average response along with an estimate of the magnitude of the variability. Alternatively, using existing models and running a few points at the most extreme predicted values may be a reasonable approach if the design space truly provides assurance that the critical-quality-attribute requirements will be met. Finally, a highly fractionated factorial (supersaturated) experiment of production-size batches matched to the subscale batches is another way to confirm the design space.

* For continuity throughout the series, figures and tables are numbered in succession. Figure 1 and Table I appeared in Part I of this article series. Figures 2–8 and Table II appeared in Part II.

** "Factor" is synonymous with "x," input, and variable. A process parameter can be a factor, as can an input material. For simplicity and consistency, "factor" is used throughout the paper. "Response" is synonymous with "y" and output.
Here, "response" is either the critical quality attribute (CQA) or the surrogate for the CQA. For consistency, "response" is used throughout the paper.

Acknowledgments: The authors wish to thank Raymond Buck, statistical consultant; Rick Burdick, Amgen; Dave Christopher, Schering-Plough; Peter Lindskoug, AstraZeneca; Tim Schofield and Greg Stockdale, GSK; and Ed Warner, Schering-Plough, for their advice and assistance with this article.
Stan Altan is a senior research fellow at Johnson & Johnson Pharmaceutical R&D in Raritan, NJ. James Bergum is associate director of nonclinical biostatistics at Bristol-Myers Squibb Company in New Brunswick, NJ. Lori Pfahler is associate director, and Edith Senderak is associate director, scientific staff, both at Merck and Co. in West Point, PA. Shanthi Sethuraman is director of chemical product R&D at Lilly Research Laboratories in Indianapolis. Kim Erland Vukovinsky* is director of nonclinical statistics at Pfizer, MS 8200-3150, Eastern Point Rd., Groton, CT 06340, tel. 860.715.0916, kim.e.vukovinsky@pfizer.com.

*To whom all correspondence should be addressed.

Submitted: Jan. 12, 2010. Accepted: Jan. 27, 2010.

References

1. S. Altan et al., Pharm. Technol. 34 (7) 66–70 (2010).
2. S. Altan et al., Pharm. Technol. 34 (8) 52–60 (2010).

Additional reading

1. International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use, Q8(R1), Pharmaceutical Development, Step 5, November 2005 (core) and Annex to the Core Guideline, Step 5, November 2008.
2. International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use, Q9, Quality Risk Management, Step 4, November 2005.
3. International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use, Q10, Pharmaceutical Quality System, Step 5, June 2008.
4. C. Potter et al., "A Guide to EFPIA's Mock P.2 Document," Pharm. Technol. (2006).
5. M. Glodek et al., "Process Robustness: A PQRI White Paper," Pharmaceutical Engineering, November/December 2006.
6. G.E.P. Box, W.G. Hunter, and J.S. Hunter, Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building (John Wiley and Sons, 1978).
7. D.C. Montgomery, Design and Analysis of Experiments (John Wiley and Sons, 2001).
8. G.E.P. Box and N.R. Draper, Evolutionary Operation: A Statistical Method for Process Improvement (John Wiley and Sons, 1969).
9. D.R. Cox, Planning of Experiments (John Wiley and Sons, 1992).
10. J. Cornell, Experiments with Mixtures: Designs, Models, and the Analysis of Mixture Data, 3rd ed. (John Wiley and Sons, 2002).
11. A.J. Duncan, Quality Control and Industrial Statistics (Richard D. Irwin, Homewood, IL, 1974).
12. R.H. Myers and D.C. Montgomery, Response Surface Methodology: Process and Product Optimization Using Designed Experiments (John Wiley and Sons, 2002).
13. D.C. Montgomery, Introduction to Statistical Quality Control, 4th ed. (John Wiley and Sons, 2001).
14. E. del Castillo, Process Optimization: A Statistical Approach (Springer, New York, 2007).
15. A. Khuri and J.A. Cornell, Response Surfaces, 2nd ed. (Marcel Dekker, New York, 1996).
16. J.F. MacGregor and M.-J. Bruwer, "A Framework for the Development of Design and Control Spaces," Journal of Pharmaceutical Innovation 3, 15–22 (2008).
17. G. Miró-Quesada, E. del Castillo, and J.J. Peterson, "A Bayesian Approach for Multiple Response Surface Optimization in the Presence of Noise Variables," Journal of Applied Statistics 31, 251–270 (2004).
18. J.J. Peterson, "A Posterior Predictive Approach to Multiple Response Surface Optimization," Journal of Quality Technology 36, 139–153 (2004).
19. J.J. Peterson, "A Bayesian Approach to the ICH Q8 Definition of Design Space," Journal of Biopharmaceutical Statistics 18, 958–974 (2008).

