Q6a: How should the ranges for each factor be selected?
A: The ranges should be selected in relation to:
- Feasibility at the commercial scale
- Basic knowledge of the product
- Where feasible, the capability of laboratory simulation and modeling.
There may be other constraints related to the practical or physical limitations of the equipment. If the ranges are too wide,
the observed effect may be too large, thereby swamping the effects of other factors. Such an effect also could provide little
information about the region of interest or fall outside a linear region, thereby making modeling more complex or even leading
to immeasurable responses. If the ranges are too narrow, it may not be possible to explore the region of interest sufficiently,
or, the effect may be too small relative to the measurement variability to provide good model estimates, or in some cases,
the effect may not be observed. In addition, there may be some factors that cannot be simulated at a laboratory scale or do
not scale up well (e.g., mixing, feed rates). Those factors need to be identified and other possible ways to account for their
effect on the responses should be considered.
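The trade-off between range width and measurement variability can be seen in a small simulation. The following is a minimal sketch, not taken from the article: it assumes a single factor with a linear true effect (slope, noise level, and ranges are all illustrative values) and compares the estimated effect when the factor range is wide versus narrow.

```python
import random

def estimate_effect(low, high, n_reps=10, slope=2.0, noise_sd=1.0, seed=0):
    """Simulate a two-level experiment on one factor and return the
    estimated effect (mean response at the high level minus mean at the low level)."""
    rng = random.Random(seed)
    y_low = [slope * low + rng.gauss(0, noise_sd) for _ in range(n_reps)]
    y_high = [slope * high + rng.gauss(0, noise_sd) for _ in range(n_reps)]
    return sum(y_high) / n_reps - sum(y_low) / n_reps

# Wide range: true effect = 2.0 * (1.0 - 0.0) = 2.0, large relative to the noise
wide = estimate_effect(0.0, 1.0)

# Narrow range: true effect = 2.0 * (0.525 - 0.475) = 0.1,
# comparable to the standard error of the estimate and easily masked by noise
narrow = estimate_effect(0.475, 0.525)

print(wide, narrow)
```

With the same noise realization, the narrow-range effect estimate is smaller by a factor of 20, which illustrates why an overly narrow range can leave a real effect undetectable against measurement variability.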
Q6b: Would it help to carry out preliminary runs before conducting a DoE?
A: Preliminary experiments might be useful for establishing experimental ranges. Also, the design may contain certain combinations
of factors that are not feasible to execute. The design should be thoroughly reviewed by a multidisciplinary team before execution.
It is good practice to explore the factor-level combinations using a risk-based approach before embarking on the traditional
random order used for executing the design. Sometimes, performing one or two experimental runs representing the extremes of
the design can provide key information not only about the process but also about the appropriate use of resources. If the
process does not perform acceptably at these settings, it may be prudent to change some of the factor ranges and redesign the study
to ensure that informative response data can be obtained from each trial.
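Picking out the extreme runs of a design before randomized execution can be sketched in a few lines. The factor names and ranges below are hypothetical placeholders, not values from the article; the sketch assumes a simple two-level full factorial.

```python
from itertools import product

# Hypothetical factors and (low, high) ranges -- illustrative only
factors = {
    "temperature": (40, 80),
    "mixing_time": (5, 30),
    "water_amount": (1.0, 2.0),
}

# Full 2^3 factorial: every combination of low/high levels
design = list(product(*factors.values()))

# Two extreme corner runs worth trying before randomizing the full design:
all_low = design[0]    # every factor at its low level
all_high = design[-1]  # every factor at its high level

print(len(design), all_low, all_high)
```

If either corner run fails to yield a measurable response, the ranges can be revised before committing resources to the full randomized design.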
Q7: Is it better to run one large DoE study or several small DoE studies to determine a design space?
A: Running several small experiments versus one large experimental study depends on, but is not limited to, the following:
- The purpose of the study
- The availability of raw materials and other resources
- The amount of available prior information (data mining, historical information, or one-factor-at-a-time studies) or basic knowledge
- The amount of time it takes to perform the study (including set-up and runs).
Manufacturing processes generally consist of unit operations, each of which contains several factors to evaluate. Each operation
could be analyzed separately, or a single design could be used to study several unit operations (see Question 15 in Part II
of this series).
There are numerous pros and cons to consider when deciding which approach to take. For example, one advantage of conducting
a larger experiment is that interactions between more factors can be evaluated. It is possible to create a design in which
factors that are expected to interact with one another are included in the same DoE and factors that do not interact with
each other are included in another smaller design. On the other hand, if something goes wrong during the experiment, a smaller
study approach could save resources. Smaller designs are also useful when final ranges for the factors have not been determined.
If the factor ranges are too wide and a large experiment is performed, there is an increased risk that many of the experiments
could fail. However, running a small experiment usually requires the factors not included in the design to be fixed at a specific
level. Therefore, any interaction between these factors and those in the experiment cannot be examined.
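The run-count and estimability trade-off described above can be made concrete with coded (-1/+1) levels. This is a minimal sketch under assumed numbers (five factors, split three and two), not a prescription from the article.

```python
from itertools import product

def full_factorial(n_factors):
    """All 2^n combinations of coded low (-1) / high (+1) levels."""
    return list(product((-1, 1), repeat=n_factors))

# One large study: all 5 factors in a single design
big = full_factorial(5)      # 32 runs; all interactions among the 5 factors are estimable

# Two small studies: factors expected to interact are grouped together
small_a = full_factorial(3)  # 8 runs; the other 2 factors held at fixed levels
small_b = full_factorial(2)  # 4 runs; the other 3 factors held at fixed levels

# Interactions between a factor in small_a and a factor in small_b are not
# estimable, because each small study fixes the other group's factors.
print(len(big), len(small_a) + len(small_b))
```

The two small studies need only 12 runs versus 32, but they purchase that economy by giving up any cross-group interaction estimates.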
*Factor is synonymous with "x," input, and variable. A process parameter can be a factor, as can an input material.
For simplicity and consistency, factor will be used throughout the paper.
†Response is synonymous with "y" and output. Here, response is either the critical quality attribute
(CQA) or the surrogate for the CQA. For consistency, response will be used throughout the paper.
The authors wish to thank Raymond Buck, statistical consultant; Rick Burdick, Amgen; Dave Christopher, Schering-Plough; Peter
Lindskoug, AstraZeneca; Tim Schofield and Greg Stockdale, GSK; and Ed Warner, Schering-Plough, for their advice and assistance
with this article.
Stan Altan is a senior research fellow at Johnson & Johnson Pharmaceutical R&D in Raritan, NJ. James Bergum is associate director of nonclinical biostatistics at Bristol-Myers Squibb Company in New Brunswick, NJ. Lori Pfahler is associate director, and Edith Senderak is associate director/scientific staff, both at Merck & Co. in West Point, PA. Shanthi Sethuraman is director of chemical product research and development at Lilly Research Laboratories in Indianapolis. Kim Erland Vukovinsky* is director of nonclinical statistics at Pfizer, MS 8200-3150, Eastern Point Rd., Groton, CT 06340, tel. 860.715.0916, email@example.com. At the time of this writing, all authors were members of the Pharmaceutical Research and Manufacturers of America (PhRMA)
Chemistry, Manufacturing, and Controls Statistics Experts Team (SET).
*To whom all correspondence should be addressed.
Submitted: Jan. 12, 2010. Accepted: Jan. 27, 2010.