Model-Based Simulation: Getting Comfortable with Randomness

Twenty-first century drug development challenges sponsors and contract partners to collaborate more closely, using tools such as model-based simulation.

Monte Carlo and other advanced modeling techniques are being used not only in process and product optimization but also in precision medicine. As the pharmaceutical industry faces increasing competitive pressure on its pipeline, new approaches are needed to take random factors into account and to make development more systematic. The traditional "one-dose-fits-all" approach to treating patients works only for certain diseases.

How can the pharmaceutical industry reduce the cost of healthcare while still developing innovative treatments that improve patients' quality of life? This is a question that pharmaceutical manufacturers and their contract development and manufacturing organization (CDMO) partners must ponder and resolve.

While new technologies such as 3-D printing and the field of personalized medicine promise to transform drug development in the future, modeling and computer simulation are already playing a prominent role in traditional drug development. In the future, such methods are likely to be applied routinely to personalized medicine and even to new areas such as digital health. Both pharmaceutical company sponsors and their contract partners will need to become more comfortable collaborating on projects that use this basic tool. This article touches briefly on applications of model-based simulation, not only in small-molecule drug development but also in personalized medicine, to underscore its potential importance.

The Monte Carlo method

Computer simulations represent real systems, using variables for the key numerical measurements that serve as the system's inputs and outputs. Formulas, programming statements, or other protocols then express the mathematical relationships between those inputs and outputs. When the simulation deals with uncertainty, the model includes uncertain variables (variables whose values are not under one's control) as well as decision variables, or parameters, that one can control. The uncertain variables are represented by random numbers.
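
To make this structure concrete, the short Python sketch below models a single output as a function of one decision variable and one uncertain variable. All names, coefficients, and distributions are hypothetical, chosen only to illustrate the idea described above.

```python
import random

def tablet_potency(blend_time_min: float, api_particle_size_um: float) -> float:
    """Toy model of one output (potency, % of label claim).

    blend_time_min is a decision variable the formulator controls;
    api_particle_size_um is an uncertain variable represented by a random draw.
    The coefficients are illustrative only.
    """
    return 100.0 - 0.05 * api_particle_size_um + 0.2 * blend_time_min

# The uncertain variable is represented by a random number: here, API
# particle size drawn from a normal distribution (mean 50 um, sd 10 um)
size_draw = random.gauss(50.0, 10.0)
print(tablet_potency(blend_time_min=10.0, api_particle_size_um=size_draw))
```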

The Monte Carlo method is one of the more commonly used methods for working with systems that involve uncertainty. First developed by scientists in the 1940s while they were working on the atom bomb, it is named after the Monaco resort town renowned for its casinos. The method has been used to model a variety of physical and conceptual systems and is finding increased use in pharma to optimize process and product development.

Monte Carlo simulation performs risk analysis by building models of possible results, substituting a range of values (a probability distribution) for any factor that has inherent uncertainty. It then repeats the calculation many times, each time using a different set of random values drawn from the probability distributions.

Depending upon the number of uncertainties and the ranges specified for them, a Monte Carlo simulation may involve thousands or tens of thousands of recalculations before it is complete, and it produces a distribution of possible outcome values rather than a single estimate. Because each uncertain variable is described by a probability distribution, it can take on a realistic range of values, which makes probability distributions a much more realistic way for the pharmaceutical industry to describe uncertainty in variables for risk analysis.
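
A minimal sketch of such a simulation in Python appears below. The response model, distributions, and acceptance limits are invented for illustration; a real study would use an empirical equation fitted from process data.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_trials = 10_000  # "thousands or tens of thousands of recalculations"

# Probability distributions substituted for the uncertain factors
# (illustrative shapes and parameters, not from any real process)
api_size = rng.normal(50.0, 10.0, n_trials)         # um
moisture = rng.triangular(1.0, 2.0, 4.0, n_trials)  # % w/w

# Toy response model evaluated once per trial
potency = 100.0 - 0.05 * api_size - 1.5 * (moisture - 2.0)

# The output is a distribution of possible outcomes, not a single value
print(f"mean potency: {potency.mean():.2f}% of label claim")
print(f"5th-95th percentile: {np.percentile(potency, [5, 95]).round(2)}")
print(f"P(95% <= potency <= 105%) = {np.mean((potency >= 95) & (potency <= 105)):.3f}")
```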

The method has been used with process analytical technology (PAT) and within the framework of pharmaceutical quality by design (QbD) to study content and blend uniformity, a crucial quality-control measurement that shows variation in the amount of API in individual units within the same batch. Using a screening model for design of experiments (DoE), these studies have correlated measurements that might have seemed unrelated, such as API particle size, excipient particle size, and interactions among other raw-material parameters such as moisture content (1). In fact, advanced modeling promises to become a crucial part of drug development in the future (Figure 1).
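
As a rough illustration of the screening step, the sketch below fits a main-effects model to a hypothetical eight-run data set, linking blend uniformity to the raw-material attributes named above. Every number is invented; the point is how a screening fit surfaces correlations among seemingly unrelated measurements.

```python
import numpy as np

# Hypothetical screening data: one row per DoE run
# columns: API particle size (um), excipient particle size (um), moisture (% w/w)
X = np.array([
    [40, 100, 1.5],
    [40, 100, 3.0],
    [40, 200, 1.5],
    [40, 200, 3.0],
    [80, 100, 1.5],
    [80, 100, 3.0],
    [80, 200, 1.5],
    [80, 200, 3.0],
])
# Measured blend uniformity (%RSD of API content) per run -- invented values
y = np.array([2.1, 2.9, 2.6, 3.8, 3.0, 4.1, 3.7, 5.2])

# Fit a main-effects screening model: y ~ b0 + b1*x1 + b2*x2 + b3*x3
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
for name, b in zip(["intercept", "API size", "excipient size", "moisture"], coef):
    print(f"{name:>15}: {b:+.4f}")
# Large-magnitude coefficients flag the raw-material attributes most
# strongly correlated with blend-uniformity variation.
```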

Patheon Pharmaceutical's Quality Analytics group, a division of Thermo Fisher Scientific, is working with Monte Carlo methods on small-molecule drug challenges, such as optimizing excipient particle size and other product parameters based on process parameters such as roller force, mill speed, and screen size (2). It is also applying the concept to biologics.

The method is expected to help the industry reach the goal of real-time release testing for solid dosage forms.  Currently, the company is collaborating with Rutgers and other partners on projects in this area.  Patheon has also introduced QbD training, including training in computer simulation, at nine sites and has completed five client projects using the concepts.  In addition, nine other projects are now underway, and another four are being negotiated with clients (2).

In one such project, Monte Carlo modeling was used to optimize the values of input factors to meet target ranges required for response factors. Software tools included Minitab 17 for DoE analysis, Crystal Ball 11.1.2.3 for Monte Carlo simulation and sensitivity analysis, and Microsoft Solver 2013 for optimization.

First, risk assessment methods were applied to all potential input variables, and three were found to be critical to the desired response variables: roller speed, roller force, and screen size. The following particle sizes were determined to be optimum for each of those variables:

  • Roller speed: <125 μm

  • Roller force: >500 μm

  • Screen size: >750 μm.

The common pre-blend batch was roller compacted based on a 2³ full factorial design, involving three independent variables and two levels tested for each variable, resulting in 11 runs.
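
For readers unfamiliar with the notation, a 2³ design enumerates every combination of low (-1) and high (+1) levels for the three factors, as the sketch below shows. Reaching the 11 runs reported above by adding three center points is our assumption, not a detail given in the source.

```python
from itertools import product

# 2^3 full factorial: every low/high combination of the three factors
# (roller speed, roller force, screen size), in coded units
factorial_runs = list(product([-1, +1], repeat=3))  # 8 runs

# Assumption: the 11 reported runs = 8 factorial runs + 3 center points
center_points = [(0, 0, 0)] * 3
design = factorial_runs + center_points

for i, (speed, force, screen) in enumerate(design, start=1):
    print(f"run {i:2d}: speed={speed:+d}, force={force:+d}, screen={screen:+d}")
```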

All the DoE batches were blended and subjected to physical tests, including particle size distribution (PSD), bulk and tap density, flow, and moisture, and to chemical tests for blend uniformity.

Physical tests were then run on compacted milled granules and the final blend, and chemical testing was run on the final blend. Nuclear magnetic resonance (NMR) spectroscopy and particle size analysis were also used to test for amorphization in the final blend.

The blend discharged easily from the V-shell blender for the final blends manufactured with a 1.0-mm screen, 6-kN (kilonewton) force, and 5-rpm speed, and for those manufactured with a 1.5-mm screen, 9.0-kN force, and 3- or 7-rpm speed. For all other batches, the blend did not discharge easily; the blender was partially blocked with particles, leaving only a narrow opening, or "rat hole".

An empirical equation was extracted from the DoE analysis, and Monte Carlo simulation was applied to that equation, as shown in Figure 2. A sensitivity analysis, shown in Figure 3, was performed to convert the predictive model into a forecasting model. Microsoft Solver was then used to optimize the target values of the response factors.
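
The sketch below mirrors that three-step workflow with open-source stand-ins (the study itself used Crystal Ball and Microsoft Solver). The empirical equation, its coefficients, and the target value are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical empirical equation of the kind a DoE analysis yields
# (coded units; coefficients invented, not from the cited study)
def response(speed, force, screen):
    return 3.5 + 0.6 * speed - 0.9 * force + 0.4 * screen + 0.3 * speed * force

# Step 1 -- Monte Carlo: propagate input uncertainty through the equation
rng = np.random.default_rng(seed=7)
n = 10_000
speed, force, screen = (rng.normal(0.0, 0.3, n) for _ in range(3))
y = response(speed, force, screen)

# Step 2 -- sensitivity analysis: rank inputs by correlation with the output
for name, x in [("speed", speed), ("force", force), ("screen", screen)]:
    print(f"{name:>6}: r = {np.corrcoef(x, y)[0, 1]:+.2f}")

# Step 3 -- optimization (standing in for Microsoft Solver): find coded
# settings whose predicted response hits a hypothetical target of 3.0
target = 3.0
res = minimize(lambda v: (response(*v) - target) ** 2, x0=[0.0, 0.0, 0.0])
print("optimal coded settings:", np.round(res.x, 3))
```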

Although this example focuses on small-molecule drug manufacturing issues, it is not unreasonable to expect Monte Carlo and other methods of model-based simulation to be used in very different areas of pharmaceutical development in the future.  One such potential area is in personalized medicine (3).

Personalized, or precision, medicine separates patients into different groups, with medical decisions, practices, interventions, and/or products tailored to the individual patient based on his or her predicted response or risk of disease.


Managing risk in random systems

Computer simulation modeling has created an opportunity for significant progress in clinical medicine. Researchers have discovered thousands of genes that harbor variations contributing to human illness, identified genetic variability in patients' responses to dozens of treatments, and begun to target the molecular causes of some diseases. Additionally, scientists are developing and using predictive tests based on genetics or other molecular mechanisms to better predict patients' responses to targeted drug therapy.

The challenge now is to deliver these benefits to patients. Doing so will require optimizing the development of these new therapies and developing better prescribing methods to steer patients to the right drug at the right dose at the right time. Before the next step can be taken, however, many obstacles remain. These include scientific questions, such as determining which genetic markers have the most clinical significance, limiting the off-target effects of gene-based therapies, and conducting clinical studies to identify genetic variants that correlate with drug response.

Risk-based approaches are needed for appropriate review of diagnostic testing, to assess tests' validity and clinical utility more accurately, and to make information about tests more readily available. Computer simulation modeling could play a vital role in developing such risk-based approaches.

Patients should be confident that diagnostic tests reliably give correct results, especially when test results are used in making major medical decisions. FDA has long taken a risk-based approach to the oversight of diagnostic tests, historically focusing on test kits that are broadly marketed to laboratories or to the public (e.g., pregnancy tests or blood glucose tests); such kits are sold only if FDA has determined that they accurately provide clinically significant information.

But recently, many laboratories have begun performing and broadly marketing laboratory-developed tests, including complicated genetic tests. The results of these tests can be quite challenging to interpret, resulting in delayed management or inaccurate drug dosing. Because clinicians may order a genetic test only once, getting the results right the first time is crucial.

Applications in personalized medicine

Hundreds of genome-wide association studies (GWAS) have been published over the past few years, identifying hundreds of thousands of common variants that confer only a very small to modest alteration in disease risk, most commonly for complex diseases.

These findings have led to the development of genotyping panels that test for many of these variants at the same time. The single nucleotide polymorphism (SNP) content and the diseases covered vary from panel to panel, although there is overlap among the most commonly studied variants. The predictive accuracy of these panels is highly variable, and their clinical utility is not yet well established. This represents another area where computer simulation could help.
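
As one deliberately simplified example of how simulation might probe such panels, the sketch below aggregates many small, invented per-variant effects into a distribution of predicted disease risk across a simulated population. Every number here is hypothetical; the aim is only to show how many small effects combine into a widely varying risk estimate.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical panel: many common variants, each conferring only a small
# change in disease risk (log odds ratios and allele frequencies invented)
n_variants = 200
log_or = rng.normal(0.0, 0.05, n_variants)
allele_freq = rng.uniform(0.05, 0.5, n_variants)

# Simulate genotypes: 0, 1, or 2 risk alleles per variant per person
n_people = 5_000
genotypes = rng.binomial(2, allele_freq, size=(n_people, n_variants))

# Aggregate each person's risk score; a 5% baseline risk is assumed
score = genotypes @ log_or
baseline_log_odds = np.log(0.05 / 0.95)
risk = 1.0 / (1.0 + np.exp(-(baseline_log_odds + score)))

# The spread of predicted risks illustrates why the clinical utility
# of such panels is hard to establish
print(f"median predicted risk: {np.median(risk):.3f}")
print(f"5th-95th percentile: {np.percentile(risk, [5, 95]).round(3)}")
```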

Applying Monte Carlo and other modeling methods to personalized medicine is still in its very early stages, but ongoing research is proving the validity of these methods. Although the variables being studied differ greatly from those used in small-molecule pharmaceutical process optimization, the general statistical concepts are the same. Pharmaceutical manufacturers and CDMOs should therefore explore this area in greater depth, begin exposing staff to the concepts, and eventually train and recruit experts with the right background.

A number of studies in Europe (4) have been applying Monte Carlo and other methods for predictive modeling, using programs such as ModCell to determine the potential impact of specific therapies on individual patients.

For an example of what computer-based modeling might do for personalized medicine, consider research conducted at the University of Iowa. Researchers at the university teamed up with private companies including CellWorks Group, a company working on the development of virtual tumors for personalized cancer treatment, to use modeling to help predict how individual patients might respond to different drug treatments (5).

The researchers used computer simulation to create "virtual tumors" based on a patient's own cancer cells and specific genes. These virtual tumors were then used to see how effectively each drug treatment could address cancer-cell-induced immune suppression, allowing researchers to zero in on the optimal treatment types for each patient’s cancer.

After simulations were completed, researchers replicated the process in the lab by growing live cancer cells with the same genetic makeup as the virtual cells. They then tested various immunotherapies on the cells and monitored responses. If results from a specific treatment were reproducible from simulation to lab, they determined that the treatment should be effective for that particular type of patient.

The simulation and lab models have shown promise for the screening of combination treatments, which could involve multiple immunotherapeutic agents or a combination of immunotherapy and chemotherapy. Efforts such as these suggest that model-based simulation will play a pervasive and important role in the development of future therapies.  If a company and its contract partners have not been exposed to these concepts, it is time to evaluate them for future projects.

Acknowledgements

Sincere thanks to Michael Goedecke, PhD, Product Development Specialist; Jonathan Goerk, M.E.M., Real Time Release Process Engineer; Yongzhi Dong, PhD, Sr. PAT Scientist; Rupninder Sandhu, MBBS, PhD, Senior Scientist, Mechanistic Understanding; and Daljinder Bhangoo, BS, Data Analytics Specialist, Global Quality Analytics, at Thermo Fisher Scientific for their review, suggestions, and comments. Thanks also to Andrew McNicoll, VP of Compliance and Quality Systems, and Raul Cardona, Senior VP of Quality, PSG, Thermo Fisher Scientific, for providing all the help and cooperation needed to write this article.

References

1. B. Gujral et al., “Monte Carlo Simulation for Risk Analysis and Pharmaceutical Product Design,” Proceedings of the 2007 Crystal Ball Users Conference, 2007.
2. B. Gujral, “White Paper on QbD at Patheon,” November 2015.
3. K. Feng, “Can Modeling and Simulation Inform Personalized Medicine?” certara.com.
4. L. Ogilvie et al., Cancer Informatics, 14 (Supplement 4), pp. 95-103 (2015).
5. S. Diedrich, “Making Cancer Care Personal,” Iowa Now, December 7, 2015.