Small-Scale Modeling Boosts Downstream Development

PTSM: Pharmaceutical Technology Sourcing and Management, 03-02-2016, Volume 11, Issue 3

Scale-down modeling accelerates the development of downstream biopharma manufacturing processes.

Biopharmaceutical manufacturing involves a series of complex unit operations linked together to provide high-purity biologic actives with specified physicochemical and pharmacokinetic properties. The development of commercial-scale processes that consistently provide high-quality product that meets those specifications requires extensive examination of numerous process variables and potential process variations. Conducting the required number of studies at commercial scale is not practical; scale-down models are therefore employed to determine optimum conditions for downstream separation processes, including chromatography and viral clearance steps. Qualification that such models provide results truly representative of commercial-scale processes becomes necessary as molecules move closer to commercialization.

What counts as a scale-down model?
A scale-down model is a bench-scale process designed to predict the results that will be obtained when the process is run at commercial scale. Unlike initial processes that are used during the discovery and early development phases to produce material for characterization, efficacy, safety, and other studies, scale-down models are intended to aid in optimization and characterization of the process that will be implemented for manufacture of large quantities of a biologic once the product has been approved.

While process simulations would be an ideal surrogate, the complexity and inaccuracy of many theoretical models mean there is often no substitute for a physical scaled-down model, according to Adam J. Meizinger, a process engineer in Genzyme’s Purification, Manufacturing Science, and Technology Laboratory.

Scale-down models are used in the development of nearly every unit operation in the overall biopharmaceutical manufacturing process, including both upstream and downstream steps and separation and filtration steps such as chromatography and viral filtration. A scaled-down chromatography process, for instance, is designed considering the residence time, flow rate, and column bed height of the large-scale process. Additional parameters that are considered include buffer preparation and the feed stream. “The ideal model behaves identically to the scaled-up process such that the results obtained in the lab exactly predict the results that will be obtained in the manufacturing plant,” says Paul Jorjorian, director of global technology transfer for Patheon. In practice, however, no model achieves this ideal; the goal, therefore, is to mimic the commercial-scale process as closely as possible.
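
As a rough illustration of that scaling arithmetic, the sketch below (in Python, with assumed column dimensions; it is not a procedure described by either company quoted here) holds bed height and residence time constant and lets volumetric flow scale with column cross-sectional area:

```python
import math

def scale_down_column(large_diameter_cm, bench_diameter_cm,
                      bed_height_cm, residence_time_min):
    """Bench-scale flow parameters that preserve the bed height, linear
    velocity, and residence time of the large-scale column."""
    def column_volume_l(diameter_cm):
        area_cm2 = math.pi * (diameter_cm / 2) ** 2
        return area_cm2 * bed_height_cm / 1000.0  # cm^3 -> L

    # Holding bed height and residence time constant fixes the linear
    # velocity (cm/h) at both scales.
    linear_velocity_cm_h = bed_height_cm / residence_time_min * 60

    # Volumetric flow then scales with column cross-sectional area.
    bench_flow_ml_min = (math.pi * (bench_diameter_cm / 2) ** 2
                         * linear_velocity_cm_h / 60)

    return {
        "linear_velocity_cm_per_h": linear_velocity_cm_h,
        "large_column_volume_L": column_volume_l(large_diameter_cm),
        "bench_column_volume_L": column_volume_l(bench_diameter_cm),
        "bench_flow_mL_per_min": bench_flow_ml_min,
        "scale_factor": (large_diameter_cm / bench_diameter_cm) ** 2,
    }

# Assumed example: an 80-cm production column scaled down to a 1-cm bench
# column, both packed to a 20-cm bed with a 4-minute residence time.
print(scale_down_column(80.0, 1.0, bed_height_cm=20.0, residence_time_min=4.0))
```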

The impact of QbD and DoE
The quality-by-design (QbD) approach to process development requires characterization of the impacts of different process parameters on product critical quality attributes (CQAs), which in turn necessitates considerable experimentation, according to Meizinger. “While the consideration of up-front risk assessments, access to general scientific understanding, and the use of design-of-experiment (DoE) methodology minimize the required number of runs and maximize the informational output, it is impractical to conduct such characterization experiments at full scale. Qualified scaled-down models are instrumental for conducting such characterization work, as well as for viral clearance studies, continuous process improvement, and for conducting impromptu investigations in support of large-scale operations, such as when deviations occur,” he adds.
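
To give a flavor of how DoE keeps the experimental burden manageable, the sketch below builds a two-level full factorial design with replicated center points for three hypothetical chromatography parameters. The factor names, ranges, and center-point count are assumptions for illustration only; an actual design would follow the risk assessment described above.

```python
from itertools import product

# Hypothetical factors and ranges for a chromatography step (illustrative only).
factors = {
    "load_density_g_per_L": (20.0, 40.0),
    "elution_pH": (5.0, 6.0),
    "conductivity_mS_per_cm": (5.0, 15.0),
}

def full_factorial_with_centers(factors, n_center=3):
    """Two-level full factorial design plus replicated center-point runs."""
    names = list(factors)
    runs = [dict(zip(names, corner))
            for corner in product(*(factors[n] for n in names))]
    center = {n: sum(factors[n]) / 2 for n in names}
    runs.extend(dict(center) for _ in range(n_center))
    return runs

design = full_factorial_with_centers(factors)
print(f"{len(design)} scale-down runs cover all factor combinations plus center points")
for i, run in enumerate(design, 1):
    print(i, run)
```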

Dealing with imperfection
An ideal scaled-down model would correlate exactly with its large-scale counterpart process, but in practice a perfect match between the bench and commercial scales is rarely achieved, and model processes are therefore not perfectly scalable. The causes, according to Jorjorian, may not relate directly to the process, but can be attributed to small, nuanced differences that still have an impact on the results at different scales. “The issues are often unavoidable and thus must be addressed by developing correction factors and/or transfer functions that enable the accurate prediction of the behavior of manufacturing processes,” explains Jorjorian. “It is important, however, to minimize the need for correction factors and transfer functions, and for those that are required, provide thorough justification and clearly demonstrate their suitability,” he adds.
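
As a simple illustration of what such a correction might look like, the sketch below fits a first-order transfer function that maps bench-scale step yield to the corresponding production-scale value using paired historical runs. The data and the linear form are assumptions for illustration; as noted above, any real transfer function would need thorough justification and a demonstration of suitability.

```python
import numpy as np

# Paired step yields (%) from hypothetical bench- and production-scale runs.
bench_yield = np.array([91.0, 88.5, 93.2, 90.1, 89.4])
plant_yield = np.array([88.7, 86.0, 90.9, 87.6, 87.1])

# Fit a first-order transfer function: plant = slope * bench + offset.
slope, offset = np.polyfit(bench_yield, plant_yield, deg=1)

def predict_plant_yield(bench_value):
    """Apply the fitted transfer function to a new bench-scale result."""
    return slope * bench_value + offset

residuals = plant_yield - predict_plant_yield(bench_yield)
print(f"transfer function: plant = {slope:.3f} * bench + {offset:.2f}")
print(f"largest residual: {np.abs(residuals).max():.2f} percentage points")
print(f"predicted plant yield for a 92.0% bench run: {predict_plant_yield(92.0):.1f}%")
```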

Another aspect of scale-down models that can create issues when predicting large-scale behavior is the fact that models are qualified at the center point of the process design space. “This approach can lead to discrepancies in predicted and actual results when the model is applied to the entire design space,” Jorjorian observes. He adds that the timing and planning of modeling runs are also important and can negatively influence results if not handled effectively. For example, it is important to use the same feed used for production runs in scale-down modeling runs; therefore, modeling runs must be scheduled when the feed is available. In many cases, that isn’t possible, and the feed may need to be frozen, which introduces a new variable that can impact the reliability of the model.

Specifically for chromatography operations, some key considerations in developing scaled-down models are minimizing the differences in the extra-column volume, which varies considerably with scale, contending with increasing wall effects inherent in small-diameter chromatography columns, and accounting for differences in detector-specific path lengths and the associated non-linearity of detector output, according to Meizinger. He also notes that some steps (e.g., gas overlays for intermediate holds) may be challenging to implement as part of scaled-down model development, and mixing dynamics and heat transfer often remain different between scales. “The impact of these differences must be carefully assessed for their resulting impact on scaled-down model performance,” he says.
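
A quick calculation shows why extra-column volume is a bigger concern at bench scale: roughly comparable hold-up volumes (tubing, valves, detector cells) represent a much larger fraction of a small column than of a production column. The dimensions and hold-up volumes below are assumed purely for illustration.

```python
import math

def extra_column_fraction(diameter_cm, bed_height_cm, extra_column_ml):
    """Extra-column hold-up volume as a fraction of the packed-bed volume."""
    bed_volume_ml = math.pi * (diameter_cm / 2) ** 2 * bed_height_cm
    return extra_column_ml / bed_volume_ml

# Assumed hold-up volumes: ~5 mL on a bench system, ~500 mL in the plant.
bench = extra_column_fraction(1.0, 20.0, extra_column_ml=5.0)
plant = extra_column_fraction(80.0, 20.0, extra_column_ml=500.0)

print(f"bench extra-column volume: {bench:.0%} of the packed-bed volume")
print(f"plant extra-column volume: {plant:.1%} of the packed-bed volume")
```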

Scale-down modeling of membrane chromatography processes can also be difficult, according to Jorjorian. The reason often relates to differences in the designs of small- and large-scale membranes: large-scale membranes typically have added flutes, pleats, or other features that affect their performance, making it difficult to develop bench-scale processes that reliably predict production-scale outcomes.

Qualification at later stages
There is nothing new about the use of bench-scale process models during the development of biopharmaceutical manufacturing processes. What has become an important trend in process development is the need to demonstrate that scaled-down models truly represent their large-scale counterparts. “It has only recently become standard practice to demonstrate that scaled-down models provide results that are statistically equivalent to those obtained from commercial-scale versions of the same processes,” states Meizinger.

During earlier stages of development, even through Phase I clinical trials, scale-down models of downstream biopharmaceutical processes are generally not formal, qualified models. “For these models, the basis for use of each model and the rationale for why it is expected to be representative of the process at large scale are documented,” notes Jorjorian. As a molecule progresses toward commercialization, however, formal, qualified models are required. The ability of the model to predict results at large scale must be demonstrated by performing at least triplicate runs. The use of any correction factors and/or transfer functions must also be justified and their application demonstrated to provide robust results.

Qualification of most models for chromatographic separations is fairly straightforward, according to Jorjorian. “There is a lot of documentation and analyses to complete, so in essence, qualification of scale-down models is often an exercise in coordination, rather than presenting a substantial technical challenge,” he observes. Meizinger agrees that the use of statistical equivalence for testing quality attributes and process performance indicators between scales is generally not an issue. “Demonstration of both individual unit and linked unit output (recovery, product quality attributes, product purity, chromatographic profiles/transition analysis, etc.) comparability between small and large scales remains both practical and accepted,” he states.
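
One widely used way to demonstrate statistical equivalence between scales is the two one-sided tests (TOST) procedure; the sketch below applies it to hypothetical step-yield data from triplicate bench-scale qualification runs and a handful of production-scale runs, with an assumed acceptance margin. It is an illustrative example, not the specific acceptance criterion used by either company quoted here.

```python
import numpy as np
from scipy import stats

# Hypothetical step yields (%): triplicate bench-scale qualification runs
# versus historical production-scale runs. All values are assumed.
small_scale = np.array([92.1, 91.4, 92.8])
large_scale = np.array([91.0, 92.5, 91.8, 92.2, 91.6])
margin = 3.0  # assumed equivalence acceptance limit, percentage points

def tost_equivalence(a, b, margin, alpha=0.05):
    """Two one-sided t-tests: equivalence is concluded when the (1 - 2*alpha)
    confidence interval for the mean difference lies entirely within +/- margin."""
    diff = a.mean() - b.mean()
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se**4 / ((a.var(ddof=1) / len(a)) ** 2 / (len(a) - 1)
                  + (b.var(ddof=1) / len(b)) ** 2 / (len(b) - 1))
    t_crit = stats.t.ppf(1 - alpha, df)
    lower, upper = diff - t_crit * se, diff + t_crit * se
    return diff, (lower, upper), (-margin < lower and upper < margin)

diff, ci, equivalent = tost_equivalence(small_scale, large_scale, margin)
print(f"mean difference: {diff:.2f}; 90% CI: ({ci[0]:.2f}, {ci[1]:.2f})")
print("equivalent within margin" if equivalent else "equivalence not demonstrated")
```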

Moving to high-throughput models
Qualification of scale-down models can be challenging, however, when very small column sizes are involved. There is significant interest in using high-throughput approaches for scale-down modeling to speed up development times. There are, however, technical issues that arise due to the different behaviors observed in very large and very small columns, such as the wall effects mentioned previously. 

“It is definitely more challenging to establish robust scale-down models using high-throughput methodologies because typically transfer functions and correction factors are required. As a result, it is more difficult to adequately demonstrate the scalability of the models to the level required for qualification,” Jorjorian says. He adds that there is a lot of effort being devoted to overcoming these issues to enable high-throughput modeling of downstream bioprocesses.

A few groups are also investigating the application of first principles calculations to the scale-down modeling of both upstream and downstream unit operations, and particularly for chromatography, for which there is significant understanding of separation behaviors and existing mathematical models. “This approach to scale-down modeling is in the earliest phases and largely limited to large pharmaceutical companies and university research groups with the resources and computing power required,” Jorjorian says.
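
The mechanistic chromatography models referred to here (general rate or lumped-kinetic models, for example) are far more detailed than anything that fits in a few lines, but the sketch below gives a flavor of the first-principles idea using a simple Martin-Synge plate model of isocratic elution with a linear isotherm. The plate count and retention factor are assumed values chosen only to show that the simulated peak elutes where theory predicts.

```python
import numpy as np

def plate_model_elution(n_plates=200, k_prime=2.0, inject_steps=5, n_steps=2000):
    """Martin-Synge plate model: each step moves one plate's worth of mobile
    phase through the column, and the solute re-equilibrates between phases
    in every plate (linear isotherm, assumed parameters)."""
    mobile_fraction = 1.0 / (1.0 + k_prime)  # solute fraction in the mobile phase
    column = np.zeros(n_plates)              # total solute held in each plate
    outlet = []
    for step in range(n_steps):
        mobile = column * mobile_fraction     # equilibrated mobile-phase solute
        column -= mobile                      # stationary phase retains the rest
        column[1:] += mobile[:-1]             # mobile phase advances one plate
        outlet.append(mobile[-1])             # solute leaving the last plate
        if step < inject_steps:               # rectangular injection pulse
            column[0] += 1.0
    return np.array(outlet)

n_plates, k_prime = 200, 2.0
profile = plate_model_elution(n_plates=n_plates, k_prime=k_prime)
peak = (np.argmax(profile) + 1) / n_plates  # retention in void (mobile-phase) volumes
print(f"peak elutes at ~{peak:.2f} void volumes (theory: 1 + k' = {1 + k_prime:.1f})")
```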

Existing technology is highly valuable
“Despite the limitations of existing scale-down modeling methods, the use of scaled-down models is recommended wherever possible, including for process characterization, viral clearance studies, continuous improvement, manufacturing-related investigations, evaluation of changes in raw materials, and so on, barring only validation efforts and clinical trial/commercial manufacturing requirements for performance of large-scale runs,” asserts Meizinger. “Today, scaled-down models serve as efficient and cost-effective representations of full-scale processes. While the inability to demonstrate equivalence between scaled-down and large-scale operations is often inevitable for some metrics, further understanding of these scale-related differences, particularly through the development/refinement of associated engineering models, promises to continuously improve the accuracy and utility of scaled-down models,” he states.