Applying QbD in Process Development

Informatics software can be used to address the challenges of quality by design, such as managing impurity data when developing an impurity control strategy.
Sep 01, 2017
Volume 2017 Supplement, Issue 4, pg s28–s30, s34

Over the past few years, global regulatory authorities have been raising the expectation of incorporating quality by design (QbD) into pharmaceutical development. While QbD offers many important long-term benefits, these expectations are having a dramatic impact on product development groups and their supporting corporate informatics infrastructure. This article discusses how QbD requirements for risk assessment, process assessment, material assessment, documentation, and traceability can be addressed with informatics, using development of an impurity control strategy as an example.

QbD in process development

One of the major impacts of using QbD principles in process development is the requirement to establish an acceptable quality target product profile (QTPP). Establishing the QTPP is accomplished through:

  • Evaluation of input material quality attributes (MQA)
  • Evaluation of the quality impact of critical process parameters (CPP)
  • Consolidated evaluation of every MQA and CPP for all input materials and unit operations.

MQA assessment requires careful consideration of input materials to ensure that their physical and (bio)chemical properties or characteristics fall within appropriate limits, ranges, or distributions. For CPP assessment, unit-operation process parameter ranges must be evaluated to determine the impact of parameter variability on product quality. The contribution of each unit operation in any pharmaceutical or biopharmaceutical manufacturing process must be assessed, whether it is a synthetic step in a chemical process (e.g., filtering, stirring, agitating, heating, chilling) or product formulation (e.g., impurity control).
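The MQA evaluation described above amounts to checking measured attributes of each input material against acceptance ranges. A minimal sketch in Python, with attribute names and limits that are purely illustrative (not drawn from this article or any guideline):

```python
# Hypothetical MQA evaluation: compare measured input-material attributes
# against acceptance ranges. All names and values are illustrative.

acceptance_ranges = {
    "particle size d50 (um)": (10.0, 50.0),
    "water content (% w/w)":  (0.0, 0.5),
    "residual solvent (ppm)": (0.0, 500.0),
}

measured = {
    "particle size d50 (um)": 32.0,
    "water content (% w/w)":  0.8,    # outside its acceptance range
    "residual solvent (ppm)": 120.0,
}

def evaluate_mqas(measured, ranges):
    """Return the attributes whose measured values fall outside their ranges."""
    return {
        name: value
        for name, value in measured.items()
        if not (ranges[name][0] <= value <= ranges[name][1])
    }

out_of_range = evaluate_mqas(measured, acceptance_ranges)
# {'water content (% w/w)': 0.8}
```

In a real QbD workflow, each out-of-range attribute would feed into the risk assessment for the affected unit operations rather than simply being flagged.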

Challenges in impurity control

Impurity control strategy development is an example of this iterative evaluation process. For regulatory submission of a substance or product under development, information from many activities is necessary to complete the quality module of a common technical document (CTD or eCTD).

Initially, chemical structure information may be available from chemists’ individual electronic laboratory notebooks, but the affiliated unit operation details and the complete supporting molecular characterization data are not usually directly available. Moreover, some of that data and interpreted information may have been transcribed into Microsoft Excel spreadsheets. In the authors’ informal survey of pharmaceutical development groups, the majority of project groups were found to be managing these data with Microsoft Excel. In those Excel spreadsheets, synthetic process and supporting analytical and chromatographic data are abstracted to numbers, text, and images, and the raw data are stored in archives. Project team members spend many hours transcribing complex and repetitive data from various systems into complex spreadsheets in an effort to harmonize the most necessary information into a single environment.

Because Excel was not designed to handle chemical structures and associated scientific data, separate reports are still needed to assemble subsets of analytical characterization information and interpretations. The analytical information is transposed for decision-making purposes, but a review of the decision-supporting data is, at best, impractical because it has been sequestered into different systems. Batch-to-batch comparison data are also transcribed into the same spreadsheets in an attempt to create a central repository of information. Project teams spend weeks on the assembly of this information for internal reporting and external submissions. This abstracted and repeatedly transcribed information is then reviewed to establish and implement control strategies in compliance with a QbD approach.

The challenge for product development project teams is not only to plan and conduct the process experiments and unit operations, but also to acquire, analyze, and, most importantly, assemble and interpret the various data from analysis of input materials and process information. Because the development process is iterative, all the salient data must be captured and dynamically consolidated as process operations are conducted to enable facile review of the information for ongoing risk assessment of impurities.

Concurrently, test method development must be performed to demonstrate robust capability for detection of the complete impurity profile, which includes any significant known or potential impurities from each process. Currently, control strategies rely on unrelated instruments and systems to acquire, analyze, and summarize impurity profile data and interpretations made during process route development and optimization.

Collating information for an impurity control strategy

To establish effective process and analytical impurity control strategies, a comprehensive set of information must be collated. One of the biggest challenges is that the relevant types of information, data, and knowledge required exist in disparate systems and formats. These data include:

  • Chemical or biological substance information: chemical structures or sequence information
  • Process information: unit operation conditions, materials used, location, equipment information, operator identification, suitable references to operating procedures and training, and calibration records
  • Unit-operation-specific molecular composition characterization data: spectral and chromatographic data collected to identify and characterize compounds and mixtures (hyphenated liquid chromatography techniques, mass spectrometry, nuclear magnetic resonance, and optical techniques)
  • Composition differences in materials between specific unit operations and across all unit operations
  • Comparative information for each batch for a single “process”, which is a single set of unit operations that are employed to produce a product or substance
  • Comparative information for any or all employed processes.

To effectively establish control strategies in accordance with these collation requirements, users need the ability to simply aggregate all of the information, data, and knowledge in a single, integrated, and interoperable platform. Moreover, such informatics platforms should allow for direct data integration: the data streaming from their respective sources are automatically imported, processed, interpreted, stored, and made readily accessible. These integrations to the platform should optimally be conducted with as little human intervention as possible. Data sources relevant to impurity control strategies include analytical instrumentation, electronic laboratory notebooks, substance registry/inventory management systems, and laboratory information management systems (LIMS), particularly the work request portion of LIMS functionality.

Within the informatics platforms, users must be able to search, review, and update the information on a continual basis as projects progress and evolve. Data access should support sharing of data for collaborative research while protecting data integrity.

Using process maps

Additionally, informatics platforms should optimally provide users with the ability to construct process maps that allow visual comparison of molecular composition across unit operations. This visual comparison allows users to rapidly identify where in a process appropriate impurity control measures need to be put in place to assure effective and efficient process control. The platform should also allow the user to visualize the wide variety of related spectroscopic and chromatographic data in a single environment for each stage and substance. Such analytical data allows users to visually confirm the veracity of numerical or textual interpretations or processed results without having to open separate applications. The platform should also store the context of the experiment, expert interpretation, and decisions resulting from it. Dynamic visualization of this assembled and aggregated information preserves data integrity while supporting decision-making. Some examples of decisions that can be made more efficiently using informatics include:

  • Risk assessment conclusions pertaining to impurity onset, fate, and purge
  • Comparative assessments of different purification methods
  • Comparative assessments of different control strategies.
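The onset, fate, and purge assessment behind a process map can be thought of as a diff of impurity profiles between consecutive unit operations. A minimal sketch, with operation and impurity names that are entirely hypothetical:

```python
# Hypothetical sketch: compare impurity profiles across unit operations to
# locate where each impurity first appears (onset) and where it is no longer
# detected (purge). Operations and impurity names are illustrative only.

unit_operations = ["charge", "reaction", "workup", "crystallization"]

# Impurities detected at the outlet of each unit operation (e.g., by LC-MS).
impurity_profiles = {
    "charge":          {"imp-A"},
    "reaction":        {"imp-A", "imp-B", "imp-C"},
    "workup":          {"imp-B", "imp-C"},
    "crystallization": {"imp-C"},
}

def onset_and_purge(ops, profiles):
    """For each impurity, record the first operation where it appears and,
    if applicable, the first operation where it is no longer detected."""
    events = {}
    seen = set()
    for prev, curr in zip([None] + ops[:-1], ops):
        current = profiles[curr]
        for imp in current - seen:             # newly observed impurity
            events.setdefault(imp, {})["onset"] = curr
        if prev is not None:
            for imp in profiles[prev] - current:   # impurity removed here
                events.setdefault(imp, {}).setdefault("purged_at", curr)
        seen |= current
    return events

events = onset_and_purge(unit_operations, impurity_profiles)
# imp-A: onset at "charge", purged at "workup"
# imp-B: onset at "reaction", purged at "crystallization"
# imp-C: onset at "reaction", never purged
```

A platform would render this comparison graphically and link each entry back to the underlying spectra and chromatograms, but the underlying logic is this kind of step-to-step set comparison.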

Preserving context

To preserve the rich scientific information stored in the database, an informatics system should limit the need for data abstraction. Data abstraction in analytical chemistry is the process whereby spectral and chromatographic data are reduced from interactive data to images, text, and numbers that describe and summarize results. Although data abstraction serves a purpose (reduction of voluminous data to pieces of knowledge), it also brings limitations, because important details, knowledge, and contextual information can be lost. An example of data abstraction, and its inherent risks, is identity specification and testing. One of the classic standards of identity is that a spectrum obtained for any batch of a substance matches the reference standard of the same substance. The specification can at times be limited to, for example, a set of “diagnostic” spectral features, which can then be abstracted to a discrete set of numerical values. Such abstraction, however, presents a risk: if an adulterated substance produces a spectrum containing unanticipated features not accounted for in the specification, that substance would still pass an identity test (i.e., the expected peaks are found, so the substance passes, while unexpected peaks representing a new impurity go unnoticed).
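The identity-test risk described above can be demonstrated with a toy example. The peak positions and tolerance below are invented for illustration, not real spectral data:

```python
# Hypothetical sketch of the abstraction risk: an identity test reduced to a
# set of "diagnostic" peak positions passes any spectrum containing those
# peaks, even when extra (impurity) peaks are present. Values are illustrative.

TOL = 2.0  # peak-matching tolerance, arbitrary units

diagnostic_peaks = [1015.0, 1450.0, 1710.0]              # abstracted spec
reference_spectrum = [1015.0, 1450.0, 1710.0, 2930.0]    # full reference

# Adulterated batch: all expected peaks present, plus one unexpected peak.
adulterated = [1015.0, 1450.0, 1710.0, 2930.0, 1250.0]

def matches(peak, spectrum):
    return any(abs(peak - p) <= TOL for p in spectrum)

def abstracted_identity_test(spectrum):
    # Checks only that the expected diagnostic peaks are found;
    # ignores everything else in the spectrum.
    return all(matches(p, spectrum) for p in diagnostic_peaks)

def full_comparison(spectrum):
    # Also flags peaks that have no counterpart in the reference.
    expected_found = all(matches(p, spectrum) for p in reference_spectrum)
    no_extras = all(matches(p, reference_spectrum) for p in spectrum)
    return expected_found and no_extras

print(abstracted_identity_test(adulterated))  # True  -- adulterant passes
print(full_comparison(adulterated))           # False -- extra peak detected
```

Keeping the interactive spectral data alongside the abstracted result is what allows the full comparison to be made when it matters.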

Informatics strategies

Cross-functional development project teams, composed of representatives from the various groups and departments that generate the data and make decisions from it, spend many hours sourcing and assembling the necessary information for impurity control. For each process iteration, effort is needed to acquire and evaluate new data, to perform new interpretations and identifications, and eventually to reassess the variance in impurity profiles for each process and across all processes. One of the major challenges is that while spreadsheets tend to be quite effective for handling numbers and relating them in certain specified ways to calculate values (e.g., sums and averages) and make simple graphs, they are not effective at handling and relating chemical structures with the analytical spectra and chromatograms used to identify them. An informatics system dynamically relates chemical and analytical data, results information including variances, and the interpretation knowledge from both computer algorithms and scientists, to improve productivity.
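The kind of relational model that distinguishes such a system from a spreadsheet can be sketched with a few linked records. Every class, field, compound, and file path below is hypothetical, invented only to illustrate the idea:

```python
# Hypothetical sketch of a data model relating chemical structures to the
# spectra and chromatograms that identify them, with batch and unit-operation
# context. All names, structures, and paths are illustrative.

from dataclasses import dataclass, field

@dataclass
class Spectrum:
    technique: str       # e.g., "LC-MS", "1H NMR"
    instrument: str
    data_file: str       # pointer to raw interactive data, not an image

@dataclass
class Substance:
    name: str
    smiles: str          # machine-readable structure, not a picture
    spectra: list = field(default_factory=list)

@dataclass
class BatchResult:
    batch_id: str
    unit_operation: str
    impurities: list     # Substance objects observed in this batch/step

api = Substance("target API", "CC(=O)Oc1ccccc1C(=O)O")
api.spectra.append(Spectrum("LC-MS", "QTOF-01", "/data/lcms/b001.raw"))

imp = Substance("imp-A", "Oc1ccccc1C(=O)O")
batch = BatchResult("B001", "crystallization", [imp])

# Questions a spreadsheet cannot express directly, such as "show every
# spectrum supporting the identification of imp-A across all batches",
# become simple traversals over these linked records.
```

A production system would persist these relationships in a database and attach interpretations and decisions to them; the point of the sketch is only that structures, spectra, and batch context are linked objects rather than disconnected cells.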



Article Details

Pharmaceutical Technology
Supplement: APIs, Excipients, and Manufacturing
Vol. 41
September 2017
Pages: s28–s30, s34


When referring to this article, please cite it as A. Anderson, G. McGibbon, and S. Bhal, "Applying QbD in Process Development," Pharmaceutical Technology APIs, Excipients, and Manufacturing Supplement (September 2017).

About the Author

Andrew Anderson is vice-president of Innovation and Informatics Strategy; Graham A. McGibbon is director of Strategic Partnerships; and Sanjivanjit K. Bhal is director of Marketing and Communications, all at ACD/Labs.


