Evolving approaches to data management
As noted, the traditional approach to data management involves manually collecting data from storage sources across manufacturing
networks and supply chains, including paper batch records. The data are used, for example, to produce a weekly trends report for
each product. Compiling and organizing large amounts of data in Excel, covering important process variables such as raw materials
and in-process parameters organized in batch and genealogy context, can result in a state of "spreadsheet madness" and its related
disease, "data warehouse madness," especially when data and reports must be assembled months or years later, when it is time to
prepare a regulatory filing.
Process intelligence, by the previously stated definition, has a lot in common with quality-by-design (QbD) principles. It
requires that product and process performance characteristics be scientifically designed to meet specific objectives, not
merely empirically derived from the performance of test batches. Control of quality is designed into the process using scientific
process understanding, so the desired outcomes are achieved reproducibly despite variability in process inputs.
According to FDA guidance, QbD is achieved in a manufacturing process when all of the following conditions are met: critical
sources of variability are identified and explained; variability is minimized and managed by the process (rather than by
specifications and release criteria at the end of the process); and product quality attributes are reliably predictable (2).
The business benefits of QbD include measurable supply-chain improvements from raw materials to end product. When a process is
variable, for example, a manufacturer must hold excess inventory, both raw materials at the start of the process and finished
products at the end, to guard against a stock-out should an unpredicted event occur in the process. When process variability is
reduced, there is a corresponding reduction in the raw-material inventory required at the start of the process and in the
stockpiled inventory at the end of the supply chain. The result is less cost and significant gains. A
McKinsey report from 2010 looked at the business benefits of QbD components (3). The report notes that lower cost of goods
sold through greater supply-chain reliability and predictability accounts for $15–25 billion of the total $20–30 billion of
increased profit to the industry expected when QbD is fully implemented (3).
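The inventory logic above can be sketched with the standard safety-stock formula (service factor × demand variability × √lead time). The formula is textbook supply-chain practice rather than something stated in the article, and the numbers below are purely illustrative:

```python
import math

def safety_stock(z: float, sigma: float, lead_time: float) -> float:
    """Standard safety-stock estimate: service factor z times
    demand/process variability (sigma) times the square root of lead time."""
    return z * sigma * math.sqrt(lead_time)

# Hypothetical numbers (not from the article): ~95% service level (z = 1.65),
# 4-week lead time, and variability before/after process improvement.
before = safety_stock(1.65, 120.0, 4.0)  # sigma = 120 units/week
after = safety_stock(1.65, 60.0, 4.0)    # sigma halved by a more predictable process
print(f"safety stock before: {before:.0f} units, after: {after:.0f} units")
```

Because safety stock scales linearly with variability, halving the process variability in this sketch halves the inventory that must be stockpiled, which is the mechanism behind the cost reduction described above.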
As an effective path to QbD, with process variability known and minimized, process intelligence must go beyond traditional
technology transfer and thereby eliminate the "madness." It requires a new approach with
improved collaboration using scientific process understanding that leads to reduced risks and more successful tech transfer
(4). The industry has a long history of underestimating the complexity, level of difficulty, and time required for successful
technology transfer (4).
As Figure 1 illustrates, successful technology transfer requires collaboration among local and contract process development teams, manufacturing,
and quality operations to produce safe and efficacious products. Successful technology transfer has many facets: science-based
procedures and specifications, clear process descriptions and protocols, robust assays and methods transfer, effective vendor
selection, risk management, solid contracts, project management, training and communication, and so forth.
Figure 1: Successful technology transfer requires collaboration among various teams.
Another crucial requirement of the new process-intelligence approach is leveraging technology, specifically a self-service
platform for data access, contextualization, analysis, and reporting that reduces risk and improves collaboration and
regulatory compliance. A process-intelligence platform can provide a "layer" above all the relevant data sources, simplifying
access by presenting data in a desktop view that matches how users naturally think about where their data come from. This is
a key requirement for productive data analysis and more intensive, real-time collaboration.
This approach allows teams to collaborate at a much deeper level than normally seen within organizations or between sponsors
and CMOs. All parties benefit from a two-way, interactive platform where expertise can be applied for technology transfer,
process support, and risk reduction using all relevant process and quality data.
Whether using data for modeling to predict process outcomes or examining data to investigate batch or supply chain problems,
teams save time compared with digging through spreadsheets, and leverage expertise across sites. A platform approach provides
automated data contextualization for observational and investigational analytics, along with access to all types of data,
and delivers value for non-programmers and non-statisticians who need to collaborate with their more analytics-savvy team members.
Such an approach can be institutionalized from the beginning at smaller start-up companies or retrofitted in larger organizations
and CMOs. The checklist for a supporting process-intelligence platform includes the following criteria:
- Self-service access to data from multiple disparate sources
- Flexible, accurate capture of data from paper records
- Automatic contextualization for specific types of analysis
- Combined handling of continuous and discrete data
- Domain-specific observational and investigational analytics
- Automated analysis and reporting (e.g., batch reports, annual product reviews [APRs]).
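As a minimal sketch of the "automatic contextualization" and combined continuous/discrete data items in the checklist above, the snippet below tags time-stamped sensor readings (continuous data) with the batch record (discrete data) that was active at each timestamp. All batch IDs, timestamps, and values are invented for illustration; a real platform would draw these from historians and batch-record systems:

```python
from datetime import datetime

# Hypothetical batch records (discrete data): batch ID with start/end times.
batches = [
    {"batch": "B-001", "start": datetime(2024, 1, 1, 8), "end": datetime(2024, 1, 1, 20)},
    {"batch": "B-002", "start": datetime(2024, 1, 2, 8), "end": datetime(2024, 1, 2, 20)},
]

# Hypothetical continuous data: timestamped pH readings from a process historian.
readings = [
    (datetime(2024, 1, 1, 10), 6.9),
    (datetime(2024, 1, 1, 15), 7.1),
    (datetime(2024, 1, 2, 12), 7.4),
]

def contextualize(readings, batches):
    """Tag each time-stamped reading with the batch running at that moment,
    so continuous and discrete data can be analyzed together in batch context."""
    tagged = []
    for ts, value in readings:
        batch_id = next(
            (b["batch"] for b in batches if b["start"] <= ts <= b["end"]), None
        )
        tagged.append({"batch": batch_id, "timestamp": ts, "pH": value})
    return tagged

for row in contextualize(readings, batches):
    print(row["batch"], row["timestamp"], row["pH"])
```

Once every reading carries its batch context, per-batch trends, genealogy roll-ups, and automated batch reports follow by simple grouping rather than manual spreadsheet work.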