Is There Such a Thing as a Best-in-Class Lab? Benchmarking of QC Operations

Pharmaceutical Technology Europe, September 2004, Volume 16, Issue 9

This article investigates what defines a best-in-class QC lab based on experience of implementing operational improvement projects in world-class labs. It includes an assessment of a benchmarking process, a case study of improvements made as a result at one company and findings on what constitutes best-in-class for QC labs.

Changes are afoot in the life science sector; the pressure is on to retain full regulatory compliance, provide faultless customer service and to reduce costs (by using leaner processes and dramatically reducing cycle times). Nowhere in the organization is this beginning to be felt more keenly than in quality control (QC) and research and development (R&D) operations. In many organizations it also necessitates a major cultural shift.

Laboratories are the hub of QC and R&D operations, and the challenge is to maximize the testing activities of analysts who are often highly qualified, difficult to recruit, difficult to retain and essential to the process. Maximizing this resource is the key to breakthrough performance, and the solutions lie within the people themselves and their management and support systems. Many companies must also cope with rapid expansion, variable and unpredictable peaks in workload and constraining registered testing procedures, as well as persistent pressure to reduce costs.

Figure 1 Staffing distribution.

Visitors to a lab will usually notice how little actual testing appears to go on, despite the operation apparently being under heavy workload (often with a growing backlog of batches to test). Understanding the operational aspects of QC can be complex: there is little product to actually see; the volumes are low; the bill-of-testing is often complex; so are the opportunities for campaigning; and numerous systems (compliance, deviations, retesting and paperwork flow) are involved in the release of results.

Introduction

At the beginning of 2002, Tefen launched a benchmarking survey of QC labs across the biopharm sector using a range of detailed operational metrics. During the last few years, Tefen has used this database, along with the benchmarking process, as an integral part of its QC labs diagnostic studies and lean labs implementation programmes. The process has highlighted the value and power of benchmarking, and demonstrated that labs with very different testing regimes (active pharmaceutical ingredient [API], micro, biotech and packaging) suffer from the same operational issues and can be compared directly as long as the indices used are carefully normalized. Tefen has continued to add further world-class companies to the database and is now in a position to answer the question of what a best-in-class lab actually looks like. This article reports on the main findings from this benchmarking process, the lessons learnt and where to go next.

Figure 2 QC & QA cycle time breakdown.

The benchmarking survey

Initially, 15 QC labs joined the survey, from API, biotech and pharma formulation sites (including packaging and distribution). Three were based in the US, 11 in Europe and one in the Middle East. All were US Food and Drug Administration (FDA) registered good manufacturing practice (GMP) labs. During subsequent years, a further five companies were added to the database. For companies that performed a lean labs improvement programme in partnership with Tefen, data exists prior to and after the improvements were made.

The survey

A range of operational metrics for QC labs was established in six key areas:

  • staffing

  • cycle time

  • analyst availability and utilization

  • productivity

  • quality

  • support systems.

A questionnaire detailing all the required data for the various indicators was sent to the participants, and the data were validated via face-to-face interviews. After the report had been published, the results and the recommendations for improvement were presented to management.

Figure 3 Tests and samples per head.

Results

Full details of the results are restricted; however, the following provides a summary.

Staffing distribution. The average management:analyst ratio was 0.3, and the average direct:indirect labour ratio was 3 (the highest being 15). Lab assistants were used at 46% of sites to perform lower-skilled daily tasks such as glassware cleaning, sample management and stock replenishment (Figure 1).

Cycle time. The quality operation cycle time was calculated from sample receipt in the lab until quality assurance (QA) release (Figure 2). The cycle time performance of the various types of samples (for example, raw materials, intermediates and finished product) was weighted according to their volume in the lab. The overall quality operations cycle varied from 5 to 34 calendar days.
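As an illustration, the volume weighting described above reduces to a simple weighted average. The sample types, volumes and cycle times below are invented for the sketch; they are not survey data.

```python
# Hypothetical sample types with annual volumes and mean cycle times (calendar days)
samples = {
    "raw materials":    {"volume": 1200, "cycle_days": 6},
    "intermediates":    {"volume": 400,  "cycle_days": 12},
    "finished product": {"volume": 800,  "cycle_days": 20},
}

total_volume = sum(s["volume"] for s in samples.values())

# Weight each sample type's mean cycle time by its share of total lab volume
weighted_cycle = sum(
    s["volume"] / total_volume * s["cycle_days"] for s in samples.values()
)

print(f"Volume-weighted quality operations cycle time: {weighted_cycle:.1f} days")
```

Weighting by volume prevents a rarely tested sample type with a long cycle time from dominating the overall figure.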

Case study

Analyst availability and utilization

The highest analyst availability across the survey was for the non-European Union (EU) participants, and the average was approximately 1700 hours per year per analyst. Analyst activities split into two main categories: routine testing and other activities. Detailed studies show that with little or no support staff, 45-60% of analysts' time is spent on routine testing; for labs with support staff this rises to 70-85%.

Productivity

Indicators of productivity include direct and indirect hours per test; tests and samples per head (Figure 3); and the percentage of planned hours spent performing routine testing (Figure 4). The direct labour hours per test varied from 0.5 to 3.8 h, and tests per head were in the range of 50-250. There is usually a trade-off between analyst utilization and cycle time performance. Utilization is affected by several factors, including the backlog of samples undergoing testing, lab flexibility (in terms of training), analyst availability for testing and layout effectiveness. Cycle time is the counterbalancing indicator, affected by lab capacity and the planning and scheduling processes in the lab.
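The productivity indicators above are straightforward ratios. A minimal sketch follows; all figures are invented for illustration (chosen to fall inside the ranges reported above) and the routine-hours denominator assumes analyst availability of 1700 h per year.

```python
# Hypothetical annual figures for one lab (all numbers invented for illustration)
tests_per_year = 8000
direct_hours = 16000                # analyst hours spent on routine testing
analysts = 16
available_hours_per_analyst = 1700  # planned availability per analyst
total_heads = 40                    # all lab staff, including support and management

direct_hours_per_test = direct_hours / tests_per_year
tests_per_head = tests_per_year / total_heads
pct_routine = 100 * direct_hours / (analysts * available_hours_per_analyst)

print(f"{direct_hours_per_test:.1f} h/test, "
      f"{tests_per_head:.0f} tests/head, "
      f"{pct_routine:.0f}% of planned hours on routine testing")
```

Note that tests per head is computed over all staff, while the routine-testing percentage is computed over analyst hours only; mixing the two denominators is a common source of disagreement when comparing labs.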

Figure 4 Percentage of planned hours performing routine testing.

Quality

Quality is measured primarily by lab yield (the first-time valid rate for tests), calculated from the total number of tests per year that failed because of QC lab error. The average was very high, at more than 99.7%.
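Lab yield as defined here is simply the complement of the lab-error failure rate. A sketch with made-up numbers (the function name and the figures are illustrative, not from the survey):

```python
def lab_yield(tests_performed: int, failed_by_lab_error: int) -> float:
    """First-time valid rate: percentage of tests not failed through QC lab error."""
    return 100 * (1 - failed_by_lab_error / tests_performed)

# Hypothetical year: 12,000 tests, of which 30 failed because of lab error
print(f"{lab_yield(12000, 30):.2f}%")  # 99.75%, just above the 99.7% survey average
```

Only failures attributable to lab error count against yield; genuine out-of-specification results caused by the product itself do not.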

Support systems

Most of the labs had a laboratory information management system (LIMS). Two sites had none, managing all their samples either on the site-wide production and inventory control system or on an in-house database. Half the sites were considered to need an automated scheduling system to manage the prioritization and allocation of work.

Figure 5 Finished products cycle time breakdown.

What did the participants learn?

Benchmarking survey results can be very sensitive within an organization, at both local and corporate levels, and have to be handled with some care. The relative positions on the charts are of great interest to everyone involved, and yet the participants invariably feel that their operations are somehow different and more complex. The real value of the survey results is in highlighting areas for improvement.

The benchmarking survey was used in all cases as the basis for recommendations for operational improvements. The sites used this data to prioritize their improvement initiatives, identify opportunities for improvement and confirm areas of operational weakness. An added benefit was the ability to track performance against best-in-class before and after an improvement programme.

Figure 6 Proportion of QC 1 micro staff versus all staff on site.

The main lesson, however, is that the real benefits are only truly realized when a properly designed improvement programme is implemented. Companies involved in the QC benchmarking survey have gone on to achieve step changes in operational performance through improvement programmes. Examples of such programmes (and typical results achieved) include:

  • cycle time reduction (30% reduction in receipt to release cycle times)

  • productivity improvement (20% reduction in costs or increase in testing volumes)

  • layout changes (20% reduction in testing cycle times)

  • system improvements, such as implementing a key performance indicator (KPI) system to achieve better integration into the supply chain.

Lessons learnt

The benchmarking process is a cycle of data collection and feedback with a number of obstacles and potential roadblocks. The metrics must be detailed enough to be of use to individual companies, but able to be normalized across all companies and not be too complex.

Figure 7 Example performance indicator system.

The data collection process is also potentially controversial and sensitive because different interpretations of the definitions can easily cause disagreement. One conclusion from the first round of the survey was that independent analysts must perform site visits, interviews, observations and analysis of data systems to establish the real values (Figures 5 and 6). Definitions must also be very clear (for example, what constitutes a "sample" and what counts as "direct" labour).

A number of measures need to be extracted at a higher level for corporate and senior executive functions, such as:

  • Percentage of resource performing routine testing (budget that is actually doing value added work).

  • Yield and retests (failures and deviations that are because of QC lab error).

  • Cycle time performance (and on time delivery to customer).

  • Overall lab effectiveness (hybrid measure of efficiency and yield).
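The article does not give a formula for the "overall lab effectiveness" hybrid. By analogy with overall equipment effectiveness, one might sketch it as the product of an efficiency term and a yield term; the function name, weighting and figures below are all assumptions for illustration.

```python
def overall_lab_effectiveness(utilization: float, yield_rate: float) -> float:
    """Assumed hybrid measure: fraction of planned hours spent on routine
    testing multiplied by the first-time valid rate (both as fractions)."""
    return utilization * yield_rate

# e.g. 70% of planned hours on routine testing and a 99.7% lab yield
print(f"{overall_lab_effectiveness(0.70, 0.997):.1%}")
```

Multiplying the terms means a lab cannot mask poor quality with high utilization, or vice versa, which is the usual rationale for such hybrid measures.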

More qualitative measures can also be included: an outline of what constitutes a world-class lab; an analysis of the management structure and the effectiveness of meetings; and a mapping of deviations, out-of-specification (OOS) processes and corrective-action-preventive-action (CAPA) systems. The survey should also be expanded to include all QC support functions, the speed of equipment validation and the effectiveness of QA.

Figure 8 Visual lab (example testing workstation).

What constitutes a world-class lab?

Although detailed analysis is required to determine the operational effectiveness of a lab, there are some clues that can be used to tell at a glance if the lab is world-class. Good lab operations always include the effective use of KPIs (Figure 7) that are highly visible, widely understood, relevant to the weekly operations in the lab, easy to measure and have achievable but challenging targets. Other factors include the use of lab stewards, the close proximity of QC support functions, the close proximity of documentation areas and frequent presence of management in the lab.

Truly world-class labs also have aggressive reduced-testing programmes (particularly in raw materials labs), an active programme of continuous improvement, efforts to create a "visual lab" (Figure 8), and planning and scheduling systems. Finally, world-class labs constantly seek the balance between compliance, cost effectiveness and customer service, recognizing that this is an ongoing issue that must be continually readdressed.

What next?

Endorsements from participants in the first round of the survey were very positive regarding the process. Change is required in even the best companies, and change is almost always resisted somewhere. The real value of benchmarking is as an improvement tool and an agent of change. All the companies in the first round embarked on programmes or projects aimed at making their labs leaner and more effective for the business.

The success of the first round and the growth of the database in the intervening time have led to the launch of another round of benchmarking of quality operations. This time, the survey incorporates the lessons learnt, updated metrics and an expanded scope. It will be extended to provide meaningful measures for QA and all QC support functions. It is hoped that more R&D labs will participate, as will labs from other sectors such as cosmetics, chemicals and hospitals.