Statistical Solutions: Visual Inspection Goes Viral

Published in: Pharmaceutical Technology, Sept. 2, 2010, Volume 34, Issue 9

Proper inspection is a form of measurement, and measurement requires a reference standard for comparison.

"The production inspectors looked at each vial three times. That is, they did three 100% inspections. How can there still be lint, black specks, and other particulate matter in the vials?" The newly hired statistician watched the visual-inspection line, completely stumped by what was happening. Not only did the second 100% inspection find rejects, it actually found more rejects than the first inspection did. Although the third inspection found fewer rejects, some were still present. How can this be?

Lynn D. Torbeck

100% Background

One-hundred-percent visual inspections by trained inspectors play a crucial role in biopharmaceutical quality control. Incoming materials such as rubber stoppers, glass vials, and ampoules are visually inspected for physical defects as part of an overall receiving plan. In-process units are visually inspected for fill levels and particulate matter. Finished units are visually inspected for defects, stoppers with spots, bent caps, incorrect labels, and general appearance.

Despite all these reviews, 100% visual inspection by even well-trained and experienced production inspectors has been shown to be only about 80–85% effective. It is not humanly possible to visually inspect and remove 100% of occurring defects even in the best of conditions. Thus, second and third 100% inspections of a lot will continue to reveal defects. Individual inspectors are often tested by reviewing a standard test set that is known to have a certain number of defects. If the inspector finds 85% of the defects, his score is considered very good.
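The compounding effect of that 80–85% effectiveness can be sketched numerically. The short example below assumes, purely for illustration, that each pass independently removes about 85% of the defects still present; the defect counts are hypothetical and not from the article.

```python
# Illustrative sketch: expected defects remaining after repeated 100%
# inspections, assuming each pass independently catches ~85% of the
# defects still present. Numbers are hypothetical.

def residual_defects(initial_defects: float, passes: int,
                     effectiveness: float = 0.85) -> float:
    """Expected number of defects remaining after `passes` inspections."""
    return initial_defects * (1 - effectiveness) ** passes

# Suppose 1,000 defective units enter inspection:
for n in range(1, 4):
    print(f"after pass {n}: ~{residual_defects(1000, n):.1f} defects remain")
```

Under this simple model, three passes still leave a handful of defects, which is why repeated 100% inspections continue to find rejects.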

Common practice


To complete a total inspection plan, it is common for the quality unit to follow the 100% inspections with an attribute sampling plan using a sample size of 200 to 500, depending on the situation. This step may seem unnecessary to the uninitiated, but it is good practice. The 100% inspection is fragile, in a sense: there are many potential sources of error and poorly controlled variables that can affect it. An additional sample test performed by the quality unit provides a sanity check to show whether the 100% inspection has broken down completely. If the sample passes the accept/reject criteria, the lot can be sent to packaging. If the sample fails, the lot is 100% inspected again. Most companies do not perform more than three 100% inspections followed by three sampling inspections. If a lot fails the third sampling test, it is placed in quarantine until a material review-board decision can be made.
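The accept/reject logic of such an attribute plan can be sketched with a standard binomial model: draw n units and accept the lot if at most c defectives are found. The plan parameters below (n = 200, accept on 0 or 1 defectives) are illustrative assumptions, not values stated in the article.

```python
# Sketch of an attribute sampling plan's operating characteristic,
# assuming a binomial model: sample n units, accept the lot if at
# most c defectives are found. Plan parameters are hypothetical.
from math import comb

def p_accept(n: int, c: int, p_defective: float) -> float:
    """Probability of accepting a lot with true defect rate p_defective."""
    return sum(comb(n, k) * p_defective**k * (1 - p_defective)**(n - k)
               for k in range(c + 1))

# e.g., sample 200 units, accept on 0 or 1 defectives:
for p in (0.001, 0.005, 0.02):
    print(f"defect rate {p:.1%}: P(accept) = {p_accept(200, 1, p):.3f}")
```

Tabulating P(accept) against the true defect rate is what standard sampling tables such as ANSI/ASQ Z1.4 encode; the quality unit's sample thus catches a gross breakdown of the 100% inspection with high probability, while rarely rejecting a genuinely clean lot.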

Several factors contribute to this dilemma. First, humans are fallible and highly influenced by attitudes and day-to-day feelings of well-being, illness, and fatigue. Supervisors' expectations affect an inspector's ability to find defects. Furthermore, what is visible to one inspector may not be visible to another. Eye tests are routine, and frequent rest periods are therefore mandatory.

The physical set-up for performing 100% inspections is also crucial. Lighting, background, and timing, among other factors, can affect the results. Finally, an absolute standard is needed for comparison. Without a standard for comparison, measurement can and does run wild.

Inspection in action

Consider, for example, a manufacturing company that was using an industry standard 100% visual-inspection program to review finished vials. The program had been in place for several years and results of the visual inspections for spots on stoppers had typically been in the 2–5% range. However, over the course of several weeks, the cull rate escalated from 5% to more than 20%.

Alarmed, the quality unit conducted an investigation, but was unable to identify a clear reason for the dramatic increase. The inspectors and environment had not changed. Puzzled, the company brought in a consultant to assist with the investigation.

The consultant pointed out a fundamental measurement rule and asked, "Where is the standard for spots on stoppers?" The reply was, "There is none. Each inspector uses their best judgment." The consultant responded, "Without a standard, there is no basis for measurement."

Essentially, each inspector was using his own floating scale without any reference point. As the inspection continued, one inspector pointed out that he found a few more spots on the vials than usual. Sensitized to this information, the other inspectors became more vigilant and began to find more spots themselves. As word spread, the inspectors became hypersensitive, finding more and more spots as each lot was inspected until they were identifying more than 20% rejects.

Some identified spots were so small that the supervisors and managers could not see them even when pointed out by the inspectors. Without a standard for reference, any perceptible dot became a defect. A team discussion led to a defined specification: a spot below a certain size was not to be counted as a defect.

Summary

The company described in the above example soon adopted the TAPPI dirt estimation chart (1) as its reference standard. Each operator was given an original card for use during inspection. If any questions arose, the inspectors could compare potential defect spots to the chart and also review them with the supervisor.

Within one working shift, the defect level for spots on stoppers dropped back to the 2–5% range. The lesson learned: measurement is a comparison to a reference standard.

Lynn D. Torbeck is a statistician at Torbeck and Assoc., 2000 Dempster Plaza, Evanston, IL 60202, tel. 847.424.1314, Lynn@Torbeck.org, www.torbeck.org.

Reference

1. TAPPI, "Dirt and Size Estimation Charts," www.tappi.org/Standards-TIPs/Dirt-Size-Charts.aspx, accessed Aug. 19, 2010.