Before we can intelligently set the limits of process variation, we need to know what the clinical impact of that variation will be.
Most impacts of minor drug-manufacturing variations are clinically undetectable. Many of us have probably caught ourselves thinking that from time to time, especially when trying to stay between the lines of a particularly twitchy specification.
I'm not referencing my own late-night musings here, though, or recalling a whispered bar-room heresy.
This is, rather, what Janet Woodcock, MD, said in one of her first presentations as the US Food and Drug Administration's first Chief Medical Officer, delivering the keynote address to this year's IFPAC meeting (held in Baltimore at the end of January).
We all worry, properly, about a 10% variation in API load, and dread a 15% error. But how, Dr. Woodcock asked, do manufacturing variations—the variations we spend so much time and money to control—how do they really compare with the errors built into the rest of the healthcare delivery system?
Sources of variation
Consider a fairly routine 1.5 × patient-to-patient variation in size or liver function. Consider the variations in uptake and excretion that come with age.
Consider, she said, the 4 × variation in therapeutic effect stemming from genetic variations.
And consider, finally, the variations in doctors' prescribing habits and patients' compliance (where some patients skip more doses than they take, while others take extra on the if-some-is-good-then-more-must-be-better principle).
Woodcock quickly pointed out that there are important exceptions, drugs for which close dosage control is critical (thyroxine, for example).
But the exceptions only underscore her larger point: before we can intelligently set the limits of process variation, we need to know what the clinical impact of that variation will be.
And, right now, we don't.
Causes of cost
And why is an intelligent approach to quality essential? Cost, of course.
Mechanical engineers have a rule of thumb relating tolerances to cost: if it costs $1000 to get 90% in-spec, it will cost $2000 to reach 99%, and $4000 to reach 99.9%. As a first guess, you can expect six-nines (99.9999%), one-in-a-million reliability to cost $32,000.
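The rule of thumb can be sketched in a few lines of code. This is only an illustration of the doubling pattern described above; the function name and the $1000 base cost are placeholders, not figures from any engineering standard.

```python
def tolerance_cost(nines, base_cost=1000.0):
    """Rule-of-thumb cost to hit a given reliability level.

    `nines` counts the nines of in-spec yield: 1 -> 90%, 2 -> 99%,
    3 -> 99.9%, and so on. Cost doubles with each added nine,
    starting from `base_cost` for 90% in-spec.
    """
    return base_cost * 2 ** (nines - 1)

# 90% in-spec costs $1000; 99% costs $2000; 99.9% costs $4000;
# six nines (99.9999%) extrapolates to $32,000.
```

The point of the exercise is the shape of the curve, not the dollar figures: each incremental nine of quality costs as much as all the quality bought before it.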
Quality costs. And, in today's environment—with pipelines running dry and the costs of clinical trials and manufacturing surging—is it right to make the patient pay for quality that has no impact on his or her treatment?
"It looks like a train wreck," Woodcock told her IFPAC audience, "with [R&D] productivity problems going one way, and ... the healthcare [cost] crisis running on the same track, going in the other direction .... Society is very, very sensitive to the cost of pharmaceuticals. So something is going to have to give in this equation."
Eyes on the road ahead
The key is process knowledge at a level of sophistication few of us have dared to contemplate. And, in the end, manufacturing understanding has less to do with the chaotic behavior of bulk powders in a bin blender than it has to do with the final behavior of the active ingredient in the patient.
The way to avoid the train wreck, Woodcock told her audience, is to modernize, moving even further away from "empirically derived, qualitative methods" to a rigorous, scientific approach focused on the patient.
That, ultimately, is the fixed standard against which production performance will be judged. It will be a long time coming, of course, and current acceptance criteria will likely be with us for most of a generation. But it's like driving a car on a winding road: if you don't keep your eyes on the road ahead, you're sure to wind up in a ditch.
Douglas McCormick is editor in chief of Pharmaceutical Technology, firstname.lastname@example.org