Early-phase method parameters requiring validation
In early development, one of the major purposes of analytical methods is to determine the potency of APIs and drug products
to ensure that the correct dose is delivered in the clinic. Methods should also be stability-indicating, able to identify
impurities and degradants, and allow characterization of key attributes, such as drug release, content uniformity, and form-related
properties. These methods are needed to ensure that batches have a consistent safety profile and to build knowledge of key
process parameters in order to control and ensure consistent manufacturing and bioavailability in the clinic. In the later
stages of drug development when processes are locked and need to be transferred to worldwide manufacturing facilities, methods
need to be cost-effective, operationally viable, and suitably robust such that the methods will perform consistently irrespective
of where they are executed. In considering the purpose of methods in early versus late development, the authors advocate that
the same amount of rigorous and extensive method-validation experimentation, as described in ICH Q2, Analytical Validation, is not needed for methods used to support early-stage drug development (5). This approach is consistent with ICH Q7, Good Manufacturing Practice, which advocates the use of scientifically sound (rather than validated) laboratory controls for APIs in clinical trials (6).
Additionally, an FDA draft guidance on analytical procedures and method validation advocates that the amount of information
on analytical procedures and methods validation necessary will vary with the phase of the investigation (7).
Table I: Summary of proposed approach to method validation for early- and late-stage development.
IQ's perspective regarding which method parameters should be validated for both early- and late-stage methods is summarized
in Table I. In this table, identification methods are considered to be those that discriminate the analyte of interest from
compounds with similar (or dissimilar) structures or from a mixture of other compounds to assure identity. This category includes,
but is not limited to, identification methods using high-performance liquid chromatography (HPLC), Fourier transform infrared spectroscopy (FTIR), and Raman spectroscopy. Assay methods are used to quantitate the major component of interest. This category includes, but is not limited to, drug assay, content uniformity, counter-ion assay, preservative assay, and dissolution measurements.
Impurity methods are used for the determination of impurities and degradants and include methods for organic impurities, inorganic
impurities, degradation products, and total volatiles. To further differentiate this category of methods, separate recommendations
are provided for quantitative and limit test methods, which measure impurities. The category of "physical tests" in Table
I can include particle size, droplet distribution, spray pattern, optical rotation, and methodologies such as X-ray diffraction and Raman spectroscopy. Although representative recommendations of potential parameters to consider for validation are provided
for these physical tests, the specific parameters to be evaluated are likely to differ for each test type.
When comparing the method-validation approach outlined for early development versus the method-validation studies conducted
to support NDA filings and control of commercial products, parameters involving inter-laboratory studies (i.e., intermediate
precision, reproducibility, and robustness) are not typically performed during early-phase development. Inter-laboratory studies
can be replaced by appropriate method-transfer assessments and verified by system suitability requirements that ensure that
the method performs as intended across laboratories. Because of changes in synthetic routes and formulations, the impurities
and degradation products formed may change during development. Accordingly, related substances are often determined using
area percentage by assuming that the relative response factors are similar to that of the API. If the same assumption is used both to conduct the analyses and in toxicological impurity evaluation and qualification, any error introduced by the assumption cancels out, which mitigates the risk that subjects would be exposed to unqualified impurities. As a result, extensive studies to demonstrate mass balance are typically not conducted during early development.
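The area-percent reporting described above can be sketched numerically. The following is an illustrative example, not taken from the source; the peak areas and relative response factors (RRFs) are hypothetical, and the point is simply that an RRF of less than 1.0 inflates an impurity's corrected level relative to its uncorrected area percent:

```python
# Illustrative sketch (hypothetical data): impurity reporting by area
# percent, with and without relative-response-factor (RRF) correction.

def area_percent(areas):
    """Uncorrected area %: each peak area over the total area."""
    total = sum(areas.values())
    return {name: 100.0 * a / total for name, a in areas.items()}

def rrf_corrected_percent(areas, rrfs):
    """Area % after dividing each area by its RRF
    (RRF = impurity response / API response; API RRF = 1.0)."""
    corrected = {name: a / rrfs.get(name, 1.0) for name, a in areas.items()}
    total = sum(corrected.values())
    return {name: 100.0 * a / total for name, a in corrected.items()}

# Hypothetical chromatogram: API plus two related substances
peaks = {"API": 98000.0, "Imp-A": 1200.0, "Imp-B": 800.0}
rrfs = {"Imp-A": 0.8, "Imp-B": 1.2}  # hypothetical response factors

uncorrected = area_percent(peaks)          # Imp-A reported at 1.2%
corrected = rrf_corrected_percent(peaks, rrfs)
```

Because the same uncorrected numbers are used for both batch analysis and impurity qualification, applying (or not applying) the RRF correction consistently on both sides leaves the safety assessment aligned with what was tested.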
In addition to a smaller number of parameters being evaluated in preclinical and early development, it is also typical to
reduce the extent of evaluation of each parameter and to use broader acceptance criteria to demonstrate the suitability of
a method. Within early development, the approach to validation or qualification also differs by what is being tested, with
more stringent expectations for methods supporting release and clinical-stability specifications than for methods aimed at
gaining knowledge of processes (i.e., in-process testing, and so forth). An assessment of the requirements for release- and
clinical-stability methods follows. Definitions of each parameter are provided in the ICH guidelines and will not be repeated
herein (5). The assessment advocated allows for an appropriate reduced testing regimen. Although IQ advocates for conducting
validation of release and stability methods as presented herein, the details are presented as a general approach, with the
understanding that the number of replicates and acceptance criteria may differ on a case-by-case basis. As such, the following
approach is not intended to offer complete guidance.
Specificity. Specificity typically poses the greatest challenge in early-phase methods because each component of interest must be resolved and measured as a single chemical entity. This challenge also applies to late-phase methods, but it is amplified in early-phase assay and impurity methods because:
- The chemical knowledge regarding related substances is limited.
- There are frequently a greater number of related substances than in commercial synthetic routes.
- The related substances that need to be quantified may differ significantly from lot to lot as syntheses change and new formulations are introduced.
A common approach to demonstrating specificity for assay and impurity analysis is based on performing forced decomposition
and excipient compatibility experiments to generate potential degradation products, and to develop a method that separates
the potential degradation products, process impurities, drug product excipients (where applicable), and the API. Notably, requirements are less stringent for methods in which impurities are not quantified, such as assay or dissolution methods. In these
cases, specificity is required only for the API.
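Chromatographic specificity is commonly judged by the resolution between adjacent peaks. As a minimal sketch (the retention times and peak widths below are hypothetical), the usual USP-style formula is Rs = 2(t2 − t1)/(w1 + w2), with Rs ≥ 1.5 conventionally taken to indicate baseline separation:

```python
# Illustrative sketch (hypothetical data): resolution between two
# adjacent chromatographic peaks as a measure of specificity.

def resolution(t1, w1, t2, w2):
    """USP-style resolution: Rs = 2*(t2 - t1) / (w1 + w2), using
    retention times and baseline peak widths in the same units."""
    return 2.0 * (t2 - t1) / (w1 + w2)

# Hypothetical API and adjacent-impurity peaks, times/widths in minutes
rs = resolution(t1=6.2, w1=0.30, t2=6.8, w2=0.34)
baseline_separated = rs >= 1.5  # conventional baseline-separation criterion
```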
Accuracy. For methods used in early development, accuracy is usually assessed but typically with fewer replicates than would be conducted
for a method intended to support late-stage clinical studies. To determine the API in drug product, placebo-spiking experiments
can be performed in triplicate at 100% of the nominal concentration and the recoveries determined. Average recoveries of 95–105%
are acceptable for drug product methods (with 90–110% label claim specifications). Tighter validation acceptance criteria
are required for drug products with tighter specifications. For impurities, accuracy can be assessed using the API as a surrogate,
assuming that the surrogate is indicative of the behavior of all impurities, including the same response factor. Accuracy
can be performed at the specification limit (or reporting threshold) by spiking in triplicate. Recoveries of 80–120% are generally
considered acceptable, but will depend on the concentration level of the impurity. For tests where the measurements are made
at different concentrations (versus at a nominal concentration), such as dissolution testing, it may be necessary to evaluate
accuracy at more than one level.
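The recovery calculation described above is straightforward to sketch. In this illustrative example (the spiked amount, triplicate results, and the 95–105% criterion applied here are hypothetical, mirroring the drug-product case in the text), mean recovery from triplicate placebo-spiking at 100% of nominal is checked against the acceptance range:

```python
# Illustrative sketch (hypothetical data): mean percent recovery from
# triplicate placebo-spiking at 100% of nominal concentration.

def percent_recovery(measured, spiked):
    """Recovery of a single preparation, in percent."""
    return 100.0 * measured / spiked

def mean_recovery(measured_amounts, spiked_amount):
    """Average recovery across replicate preparations."""
    recoveries = [percent_recovery(m, spiked_amount) for m in measured_amounts]
    return sum(recoveries) / len(recoveries)

# Hypothetical triplicate results: mg found vs. 50 mg spiked
found = [49.2, 50.4, 49.8]
avg = mean_recovery(found, 50.0)      # mean recovery in percent
passes = 95.0 <= avg <= 105.0         # drug-product criterion from the text
```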