Although pharmaceutical manufacturers support CER generally, industry doubts that PCORI will shift the focus away from
drug–drug comparisons and toward the more difficult task of assessing medical procedures and methods of care. CER should not
be used to hinder access to new technologies, but to ensure their appropriate use, observed Newell McElwee, executive director
of US outcomes research at Merck, during the CER Summit.
This focus requires appropriate CER study designs and methods, to be developed by the PCORI methodology committee. GAO is
expected to name the committee members shortly so that the panel can meet a tight 18-month deadline for issuing initial recommendations.
The committee will include representatives from NIH and AHRQ, as well as academics and industry scientists.
The methodology panel will weigh criteria for internal study validity, generalizability, feasibility, timeliness, and selection
of appropriate comparators. This process will involve comparing the strengths and weaknesses of observational studies with
those of randomized controlled trials (RCTs). One goal is to develop common definitions for many activities and processes
involved in CER, ideally replacing the diverse guidelines and requirements set by various payers and researchers. For example,
a Medicare effort to assess neuro-rehabilitative technology uncovered 46 ways to measure "walking," explained Louis Jacques,
director of Medicare's Coverage and Analysis Group, during the CER Summit.
Better research methodology is crucial because the thousands of systematic reviews and randomized trials published annually
provide little useful evidence that can inform health-policy decisions, said Sean Tunis, director of the independent Center
for Medical Technology Policy (CMTP). Poor selection of research questions and study design, he noted, has led to serious
gaps in evidence.
PCORI will be able to obtain additional advice from special expert panels that may be formed to address research priorities
and the design and management of CER clinical trials, particularly those on rare diseases. These more specialized committees
may include technical experts from manufacturers involved in the relevant project or topic.
An important consideration for manufacturers is whether the demand for more information on how drugs and clinical treatments
work in real-world settings will expand the scope of data needed to bring new drugs to market. The US Food and Drug Administration
does not require comparative or cost information for new drug approval, although such information is requested by some foreign
regulatory authorities, and patient advocates would like Congress to follow suit. Manufacturers and FDA officials oppose such
mandates, even though most sponsors now include comparative and clinical-use measures in preapproval trials to meet requests
from private payers. Decisions on advancing from Phase II to Phase III studies increasingly involve the feasibility of gathering
evidence of product value during development.
A related issue is whether comparative clinical information from observational studies is acceptable, or if this initiative
will require more RCTs, which remain the FDA gold standard for obtaining definitive scientific evidence. Most comparisons
of medical products and treatments by AHRQ and others tend to involve reviews and meta-analyses of existing studies and information,
which are less costly and can be done quickly. Large observational studies that follow patients over
several years are becoming more feasible with the proliferation of health-system databases that provide prospective and retrospective
patient treatment information. FDA increasingly includes such efforts as part of its postapproval study programs designed
to identify adverse drug events and to assess product safety over time.
However, such assessments are only as good as the studies available for review, and the results often fail to provide evidence
sufficient to support complex treatment decisions. Robert Temple, deputy director for clinical science at FDA's
Center for Drug Evaluation and Research (CDER), has long articulated the agency's concerns about relying on less rigorous
comparative studies to document drug efficacy. He explained at the Drug Information Association (DIA) annual meeting in June
2010 that it is hard enough to detect differences in efficacy between a test drug and placebo, even in a well-controlled RCT.
He said it is more difficult and costly to show comparative effectiveness among multiple treatments or to demonstrate product
superiority. Temple added that superiority studies often are clouded by faulty designs with too-low comparator doses, less
healthy patient populations, and biased endpoints. He acknowledged that it's tempting to try meta-analysis and cross-study
comparisons, but that lack of randomization and the potential for bias make such analysis "treacherous."
Hans-Georg Eichler, senior medical officer at the European Medicines Agency, was more positive about the value of outcomes
studies, and noted at the DIA meeting that there is a push for regulators to require more "relative effectiveness" data for
sponsors to obtain market authorization. However, Eichler acknowledged that excessive data demands could kill off research
and development projects.
A main concern for Janet Woodcock, director of CDER, is that obtaining data to inform real-world treatment decisions in the
US requires more studies run by community-based clinicians with access to local patients. But a shortage of domestic investigators
and study participants is prompting pharmaceutical companies to conduct more trials overseas, Woodcock explained
at a CER briefing sponsored by the Center for Medical Technology Policy in July. "We need to enable community doctors to join
the research enterprise," she advised, but noted that complex consent processes, privacy issues, and inadequate information-technology
systems discourage such involvement.