A new center may provide evidence for improving care, but could discourage coverage of treatments.
There is great optimism throughout the healthcare community that comparative-effectiveness research (CER) will enhance the nation's healthcare system and curb unwarranted spending. To this end, the US federal government is poised to invest some $500 million a year on research for how to prevent, diagnose, treat, monitor and manage disease, as authorized by the Affordable Care Act (ACA) enacted last March.
Although Republicans initially supported comparative research as a way to improve healthcare decision-making, last year some conservative groups raised the specter of rationing and "death squads" in an effort to end healthcare reform. Even among CER backers, there is concern that the initiative could limit access to certain treatments and skepticism about just how big the savings will be. Pharmaceutical and medical product companies, as well as some patient groups, are wary that government-funded CER will steer health coverage toward low-cost remedies, and away from more expensive new products often accused of driving up healthcare costs. Personalized medicine advocates, moreover, are pressing for CER to consider treatment effects on patient subpopulations, including minorities, children, and individuals with uncommon health problems—as opposed to researching what works best in the average patient. Yet designing studies able to detect such differences is complicated and may increase the scope and cost of research.
The government met a Sept. 23, 2010 deadline for naming a board of governors for the new Patient-Centered Outcomes Research Institute (PCORI). The board now must set the national CER agenda, develop systems for funding research, establish standards and methods for CER studies, and support programs to disseminate results to practitioners and the public. Gail Wilensky, senior fellow at Project Hope and admittedly an outspoken advocate for CER, says she's "cautiously optimistic" about the progress so far, but acknowledged at a recent comparative-effectiveness summit in Washington, DC, that the program remains a "very fragile concept" that "a lot of people still want to torpedo."
The CER bandwagon has been rolling for several years under the guise of "evidence-based medicine," "health technology assessment," and "relative effectiveness" research. All promote the use of rigorous, unbiased assessments of alternative treatments as a way to identify the most effective medical therapies and practices, as well as those that are inappropriate or even harmful (such as routine use of estrogen replacement therapy for post-menopausal women).
To date, most CER initiatives focus on the comparative costs and effectiveness of new drugs and medical technology. The United Kingdom's National Institute for Health and Clinical Excellence (NICE) is recognized worldwide for its recommendations—pro and con—on new drug coverage. Blue Cross/Blue Shield insurance companies have supported technology assessments of drugs for more than a decade. The Drug Effectiveness Review Project (DERP), based at Oregon Health & Science University, evaluates drugs for state Medicaid programs and other payers. The Veterans Administration's vast database supports comparisons of therapies and treatments such as oral diabetes medications and care for patients with acute kidney injury.
With the establishment of the Medicare drug benefit in 2003, Congress provided more support for this field by expanding the Effective Healthcare Program at the Agency for Healthcare Research and Quality (AHRQ). The agency has received nearly $150 million for CER during the past five years to solicit systematic reviews and literature syntheses related to prevalent conditions affecting Medicare beneficiaries.
The federal stimulus legislation enacted in 2009 (American Recovery and Reinvestment Act, or ARRA) dramatically advanced federal funding of CER by providing $1.1 billion for the Department of Health and Human Services (HHS) to set priorities for and support comparative studies. Much of the money went to AHRQ and the National Institutes of Health (NIH) to provide for CER projects and infrastructure development.
This year's healthcare-reform legislation built on ARRA by establishing PCORI as an independent, nonprofit organization with resources to support clinical trials and outcomes studies on effective treatments for common medical conditions. By 2014, this nongovernmental institution is slated to have a $500–600 million annual budget, funded largely by a 1% tax on health insurance premiums—a strategy designed to insulate the program from the highly political annual Congressional appropriations process and provide more stability and predictability.
PCORI's 21 board members, who were named by the Government Accountability Office (GAO) in September, represent payers, providers, patients and industry, with an emphasis on women and minorities. NIH Director Francis Collins and AHRQ Chief Carolyn Clancy are on the panel, but do not chair it; that honor goes to University of California at Los Angeles Vice-Chancellor and Dean Eugene Washington. Steven Lipstein, president of the nonprofit BJC Healthcare system, was named vice-chair. The industry representatives on the board include Pfizer's chief medical officer and senior vice-president, Freda Lewis-Hall; Johnson & Johnson's head of medical devices and diagnostics, Harlan Weisman; and Medtronic senior vice-president Richard Kuntz.
"This is not the list of usual suspects," said National Pharmaceutical Council President Dan Leonard regarding the PCORI board assignments. All members are highly qualified and reflect diverse backgrounds, but have not been regulars on the CER circuit.
PCORI is expected to award many CER grants through NIH and AHRQ to take advantage of existing peer-review and research infrastructure. But before it can start funding research, the board has to hire a staff, locate offices, and issue a charter.
Although pharmaceutical manufacturers support CER generally, industry remains skeptical that PCORI will shift the focus away from drug–drug comparisons and toward the more difficult task of assessing medical procedures and methods of care. CER should not be used to hinder access to new technologies, but to ensure their appropriate use, observed Newell McElwee, executive director of US outcomes research at Merck, during the CER Summit.
This focus requires appropriate CER study designs and methods, to be developed by the PCORI methodology committee. GAO is expected to name the committee members shortly so it can meet a tight 18-month deadline for issuing initial recommendations. The committee will include representatives from NIH and AHRQ, as well as academics and industry scientists.
The methodology panel will weigh criteria for internal study validity, generalizability, feasibility, timeliness, and selection of appropriate comparators. This process will involve comparing the strengths and weaknesses of observational studies with those of randomized controlled trials (RCTs). One goal is to develop common definitions for many activities and processes involved in CER, ideally replacing the diverse guidelines and requirements set by various payers and researchers. For example, an effort to assess neuro-rehabilitative technology by Medicare uncovered 46 ways to measure "walking," explained Medicare Coverage and Analysis Group Director Louis Jacques during the CER Summit.
Better research methodology is crucial because the thousands of systematic reviews and randomized trials published annually provide little useful evidence that can inform health-policy decisions, said Sean Tunis, director of the independent Center for Medical Technology Policy (CMTP). Poor selection of research questions and study design, he noted, has led to serious gaps in evidence.
PCORI will be able to obtain additional advice from special expert panels that may be formed to address research priorities and the design and management of CER clinical trials, particularly those on rare diseases. These more specialized committees may include technical experts from manufacturers involved in the relevant project or topic.
An important consideration for manufacturers is whether the demand for more information on how drugs and clinical treatments work in real-world settings will expand the scope of data needed to bring new drugs to market. The US Food and Drug Administration does not require comparative or cost information for new drug approval, although such information is requested by some foreign regulatory authorities, and patient advocates would like Congress to follow suit. Manufacturers and FDA officials oppose such mandates, even though most sponsors now include comparative and clinical-use measures in preapproval trials to meet requests from private payers. Decisions on advancing from Phase II to Phase III studies increasingly involve the feasibility of gathering evidence of product value during development.
A related issue is whether comparative clinical information from observational studies is acceptable, or if this initiative will require more RCTs, which remain the FDA gold standard for obtaining definitive scientific evidence. Most comparisons of medical products and treatments by AHRQ and others tend to involve reviews and meta-analyses of existing studies and information. These comparisons are less costly and can be done quickly. Large, long-term observational studies that follow patients over several years are becoming more feasible with the proliferation of health-system databases that provide prospective and retrospective patient treatment information. FDA increasingly includes such efforts as part of its postapproval study programs designed to identify adverse drug events and to assess product safety over time.
However, such assessments are only as good as the studies available for review, and the results often fail to provide evidence sufficient to support complex treatment decisions. Robert Temple, deputy director for clinical science at FDA's Center for Drug Evaluation and Research (CDER), has long articulated the agency's concerns about relying on less rigorous comparative studies to document drug efficacy. He explained at the Drug Information Association (DIA) annual meeting in June 2010 that it's hard enough to detect differences in efficacy between a test drug and placebo even in a well-controlled RCT. He said it is more difficult and costly to show comparative effectiveness among multiple treatments or to demonstrate product superiority. Temple added that superiority studies often are clouded by faulty designs with too-low comparator doses, less healthy patient populations, and biased endpoints. He acknowledged that it's tempting to try meta-analysis and cross-study comparisons, but that lack of randomization and the potential for bias make such analysis "treacherous."
Hans-Georg Eichler, senior medical officer at the European Medicines Agency, was more positive about the value of outcomes studies, and noted at the DIA meeting that there is a push for regulators to require more "relative effectiveness" data for sponsors to obtain market authorization. However, Eichler acknowledged that excessive data demands could kill off research and development projects.
A main concern for Janet Woodcock, director of CDER, is that obtaining data to inform real-world treatment decisions in the US requires more studies run by community-based clinicians with access to local patients. But the lack of investigators and study participants at home is prompting pharmaceutical companies to conduct more and more trials overseas, Woodcock explained at a CER briefing sponsored by the Center for Medical Technology Policy in July. "We need to enable community doctors to join the research enterprise," she advised, but noted that the complex consent process, privacy issues, and inadequate information-technology systems discourage such involvement.
Whether more comparative research will limit healthcare spending remains to be seen. The Congressional Budget Office estimated earlier this year that the CER initiative will reduce federal healthcare spending by some $3 billion over 10 years—just about what the government will spend on PCORI. And that calculation assumes that comparative studies will lead to changes in physician practice and patient choice. "It's one thing to measure effectiveness; it's a totally different thing for clinicians to have access to the information and use it," pointed out consultant David Axene at the CER Summit. Under PCORI, AHRQ will continue to lead efforts to disseminate CER research findings through guides for clinicians and consumers, along with research reviews and reports.
Potential savings from CER are further limited by Congress' stipulation that Medicare cannot use study results to establish cost-effectiveness thresholds, set practice guidelines, or make coverage or payment recommendations (i.e., PCORI should not become another NICE). However, private insurers and payers are free to tap CER evidence in their coverage decisions, as they have done for years. More outcomes studies will support efforts by payers to negotiate lower rates and steer consumers toward higher-value care options.
Consequently, some analysts believe that CER studies should consider cost and pricing issues. At a briefing last month sponsored by the journal Health Affairs, Harold Sox of Dartmouth Medical School recommended that CER studies include data on utilization and costs so that payers and the public can know what they're paying for. And Harvard researcher Steven Pearson made the radical proposal that Medicare reward innovation by paying higher prices for products that can document superiority, but only a comparable or "reference" price for those demonstrating comparable clinical effectiveness; new products that fall in the middle would have three years after FDA approval to collect data supporting a premium price.
At the same time, efforts to limit or curtail treatment choices will remain difficult and require a very high threshold of evidence. "CER is not a panacea or a silver bullet," stated Kavita Patel, director of health policy at the New America Foundation, an independent research and policy organization. Establishing PCORI is "a good first step" toward identifying ways to allocate resources more wisely, she said, but she is wary of using CER primarily as a cost-cutting tool. She and others have advised the PCORI board to select a few studies that can move forward on a relatively short schedule so that the program will produce visible results within a year.
Jill Wechsler is Pharmaceutical Technology's Washington editor, 7715 Rocton Ave., Chevy Chase, MD 20815, tel. 301.656.4634, firstname.lastname@example.org.