Medical Device Daily Washington Editor

WASHINGTON – Comparative effectiveness studies are receiving increasing emphasis in healthcare, and the focus yesterday at this year's edition of the National Health Policy Conference was on their potential to impact the cost of healthcare. But despite considerable optimism about the value of comparative effectiveness studies, the consensus is that medical science has a way to go before this type of data will have much impact on the cost of care.

Mark McClellan, MD, director of the Engelberg Center for Health Care Reform (Washington), said that comparative effectiveness "has been described as the potential salvation" for reducing healthcare costs "without sacrificing quality of care." But he cautioned that such data could be used as "a way of restricting access ... if it's done wrong."

McClellan addressed the “well-known gaps in evidence,” but said that while “there’s a clear need to fill in that gap,” the question of how to do this is not easily answered.

Head-to-head clinical trials are of limited use, he said, adding “if you look at some of the well-known facts ... the huge variations we see in healthcare costs are not due” only to the use of drugs, but also “to subtle differences in medical practice.” He said that list would include frequency of follow-up appointments and the relative rates of scanning service usage.

"There are fewer and fewer 'average' patients out there," McClellan said, a perspective he predicted would be more widely adopted over time. "The gold standard is the prospective, double-blinded randomized trial," he said, adding that while "we have some really excellent methods" for conducting those trials, it is often "impossible to do on the kinds of treatments we most want to learn about."

Part of the drag is patient resistance to randomization, he said. Other obstacles he cited include the difficulty of controlling for variation in the use of associated treatments and patient non-compliance with treatment.

Other data collection methods "might include registries and other types of observational studies," McClellan said, but questions persist about their statistical validity. Data pooled from multiple payers and providers "may not be able to provide definitive answers on their own," he said, but might nonetheless prove very informative in the aggregate.

"Providing more funding support is important" in this effort, McClellan said, but in the current federal budget environment, attendees should not count on "many billions of dollars" in federal support.

Carolyn Clancy, director of the Agency for Healthcare Research and Quality (AHRQ), said of comparative effectiveness: "It won't happen if it isn't intrinsically linked" to care delivery. "Many see comparative effectiveness as crowding out other issues," she said, noting that AHRQ also has responsibility for ambulatory patient safety programs and several healthcare information technology initiatives. The agency also conducts surveys to collect data on medical expenditures.

Still, Clancy said that everyone at AHRQ is “very excited that our budget [for comparative effectiveness research] has doubled” from $15 million in FY07 to $30 million in FY08, with the overall AHRQ budget for 2008 slightly more than $335 million.

The number of reviews and technical briefs on comparative effectiveness should double as a result, Clancy said, with AHRQ expecting to publish a new series of technical reports for gene-based test performance. Another initiative is a randomized, controlled trial that will clarify the value of genetic testing for warfarin (Coumadin) therapy.

AHRQ also is teaming up with the Centers for Disease Control and Prevention (Atlanta) “to review databases focusing on utilization and outcomes of gene-based tests and therapies,” Clancy said.

One of the AHRQ initiatives underway is the commissioning of a report to boost the impact of the Effective Health Care Program, a project designed to move comparative effectiveness data into practice. This report, Clancy said, will “get very concrete and specific about enhancing the program’s infrastructure,” and will ask “what kinds of models of public-private partnerships” would best enhance the program’s mission.

Clancy said that genomics “is already here,” and “increasingly becoming part of our mainstream work.” But she said that however exciting cutting-edge research might be, “we cannot discover our way to better value in healthcare.”

She said that healthcare decisions will require, in the aggregate, "many different types of evidence," and that the needs in the clinic are as important as abstract ideas about what constitutes comparative efficacy. "We're not going to get better value unless we pay as much attention to the demand side," she said.

Paul Wallace, the medical director of programs at the Care Management Institute operated by the Kaiser Permanente Foundation (Oakland, California), said that translational research is “an extraordinarily complicated task,” going on to describe a real-world model.

He said that of the 115 new technologies examined by Kaiser over the past year, only about 38 were demonstrably great ideas and seven were dead on arrival. The balance were “in the grey area,” and “our challenge is to shine a bright light on the grey area” because this is where most learning takes place.

Wallace said that Kaiser data comparing four knee replacement devices suggested that one of the four demonstrated an effective life that was significantly shorter than the other three.

"Do we abandon this device? That isn't a smart approach," he said. Rather, Kaiser talks "to our surgeons [to] find the appropriate place" for a product, given its parameters.

As for lessons from evidence-based medicine, Wallace said that executives “must engage practicing clinicians” and “give them tools to navigate the grey areas,” adding: “I’m not going to say we nailed how to do these.”

Offering a cautionary note in closing, Wallace said, "If incentives are not well aligned, better evidence alone is not sufficient to change physician or patient behavior."

As is well known, AHRQ has access to device clinical trial data that it can use and publish in its analyses, a sore spot for a number of device makers.

However, Clancy told Medical Device Daily that “any time a specific firm is affected” by such disclosures, “we inform them at the outset.”

Generally speaking, the interaction has gone well, Clancy said. “However, we’ve had some lively conversations from time to time,” she quipped.

Despite the controversy, industry has generally concluded that AHRQ was “very fair” in the way it made use of the data in question, she said.