Medical Device Daily Washington Editor

Most who have tracked the actions of the FDA over time recognize that products that do not lend themselves to study via controls and placebos are inherently more difficult to test in a Phase III clinical trial. The recent decision by the blood products advisory committee (Medical Device Daily, Dec. 18, Dec. 19, 2006) to recommend that FDA maintain its hold on a Phase III trial for a blood substitute attests to that difficulty.

Count Acorn Cardiovascular (St. Paul, Minnesota) among those companies that have discovered that the clinical trial turf is often chock-full of holes for devices as well.

Despite a thorough vetting of the clinical trial design with the FDA in 2000, the agency's dispute resolution panel last week shot down the company's appeal for CorCap, a cardiac support device, forcing the company to recruit more patients to beef up its clinical data.

The Phase III trial for CorCap employed a composite primary endpoint, which probably did nothing to smooth the way. This composite endpoint consisted of mortality, major cardiac procedures and change in New York Heart Association (NYHA) functional class.

Another current that ran counter to the device's prospects early on was a mortality difference that initially trended against the company but had flipped by April 2005, with 29 deaths in the treatment arm and 33 deaths in controls.

As is typically the case when private industry goes to the FDA for approval, the petitioner had a lot of support.

In a prepared statement dated Oct. 25, five physicians on the steering committee for the trial said that more than 5 million people in the U.S. “live with heart failure ... the most common diagnosis-related group for hospitalizations in the Medicare population” with a survival rate of around 50% over five years. Among the letter’s signers is Michael Acker, MD, the chief of cardiovascular surgery at the University of Pennsylvania (Philadelphia).

The letter acknowledged that “designing such controlled clinical trials ... is a challenge, given the lack of a predicate device and the ethical concerns that would arise” with the use of a sham for the CorCap, which would require a needless thoracic procedure. The steering committee and the firm agreed to the agency’s demand to boost enrollment to 300 from the originally proposed 170, to extend “follow-up from six to a minimum of 12 months and to further modify the primary endpoint” to one that conformed to the agency’s definition of clinically meaningful changes.

“In fact, at the annual meeting of the Association of Thoracic Surgeons” in April 2005, the letter states, “Dr. Craig Miller of Stanford University (Palo Alto, California) described this trial as the gold standard for surgical trials.” Miller is a professor of cardiothoracic surgery at Stanford and is on the editorial board of the Journal of Thoracic and Cardiovascular Surgery. The letter also makes note of “the many teleconferences and meetings” with the division of cardiovascular devices at the FDA “to collaboratively design a meaningful clinical trial.”

The split in the data collected for the study cohort was resolved by the use of a statistical tool known as the multiple imputation method. Clinical investigators collected the required baseline data for the first 174 of the 300 enrollees, but the change to the protocol demanded by the agency added more baseline data as well as the additional primary endpoint data, thus introducing a need for a mathematical tool to untangle the differences in the two data sets.

Multiple imputation is one such tool. In a report on the subject, Donald Rubin, PhD, a professor of biostatistics at the University of Georgia (Athens), argued that “contrary to the reviewer’s conjecture, several different analyses of the trial outcomes result in essentially the same conclusion, and that the results are not compromised by the use of multiple imputation to handle missing data.”

Rubin also asserted that “the quantity of missing data in the primary endpoint has been shown to be substantially less than was claimed during the advisory panel” and that the impact on NYHA scores was “quite small.”
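The statistical machinery at issue is straightforward to illustrate. Below is a minimal sketch of multiple imputation with Rubin’s pooling rules, written in Python with NumPy only; the simulated data and variable names are hypothetical stand-ins for a partly missing follow-up score, not the actual Acorn trial data or Rubin’s analysis.

```python
# Minimal sketch of multiple imputation with Rubin's pooling rules.
# All data here are simulated; nothing is drawn from the CorCap trial.
import numpy as np

rng = np.random.default_rng(0)

# Simulated trial-like data: one baseline covariate and an outcome with
# roughly 20% of values missing (e.g., an unobserved follow-up score).
n = 300
baseline = rng.normal(0.0, 1.0, n)
outcome = 0.5 * baseline + rng.normal(0.0, 1.0, n)
missing = rng.random(n) < 0.2
outcome_obs = np.where(missing, np.nan, outcome)

def impute_once(baseline, outcome_obs, rng):
    """Fill missing outcomes by regressing on baseline and adding residual
    noise, so each completed data set reflects imputation uncertainty."""
    obs = ~np.isnan(outcome_obs)
    X = np.column_stack([np.ones(obs.sum()), baseline[obs]])
    beta, *_ = np.linalg.lstsq(X, outcome_obs[obs], rcond=None)
    resid_sd = np.std(outcome_obs[obs] - X @ beta)
    filled = outcome_obs.copy()
    X_mis = np.column_stack([np.ones((~obs).sum()), baseline[~obs]])
    filled[~obs] = X_mis @ beta + rng.normal(0.0, resid_sd, (~obs).sum())
    return filled

# Create m completed data sets, analyze each (here: estimate the mean
# outcome), then pool the estimates with Rubin's rules.
m = 20
estimates, variances = [], []
for _ in range(m):
    completed = impute_once(baseline, outcome_obs, rng)
    estimates.append(completed.mean())
    variances.append(completed.var(ddof=1) / n)   # variance of the mean

q_bar = np.mean(estimates)                        # pooled point estimate
u_bar = np.mean(variances)                        # within-imputation variance
b = np.var(estimates, ddof=1)                     # between-imputation variance
total_var = u_bar + (1 + 1 / m) * b               # Rubin's total variance

print(f"pooled mean outcome: {q_bar:.3f} (SE {np.sqrt(total_var):.3f})")
```

The point of the pooling step is the one Rubin’s report leans on: because the between-imputation variance is carried into the final standard error, conclusions are not artificially sharpened by the filled-in values, and modest amounts of missing data tend to leave the overall result unchanged.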

However, the first issue to surface was the agency’s request, made after the trial had already commenced, that Acorn blind the individuals who would rate the NYHA cardiac function scores. The reason for the change, according to Rich Lunsford, president/CEO of Acorn, was that the FDA was of the opinion that having the trial sites conduct the NYHA assessment might bias the scores. “They wanted an independent instrument outside our site to verify the assessment,” he said.

Lunsford told Medical Device Daily that the NYHA score is “very subjective” and that the FDA was further concerned about “the motivation for doctors to get patients into the trial.” He said he was under the impression that “they [the FDA] were fine with the configuration at the start of the trial,” despite the fact that “they did let us know that they had issues with bias.”

He said company executives were under the impression that “we had addressed them.”

Lunsford noted that while the first lead reviewer for the product was still with the agency at the time of the 2005 panel meeting, the person in that chair had changed by last Friday’s meeting.

Lunsford said that the room was full of executives from other device firms, and that they were busy with pen and paper during the proceedings.

“I think that industry was surprised because it was with 300 patients,” he said, a larger study cohort than had ever been recruited for this kind of device. “It had multiple committees and was modeled a lot like pharma trials, but the fact that we could not get approval was surprising.”

Lunsford offered the now-standard cautionary note about doing business with the FDA. Despite the company’s extensive work to pin down the agency’s concerns going in, he recommended: “Get as much on the table as possible when developing endpoints with the FDA so that you’re able to define what the agency expects as far as clinically meaningful data ahead of time. It is really critical.”

On the other hand, “the first time we heard about the mathematical instrument was shortly before the second advisory panel” in 2005, Lunsford noted in reference to the multiple imputation question.