Medical Device Daily Washington Editor

WASHINGTON – Mark McClellan, administrator of the Centers for Medicare & Medicaid Services (CMS; Baltimore), announced yesterday that he is resigning, effective sometime in the next five weeks.

McClellan is reportedly seeking work with a think tank, possibly the American Enterprise Institute (Washington), where he was a scholar prior to his appointment as FDA commissioner. He took that post in November 2002 and left in March 2004 for the CMS job.

The 43-year-old McClellan holds two doctorates, one in economics and one in medicine, both from the Massachusetts Institute of Technology (Cambridge). He reportedly will seek a position writing on healthcare economics. McClellan complained that his twin 7-year-old daughters “don't remember me in a job where I got home regularly for dinner. It's just time.”

Observers attempting to predict the next CMS appointee are focusing largely on Leslie Norwalk, the agency's current deputy administrator. Norwalk specialized in healthcare law at the Washington office of the law firm Epstein Becker & Green and served in the first Bush White House in the Office of Presidential Personnel.

But several others are possible successors. One name making the rounds is Herb Kuhn, head of the agency's Center for Medicare Management, a post he took in 2004. Previously, Kuhn was corporate vice president for advocacy at Premier (Charlotte, North Carolina), a not-for-profit hospital alliance. Norwalk did not return a call for comment.

McClellan is the brother of Scott McClellan, President Bush's beleaguered former press secretary, who resigned from that post in April, and the son of Carole Keeton Strayhorn, currently the comptroller of Texas and an independent candidate for that state's governorship.

Dick Davidson, president of the American Hospital Association (Chicago), said that McClellan “brought to his post the real-world expertise needed to serve in one of the most difficult jobs in government.”

Less is more for clinical trials

A team of researchers at St. Jude Children's Research Hospital (Memphis, Tennessee) has developed a statistical tool that allows researchers to use historical data to establish whether a new treatment works better than its predecessors. By comparing data from an ongoing study with data from one or more completed studies, researchers may be able to cut study costs and find out sooner whether the trial of a new treatment is worth continuing.

The basic statistical tool in question, interim analysis, is part of the Bayesian statistical toolkit that the FDA is promoting. Researchers have used it for some time, but until now they could apply interim analysis only within a single study. As study sponsors know, any interim analysis indicating that subjects enrolled in the control arm are doing better than those receiving the studied treatment forces the sponsor to terminate enrollment and treatment. Conversely, if the new treatment is decidedly better than the control treatment, those in the control arm must be offered the new regimen.
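To make the mechanics concrete, here is a minimal Python sketch of a within-study interim look, assuming a simple two-arm comparison of means. The data, the boundary value and the function names are hypothetical illustrations, not material from the St. Jude paper.

```python
# Minimal sketch of a within-study interim analysis (illustrative only; not the
# St. Jude method). At a planned interim look, a z-statistic comparing the
# treatment and control arms is checked against a stopping boundary.
import math
from statistics import mean, stdev

def interim_z(treatment, control):
    """Two-sample z-statistic for the difference in means at an interim look."""
    nt, nc = len(treatment), len(control)
    se = math.sqrt(stdev(treatment) ** 2 / nt + stdev(control) ** 2 / nc)
    return (mean(treatment) - mean(control)) / se

# Hypothetical interim data (e.g., a response score per subject).
treatment_arm = [5.1, 6.3, 5.8, 6.9, 5.5, 6.1, 6.6, 5.9]
control_arm = [4.2, 5.0, 4.6, 5.3, 4.8, 4.4, 5.1, 4.7]

z = interim_z(treatment_arm, control_arm)
boundary = 2.80  # an illustrative conservative early-look boundary

if z > boundary:
    print(f"z = {z:.2f}: stop early, new treatment superior; offer it to controls")
elif z < -boundary:
    print(f"z = {z:.2f}: stop early, control arm doing better; halt enrollment")
else:
    print(f"z = {z:.2f}: no boundary crossed; continue the trial")
```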

However, a new tweak to the math behind interim analysis may now allow researchers to examine data from existing studies to more quickly determine whether a new treatment is effective.

The St. Jude report indicates that the tweaked statistical tool, called the sequential conditional probability ratio test (SCPRT), has appeared in the literature since at least 1998. But Xiong, Tan and Boyett, the St. Jude team that wrote the paper explaining this concept, introduced so-called Brownian motion into the SCPRT model. In statistical terms, Brownian motion is a model describing seemingly random data; the concept originated in observations of the random movement of particles suspended in a fluid.
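As a rough illustration of why Brownian motion fits this setting, the short simulation below (our own sketch, not the authors' code) shows how the running sum of standardized patient outcomes, rescaled by the planned sample size, traces out a Brownian-motion-like path in “information time” when there is no true treatment effect; SCPRT-style rules set their stopping boundaries on this scale.

```python
# Illustrative simulation (not the authors' code): under the null hypothesis,
# the running sum of standardized patient outcomes, rescaled by the planned
# total sample size, behaves like Brownian motion on [0, 1].
import random
import math

random.seed(0)
n_total = 400                    # planned maximum number of observations
outcomes = [random.gauss(0, 1) for _ in range(n_total)]  # null: no treatment effect

running_sum = 0.0
path = []                        # (information time t, Brownian-motion value B(t))
for k, x in enumerate(outcomes, start=1):
    running_sum += x
    t = k / n_total              # "information time" in [0, 1]
    b_t = running_sum / math.sqrt(n_total)
    path.append((t, b_t))

# Print the path at a few interim looks.
for t, b in [path[int(n_total * f) - 1] for f in (0.25, 0.5, 0.75, 1.0)]:
    print(f"t = {t:.2f}  B(t) = {b:+.3f}")
```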

The authors comment that in some trials, assigning patients to a control arm is not practical because the numbers are too small for randomization. They also caution that such a sequential comparison requires that researchers have access to “historical data with required quality and sample size . . . to form a valid reference.”

The St. Jude report involves a trial of amifostine (marketed as Ethyol) to treat hearing loss in children undergoing treatment for medulloblastoma, a malignant brain tumor. The tumor treatment was itself the subject of a study that was ongoing when the St. Jude team decided to introduce amifostine to counter the hearing loss associated with the high-dose chemotherapy and radiation under study. Hence, obtaining readily comparable study subjects was not especially difficult in this instance.

Xiong told Medical Device Daily that such an approach is not utterly new, but that “[t]he most important property of our procedures is that the conclusion obtained from partial data at an early stopping is very unlikely to be overturned by the test at the planned end with conclusion drawn from full data.”

He added that the insertion of Brownian motion into SCPRT “does not affect the probability of type 1 error, provided that the sample size of existing data is sufficiently large, and the setting of historical study is about the same as the current study.” In clinical studies, a type I error is a false positive: the test rejects the null hypothesis and declares a treatment effective when in fact it is not.
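The meaning of a type I error can be checked directly by simulation. The sketch below, an illustration rather than anything drawn from the paper, generates thousands of trials in which the null hypothesis is true and counts how often a fixed two-sided 5% boundary is crossed anyway; the observed rejection rate should hover near 0.05.

```python
# Quick simulation (illustrative, not from the paper): estimate the type I
# error of a fixed-boundary test by generating many trials under the null
# hypothesis (no true treatment effect) and counting false rejections.
import random
import math

random.seed(1)
alpha_boundary = 1.96            # two-sided 5% critical value
n_per_arm = 50
n_simulated_trials = 20_000
false_positives = 0

for _ in range(n_simulated_trials):
    # Both arms drawn from the same distribution: the null hypothesis is true.
    t_arm = [random.gauss(0, 1) for _ in range(n_per_arm)]
    c_arm = [random.gauss(0, 1) for _ in range(n_per_arm)]
    diff = sum(t_arm) / n_per_arm - sum(c_arm) / n_per_arm
    se = math.sqrt(2 / n_per_arm)  # known unit variance in each arm
    if abs(diff / se) > alpha_boundary:
        false_positives += 1       # rejected the null even though it is true

print(f"estimated type I error: {false_positives / n_simulated_trials:.3f}")  # ~0.05
```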

Precisely how this approach will affect the size of enrollment is difficult to decipher. Xiong said that “sample sizes for new and control studies depends on the variances of the data, and the type I and type II error, and the difference to be detected, which are to be determined by the investigators.”
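For a sense of how those quantities interact, the following is the textbook fixed-design, two-arm sample-size formula, offered as a general illustration of the dependencies Xiong lists rather than as the SCPRT calculation itself.

```python
# Standard two-arm sample-size formula (a general illustration, not the SCPRT
# calculation): n per arm = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sigma^2 / delta^2
import math
from statistics import NormalDist

def n_per_arm(sigma, delta, alpha=0.05, power=0.80):
    """Subjects needed per arm to detect a mean difference `delta` with
    outcome standard deviation `sigma`, two-sided alpha, and the given power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # controls type I error
    z_beta = NormalDist().inv_cdf(power)            # controls type II error
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Example: detect a 5-point difference when the outcome SD is 10.
print(n_per_arm(sigma=10, delta=5))   # about 63 subjects per arm
```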

Sequential data, or data culled from a specific portion of the study enrollment, are typically compared with data from the entire enrollment, and such analyses are now required by the National Institutes of Health for many ongoing trials. In the paper, published in August in the journal Statistics in Medicine, the authors conclude that for this method to work, “the times for the interim looks are either preset or chosen without dependence of data observed at earlier stages.”

Those who deal with study data in other fields might also want to take note. In the conclusion section of the paper, the authors remark that though this mechanism was developed for clinical trial use, the method “can be used to solve a similar statistical testing problem that can be frequently found in other fields as well.”