Experimental drugs are increasingly being tested in combinations from the early stages of drug development, with the rationale that the drugs in a combination interact to provide added benefit over each single agent.
Now, researchers from Harvard Medical School argue that drug independence, rather than additive or synergistic effects, can explain the majority of trials where drug combinations prove superior to single agents.
In other words, patients given a drug combination do not do better because the drugs interact, but because some patients respond to one drug and some to the other.
Peter Sorger, the Otto Krayer Professor of Systems Pharmacology at Harvard Medical School and the paper's senior author, emphasized that the team's point is not to question the benefit of drug combinations.
"We have questioned the underlying rationale for benefit," he told BioWorld.
"This doesn't mean that there isn't real benefit – independence is real and surprisingly powerful."
In fact, he said, the results show that "there is a design principle that can be exploited" to identify those cases where there is a benefit due to drug interactions – and that such combinations would be "manyfold better" than current combinations, which have already improved outcomes for patients.
Sorger and first author Adam Palmer, a postdoctoral fellow in Sorger's lab, were as surprised as anyone by their conclusions, which they published in the Dec. 15, 2017, issue of Cell.
"I started by trying to understand the molecular reasons for why drug combinations are effective," Palmer told BioWorld. "Ultimately, I ended up disproving that hypothesis."
When Palmer and Sorger re-analyzed studies of patient-derived xenografts, along with data from 15 clinical trials of combination therapies, they found that in most cases the benefit of a combination could be explained by assuming the trial contained three kinds of patients: those who responded to one drug, those who responded to the other drug, and nonresponders.
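The logic of independent action can be illustrated with a small simulation. The sketch below is a toy model, not the paper's actual analysis: the lognormal response distributions, patient count, and parameter values are illustrative assumptions. Under independence, each patient simply gets the benefit of whichever drug works better for them, so the combination outperforms either monotherapy even though the drugs never interact.

```python
import random
import statistics

random.seed(0)

N = 10_000  # number of simulated patients (arbitrary)

# Hypothetical progression-free survival in months for two drugs;
# the lognormal spread stands in for patient-to-patient variability.
pfs_a = [random.lognormvariate(1.0, 0.8) for _ in range(N)]
pfs_b = [random.lognormvariate(1.0, 0.8) for _ in range(N)]

# Independent action: each patient's outcome on the combination is
# simply the better of their two single-drug outcomes -- no
# additivity or synergy between the drugs is assumed.
pfs_combo = [max(a, b) for a, b in zip(pfs_a, pfs_b)]

print(f"median PFS, drug A alone: {statistics.median(pfs_a):.1f} months")
print(f"median PFS, drug B alone: {statistics.median(pfs_b):.1f} months")
print(f"median PFS, combination:  {statistics.median(pfs_combo):.1f} months")
```

Because the maximum of two variable quantities exceeds either one on average, the combination arm looks superior at the population level even though no individual patient benefits from a drug interaction.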
Some trials did show additive effects. Combining the experimental MEK inhibitor binimetinib (Array Biopharma Inc.) with Kisqali (ribociclib, Novartis AG) in melanoma, or with buparlisib (Novartis AG) in pancreatic adenocarcinoma, led to improvements that were not due to independent drug effects.
But independent action was sufficient to explain the superiority of combination treatments, including Herceptin (trastuzumab, Roche Holding AG) and chemotherapy in breast cancer, Opdivo (nivolumab, Bristol-Myers Squibb Co.) and Yervoy (ipilimumab, Bristol-Myers Squibb Co.) in melanoma, and Lynparza (olaparib, Astrazeneca plc.) combined with platinum-based chemotherapy in ovarian cancer.
In the near term, Sorger said, the results provide a "very strong theoretical foundation" for the idea that "combination therapies... should always be used to treat patients when the toxicities, the adverse effects, are acceptable."
The ideal solution from both a toxicity and a financial standpoint would be to learn how to predict for each individual patient which drug is the effective one. Currently, though, "it is very difficult to predict if they will respond better if they receive treatment A or if they receive treatment B," Palmer said. "This is what precision medicine is trying to solve."
In the absence of such predictive capabilities, another possibility is to monitor which drug is working while treatment is in progress, Sorger said, though that would require changes in standard treatment protocols. Currently, "we don't tend to take patients, put them on combinations, and try to figure out which one is working."
In part, that is because of the assumption that both partners contribute to the treatment effect. But Edith Perez, vice president of biooncology at Genentech, noted that there are two additional caveats to that approach in the clinic.
Overall, she said, the work represents an interesting addition to earlier ways of understanding and predicting drug interactions, such as the combination index developed by Chou and Talalay.
In clinical practice, though, increased monitoring would mean taking multiple biopsies from patients on a repeated basis, a strategy that she called "very implausible" for most tumors.
Even if such monitoring were to become plausible, for example, through advances in liquid biopsies and biomarkers, the work of Palmer and Sorger itself shows that drugs can work in unexpected ways.
"Sometimes we think we know how a drug works based on preclinical studies," Perez told BioWorld, "but in humans, they work in mysterious ways that we did not anticipate."
Sorger said that the work is an example of the power of data science. "Adam [Palmer] took a 15-year-old dataset and stuck it in a computer," and gained new insights into why combinations are effective.
He also argued that such analyses are a too-rare part of biomedical research. "For all of what you read in the popular media, the investment in data analysis is really vanishing compared to data collection," he said. "Over and over again in biomedicine we see examples of collecting data and not applying modern analysis."
Furthermore, attempts at modern analysis are made more difficult by the fact that clinical journals still tend to treat data as proprietary.
In 2016, the editor-in-chief of The New England Journal of Medicine co-authored an editorial calling scientists undertaking re-analyses "research parasites," though he quickly backtracked on the specific phrase.
In basic research, Sorger said, there has been "a huge amount of push towards open-source data," and publishers have been "very progressive in this area."
But "unfortunately," he added, "the clinical journals are lagging very far behind. ... The place where this would be most valuable, where the most people have died, is last in line for innovation."