LONDON – Big data has the power to change the way drugs are accessed, used and regulated, but there are barriers to its implementation: complying with data protection legislation, engendering trust, understanding the strengths and limitations of datasets and developing analytical methods whose conclusions can be carried through to regulatory decisions.
“We need to have a reflection and see how we can jointly address and take advantage of [big] data,” said Guido Rasi, executive director of the EMA, opening a workshop convened by the agency to identify opportunities for big data in drug development and regulatory science.
Electronic health data are being generated around millions of patients, providing an opening to better understand disease and track the safety of drugs. Meanwhile, genomics has shown that cancers comprise many subsets, calling for a more personalized approach and raising the prospect of earlier diagnosis and earlier treatment.
At the same time, social media and smart devices are making it possible to incorporate the needs, will and opinions of patients into regulatory oversight.
The sum of big data “will enable us to make better informed decisions,” Rasi said.
The FDA is at the same stage as the EMA in coming to terms with big data, according to the agency’s David Martin. “We are asking for inputs, but also having to deal with big data from day-to-day,” he said.
The biggest impact to date has been in the establishment of the FDA’s Sentinel system for assessing drug safety signals. From its inception in 2008, the system has grown to include data on 200 million people, and it is used for monitoring the safety of approved drugs.
An example of how the information has been used in practice concerns the new-generation oral anticoagulant Pradaxa (dabigatran), which was expected to reduce the incidence of bleeding compared to warfarin. However, following its approval by the FDA in 2010, adverse event reports suggested dabigatran-treated patients were more likely to suffer bleeding episodes than those treated with warfarin.
Bleeds are such a common side effect of warfarin that they are rarely reported, which skewed the spontaneous reporting comparison. By analyzing Sentinel data, it was possible to show there was no higher risk of bleeds with Pradaxa, avoiding the need for a long-term epidemiological study.
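The Sentinel approach rests on comparing event rates observed in routine care rather than relying on spontaneous reports. The sketch below, in Python, illustrates the basic arithmetic of such a comparison using made-up cohort counts; the function names and figures are hypothetical, and real Sentinel analyses depend on far more rigorous designs, such as propensity-score matching and sequential monitoring.

```python
# Minimal sketch of the kind of cohort comparison an observational safety
# analysis might perform: incidence rates of major bleeds per 1,000
# person-years for two drug cohorts, plus a crude rate ratio.
# All numbers are illustrative, not actual Sentinel figures.
import math

def incidence_rate(events, person_years, per=1000.0):
    """Events per `per` person-years of follow-up."""
    return events / person_years * per

def rate_ratio_ci(events_a, py_a, events_b, py_b, z=1.96):
    """Crude rate ratio (cohort A vs. cohort B) with a 95% CI on the log scale."""
    rr = (events_a / py_a) / (events_b / py_b)
    se = math.sqrt(1.0 / events_a + 1.0 / events_b)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Hypothetical cohort-level counts: (bleeding events, person-years of exposure).
dabigatran = (340, 21_500.0)
warfarin = (610, 37_800.0)

rr, lower, upper = rate_ratio_ci(*dabigatran, *warfarin)
print(f"Dabigatran: {incidence_rate(*dabigatran):.1f} bleeds / 1,000 person-years")
print(f"Warfarin:   {incidence_rate(*warfarin):.1f} bleeds / 1,000 person-years")
print(f"Rate ratio (dabigatran vs. warfarin): {rr:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```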
Over the next five years, the FDA is planning a push on big data, including public workshops and more pilot studies. “These are early days for us, too,” Martin said.
Patients also recognize the value of big data, according to Jean Georges, of the patients’ group Alzheimer’s Europe. His own organization has contributed to three EU projects harnessing big data to inform Alzheimer’s research.
The most recent project, launched at the start of November, will pull together real-world data on Alzheimer’s patients held in 75 separate national databases, with the aim of performing analyses to better inform regulators and help to elucidate biological mechanisms and pathways driving neurodegeneration.
“We have got feedback from patients and their carers living with the disease. They are recognizing the sharing of data is of vital importance,” Georges said.
There are also challenges: ensuring informed consent for the use and re-use of data, protecting data privacy and controlling disclosure. “Therefore, governance is of great interest to patients and patients’ organizations,” said Georges. “We must be involved in discussion on the use of big data.”
THE BIG DATA LANDSCAPE
If there is broad understanding of the potential of big data in drug development and regulation, it is another thing to marshal that resource, noted Lisa Latts, deputy chief health officer at IBM Watson, the computer company’s machine learning arm.
It is “humanly impossible” to stay on top of the volume of data, with 100,000 clinical trials in progress at any one time, a further 1.8 million papers added each year to the roughly 24 million articles in the Medline database, and the amount of data in electronic health records doubling every 24 months.
IBM’s Watson system can analyze those huge amounts of structured and unstructured data to provide cognitive insights, learning and improving as more data are added to the mountain.
As one example of how that is being applied, at the beginning of the month, IBM announced a collaboration with Celgene Corp. to use big data to enhance pharmacovigilance and create an outcomes- and evidence-based drug safety decision support system for pharma companies.
The project will involve the collection, collation and automated analysis of data from sources including clinical trials, medical literature, regulators, social media and claims databases.
The result will be better management and interpretation of Individual Case Safety Reports, and improvements in picking up safety signals across a drug’s lifecycle.
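A common first-line technique for picking up safety signals in collections of individual case safety reports is disproportionality analysis. The short Python sketch below computes a proportional reporting ratio (PRR) from a 2x2 table of report counts; the class, counts and threshold are invented for illustration and are not drawn from the IBM-Celgene system described above.

```python
# Illustrative sketch of disproportionality analysis, one standard way safety
# signals are flagged in collections of individual case safety reports.
# Computes the proportional reporting ratio (PRR) from a 2x2 contingency
# table; the report counts below are made up for demonstration.
from dataclasses import dataclass

@dataclass
class ContingencyTable:
    drug_event: int     # reports: drug of interest, event of interest
    drug_other: int     # reports: drug of interest, all other events
    others_event: int   # reports: all other drugs, event of interest
    others_other: int   # reports: all other drugs, all other events

def proportional_reporting_ratio(t: ContingencyTable) -> float:
    """PRR = [a / (a + b)] / [c / (c + d)]; values well above 1 suggest the
    event is reported disproportionately often for the drug of interest."""
    drug_rate = t.drug_event / (t.drug_event + t.drug_other)
    background_rate = t.others_event / (t.others_event + t.others_other)
    return drug_rate / background_rate

# Hypothetical report counts for one drug-event pair.
table = ContingencyTable(drug_event=48, drug_other=952,
                         others_event=1_200, others_other=118_800)
print(f"PRR = {proportional_reporting_ratio(table):.2f}")
# A PRR of 2 or more, with a sufficient number of cases, is often
# treated as a signal worth reviewing further.
```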
Similarly, Nico Gaviola, business manager at Google, described how the company is applying lessons learned from searches carried out via its search engine to big data in health care.
Around one in 20 Google searches, or 60,000 per minute, relate to health. Now, the company is establishing a service that will enable users to apply machine-learning tools to search. Rather than returning a list of 273 million hits for a search on “diabetes,” it will be possible to analyze the information those results contain.
“We will provide all the infrastructure, storage, access and tools to be able to do Google-type searches for a few dollars – there will be no investment in systems or developing applications,” Gaviola said.
Google intends to extend the service beyond publicly available information, to curate and manage proprietary data on behalf of clients.
REGULATORS, COMPANIES FACE BIG DATA DISRUPTION
Regulators “should be aware of the need to roll their sleeves up,” said Luca Pani, director general of the Italian regulator AIFA. In the future, the majority of data “won’t come from clinical trials,” and given that, regulatory agencies must establish “which data and when” they will factor in when deciding on marketing authorizations.
In addition, the EMA cannot continue to operate solely at a European level, but has to link to the FDA, to the PMDA in Japan and to others. “These data are absolutely global,” Pani said.
The pharma industry has made a “declaration of intent” to use big data and is putting a large amount of money into relevant research, according to Richard Bergstrom, director general of the European Federation of Pharmaceutical Industries and Associations (EFPIA).
There is “a lot of nervousness,” in particular because technology companies are moving into pharma territory, and there are fears of the disruptive effects.
However, Bergstrom noted, it is also possible for the sector to change from within, and EFPIA members have shown they are willing to work together to deploy big data.
Changes in consumer demand and consumer behavior mean pharma will no longer be in control of all the knowledge about its products. “We have to relate to this, and the same is true for regulators,” Bergstrom said.
Pharma and health care are behind on the digitization curve. The sector is heavily regulated, and that has isolated it from its counterparts in other industries. “Big data is going to change that,” Bergstrom predicted. “A common focus on outcomes can really make a difference. For industry, we will get paid for outcomes, not for pills.”
CEOs of EFPIA member companies are frustrated that data on outcomes to demonstrate value are not yet available. EFPIA is working with academics and clinicians to address that in Bigdata4bigoutcomes, a project to design standardized outcome measures that can be used to demonstrate value; increase access to outcomes data; and use big data to drive better delivery of health care.