The comment period has closed on the U.S. FDA’s discussion draft for artificial intelligence (AI) in medical devices, a paper that drew the attention of medical societies and regulated industry alike. Among the questions posed by industry was whether the FDA is equipped to handle the massive volumes of data developers would have to disclose to the agency, raising concern that such disclosures would amount to little more than an obligatory and useless data dump.

The FDA’s April 2, 2019, discussion paper on regulation of AI in medical devices refers to the agency’s proposed regulatory framework for software as a medical device (SaMD), including the software pre-certification pilot. The discussion paper also references the SaMD risk framework published in September 2014 by the International Medical Device Regulators Forum. Whether the reference to the SaMD effort suggests the need for a novel regulatory mechanism for AI and machine learning (ML) is up for grabs, however.

Large data sets present significant problems

The AI Startups in Health Coalition lent its voice to the debate with several points, including a discussion of the use of large data sets for algorithm training. The Coalition said a requirement for large data sets fails to recognize that such volumes of data might not be available in a timely manner for algorithms dealing with pandemics. The Coalition also argued that more data points do not necessarily translate into better data, and that a demand for large data sets could needlessly limit competition in cases where smaller data sets would suffice, particularly because small developers lack the resources to acquire the larger ones.

The Coalition said the FDA may lack the resources to deal with the volume of data generated when numerous developers have to forward these large data sets to the agency, including in the context of medical device reports (MDRs). In practical terms, enforcement of such a requirement could amount to little more than a data dump, which could prove problematic if the FDA’s own information technology infrastructure is not up to the task of sorting through those data sets.

The Coalition said that while it is not opposed to sharing “appropriate summaries” of product performance, sharing raw or annotated postmarket data with the FDA “would be contrary to existing business practices in virtually every other field.” Those data might become public via a Freedom of Information Act request, the letter said, because decisions about releasing such data often come down to a judgment call on the agency’s part. In addition to the risk of competitive damage, such a disclosure could spread misinformation about the product and erode trust in it. The Coalition added that the data dump question is particularly salient given the agency’s stance that over-reporting of adverse events violates the MDR regulations because such activity could “mask a signal with noise.”

The Coalition made the case for sustaining the current MDR requirements for postmarket surveillance and argued that inspections should not be handled virtually, given that in-person inspections give FDA staff direct access to the data sets behind MDRs.

Physicians see dangers in autonomous AI

The medical societies, perhaps predictably, saw matters from a different angle, with the American College of Radiology (ACR) and the Radiological Society of North America (RSNA) jointly expressing their concern regarding algorithms used autonomously for image interpretation. The two-society letter said there is little support for the notion that algorithms are generalizable, while there are data indicating that existing algorithms “perform poorly across heterogeneous patient populations.”

Given the “broad heterogeneity in imaging equipment and imaging acquisition protocols,” patient safety is difficult to ensure, the two societies said, adding that no autonomous imaging algorithms had made it to market as of their June 30 letter. ACR and RSNA said the FDA should mandate that algorithms undergo testing and training at multiple clinical sites to ensure “a minimal level of generalizability” across sites and across multiple imaging systems and protocols. “We urge FDA to develop requirements for continuous monitoring of all algorithms used in clinical practice,” they said. The two societies also said they prefer “more stringent review processes” for any algorithm used autonomously, an intended use they said could not be managed with special and general controls.

The Medical Imaging & Technology Alliance (MITA) of Arlington, Va., submitted two sets of comments for the docket, including a March 25, 2020, submission that listed four guiding principles for development of AI. These include improved public health and enhanced patient experiences along with avoidance of unnecessary costs. The association’s June 30 correspondence offered a risk framework for AI and ML that poses the question of the qualifications of the user and whether the algorithm would be deployed in a controlled (clinical) vs. non-controlled (residential) setting. MITA also urged the FDA to consider teaming up with other entities to foster development of international consensus standards, such as good database practices, which would ease the load on both the FDA and industry, “while simultaneously raising the bar for quality.”

Existing regulations may suffice

Zack Hornberger, director of cybersecurity and informatics at MITA, told BioWorld that there may not be a need for a novel regulatory framework for FDA’s handling of AI and ML. “It’s important to remember that the FDA regulates a lot of software today,” Hornberger said. He acknowledged that “there are some special considerations when it comes to AI, but the bulk of what you’ll be looking at” from a regulatory perspective is already covered in existing guidance, including on the question of risk.

The term “adaptive” is perhaps more fluid than is always understood, Hornberger said, covering scenarios up to and including an algorithm that teaches itself outside of any oversight. “That isn’t a realistic place to focus” at present, he said, adding that a more appropriate focus is where the science stands at this point. He also said that the notion of autonomy in this context is squishier than might be readily apparent.

“It’s one of those woolly words that can be redefined at whim,” Hornberger said of the notion of autonomy, which means it can be shoehorned into one discussion in a manner that might not carry over well to another.

Regardless of where the FDA lands on AI and ML, Hornberger said there are parts of the world where an imaging system and an algorithm might have to operate without clinician oversight, given the paucity of physicians, including radiologists, in some regions of the globe. “We do expect to see, from a public health perspective, more weight put on making the technology available to those who otherwise would not have it,” he said, though he said he is unaware of any specific examples of that scenario at present. Nonetheless, such situations are a live topic at a number of organizations, including the World Health Organization. “It’s top of mind in a lot of different places,” he said, and will affect how different jurisdictions weigh the costs and benefits of these algorithms.
