The FDA's regulation of artificial intelligence (AI) is divided by product center for reasons that are obvious, but precisely what that regulation will look like is anything but. As the FDA's Center for Devices and Radiological Health (CDRH) goes through the comment period for its discussion draft for AI, other nations are starting their own efforts in this space. The American agency's efforts may foreshadow the approaches employed in other nations.

The FDA's pursuit of a regulatory framework for AI, and for the complex algorithms that fall under the AI umbrella, such as machine learning (ML) algorithms, is proceeding in the background of the CDRH pilot program for software as a medical device (SaMD). There is of course some degree of overlap between the regulatory needs of these two classes of products, and the agency will require new statutory authorities to regulate both. But how distinct must those authorities be? And when will they be made available to the FDA so it can promulgate rules and regulations pursuant to them?

The FDA unveiled its discussion draft for AI in April 2019, explaining that its traditional premarket tools would likely stifle advances in the field, thus necessitating a novel approach. The 20-page paper explained that the agency's approach will rely to some extent on SaMD-related documents from the International Medical Device Regulators Forum (IMDRF), and noted that some modifications to AI applications would not require premarket review.

Modifications that would require review, however, include those driven by changes in device performance due to retraining with new datasets, even if the patient population and the input signal are unchanged. Modifications to inputs with no change in intended use would likewise trigger a review under this approach, while a change in intended use would require a full regulatory re-examination. The paper cited the need for good machine learning practices (GMLPs) as a method for ensuring proper device function throughout the total product life cycle.

Scott Gottlieb, former FDA commissioner, said in April that the agency intends to issue a draft guidance based on feedback received on the discussion draft. He said the FDA will apply its current statutory authorities "in new ways to keep up with the rapid pace of innovation and ensure the safety of these devices." The seeming disconnect between that statement by Gottlieb and other statements by the agency regarding its statutory authorities, or lack thereof, has never been directly addressed by the agency.

Some observers have serious misgivings as to whether the FDA's existing statutory authorities are adequate to the task of regulating AI, essentially the same dilemma the agency faces in connection with its SaMD pre-cert program. In April, regulatory attorney Brad Thompson, of Epstein Becker & Green PC, told BioWorld MedTech the FDA has at best dismal prospects of converting the AI paper into a functioning regulatory regime absent legislative help from Congress. Thompson said he was generally supportive of the AI discussion draft, but that he doubted whether the agency has the statutory authority to require that vendors make continuous improvements to their AI algorithms as spelled out in the FDA paper.

Thompson also said the FDA's Digital Center of Excellence might not be in a position to ease some of the boundaries between product centers where AI regulation is concerned. He also described the regulatory path laid out in the discussion draft as "not for the faint of heart."

FDA hopes: one bill, two new regulatory authorities

There are a number of intertwined issues with AI regulation at the FDA's device center, and coordination of statutory authorizations for AI and SaMD regulation is one of the more prominent. The associate director for digital health at the FDA's device center, Bakul Patel, told BioWorld MedTech that there is a substantial amount of overlap between the authorities needed for SaMD and AI, and that the agency's hope is that any authorizing legislation will explicitly handle both.

Congress typically passes legislation for the various FDA user fee programs in a single package, and the current user fee agreement for medical devices, known as MDUFA IV, expires Sept. 30, 2022. The authorizing legislation for the next user fee schedule will have to be in process in the U.S. House and Senate by 2022, but the SaMD pre-cert pilot will extend to the end of the current calendar year, with no clear date in sight for the emergence of a fully formed SaMD proposal. Whether the FDA can arrive at a legislative proposal for both SaMD and AI in time for the next user fee authorization remains to be seen, but the preference for dual-purpose legislation would seem to put the FDA in a fairly tight bind for AI.

Clinicians who would use any AI products in clinical care have demonstrated an interest in the algorithms, but Patel was reluctant to discuss the extent to which the agency sees the algorithms as trade secrets that it cannot unilaterally disclose.

Patel said the FDA will collaborate with other regulatory agencies on AI, but that the emphasis will be on development of overarching principles. He said the International Medical Device Regulators Forum's clinical evaluation guidance for SaMD reflected a need to deal with the fact that each regulator is operating under a more or less unique statutory authority, a consideration that will be applied to any such collaboration on AI. He said the approach to the FDA's collaboration with IMDRF will reflect the understanding that "if everyone started with the same playbook and tailored it for their regulations and their laws, we would get harmonization." However, he was unable to discuss in any detail how inspections of AI vendors would be conducted, although a "desktop" inspection conducted remotely is one of the possibilities, depending on whether the inspection is routine or triggered by adverse event reporting.

Meanwhile, in the rest of the world, efforts vary

Other nations have determined there is a need to get involved in AI for health care and other purposes, but to date, few of these efforts deal directly with regulation of AI in medical technology. One exception arose recently when China's National Medical Products Administration (NMPA) released a guideline for its review of software that employs machine learning to aid physicians in making diagnosis and/or treatment decisions. One of the requirements imposed by NMPA is that the source data must be drawn from third-party databases to ensure that the input data are both accurate and standardized. NMPA further stipulated that any change to the algorithm would be sufficient to trigger a new regulatory review.

In contrast, the U.K. government's August 2019 announcement regarding AI is directed toward a bolus of funding for a national laboratory that will focus on the use of AI in health care. The $303 million in funding will be applied to methods of screening for heart disease and cancer. Simon Stevens, the CEO of NHS England, said the funds will "help the NHS become a world leader in these important technologies." As yet, the U.K. Medicines and Healthcare Products Regulatory Agency has not posted any policy statements or draft guidance regarding the use of AI in medical technology.

Health Canada has commenced an examination of AI in medical devices via a collaboration with the Canadian Institutes of Health Research that will yield a paper directed toward regulatory considerations. It is not clear yet, however, when Health Canada expects to draft a guidance on this subject. (See the related articles in this edition on AI in Australia, China, India, Japan and South Korea.)