The U.S. FDA’s approach to bias covers a large swath of territory, including the potential for bias to creep invisibly into artificial intelligence (AI) products. Yarmela Pavlovic, vice president for global regulatory affairs at Medtronic plc, said at this year’s Artificial Intelligence Summit that regulators may be more wary of the potential hazards of bias in AI compared to non-AI software simply because of the difficulty in anticipating how bias might affect the function of these advanced algorithms.
Lunit Inc. is the latest South Korean firm to gain U.S. FDA 510(k) clearance, this time for Lunit Insight DBT, its artificial intelligence (AI)-powered breast cancer diagnostic tool that analyzes digital breast tomosynthesis (DBT) images, boosting its efforts to enter the U.S. market. The company also reported that it secured $150 million in a public offering.
The artificial intelligence (AI) space doesn’t exactly lack for stakeholders, but the roster of stakeholders in the U.S. is poised to grow by hundreds of millions, according to Laura Adams, senior advisor at the U.S. National Academy of Medicine.
In years to come, 2023 may be seen in medical device circles as the year of artificial intelligence (AI), but that doesn’t mean it will be seen as the year of regulatory clarity for AI.
Neurophet Inc. pulled in ₩20 billion (US$15.1 million) in its series C funding round, which will help it roll out its AI software suite for neurodegenerative diseases worldwide and prepare for its IPO on the Kosdaq, scheduled for sometime in 2024.
A number of recent developments in artificial intelligence (AI) have offered some reassurance that these algorithms will not hit the market completely devoid of regulation, but a Nov. 8 hearing in the U.S. Senate makes clear that Capitol Hill is intent on legislating on AI, even if only belatedly.
Companies developing artificial intelligence (AI)-enabled solutions have agreed to work with governments to test models both pre- and post-deployment, in a bid to manage the risks around security, safety and societal harms. The landmark agreement was reached at the first AI Safety Summit, held at Bletchley Park in the U.K.
As a follow-up to the Biden administration’s executive order on artificial intelligence (AI), the U.S. Office of Management and Budget (OMB) has issued a memorandum directing federal agencies’ use of AI.
The U.K. Medicines and Healthcare products Regulatory Agency (MHRA) has converted its regulatory sandbox for artificial intelligence (AI) into a full-fledged program dubbed the AI-Airlock, described as a regulator-monitored virtual area in which industry can “generate robust evidence for their advanced technologies.” MHRA said it is focused on ensuring that AI products are available in the U.K. “before they are available anywhere else in the world,” a sign that national economic competitiveness is fostering a regulatory willingness to deal with uncertainty about this class of products.
The U.K. government has launched a £100 million (US$122 million) fund that will accelerate the development of artificial intelligence (AI) tools to help tackle some of the biggest challenges in health care.