Artificial intelligence (AI)-focused Caption Health Inc. has scored a green light from the U.S. FDA for an updated version of Caption Interpretation, which aims to help clinicians gain quick, easy and accurate measurements of cardiac ejection fraction (EF) at the point of care.
The company makes Caption AI, which helps medical professionals perform ultrasound exams – regardless of their experience – via deep learning and AI. Typically, it takes years to master ultrasound, restricting access to this diagnostic tool.
The Caption AI platform includes two components: Caption Guidance and Caption Interpretation. The former received a thumbs up from the U.S. FDA earlier this year via the de novo pathway and is the first medical software authorized by the agency to provide real-time AI guidance for medical imaging acquisition, the company noted.
Caption Guidance emulates the expertise of a sonographer by providing more than 90 types of real-time instructions and feedback. These visual prompts direct users to make specific transducer movements to optimize and capture a diagnostic-quality image. Conventional ultrasound systems, by contrast, demand the ability to recognize anatomical structures and make fine transducer movements – skills that take years to develop – effectively limiting the tool to clinicians with specialized training.
“I think one of the really amazing things about AI and [what] explains our regulatory clearances this year is that with additional data and some additional progress, you can make pretty significant improvements to the … algorithms fairly quickly,” Sam Surette, head of RA & QA at the company, told BioWorld. The company remained busy, making progress with its algorithms and software while its pivotal study was being conducted and the FDA was reviewing its submission.
As a result, when the company received the de novo clearance, it was ready to go with a version of the Caption AI platform optimized for the point of care. That coincided with the COVID-19 pandemic becoming an increasingly significant problem.
“And so, the FDA viewed the changes to Caption AI that really made it more usable at the point of care to be very important. [T]hey expedited the review of this update. So, the update had some more flexibility in workflow that was better for point-of-care settings and has better algorithms and better … instruction for the user.” The review took a mere 25 days, Surette added.
As a result, the company noted in May that the FDA had cleared an updated version of Caption Guidance.
Update to Caption Interpretation
Caption Interpretation was originally cleared in 2018. Now, with this update, it automatically calculates EF, a widely used measure of cardiac function. Specifically, it applies end-to-end deep learning to automatically choose the best clips from ultrasound exams, perform quality assurance and produce an accurate EF measurement. It incorporates three ultrasound views into its calculation: apical 4-chamber, apical 2-chamber and the readily obtained parasternal long-axis (PLAX) view.
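For readers unfamiliar with the metric: EF is simply the fraction of blood the left ventricle ejects on each beat, computed from the ventricle's end-diastolic and end-systolic volumes. A minimal sketch of that standard formula (the volumes and function name here are illustrative, not part of Caption's software):

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Ejection fraction as a percentage.

    edv_ml: end-diastolic volume (ventricle at its fullest)
    esv_ml: end-systolic volume (ventricle after contraction)
    """
    if edv_ml <= 0 or esv_ml < 0 or esv_ml > edv_ml:
        raise ValueError("volumes must satisfy 0 <= ESV <= EDV, EDV > 0")
    return (edv_ml - esv_ml) / edv_ml * 100.0

# A ventricle that fills to 120 mL and empties to 50 mL
print(round(ejection_fraction(120, 50), 1))  # 58.3
```

Deep-learning tools such as Caption Interpretation estimate this quantity directly from the image clips rather than from manually traced volumes.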
The update is aimed at clinicians at the point of care, especially those helping during the COVID-19 pandemic, Surette noted. “It has much more improved algorithms, so the performance of the core algorithms that estimate EF are significantly better. And then, I think, really critically, is that it can incorporate any combination of three views.”
The previous version had limitations, particularly in obese patients and smokers, as well as in emergency situations. “And so, what we did was we developed the deep learning technology to estimate EF not only from those views, but from a third view, which is actually the easiest one to get in those settings. And they can estimate EF from all three, two of the ones that are available, or even a single view if the clinician is unable to get all three views in their exam.”
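The any-subset behavior Surette describes – producing one EF from all three views, any two, or a single view – can be sketched as follows. This is a hypothetical illustration only: Caption's actual fusion method is proprietary, so a plain average of the available per-view estimates stands in here, and the view keys and function name are assumptions.

```python
def combine_view_estimates(estimates: dict) -> float:
    """Fuse per-view EF estimates into a single value.

    estimates: maps view name ('a4c', 'a2c', 'plax') to an EF
    estimate in percent, or None if that view was not obtained.
    A simple average is used purely to show the any-subset logic.
    """
    available = [ef for ef in estimates.values() if ef is not None]
    if not available:
        raise ValueError("at least one view is required")
    return sum(available) / len(available)

# All three views available
print(combine_view_estimates({"a4c": 55.0, "a2c": 57.0, "plax": 59.0}))  # 57.0
# Only the easily obtained PLAX view available
print(combine_view_estimates({"a4c": None, "a2c": None, "plax": 59.0}))  # 59.0
```

The design point is graceful degradation: the clinician captures whatever views the patient's anatomy and the setting allow, and the system still returns an estimate.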
Roberto Lang, professor of medicine and radiology and director of noninvasive cardiac imaging laboratories at the University of Chicago Medicine and past president of the American Society of Echocardiography, heralded this latest clearance. "Developing artificial intelligence that mimics an expert physician's eye with comparable accuracy to automatically calculate EF – including from the PLAX view, which has never been done before – is a major breakthrough," he noted.
The good news for the Brisbane, Calif.-based company comes roughly a week after it reported the close of its series B funding round of $53 million earmarked for developing and commercializing its AI-guided ultrasound technology.