CLEVELAND – What are some of the biggest challenges related to using artificial intelligence (AI) in health care? A panel of experts tackled that question during a session Tuesday at the 2019 Medical Innovations Summit, while also discussing what their organizations have done in this space to advance patient care.

"Let's bring it down to what's really going on within your institutions, because I actually think there's a lot of hype, and then there's the practical applications," said moderator Will Morris, executive medical director at the Cleveland Clinic, in kicking off the panel.

Marc Probst, chief information officer at Intermountain Healthcare, of Salt Lake City, said his organization has used AI at some level for decision support for about 30 years; "it just hasn't been as complex and as intricate as it is today."

To that end, the organization is looking to further harness the power of AI, particularly through a project with Stanford University on computer vision, "training the actual system to see." It's a slow but exciting process, he noted, with implications for patient monitoring and documenting health records.

737 Max example

Turning to Alistair Erskine, chief digital health officer at Partners Healthcare in Boston, Morris threw out what he called a curveball by highlighting a recent article in The New York Times Magazine about Boeing's 737 Max and the accidents that have resulted in many fatalities.

In the article, "[t]here [were] points around airworthiness, and what they called . . . airworthiness of the actual captain – so, the ability to understand and read all the data independent of a computer. And there was concern that the AI, or the augmented flight control systems had eroded that and thus paralyzed the brain, resulting in the traged[ies]," Morris noted.

Against this backdrop, he asked about approaching AI with a sense of trepidation, with an eye toward ensuring patient and clinical safety. "What's that fine line where, 'Well, the computer told me to do it, or I'm actually using my . . . brain?'"

Erskine responded that his organization does not have a set of physicians who serve as consultants to provide guidance on the appropriate moment to use an AI tool.

"In the future, that likely will be an area that the medical field will pivot to, of . . . fill[ing] in that gap that exists," he continued. "At Partners Healthcare, one thing that we've done to be able to address what you're describing specifically is basically a virtual rapid response team for health safety."

Still, as the shift toward AI continues, it is important to be mindful of the potential for safety issues, as it is impossible to always predict how this technology will affect both patients and clinicians.

There are some applications that are less risky, he said, "but I think that the other types of applications that are more around decision support, we need to know what the decision tree actually is to be able to work our way back. I think that may be a different set of applications and requires a more robust set of safety services that go around it."

Black box

Morris asked the panelists for their thoughts on trusting the so-called AI black box, continuing the airplane metaphor of relying implicitly on these tools. "Do you have an obligation to understand how it works . . . do you trust it implicitly?"

Sara Vaezy, chief of digital strategy at Providence St. Joseph Health, based in Renton, Wash., said her organization is not comfortable with the "black box mentality."

Rich Zane, chief innovation officer at Denver's UCHealth, was of the same mind. "Where we've been careful is to make sure that we have a certain level of human adjudication in every single decision that happens. So, the AI, or the clinical decision support, or whatever we're going to call it, helps us make better decisions, but does not make decisions for us, and I think that's an important point."

What's going to be impactful?

Morris ended the panel by asking about something the panelists had seen related to AI that they thought would have a real impact on care. "I think the coolest thing I've seen is an AI algorithm that looks at a leukemia patient's genome, compares them and their clinical outcomes to every other patient. And almost overnight, we've reduced by 22% the mortality for first chemotherapeutic exposure in leukemia," said Zane.

For her part, Vaezy highlighted the problem of physician burnout as where AI can be particularly helpful. "We've seen ambient listening virtual assistants that all but eliminate the need for providers to be charting outside the visit. And when we look at the time that providers spend in the chart, it goes to zero in after hours and is reduced by 30% in every hour of the day."

An application Erskine viewed as powerful is one that would examine what happened to other patients who went through various courses of treatment, and what the next best step would be for a particular individual. "And I think we're right at the edge of being able to do that. There [is] a significant number of privacy laws and other modernizations that need to occur for that to be able to go full swing." However, he is looking forward to such tools helping with the mountains of data that facilities now face, with an eye toward improving care.

Ed Marx, chief information officer at the Cleveland Clinic, wrapped up by highlighting an AI innovation dealing with radiation therapy. "Traditionally, you have a tumor, and you provide the same amount of radiation across the whole tumor, and then you hope for the best outcome." To improve these odds, Cleveland Clinic researchers are using more focused AI and machine learning, resulting in a dramatically reduced failure rate – from 25% to 5%.