In retrospect, it seems inevitable that an algorithm would be appointed to a board of directors. Hong Kong-based Deep Knowledge Ventures named Vital (an acronym for Validating Investment Tool for Advancing Life Sciences) to its board five years ago and credits it with making better decisions than its fellow members, humans all.

Managing partner Dmitry Kaminskiy is deeply indebted to Vital, saying the algorithm helped the company sidestep potentially horrific business decisions, ones that the humans would have blundered into. Vital uses machine learning to analyze financing trends in the databases of life science companies.

The human component is often most noticeable in its absence. Artificial intelligence (AI) is designed to reduce the bias humans bring to the job. When we go awry, well, we're human. But handing decisions, data gathering and privacy concerns to technology can be unnerving to some.

For Norbert Bischofberger, the former Gilead Sciences Inc. R&D executive vice president and chief science officer who now heads Kronos Bio Inc., the challenge was finding where AI fit into the business. He contemplated its addition but ultimately became disenchanted.

"I was thinking about going into R&D and I could not quite see how you would apply AI and machine learning to it," he told BioWorld.

Others have overcome that hurdle and embraced AI. Many of those companies were recently surveyed by the Tufts Center for the Study of Drug Development about the outcomes they most want from AI applications. At the top of the list was improving patient safety (73%), followed closely by accelerating decision-making (69%) and, at 68%, achieving cost efficiencies and streamlining operations. The remaining goals, in descending order, were improving market access, improving and developing new drugs, and expanding the scope of automation. The biggest challenges were a lack of adequate staff skills, difficulties in handling unstructured data, and insufficient budgets. In addition, 42% said AI and other new technologies are managed by various departments, with no centralized responsibility for AI strategy and implementation within their organizations.

"There are people with data and there are people with algorithms," Brandon Allgood, the chief technical officer for the Alliance for Artificial Intelligence and Numerate Inc., told BioWorld. "At Numerate we developed cutting-edge algorithms. Others don't have sophisticated AI but they have a lot of data. How do you bring those two worlds together? That's the current challenge."

Facing down AI and machine learning challenges

One way is locating common ground. Allgood cited a big pharma company that deposited millions of compounds behind a firewall at the U.S. Department of Energy facility in Livermore, Calif. Going through the DOE and its national security process to get access to the data is cumbersome, he added.

"The problem is that it's inflexible. You must use the resources at DOE but not what you get off the cloud," he said. "The trade-off between security and ease of use, that's an eternal struggle."

With secure environments, there should be a way to share data and monitor access to it. Amazon, among others, is working on solutions. But Allgood sees a bigger problem with data security that is purely digital: "The data is ones and zeroes and once it's out, it's out. If someone has access to it, it could go anywhere. You don't know what's going to happen."

Homomorphic encryption is one option: it allows computation on encrypted data without ever decrypting it. Still, Allgood said, some security threats come from outside the U.S. Criminal organizations gain little from stolen patient data or drug development data, he said; the real concerns are state-controlled entities and foreign corporations. Hackers stealing Social Security numbers from electronic health records is not the biggest problem, he added.
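As a concrete illustration of that idea, the sketch below implements a tiny, deliberately insecure additively homomorphic scheme (a toy Paillier construction in Python). The primes and example values are invented for demonstration only; the point is simply that two ciphertexts can be combined and decrypted to the sum of their plaintexts without the raw data ever being exposed. It is not a description of any system mentioned in this article.

```python
# Toy Paillier-style additive homomorphic encryption (illustration only;
# parameters are far too small for real security).
import math
import random

def keygen(p=293, q=433):                  # toy primes; real keys are 2048+ bits
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                              # standard simple choice of generator
    n2 = n * n
    # mu = (L(g^lam mod n^2))^-1 mod n, where L(x) = (x - 1) // n
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:             # random blinding factor coprime to n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 17), encrypt(pub, 25)
c_sum = (c1 * c2) % (pub[0] ** 2)          # multiplying ciphertexts adds plaintexts
assert decrypt(pub, priv, c_sum) == 42     # 17 + 25, recovered without exposing inputs
```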

In the next five years, every interaction will be driven by a capitated model, he added. The challenge then becomes not data consistency but processing all of that data at scale, and that is where AI comes in: data collection must be effective.

"For a disease state, you're trying to lower readmission probabilities, what are the determinants to readmission. We'll figure out your outcomes," Turkeltaub said. "A lot of what we do is with providers. We take a patient population in a disease state, cohorting them and identifying what paths can be taken to reduce that probability for readmission. For clinical research, we help find patients for clinical trials and we do a lot of pharmacovigilance for adverse events. For pharma, we do largely the same stuff."

Chatbots use AI to conduct conversations with patients. The patient demographic that embraces them skews young, and the bots often step in for already schedule-heavy primary care physicians. Turkeltaub said chatbots can help ease the shortage of primary care doctors by relieving them of time-consuming yet necessary tasks.

"Patients write a lot of emails and classic messages to doctors and someone has to read all that," Turkeltaub said. "We desegregate those messages and send them to who needs to read it. It saves physicians an enormous amount of time and prevents burnout."

This particular intersection of technology and humans is a happy meeting place. A recent SAS survey of 500 North Americans found most were comfortable with AI technology in health care, more so than in other settings such as banking and retail. The lack of human interaction was the biggest pain point, with the older demographic, those over age 40, more uncomfortable with the process than their younger counterparts. Forty-seven percent were comfortable with AI assisting doctors in the operating room; more than 50% of those over 40 said they would go under the knife with the help of the technology, compared with only 40% of those under 40.

Staffing and retaining humans, though, will always be a challenge, and finding the right people for AI is an obstacle. AI specialists in drug discovery are scarce, and companies cast about for the best candidates. Administrative staff far outnumber AI specialists on most payrolls, with roughly four administrators for every one person dedicated to AI. The lack of skills is the biggest concern, and it turns many companies away from adding AI. But the search is on, particularly for data scientists, computer scientists, IT specialists and AI architects.

With, not instead of, humans

Some say AI works best alongside humans, not as a replacement for them. The legal community may well have a big say in that. Lord Hodge, a justice of the U.K. Supreme Court, said trust in AI should perhaps go only so far. He has considered whether technology should be given a legal personality of its own.

"The law could confer separate legal personality on the machine by registration and require it or its owner to have compulsory insurance to cover its liability to third parties in delict (tort) or restitution. And as a registered person, the machine could own the intellectual property that it created," he said earlier this year while delivering a lecture at the University of Edinburgh.

Some are concerned that they cannot fathom the true reasoning behind AI decisions. The European Commission's policy and investment recommendations for trustworthy AI, published earlier this year, said some laws should be evaluated and perhaps revised to usher the new technology responsibly into daily life. The report recommends a systematic mapping and evaluation of all existing EU laws that are particularly relevant to AI systems. Potential challenges include determining the extent to which the policy and legal objectives underpinning legislative provisions are affected by AI systems, including tangible risks to fundamental rights, democracy and the rule of law, and other potential threats to the cultural and socio-technical foundations of democratic, rights-respecting societies.

The impact of AI on the economy will be sweeping, with an estimated 14% boost to global gross domestic product by 2030, driven mainly by increased productivity. The intersection of technology and humans is here to stay and grow. The challenge is to properly integrate the two.

"Removing humans from workflows that only involve the processing of structure data does not mean that humans are obsolete," Eric Colson, the former vice president of data and science at Netflix, wrote in the July Harvard Business Review. "There are many business decisions that depend on more than just structured data. Vision statements, company strategies, corporate values, market dynamics all are examples of information that is only available in our minds and transmitted through culture and other forms of non-digital communication. This information is inaccessible to AI and extremely relevant to business decisions."

The collaboration between technology and humans still presents many problems. Resolving them is critical so that AI and machine learning can bring innovative treatments to patients.
