LONDON – The future role of artificial intelligence (AI) in health in the U.K. was cemented recently with the announcement of £250 million (US$301 million) in government funding to set up a dedicated AI laboratory that will work to systematically apply the technology and harness the benefits at a national level.

A number of small projects have shown AI is now ready for practical application at scale in the health service, said Simon Stevens, chief executive of the National Health Service (NHS) in England. "In the first instance, it should help personalize [health] screening and treatments for cancer, eye disease and a range of other conditions," Stevens said.

Among the first tasks for the National AI Laboratory are improving diagnosis and screening by speeding up the interpretation of medical images, including mammograms, brain scans and eye scans; developing predictive analytics to better estimate and manage future needs in terms of beds, drugs and devices; and devising analytics to single out those patients who could be more easily treated in the community. Other projects will involve applying AI to identify people at most risk of heart disease or dementia; developing systems to detect patients at risk of post-operative complications and infections; training NHS staff to use AI systems; and automating routine administrative tasks.

Matthew Hancock, U.K. Secretary of State for Health and Social Care, an enthusiastic user of personal digital health apps and a cheerleader for AI, said the NHS is "on the cusp of a huge health tech revolution." The technology could transform patient experience, making the NHS "truly predictive, preventive and personalized," he said.

But while small-scale AI projects carried out in the NHS may be inspiring this vision, they also have attracted controversy, particularly in relation to companies getting access to sensitive, personal health information to develop algorithms they plan to sell for profit.

The most controversial example occurred when it was disclosed that the Royal Free Hospital in London had given patient data to the Google AI company DeepMind as part of a joint research project.

In another case, the NHS planned to assume that, unless they opted out, people were happy for data held by their general practitioners to be loaded into a national database for use in research and for commercial purposes. The scheme sparked such a backlash that the whole issue of health data governance was referred to an independent inquiry.

These examples underline the extent to which the ethical problems trailing behind AI, as a whole, are writ large in health care. The same issues of how to ensure the ethical application of AI in health reverberate around Europe. In April, the European Commission published "Ethics guidelines for trustworthy AI" to help companies set the right course on ethics.

The well-rehearsed concerns relating to AI are amplified in health. For a start, the risk of an AI system making the wrong decision is heightened. If AI gets a diagnosis wrong, who is responsible? And while the black-box nature of an algorithm makes it difficult to validate outputs, it also can obscure biases inherent in the data used to train the system. That could reinforce health inequalities.
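The training-data bias concern is one that can be checked for directly. The sketch below is purely illustrative (the group labels, data layout and figures are hypothetical, not drawn from any NHS system): comparing a screening model's false negative rate across patient groups can surface disparities that an aggregate accuracy figure would hide.

```python
# Minimal sketch of a subgroup bias audit (hypothetical data and labels):
# compare a screening model's miss rate across patient groups to surface
# disparities that overall accuracy would conceal.
from collections import defaultdict

def false_negative_rates(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    missed = defaultdict(int)     # true positives the model failed to flag
    positives = defaultdict(int)  # all true positive cases, per group
    for group, truth, prediction in records:
        if truth == 1:
            positives[group] += 1
            if prediction == 0:
                missed[group] += 1
    return {g: missed[g] / positives[g] for g in positives}

# Toy example: the model misses twice as many positive cases in group "B",
# the kind of disparity that could reinforce existing health inequalities.
sample = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 0), ("B", 1, 0), ("B", 1, 1), ("B", 0, 0),
]
print(false_negative_rates(sample))  # {'A': 0.33..., 'B': 0.66...}
```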

Then there is the issue of protecting health data, with recent controversies highlighting the need for strict controls over who can access what information and for what purposes.

An ethical approach to AI in health care also is essential in securing public trust, reassuring people that AI systems are reliable and will not be used for malicious purposes.

Business faces up to ethical factors

Companies have become acutely aware of the need to factor ethical considerations into their products. Last month, the New York-based industry group, Partnership for Artificial Intelligence, Telemedicine and Robotics in Healthcare (PATH), published guidelines for developing AI applications, which are intended to assure patients and the public that AI will provide safe, equitable and high-quality services.

Among other measures, the guidelines say the algorithms used should be open to inspection by regulators and that if an AI system causes harm, it should be possible to ascertain why.
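The guidelines do not prescribe a mechanism, but one common way to make it possible to ascertain why a system decided what it did is an append-only decision log. The sketch below illustrates that idea only; the field names and model version string are invented for the example, not taken from PATH's guidance.

```python
# Hypothetical sketch of an auditable decision trail: record every automated
# decision with its inputs, model version and score, so that if harm occurs
# a regulator can reconstruct why the system decided what it did.
import json
import time

def log_decision(logfile, model_version, inputs, score, decision):
    """Append one automated decision to an audit trail for later inspection."""
    entry = {
        "timestamp": time.time(),
        "model_version": model_version,  # which algorithm produced the output
        "inputs": inputs,                # the exact features the model saw
        "score": score,                  # raw model output before thresholding
        "decision": decision,            # the action taken on the patient's case
    }
    logfile.write(json.dumps(entry) + "\n")

# Example usage: record a referral decision made by a screening model.
with open("decisions.log", "a") as f:
    log_decision(f, "screening-model-1.3",
                 {"age": 54, "smoker": True}, 0.82, "refer")
```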

The guidelines also hand responsibility to designers of AI technologies to comply with ethical requirements from the ground up, saying safeguards should be built in to ensure patient privacy. At the same time, patients should have the right to access, manage and control the data they generate.

Developing AI systems relies on the ability to collect, store, access and share medical and other health information. The quality of the data gathered needs to be assured if it is to be safely and effectively used, and ethical principles observed.

These aspects raise considerable difficulties for companies in getting access to the validated information they require to adequately power the development of algorithms. These data governance and acquisition challenges are particularly acute in the case of genomic data, where there are implications for relatives who may have different preferences in terms of the use to which data is put.

Who should profit from patient data?

In Europe's largely publicly funded health care systems, there are questions about the value of data and who should benefit from this resource. A report published last month by management consultants EY estimates that curated NHS data could generate as much as £5 billion (US$6.1 billion) per annum in the U.K., while at the same time delivering £4.6 billion back to the NHS and the economy through operational savings, better patient outcomes and wider economic benefits.

But at the same time, there is a significant process and technology cost associated with transforming raw medical records into exploitable data sets. Hard-pressed health services in Europe need to partner with companies and allow them access in order to unlock that value.

The general public in the U.K. wants to see health data being used, but does not want private companies to benefit from it.

One attempt to square this circle has been to set up trusted intermediaries to act as custodians of data and allow corporate access for projects that have ethical approval.

For example, Sensyne Health plc, of Oxford, U.K., has signed contracts with a number of hospitals, giving it access to patient records, including vital signs, laboratory tests, prescription information, genetic analyses and radiology images.

While Sensyne curates the data, the NHS retains ownership, and no data is sold or transferred to a third party. In return for granting access, the hospital partners have each been given Sensyne equity. They also receive 4% of the company's revenues.

Sensyne then acts as a docking station and provides the AI tools for the analysis of anonymized patient records, with the data held in a 'cold room' that has no outside connections or links to the internet.
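Sensyne has not published the internals of that pipeline, but the kind of pseudonymization step it implies can be sketched. In the illustration below, the field names and the salted-hash scheme are assumptions for the example, not Sensyne's actual method: direct identifiers are stripped and the NHS number is replaced with a one-way hash, giving analysts a stable pseudonym without the identity behind it.

```python
# Minimal sketch of record pseudonymization (assumed fields, not Sensyne's
# actual pipeline): drop direct identifiers and replace the NHS number with
# a salted one-way hash usable as a stable, non-identifying key.
import hashlib

DIRECT_IDENTIFIERS = {"name", "address", "postcode", "date_of_birth"}

def pseudonymize(record, salt):
    # Keep only clinical fields, discarding anything directly identifying.
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    nhs_number = out.pop("nhs_number")
    out["patient_pseudonym"] = hashlib.sha256(
        (salt + nhs_number).encode()
    ).hexdigest()
    return out

record = {
    "nhs_number": "943 476 5919",
    "name": "Jane Doe",
    "postcode": "OX1 2JD",
    "blood_pressure": "128/82",
    "hba1c_mmol_mol": 41,
}
print(pseudonymize(record, salt="per-deployment-secret"))
```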

A poll commissioned by Sensyne and published in July 2019 highlights how nervous the public is about corporate access to health data.

While more than three-quarters of respondents said they support the analysis of anonymized NHS patient data by medical researchers, they had strong views on the kinds of organizations they are comfortable allowing to access this data.

Only 13% of respondents said they trust multinational tech companies to handle sensitive health information in a confidential manner, while 69% raised concerns about it being analyzed in countries with different laws governing data protection. Only 11% are happy for medical data to be analyzed by companies that do not pay tax in the U.K.

The poll highlights how important it is to ensure AI is developed and used in an ethical way that is transparent and compatible with the public interest.

Recognizing this, the U.K.'s National AI Laboratory in health will have a duty to inspect algorithms in use by the NHS, to increase safety, show they operate impartially and protect patient confidentiality.

It is hoped this ethically assured framework will stimulate and drive innovation across many applications of AI in health.
