As cases skyrocket and questions mount about the high rate of false negatives returned by rapid COVID-19 tests, a simple new diagnostic approach offers far greater accuracy. Researchers around the globe have found that artificial intelligence can detect coronavirus infections in recorded forced coughs, with nearly 100% accuracy in asymptomatic or presymptomatic cases, making it an ideal quick screening test. Forced cough or voice analysis shows strong results in other conditions, too, including pulmonary hypertension and, surprisingly, Alzheimer's disease.

Researchers at the Massachusetts Institute of Technology (MIT) in Cambridge, Mass.; Vocalis Health Inc., based in Newton, Mass. and Haifa, Israel; and Cambridge University in the U.K. have all launched projects to use voice or cough samples in COVID-19 diagnosis.

So far, the MIT group appears to be winning the race. In a paper recently published in the IEEE Open Journal of Engineering in Medicine and Biology, the team revealed that its Open Voice model very accurately identifies people infected with SARS-CoV-2 just from recordings of their coughs. For individuals with COVID-19 symptoms, the model had a sensitivity of 98.5% and a specificity of 94.2% when compared to gold standard RT-PCR results; for asymptomatic individuals, it achieved a sensitivity of 100% and a specificity of 83.2%.
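As a quick refresher on how figures like these are read, sensitivity is the fraction of true infections a test flags and specificity is the fraction of uninfected people it correctly clears. The sketch below uses hypothetical counts, not the study's actual confusion matrix, chosen only to reproduce the asymptomatic-cohort percentages reported above.

```python
# Sensitivity and specificity, as reported against the RT-PCR gold standard.
# Counts below are illustrative assumptions, not the study's data.

def sensitivity(tp, fn):
    """Fraction of true infections the test flags: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of uninfected subjects the test clears: TN / (TN + FP)."""
    return tn / (tn + fp)

# Hypothetical asymptomatic cohort: every infection caught (sensitivity 1.0),
# but some healthy subjects incorrectly flagged (specificity 0.832).
print(sensitivity(tp=100, fn=0))    # → 1.0
print(specificity(tn=832, fp=168))  # → 0.832
```

A perfect sensitivity with imperfect specificity, as in the asymptomatic results, means the model misses no infections but produces some false alarms that a follow-up PCR test would resolve.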

While the test reliably detects infection in people with symptoms, the authors recommend using it primarily in those without symptoms. In part that’s because the test is slightly less accurate in a symptomatic population and in part that’s because “one should go directly to a doctor and get proper medical advice,” lead author Brian Subirana, a research scientist in MIT’s Auto-ID Laboratory, told BioWorld.

The team is developing an app based on the model to provide a non-invasive, real-time, instantly distributable COVID-19 screening tool that could be used in place of less reliable rapid tests for screening of asymptomatic students, mass transit riders, attendees at large gatherings, employees and store customers.

Training the artificial ear

The human ear cannot distinguish the cough of someone with COVID-19 from that of someone without the infection, but the convolutional neural network underlying the model can.

“The sounds of talking and coughing are both influenced by the vocal cords and surrounding organs,” said Subirana. “This means that when you talk, part of your talking is like coughing, and vice versa. It also means that things we easily derive from fluent speech, AI can pick up simply from coughs, including things like the person’s gender, mother tongue, or even emotional state.”

The cough alone has proved quite adept at diagnosing COVID-19, but additional information from a reading or other voice recording could improve it further. “Our research focused on just coughs to see how far we could go,” Subirana noted.

The current level of accuracy arises largely from the use of a massive database. The lab had been working on an AI to detect Alzheimer’s disease from forced cough recordings when the pandemic overtook other research concerns. Beyond its cognitive symptoms, Alzheimer’s is associated with neuromuscular degradation, which weakens the vocal cords, and with a flattening of affect that is evident in speech.

By this spring, the team had trained the neural network on an audiobook dataset with more than 1,000 hours of speech plus hours of recordings of actors conveying a range of emotions and a database of 10,000 coughs. Early results in Alzheimer’s showed strong predictive ability, with an area under the curve approaching 0.9.

The team gathered an additional 70,000 recordings, totaling 200,000 forced-cough samples, through May. Of those, about 2,500 were from confirmed cases of COVID-19. Using those 2,500 positive samples and another 2,500 randomly selected from the rest, the team trained the AI on 4,000 recordings and validated it on the remaining 1,000.
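The split described above, balancing 2,500 positives against 2,500 randomly drawn controls and holding out 1,000 recordings for validation, can be sketched as follows. File names and the control-pool size are hypothetical stand-ins for illustration only.

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical labeled samples: (recording name, label), 1 = COVID-positive.
positives = [("cough_pos_%d.wav" % i, 1) for i in range(2500)]
control_pool = [("cough_neg_%d.wav" % i, 0) for i in range(10000)]

# Balance the classes: draw 2,500 controls at random from the larger pool.
controls = random.sample(control_pool, 2500)

dataset = positives + controls
random.shuffle(dataset)

# 4,000 recordings for training, the remaining 1,000 held out for validation.
train, validation = dataset[:4000], dataset[4000:]
print(len(train), len(validation))  # → 4000 1000
```

Balancing the classes before training keeps the network from simply learning to predict the far more common negative label.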

Other voice recording applications

Vocalis Health has developed a similar application using voice recordings of people counting from 50 to 70; users also answer a questionnaire about their symptoms. The Israeli Ministry of Defense has been using the technology since April to detect COVID-19 infections among its ranks.

“Current testing and screening methods are insufficient, resource-intensive and indiscriminately applied,” said Shady Hassan, co-founder, chief operating and chief medical officer at Vocalis Health. “We developed a COVID-19 vocal biomarker since it is a highly scalable and non-invasive screening tool that can help fill this gap by extending the reach and reliability of screening and better informing and guiding testing.”

The company’s AI has also been used in collaboration with the Mayo Clinic to identify pulmonary hypertension, in which it demonstrated a correlation coefficient of 0.829.

Both applications convert audio recordings into visual images that are processed with computer vision techniques. The algorithms identify small changes in the spectrogram and use them to generate a unique signature for each disease and each patient.
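The audio-to-image step can be sketched in a few lines: a waveform is converted to a spectrogram, a 2-D time-frequency grid that a vision model can treat like a picture. This is a minimal sketch using a synthetic signal in place of a recorded cough; the window and overlap settings are assumptions, not parameters from either group's pipeline.

```python
import numpy as np
from scipy import signal

sample_rate = 16_000  # 16 kHz, a common rate for speech audio
t = np.linspace(0, 1.0, sample_rate, endpoint=False)

# Synthetic stand-in for a recorded cough: a tone buried in noise.
rng = np.random.default_rng(0)
waveform = np.sin(2 * np.pi * 440 * t) + 0.5 * rng.standard_normal(t.size)

# Short-time Fourier analysis yields power per (frequency, time) cell.
freqs, times, spec = signal.spectrogram(
    waveform, fs=sample_rate, nperseg=512, noverlap=256
)

# Log scaling compresses the dynamic range, a typical step before CNN input.
image = 10 * np.log10(spec + 1e-10)

print(image.shape)  # (frequency bins, time frames): one "pixel" grid per recording
```

Each recording thus becomes a fixed-layout image, and differences too subtle for the ear show up as small, learnable changes in that image.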

“The human vocal system includes the lungs and the lower airways, which provide the air supply; the vocal folds, which modulate the airflow through vibration to produce the voice source; and the tongue and mouth, which modify that source,” Hassan explained. “Both COVID-19 and pulmonary hypertension affect these organs, and thus they can be detected through the voice.”

The company noted that the voice recordings offer a new avenue for patient diagnosis that may be particularly valuable for telehealth applications and remote disease monitoring.