A deep learning model trained on crowd-sourced audio samples detects COVID-19 cases by analyzing subtle changes in a person’s voice.

At the European Respiratory Society International Congress in Barcelona, Spain, held from 4–6 September, researchers presented a study that uses deep learning to identify COVID-19 patients from voice data. The findings were posted on an online preprint server and have not yet been peer-reviewed.

According to the researchers, the AI model is more accurate than lateral flow (rapid antigen) tests. It is also cheap, quick, and simple to use, which could make it valuable in low-income countries where PCR tests are expensive or difficult to distribute.

According to Ms. Wafaa Aljbawi, a researcher at the Institute of Data Science at Maastricht University in the Netherlands, the AI model was accurate 89% of the time, whereas the accuracy of lateral flow tests varied greatly depending on the brand. Lateral flow tests were also considerably less accurate at detecting COVID-19 infection in people with no symptoms.

Voice recordings and AI algorithms have the potential to achieve high precision in determining which patients are infected with COVID-19. Such tests could be provided at no cost and are simple to interpret. They also allow for remote, virtual testing and have a turnaround time of less than a minute.

COVID-19 infection typically affects the upper respiratory tract and vocal cords, changing a person’s voice. This led the researchers to investigate whether AI could be used to analyze voices to detect COVID-19.

The researchers obtained their data from the University of Cambridge’s crowd-sourced COVID-19 Sounds App, a project studying the effects of COVID-19 on respiratory sounds. The dataset consisted of 893 audio samples from 4,352 healthy and non-healthy participants, 308 of whom had tested positive for COVID-19. Participants installed the app on their mobile phones, provided some basic information on demographics, medical history, and smoking status, and then recorded respiratory sounds: coughing, breathing deeply, and reading sentences.

The researchers used a voice analysis technique called Mel-spectrogram analysis to identify different voice features such as loudness, power, and variation over time. 
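
As a rough illustration of this step, the sketch below computes a Mel spectrogram from a single recording using the open-source librosa library. The file name and parameter values are illustrative assumptions, not details taken from the study.

```python
import librosa
import numpy as np

# Load a voice recording (file name is a placeholder); librosa
# resamples to 22,050 Hz by default.
y, sr = librosa.load("voice_sample.wav")

# Map short-time Fourier transform magnitudes onto the perceptually
# motivated Mel frequency scale: 128 frequency bands over time.
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)

# Convert power values to decibels, a common input format for audio models.
mel_db = librosa.power_to_db(mel, ref=np.max)

print(mel_db.shape)  # (n_mels, n_frames): frequency bands x time frames
```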

Decomposing the voices in this way captured many of their properties. The researchers then built several AI models to distinguish the voices of COVID-19 patients from those of people without the disease, and evaluated them to determine which best classified COVID-19 cases.
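
The preprint’s exact set of candidate models isn’t spelled out here, but the selection step itself is standard machine-learning practice. Below is a minimal, hypothetical sketch of comparing candidate classifiers with cross-validation in scikit-learn, using stand-in models and random placeholder data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# X: one feature vector per recording (e.g., summary statistics of a
# spectrogram); y: 1 = COVID-19 positive, 0 = negative. Both are
# random placeholders for illustration only.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))
y = rng.integers(0, 2, size=200)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100),
}

# Score each candidate with 5-fold cross-validation and keep the best.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.2f}")
```

In the study, a comparison of this kind singled out the LSTM model described next.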

The researchers found that a model called Long Short-Term Memory (LSTM) outperformed the others. LSTM is based on neural networks, which mimic the way the human brain operates and recognize underlying relationships in data. Because it can retain data in its internal memory across time steps, it is well suited to modeling signals acquired over time, such as speech.
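
Below is a minimal sketch of what an LSTM-based binary classifier over spectrogram frames can look like in Keras. The layer sizes and input shape are illustrative assumptions, not the architecture reported in the preprint.

```python
import tensorflow as tf

# Illustrative input shape: 100 time frames x 128 Mel bands per recording.
N_FRAMES, N_MELS = 100, 128

model = tf.keras.Sequential([
    tf.keras.Input(shape=(N_FRAMES, N_MELS)),
    # The LSTM layer carries an internal memory state across time steps,
    # which is what suits it to sequential signals such as speech.
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dropout(0.3),
    # A single sigmoid unit: probability the recording is COVID-19 positive.
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Training would then look something like:
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=20)
```

Trained on Mel-spectrogram sequences like those computed above, such a network outputs a probability that a recording comes from an infected speaker.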

The model detected positive cases with 89% accuracy (its sensitivity) and correctly identified negative cases 83% of the time (its specificity).

According to Ms. Aljbawi, these results suggest a significant improvement over state-of-the-art tests such as the lateral flow test, which has only 56% sensitivity but 99.7% specificity. In other words, the lateral flow test misclassifies infected people as COVID-19 negative far more often than the AI LSTM model: the LSTM model may miss 11 out of every 100 infections, while the lateral flow test may miss 44 out of 100.

The lateral flow test’s high specificity means that fewer than one in 100 uninfected people would be wrongly diagnosed as COVID-19 positive, whereas the LSTM model would wrongly flag 17 in 100 non-infected individuals as positive. However, since the LSTM-based test is virtually free, anyone it flags as positive could simply be invited for a confirmatory PCR test.
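
These per-100 figures follow directly from the sensitivity and specificity percentages quoted above; a short worked sketch:

```python
def missed_per_100(sensitivity):
    """Infected people wrongly reported negative, per 100 infected."""
    return round((1 - sensitivity) * 100, 1)

def false_positives_per_100(specificity):
    """Uninfected people wrongly reported positive, per 100 uninfected."""
    return round((1 - specificity) * 100, 1)

# Figures quoted in the article.
print(missed_per_100(0.89))           # LSTM model: 11.0 missed infections
print(missed_per_100(0.56))           # lateral flow test: 44.0 missed
print(false_positives_per_100(0.83))  # LSTM model: 17.0 false positives
print(false_positives_per_100(0.997)) # lateral flow test: 0.3 false positives
```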

According to the researchers, their results need to be validated on large numbers of participants. So far, 53,449 audio samples from 36,116 participants have been collected for this project, which will allow the model to be improved and validated further. They are also conducting additional analyses to determine which voice parameters influence the AI model.

Story Source: Article Reference


Dr. Tamanna Anwar is a Scientist and Co-founder of the Centre of Bioinformatics Research and Technology (CBIRT). She is a passionate bioinformatics scientist and a visionary entrepreneur. Dr. Tamanna has worked as a Young Scientist at Jawaharlal Nehru University, New Delhi. She has also worked as a Postdoctoral Fellow at the University of Saskatchewan, Canada. She has several scientific research publications in high-impact research journals. Her latest endeavor is the development of a platform that acts as a one-stop solution for all bioinformatics related information as well as developing a bioinformatics news portal to report cutting-edge bioinformatics breakthroughs.
