Abstract

When individuals listen to speech, their neural activity phase-locks to the slow temporal rhythm of the speech signal, a phenomenon commonly referred to as “neural tracking”. This mechanism makes it possible to identify the attended sound source in a multi-talker situation by decoding neural signals recorded with electroencephalography (EEG), an approach known as auditory attention decoding (AAD). Neural tracking with AAD can serve as an objective measurement tool in diverse clinical contexts and has potential applications in neuro-steered hearing devices. To make effective use of this technology, the EEG experimental setup and analysis must become more accessible. The aim of this study was to develop a cost-efficient neural tracking system and to validate the feasibility of neural tracking measurement by conducting an AAD task with offline and real-time decoder models outside a soundproof environment. We devised a neural tracking system capable of running AAD experiments using an OpenBCI and an Arduino board. Nine participants were recruited to assess AAD performance with the developed system, in which competing speech signals were presented in an experimental setting without soundproofing. The offline decoder model achieved an average accuracy of 90%, and the real-time decoder model achieved 78%. The present study demonstrates the feasibility of implementing neural tracking and AAD using cost-effective devices in a practical environment.
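The decoder models referred to in the abstract are of the kind typically used for AAD, i.e., linear stimulus-reconstruction (backward) models. The sketch below is a rough illustration only, not the authors' implementation: it trains a ridge-regression backward model on synthetic data and decides which of two talkers is attended by correlating the reconstructed envelope with each talker's speech envelope. The sampling rate, 8-channel montage, lag count, and regularization value are illustrative assumptions.

    import numpy as np


    def lagged(eeg, n_lags):
        """Stack time-lagged copies of the EEG (samples x channels) as regression features."""
        n, c = eeg.shape
        X = np.zeros((n, c * n_lags))
        for lag in range(n_lags):
            X[lag:, lag * c:(lag + 1) * c] = eeg[:n - lag]
        return X


    def train_decoder(eeg, attended_env, n_lags=16, reg=1e2):
        """Fit a ridge-regression backward model mapping lagged EEG to the attended envelope."""
        X = lagged(eeg, n_lags)
        XtX = X.T @ X + reg * np.eye(X.shape[1])
        return np.linalg.solve(XtX, X.T @ attended_env)


    def decode_attention(eeg, env_a, env_b, weights, n_lags=16):
        """Correlate the reconstructed envelope with each talker's envelope; return the better match."""
        recon = lagged(eeg, n_lags) @ weights
        r_a = np.corrcoef(recon, env_a)[0, 1]
        r_b = np.corrcoef(recon, env_b)[0, 1]
        return "A" if r_a > r_b else "B"


    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        fs, dur, n_ch = 64, 60, 8          # 64 Hz EEG, 60 s trial, 8 channels (illustrative assumption)
        n = fs * dur

        # Synthetic speech envelopes and EEG in which talker A drives the neural response.
        env_a = np.convolve(np.abs(rng.standard_normal(n)), np.ones(8) / 8, mode="same")
        env_b = np.convolve(np.abs(rng.standard_normal(n)), np.ones(8) / 8, mode="same")
        eeg = env_a[:, None] @ rng.standard_normal((1, n_ch)) + 0.5 * rng.standard_normal((n, n_ch))

        half = n // 2                      # train on the first half of the trial, decode the second
        w = train_decoder(eeg[:half], env_a[:half])
        print("decoded attended talker:", decode_attention(eeg[half:], env_a[half:], env_b[half:], w))

In a real-time variant, the same correlation-based decision would simply be applied to short sliding windows of incoming EEG rather than to a full held-out segment.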

Details

Title
Validation of cost-efficient EEG experimental setup for neural tracking in an auditory attention task
Author
Ha, Jiyeon 1; Baek, Seung-Cheol 2; Lim, Yoonseob 1; Chung, Jae Ho 3

1 Hanyang University, Department of HY-KIST Bio-Convergence, Seoul, Korea (GRID:grid.49606.3d) (ISNI:0000 0001 1364 9317); Artificial Intelligence and Robot Institute, Korea Institute of Science and Technology, Center for Intelligent & Interactive Robotics, Seoul, Korea (GRID:grid.496416.8) (ISNI:0000 0004 5934 6655)
2 Artificial Intelligence and Robot Institute, Korea Institute of Science and Technology, Center for Intelligent & Interactive Robotics, Seoul, Korea (GRID:grid.496416.8) (ISNI:0000 0004 5934 6655); Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Frankfurt/Main, Germany (GRID:grid.461782.e) (ISNI:0000 0004 1795 8610)
3 Hanyang University, Department of HY-KIST Bio-Convergence, Seoul, Korea (GRID:grid.49606.3d) (ISNI:0000 0001 1364 9317); Artificial Intelligence and Robot Institute, Korea Institute of Science and Technology, Center for Intelligent & Interactive Robotics, Seoul, Korea (GRID:grid.496416.8) (ISNI:0000 0004 5934 6655); Hanyang University, Department of Otolaryngology-Head and Neck Surgery, College of Medicine, Seoul, Korea (GRID:grid.49606.3d) (ISNI:0000 0001 1364 9317); Hanyang University, Department of Otolaryngology-Head and Neck Surgery, School of Medicine, Seoul, Korea (GRID:grid.49606.3d) (ISNI:0000 0001 1364 9317)
Pages
22682
Publication year
2023
Publication date
2023
Publisher
Nature Publishing Group
e-ISSN
2045-2322
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2903739673
Copyright
© The Author(s) 2023. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.