Abstract
When individuals listen to speech, their neural activity phase-locks to the slow temporal rhythms of the speech signal, a phenomenon commonly referred to as “neural tracking”. This mechanism allows the attended sound source in a multi-talker situation to be identified by decoding neural signals obtained with electroencephalography (EEG), a technique known as auditory attention decoding (AAD). Neural tracking with AAD can serve as an objective measurement tool in diverse clinical contexts and has the potential to be applied to neuro-steered hearing devices. To make effective use of this technology, it is essential to improve the accessibility of the EEG experimental setup and analysis. The aim of this study was to develop a cost-efficient neural tracking system and to validate the feasibility of neural tracking measurement by conducting an AAD task with offline and real-time decoder models outside a soundproof environment. We devised a neural tracking system capable of running AAD experiments using an OpenBCI board and an Arduino board. Nine participants were recruited to assess AAD performance with the developed system, in which competing speech signals were presented in an experimental setting without soundproofing. The offline decoder model achieved an average performance of 90%, and the real-time decoder model achieved 78%. The present study demonstrates the feasibility of implementing neural tracking and AAD with cost-effective devices in a practical environment.
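For readers unfamiliar with AAD, the sketch below illustrates one common approach: a linear "backward" (stimulus-reconstruction) decoder that maps time-lagged EEG to a speech envelope and labels as attended whichever competing envelope correlates better with the reconstruction. The abstract does not specify the decoder used in the study, so this is only a minimal illustration of the general technique; all names, parameters, and the synthetic data are hypothetical.

```python
"""Minimal sketch of a stimulus-reconstruction AAD decoder (illustrative only)."""
import numpy as np


def lagged(eeg, n_lags):
    """Stack time-lagged copies of the EEG (samples x channels) into a design matrix."""
    n_samples, _ = eeg.shape
    cols = []
    for lag in range(n_lags):
        shifted = np.zeros_like(eeg)
        shifted[lag:] = eeg[:n_samples - lag]
        cols.append(shifted)
    return np.hstack(cols)  # samples x (channels * n_lags)


def train_decoder(eeg, attended_env, n_lags=16, ridge=1e3):
    """Ridge regression from lagged EEG to the attended speech envelope."""
    X = lagged(eeg, n_lags)
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ attended_env)


def decode_attention(eeg, env_a, env_b, weights, n_lags=16):
    """Return 'A' if envelope A is decoded as attended, else 'B'."""
    recon = lagged(eeg, n_lags) @ weights
    corr_a = np.corrcoef(recon, env_a)[0, 1]
    corr_b = np.corrcoef(recon, env_b)[0, 1]
    return "A" if corr_a >= corr_b else "B"


if __name__ == "__main__":
    # Synthetic demo: EEG that weakly "tracks" talker A's envelope.
    rng = np.random.default_rng(0)
    fs, dur, n_ch = 64, 60, 8            # hypothetical: 64 Hz EEG, 60 s trial, 8 channels
    n = fs * dur
    env_a, env_b = rng.standard_normal(n), rng.standard_normal(n)
    eeg = 0.5 * env_a[:, None] + rng.standard_normal((n, n_ch))
    w = train_decoder(eeg, env_a)
    print("decoded attended talker:", decode_attention(eeg, env_a, env_b, w))
```

In an offline setting the decoder is trained and evaluated on whole recorded trials, whereas a real-time decoder applies the same correlation comparison to short sliding windows of incoming EEG, which typically lowers accuracy, consistent with the offline versus real-time gap reported in the abstract.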
Author affiliations
1 Hanyang University, Department of HY-KIST Bio-Convergence, Seoul, Korea (GRID:grid.49606.3d) (ISNI:0000 0001 1364 9317); Artificial Intelligence and Robot Institute, Korea Institute of Science and Technology, Center for Intelligent & Interactive Robotics, Seoul, Korea (GRID:grid.496416.8) (ISNI:0000 0004 5934 6655)
2 Artificial Intelligence and Robot Institute, Korea Institute of Science and Technology, Center for Intelligent & Interactive Robotics, Seoul, Korea (GRID:grid.496416.8) (ISNI:0000 0004 5934 6655); Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany (GRID:grid.461782.e) (ISNI:0000 0004 1795 8610)
3 Hanyang University, Department of HY-KIST Bio-Convergence, Seoul, Korea (GRID:grid.49606.3d) (ISNI:0000 0001 1364 9317); Artificial Intelligence and Robot Institute, Korea Institute of Science and Technology, Center for Intelligent & Interactive Robotics, Seoul, Korea (GRID:grid.496416.8) (ISNI:0000 0004 5934 6655); Hanyang University, Department of Otolaryngology-Head and Neck Surgery, College of Medicine, Seoul, Korea (GRID:grid.49606.3d) (ISNI:0000 0001 1364 9317); Hanyang University, Department of Otolaryngology-Head and Neck Surgery, School of Medicine, Seoul, Korea (GRID:grid.49606.3d) (ISNI:0000 0001 1364 9317)