Abstract

Sleep stage detection from polysomnography (PSG) recordings is a widely used method of monitoring sleep quality. Despite significant progress in machine-learning (ML)- and deep-learning (DL)-based automatic sleep stage detection schemes focusing on single-channel PSG data, such as a single-channel electroencephalogram (EEG), electrooculogram (EOG), or electromyogram (EMG), developing a standard model remains an active subject of research. The use of a single source of information often suffers from data inefficiency and data-skew problems. A multi-channel input-based classifier can mitigate these challenges and achieve better performance; however, it requires extensive computational resources to train, so the tradeoff between performance and computational cost cannot be ignored. In this article, we introduce a multi-channel (specifically, four-channel) convolutional bidirectional long short-term memory (Bi-LSTM) network that effectively exploits the spatiotemporal features of data collected from multiple channels of the PSG recording (e.g., EEG Fpz-Cz, EEG Pz-Oz, EOG, and EMG) for automatic sleep stage detection. First, a dual-channel convolutional Bi-LSTM network module is designed and pre-trained on data from each pair of distinct channels of the PSG recording. Subsequently, we leverage the concept of transfer learning indirectly and fuse two dual-channel convolutional Bi-LSTM network modules to detect sleep stages. In the dual-channel convolutional Bi-LSTM module, a two-layer convolutional neural network extracts spatial features from two channels of the PSG recordings. These extracted spatial features are then coupled and fed as input at every level of the Bi-LSTM network to learn rich temporally correlated features. Both the Sleep EDF-20 and Sleep EDF-78 (an expanded version of Sleep EDF-20) datasets are used to evaluate the model. The model comprising an EEG Fpz-Cz + EOG module and an EEG Fpz-Cz + EMG module classifies sleep stages with the highest accuracy (ACC), Kappa (Kp), and F1 score (91.44%, 0.89, and 88.69%, respectively) on the Sleep EDF-20 dataset. The model comprising an EEG Fpz-Cz + EMG module and an EEG Pz-Oz + EOG module performs best on the Sleep EDF-78 dataset (ACC, Kp, and F1 score of 90.21%, 0.86, and 87.02%, respectively). In addition, a comparative study against existing literature is provided and discussed to demonstrate the efficacy of the proposed model.
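To make the described architecture concrete, the following is a minimal sketch of one dual-channel convolutional Bi-LSTM module in PyTorch. It assumes 30-s epochs sampled at 100 Hz (3000 samples per channel); the layer sizes, kernel widths, hidden dimension, and the class name DualChannelConvBiLSTM are illustrative assumptions, not the authors' published configuration, and the exact way the coupled features enter the Bi-LSTM may differ from the paper.

import torch
import torch.nn as nn

class DualChannelConvBiLSTM(nn.Module):
    """Sketch of one dual-channel convolutional Bi-LSTM module.

    Two 1-D CNN branches (one per PSG channel, e.g., EEG Fpz-Cz and EOG)
    extract spatial features; the features are concatenated per time step
    and passed to a bidirectional LSTM to learn temporal correlations.
    All hyperparameters below are assumptions, not taken from the paper.
    """

    def __init__(self, n_classes: int = 5, hidden: int = 128):
        super().__init__()

        def branch():
            # Two-layer 1-D CNN over a single-channel 30-s epoch.
            return nn.Sequential(
                nn.Conv1d(1, 32, kernel_size=50, stride=6), nn.ReLU(),
                nn.MaxPool1d(8),
                nn.Conv1d(32, 64, kernel_size=8, stride=1), nn.ReLU(),
                nn.MaxPool1d(4),
            )

        self.branch_a = branch()  # e.g., EEG Fpz-Cz
        self.branch_b = branch()  # e.g., EOG
        self.bilstm = nn.LSTM(input_size=128, hidden_size=hidden,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x_a, x_b):
        # x_a, x_b: (batch, 1, samples) raw epochs from the two channels.
        f_a = self.branch_a(x_a)          # (batch, 64, T)
        f_b = self.branch_b(x_b)          # (batch, 64, T)
        f = torch.cat([f_a, f_b], dim=1)  # couple the spatial features
        f = f.transpose(1, 2)             # (batch, T, 128) for the LSTM
        out, _ = self.bilstm(f)
        return self.head(out[:, -1])      # logits for the 5 sleep stages

if __name__ == "__main__":
    model = DualChannelConvBiLSTM()
    eeg = torch.randn(4, 1, 3000)  # 30 s at 100 Hz
    eog = torch.randn(4, 1, 3000)
    print(model(eeg, eog).shape)   # torch.Size([4, 5])

Per the abstract, two such pre-trained modules (e.g., EEG Fpz-Cz + EOG and EEG Fpz-Cz + EMG) would then be fused, with a combined classification head, to form the final four-channel classifier.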

Details

Title
An End-to-End Multi-Channel Convolutional Bi-LSTM Network for Automatic Sleep Stage Detection
Author
Toma, Tabassum Islam; Choi, Sunwoong
First page
4950
Publication year
2023
Publication date
2023
Publisher
MDPI AG
e-ISSN
1424-8220
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2819457311
Copyright
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.