© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.

Abstract

EEG signals capture information through multi-channel electrodes and hold promising prospects for human emotion recognition. However, the high noise levels and diverse nature of EEG signals pose significant challenges, leading to potential overfitting that further complicates the extraction of meaningful information. To address these issues, we propose a Granger causality-based spatial–temporal contrastive learning framework that significantly enhances the ability to capture EEG signal information by modeling rich spatial–temporal relationships. Specifically, in the spatial dimension, we employ a sampling strategy to select positive sample pairs from individuals watching the same video. A Granger causality test is then used to augment the graph data and construct potential causal relationships between channels. Finally, a residual graph convolutional neural network extracts features from the EEG signals and computes the spatial contrastive loss. In the temporal dimension, we first apply a frequency-domain noise reduction module for data augmentation on each time series. We then introduce the Granger–Former model to capture the time-domain representation and calculate the temporal contrastive loss. Extensive experiments on two publicly available emotion recognition datasets (DEAP and SEED) show accuracy improvements of 1.65% on DEAP and 1.55% on SEED over state-of-the-art unsupervised models. Our method outperforms benchmark methods in both prediction accuracy and interpretability.
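As a rough illustration of the channel-level causal graph construction the abstract describes, the sketch below builds a directed adjacency matrix over EEG channels from pairwise Granger F-tests. The lag order `p`, the fixed F-threshold `f_crit`, and all function names are illustrative assumptions for this sketch, not the paper's actual procedure (which the abstract does not fully specify).

```python
import numpy as np

def lagged(x, p):
    """Matrix of lags 1..p of series x: row t holds x[t-1], ..., x[t-p]."""
    n = len(x)
    return np.column_stack([x[p - i: n - i] for i in range(1, p + 1)])

def granger_f(target, source, p=2):
    """F statistic for 'source Granger-causes target' at lag order p.

    Restricted model: target regressed on its own p lags.
    Full model: restricted model plus p lags of source.
    A large F means the source's lags add real predictive power.
    """
    target = np.asarray(target, dtype=float)
    source = np.asarray(source, dtype=float)
    y = target[p:]
    ones = np.ones((len(y), 1))
    X_r = np.hstack([ones, lagged(target, p)])   # restricted design
    X_f = np.hstack([X_r, lagged(source, p)])    # full design

    def rss(X):
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        return float(np.sum((y - X @ beta) ** 2))

    rss_r, rss_f = rss(X_r), rss(X_f)
    dof = len(y) - X_f.shape[1]                  # residual degrees of freedom
    return ((rss_r - rss_f) / p) / (rss_f / dof)

def causal_adjacency(X, p=2, f_crit=3.0):
    """Directed adjacency over channels: A[i, j] = 1 if channel i
    Granger-causes channel j at the (illustrative) F threshold."""
    c = X.shape[0]
    A = np.zeros((c, c), dtype=int)
    for i in range(c):
        for j in range(c):
            if i != j and granger_f(X[j], X[i], p) > f_crit:
                A[i, j] = 1
    return A
```

In practice one would convert the F statistic to a proper p-value (e.g. with statsmodels' `grangercausalitytests`) rather than use a fixed threshold; the resulting adjacency matrix would then feed the residual graph convolutional network mentioned in the abstract.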

Details

Title
GC-STCL: A Granger Causality-Based Spatial–Temporal Contrastive Learning Framework for EEG Emotion Recognition
Author
Wang, Lei 1 ; Wang, Siming 2 ; Jin, Bo 3 ; Wei, Xiaopeng 4 

1 School of Software Technology, Dalian University of Technology, Dalian 116024, China; [email protected]
2 School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; [email protected]
3 School of Innovation and Entrepreneurship, Dalian University of Technology, Dalian 116024, China
4 School of Computer Science and Technology, Dalian University of Technology, Dalian 116024, China
First page
540
Publication year
2024
Publication date
2024
Publisher
MDPI AG
e-ISSN
1099-4300
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3084781246