Copyright © 2024 Shuang Ran et al. This is an open access article distributed under the Creative Commons Attribution License (the “License”), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. https://creativecommons.org/licenses/by/4.0/

Abstract

Music is an important means of emotional expression, but traditional manual composition requires a solid knowledge of music theory, so a simple yet accurate way to express personal emotions through music creation is needed. In this paper, we propose and implement an EEG signal-driven real-time emotional music generation system for generating exclusive emotional music. To achieve real-time emotion recognition, the proposed system quickly obtains a model suited to a new user through short-time calibration. The recognized emotion state and music structure features are then fed into the network as conditional inputs to generate exclusive music consistent with the user’s real emotional expression. In the real-time emotion recognition module, we propose an optimized style transfer mapping algorithm based on simplified parameter optimization and introduce an instance selection strategy into the proposed method. This module can obtain and calibrate a suitable model for a new user in a short time, achieving real-time emotion recognition: the accuracies are improved to 86.78% and 77.68%, with computing times of only 7 s and 10 s, on the public SEED dataset and a self-collected dataset, respectively. In the music generation module, we propose an emotional music generation network based on structure features and embed it into our system; this removes the dependence of existing systems on third-party software and makes the emotional expression of the generated music controllable and consistent with the recognized emotion. The experimental results show that the proposed system can generate fluent, complete, and exclusive music consistent with the user’s real-time emotion recognition results.
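The short-time calibration described in the abstract can be illustrated with a minimal sketch of style transfer mapping with instance selection. This is not the paper's exact algorithm: the confidence-based selection criterion, the ridge-style pull toward the identity map, and all function and parameter names (`select_instances`, `fit_style_transfer`, `beta`, `conf_thresh`) are assumptions for illustration only.

```python
import numpy as np

def select_instances(X, y, source_model, conf_thresh=0.8):
    """Instance selection (assumed criterion): keep only calibration
    samples that the pretrained source model classifies confidently."""
    conf = source_model.predict_proba(X).max(axis=1)
    keep = conf >= conf_thresh
    return X[keep], y[keep]

def fit_style_transfer(X, y, class_means, beta=0.1):
    """Learn an affine map x -> A x + b that pulls each calibration
    sample toward the source-domain mean of its class; a ridge term
    keeps the map close to the identity (closed-form least squares)."""
    d = X.shape[1]
    T = np.stack([class_means[c] for c in y])    # per-sample targets
    Xh = np.hstack([X, np.ones((len(X), 1))])    # homogeneous coordinates
    # Augmented rows implement the pull toward the identity map.
    reg_in = np.sqrt(beta) * np.eye(d + 1)
    reg_tgt = np.sqrt(beta) * np.vstack([np.eye(d), np.zeros((1, d))])
    W, *_ = np.linalg.lstsq(np.vstack([Xh, reg_in]),
                            np.vstack([T, reg_tgt]), rcond=None)
    return W[:d].T, W[d]                         # A, b

def apply_style_transfer(X, A, b):
    """Map a new user's EEG features into the source model's space."""
    return X @ A.T + b
```

Under this view, the mapped features are passed to an unmodified pretrained classifier, so only the small affine map must be fit per user, which is what keeps per-user calibration in the range of seconds.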

Details

Title
Mind to Music: An EEG Signal-Driven Real-Time Emotional Music Generation System
Author
Ran, Shuang 1; Zhong, Wei 2; Lin, Ma 1; Duan, Danting 1; Long, Ye 2; Zhang, Qin 2

1 Key Laboratory of Media Audio & Video, Communication University of China, Beijing 100024, China
2 State Key Laboratory of Media Convergence and Communication, Communication University of China, Beijing 100024, China
Editor
Alexander Hošovský
Publication year
2024
Publication date
2024
Publisher
John Wiley & Sons, Inc.
ISSN
0884-8173
e-ISSN
1098-111X
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3151685823