Abstract

One’s own voice undergoes unique processing that distinguishes it from others’ voices, so listening to it may provide a special neural basis for self-talk as an emotion regulation strategy. This study aimed to elucidate how the neural effects of one’s own voice on the implementation of emotion regulation strategies differ from those of others’ voices. Twenty-one healthy adults were scanned using fMRI while listening to sentences synthesized in their own or others’ voices for self-affirmation and cognitive defusion, strategies based on making mental commitments to strengthen one’s positive aspects and on imagining metaphoric actions to shake off negative aspects, respectively. An interaction effect between voice identity and strategy was observed in the superior temporal sulcus, middle temporal gyrus, and parahippocampal cortex, and activity in these regions indicated that the uniqueness of one’s own voice is reflected more strongly in cognitive defusion than in self-affirmation. This interaction was also seen in the precuneus, suggesting an intertwining of self-referential processing and episodic memory retrieval during self-affirmation with one’s own voice. These results imply that the unique effects of one’s own voice may be expressed differently depending on the type of emotion regulation, through differing degrees of engagement of neural sharpening-related regions and self-referential networks.

Details

Title
Neural Effects of One’s Own Voice on Self-Talk for Emotion Regulation
Author
Jo, Hye-jeong 1; Park, Chanmi 2; Lee, Eunyoung 2; Lee, Jee Hang 3; Kim, Jinwoo 2; Han, Sujin 4; Kim, Joohan 4; Kim, Eun Joo 5; Kim, Eosu 6; Kim, Jae-Jin 7

1 Graduate School of Medical Science, Brain Korea 21 Project, Yonsei University College of Medicine, Seoul 03722, Republic of Korea; [email protected] (H.-j.J.); [email protected] (E.K.)
2 HCI Lab, Cognitive Science, Yonsei University, Seoul 03722, Republic of Korea; [email protected] (C.P.); [email protected] (E.L.); [email protected] (J.K.)
3 Department of Human-Centered Artificial Intelligence, Sangmyung University, Seoul 03016, Republic of Korea; [email protected]
4 Department of Communication, Yonsei University, Seoul 03722, Republic of Korea; [email protected] (S.H.); [email protected] (J.K.)
5 Graduate School of Education, Yonsei University, Seoul 03722, Republic of Korea; [email protected]
6 Graduate School of Medical Science, Brain Korea 21 Project, Yonsei University College of Medicine, Seoul 03722, Republic of Korea; [email protected] (H.-j.J.); [email protected] (E.K.); Department of Psychiatry and Institute of Behavioral Science in Medicine, Yonsei University College of Medicine, Seoul 03722, Republic of Korea
7 Department of Psychiatry and Institute of Behavioral Science in Medicine, Yonsei University College of Medicine, Seoul 03722, Republic of Korea
First page
637
Publication year
2024
Publication date
2024
Publisher
MDPI AG
e-ISSN
2076-3425
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3084739097
Copyright
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.