Abstract

Predictive coding theory suggests that the brain anticipates sensory information using prior knowledge. While this theory has been extensively studied within individual sensory modalities, evidence for predictive processing across modalities is limited. Here, we examine how crossmodal knowledge is represented and learned in the brain by identifying the hierarchical networks underlying crossmodal predictions, in which information from one sensory modality leads to a prediction in another modality. We recorded electroencephalography (EEG) during a crossmodal audiovisual local-global oddball paradigm, in which the predictability of transitions between tones and images was manipulated at both the stimulus and sequence levels. To dissect the complex predictive signals in our EEG data, we employed a model-fitting approach to untangle neural interactions across modalities and hierarchies. The model-fitting results demonstrate that audiovisual integration occurs at both the level of individual stimulus interactions and that of multi-stimulus sequences. Furthermore, we identify the spatio-spectro-temporal signatures of prediction-error signals across hierarchies and modalities, and reveal that auditory and visual prediction errors are rapidly redirected to central-parietal electrodes during learning through alpha-band interactions. Our study suggests a crossmodal predictive coding mechanism in which unimodal predictions are processed by distributed brain networks to form crossmodal knowledge.
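To make the paradigm described above concrete, the following is a minimal sketch of how a crossmodal local-global oddball block could be generated. The specific stimulus labels, the five-pair sequence length, and the 80/20 frequent/rare split are illustrative assumptions, not parameters reported in the abstract.

```python
# Hypothetical sketch of a crossmodal local-global oddball block.
# Stimulus labels, sequence length, and the 80/20 split below are
# illustrative assumptions, not values taken from the paper.
import random

random.seed(0)

# Each trial is an audiovisual pair; the final pair of a sequence either
# follows the learned tone-to-image transition (local standard) or
# violates it (local deviant).
STANDARD_PAIR = ("tone_A", "image_X")   # expected crossmodal transition
DEVIANT_PAIR = ("tone_A", "image_Y")    # violates the crossmodal prediction


def make_sequence(local_deviant: bool, n_pairs: int = 5):
    """Build one multi-stimulus sequence of audiovisual pairs."""
    seq = [STANDARD_PAIR] * (n_pairs - 1)
    seq.append(DEVIANT_PAIR if local_deviant else STANDARD_PAIR)
    return seq


def make_block(n_sequences: int = 100, global_standard_is_deviant: bool = False,
               p_frequent: float = 0.8):
    """Build a block in which one sequence type is frequent (global standard)
    and the other is rare (global deviant), so predictability is manipulated
    at both the stimulus and sequence levels."""
    block = []
    for _ in range(n_sequences):
        frequent = random.random() < p_frequent
        # In a "deviant-frequent" block, the locally deviant sequence is the
        # globally expected one, dissociating the two hierarchical levels.
        local_deviant = frequent == global_standard_is_deviant
        block.append(make_sequence(local_deviant))
    return block


block = make_block(global_standard_is_deviant=True)
n_local_deviants = sum(seq[-1] == DEVIANT_PAIR for seq in block)
print(f"{n_local_deviants}/{len(block)} sequences end with a local deviant pair")
```

In such a design, a rare transition within a sequence elicits a stimulus-level (local) prediction error, while a rare sequence type elicits a sequence-level (global) prediction error, allowing the two hierarchical levels to be dissociated across modalities.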

A generalized framework for predictive coding across modalities and hierarchies reveals how the brain represents and learns crossmodal knowledge in sequences.

Details

Title
Crossmodal hierarchical predictive coding for audiovisual sequences in the human brain
Authors
Huang, Yiyuan Teresa 1; Wu, Chien-Te 2; Fang, Yi-Xin Miranda 3; Fu, Chin-Kun 3; Koike, Shinsuke 4; Chao, Zenas C. 5

1 The University of Tokyo, International Research Center for Neurointelligence (WPI-IRCN), UTIAS, Tokyo, Japan; The University of Tokyo, Department of Multidisciplinary Sciences, Graduate School of Arts and Sciences, Tokyo, Japan
2 The University of Tokyo, International Research Center for Neurointelligence (WPI-IRCN), UTIAS, Tokyo, Japan; National Taiwan University, School of Occupational Therapy, College of Medicine, Taipei, Taiwan
3 National Taiwan University, School of Occupational Therapy, College of Medicine, Taipei, Taiwan
4 The University of Tokyo, International Research Center for Neurointelligence (WPI-IRCN), UTIAS, Tokyo, Japan; The University of Tokyo, Department of Multidisciplinary Sciences, Graduate School of Arts and Sciences, Tokyo, Japan; University of Tokyo Institute for Diversity & Adaptation of Human Mind (UTIDAHM), Tokyo, Japan
5 The University of Tokyo, International Research Center for Neurointelligence (WPI-IRCN), UTIAS, Tokyo, Japan
Pages
965
Publication year
2024
Publication date
2024
Publisher
Nature Publishing Group
e-ISSN
2399-3642
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3091023287
Copyright
© The Author(s) 2024. This work is published under http://creativecommons.org/licenses/by-nc-nd/4.0/ (the “License”).