Abstract
Seeing the face of a speaker facilitates speech recognition in challenging listening environments. Prior work has shown that visual speech contains timing information that aids auditory speech processing, yet how these signals are integrated within the auditory system during audiovisual speech perception remains poorly understood. Observation of preparatory mouth movements may initiate phase reset of intrinsic oscillations, potentially sensitizing the auditory system for receptive speech processing, while observation of mouth movements after speech onset may facilitate entrainment to the speech envelope. However, little work has tested whether visual speech enhances encoding of auditory speech onset, speech envelope tracking, or both, and whether it does so through independent or overlapping mechanisms. To investigate this, we examined how visual speech timing information alters theta-band power and phase using human intracranial electroencephalography (iEEG) recordings in a large group of patients with epilepsy (n = 21). Prior to speech onset, preparatory mouth movements elicited theta phase reset (increased inter-trial phase coherence; ITPC) throughout the superior temporal gyrus (STG), which is thought to enhance speech onset encoding. Following speech onset, visual speech modulated theta ITPC only at anterior STG electrodes, whereas theta power was modulated at posterior STG electrodes. Pre- and post-speech onset effects were spatially and temporally dissociated, consistent with the hypothesis that audiovisual speech onset encoding and envelope tracking mechanisms are partially distinct. Crucially, congruent and incongruent visual speech, designed here to carry identical visual timing information about speech onset but different mouth movement evolution, produced only a small difference in the phase of theta-band oscillations in the anterior STG, highlighting a more restricted role of visual speech in ongoing auditory entrainment. These results support the hypothesis that visual speech improves the precision of auditory speech encoding through two separate mechanisms, with auditory speech onset encoded throughout the entire STG and ongoing speech envelope tracking supported within the anterior STG.
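For readers unfamiliar with the metric, inter-trial phase coherence (ITPC) quantifies how consistently oscillatory phase aligns across trials at each time point, which is why it is used above as an index of phase reset. The snippet below is a minimal sketch of the standard ITPC computation (band-pass filter, Hilbert phase, magnitude of the mean unit phase vector); it is illustrative only and is not the authors' analysis pipeline, and the function name, theta band limits, and example trial dimensions are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_itpc(trials, fs, band=(4.0, 8.0)):
    """Inter-trial phase coherence in the theta band.

    trials : array of shape (n_trials, n_samples), epochs time-locked to an event
    fs     : sampling rate in Hz
    Returns ITPC over time, shape (n_samples,), with values in [0, 1].
    """
    # Band-pass filter each trial in the theta band (here 4-8 Hz).
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="bandpass")
    filtered = filtfilt(b, a, trials, axis=1)

    # Instantaneous phase from the Hilbert analytic signal.
    phase = np.angle(hilbert(filtered, axis=1))

    # ITPC: magnitude of the mean unit phase vector across trials.
    # Values near 1 indicate consistent phase across trials (phase reset);
    # values near 0 indicate random phase.
    return np.abs(np.mean(np.exp(1j * phase), axis=0))

# Hypothetical usage: 50 trials of 1-second epochs sampled at 1000 Hz
# itpc = theta_itpc(np.random.randn(50, 1000), fs=1000)
```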
Competing Interest Statement
The authors have declared no competing interest.