Abstract
Current theories of speech perception in the human brain propose two separate, hierarchically organized cortical processing streams, a ventral and a dorsal stream. The ventral stream is generally believed to mediate acoustic decoding of the speech signal and, ultimately, to link sounds to meanings. The function of the dorsal stream is less well understood, however, and has been a matter of some debate. It has recently been postulated that the dorsal stream may play a key role in sensorimotor integration, linking speech sounds to motor articulations. To further investigate these theories, we first examined the functional organization of human auditory cortex. Using functional magnetic resonance imaging (fMRI), we identified a robust hierarchical organization in human auditory cortex. Whereas primary auditory cortex responded to all sounds tested, the surrounding regions responded only to sufficiently complex sounds: the belt region responded to both band-passed noise bursts and phonemes (but not pure tones), and a more distant parabelt region responded only to phonemes.
We further probed the neural representations of phonemes in the human brain using a novel fMRI rapid adaptation (fMRI-RA) paradigm. In fMRI-RA, two stimuli are presented in each trial, and the resulting BOLD signal is thought to reflect the dissimilarity between the neuronal activation patterns evoked by the two stimuli. By pairing speech sounds of comparable acoustic dissimilarity drawn from either the same or a different phonetic category, we could dissociate neuronal selectivity for acoustic features from selectivity for phonetic categories. Our results support a model of speech processing in which a ventral stream represents sounds in an acoustic feature-based hierarchy and links them to task-relevant meanings, while a dorsal stream automatically links speech sounds to their motor articulations via separate sensorimotor representations of speech sounds and articulatory phoneme categories.
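To make the logic of the fMRI-RA design concrete, the following illustrative sketch (not part of the study; the stimulus coordinates, region models, and the linear adaptation rule are all simplifying assumptions) simulates how a region tuned to acoustic features and a region tuned to phonetic categories would yield different release-from-adaptation profiles when within-category and between-category pairs are matched for acoustic distance.

```python
import numpy as np

# Hypothetical stimuli: each phoneme token is a point in a 2-D "acoustic" space
# plus a category label. Pairs are chosen so the within-category pair and the
# between-category pair have the same acoustic distance (the key design constraint).
stimuli = {
    "ba_1": (np.array([0.0, 0.0]), "ba"),
    "ba_2": (np.array([1.0, 0.0]), "ba"),   # same category as ba_1
    "da_1": (np.array([0.0, 1.0]), "da"),   # different category, equal distance from ba_1
}

def adaptation_response(pattern_a, pattern_b, baseline=1.0):
    """Predicted BOLD for a two-stimulus trial: release from adaptation is assumed
    to grow with the dissimilarity of the two evoked activation patterns."""
    dissimilarity = np.linalg.norm(pattern_a - pattern_b)
    return baseline + dissimilarity

def acoustic_region(stim):
    """A region tuned to acoustic features: its activation pattern is the acoustic vector."""
    acoustic, _ = stim
    return acoustic

def phonetic_region(stim):
    """A region tuned to phonetic categories: its pattern depends only on the label."""
    _, category = stim
    return np.array([1.0, 0.0]) if category == "ba" else np.array([0.0, 1.0])

# Within-category vs. between-category pair, matched for acoustic distance.
pairs = [("ba_1", "ba_2"), ("ba_1", "da_1")]
for region_name, region in [("acoustic", acoustic_region), ("phonetic", phonetic_region)]:
    for a, b in pairs:
        bold = adaptation_response(region(stimuli[a]), region(stimuli[b]))
        print(f"{region_name:8s} region, pair {a}-{b}: predicted BOLD = {bold:.2f}")

# Expected pattern under these assumptions: the acoustic region shows equal release
# from adaptation for both pairs, whereas the phonetic region shows release only for
# the between-category pair -- the dissociation the fMRI-RA design exploits.
```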