Purpose: Studies on reading in individuals with severe-to-profound hearing loss (deaf) raise the possibility that, due to deficient phonological coding, deaf individuals may rely more on orthographic-semantic links than on orthographic-phonological links. However, these studies did not directly assess the relative contributions of phonological and semantic information to visual word recognition in deaf individuals. The aim of the present study, therefore, was to examine the interplay between orthographic, phonological, and semantic representations during visual word recognition in deaf versus hearing adults.
Method: Deaf and hearing participants performed a visual lexical decision task in Hebrew. The critical stimuli consisted of three types of Hebrew words that differ in the relationship between orthography, phonology, and semantics: unambiguous words, homonyms, and homographs.
Results: In the hearing group, phonological effects were more pronounced than semantic effects: Homographs (multiple pronunciations) were recognized significantly more slowly than homonyms or unambiguous words (one pronunciation), whereas there was no significant difference between homonyms (multiple meanings) and unambiguous words (one meaning). In contrast, in the deaf group, there was no significant difference among the three word types, indicating that visual word recognition in these participants is driven primarily by orthography.
Conclusion: While visual word recognition in hearing readers is accomplished mainly via orthographic-phonological connections, deaf readers rely mainly on orthographic-semantic connections.
Interactive "triangle" models of word recognition (e.g., Grainger & Ferrand, 1994, 1996; Seidenberg & McClelland, 1989) assume a reading mechanism in which orthographic, phonological, and semantic representations are fully interconnected (i.e., are bidirectionally connected to each other). With practice, these bidirectional mappings become automatic, such that orthographic representations automatically activate their corresponding phonological and semantic representations, which in turn influence the recognition process via feedback connections. Interactive models further assume that visual word recognition is influenced by the consistency of the mappings between orthographic, phonological, and semantic codes. Cross-code consistency is maximal when there is a one-to-one relation between different codes. Inconsistencies occur when a single orthographic representation is associated with multiple phonological or multiple semantic representations, or vice versa. Thus, in a fully interconnected network, consistent symmetrical relations result in stable and fast activation, whereas inconsistent asymmetrical relations should slow down word recognition (Grainger & Ziegler, 2008; Peleg et al., 2010).
Consistent with this...