Abstract
Infants' auditory processing abilities have been shown to predict subsequent language development. In addition, poor auditory processing skills have been shown for some individuals with specific language impairment (SLI). Methods used in infant studies are not appropriate for use with young children, and neither are methods typically used to test auditory processing skills in SLI. The objective of this study was to develop an appropriate way of testing auditory processing skills in children in the 4-5 year age range. We report data from 49 children aged 4-5 years (mean age 58.57 months) tested on five tasks with tones and synthesized syllables. Frequencies and inter-stimulus intervals were varied in the tone tasks; the second formant transitions between consonant and vowel were varied in the syllable tasks. Consistent with past research, variability was found in children's auditory processing abilities. Significant correlations between discrimination thresholds on the tasks were found. The results from two regression analyses showed that the children's auditory processing abilities accounted for significant amounts of variance in receptive and expressive language.
Key words
auditory processing; children; language; syllables; tones
1 Introduction
In acquiring a spoken language, an infant must be able to process the changing speech sounds that appear in rapid succession in the input, characterized by temporal and spectral variation. Infants with efficient auditory processing systems readily detect the patterns in natural language and this enables them to move more quickly towards complex language structures (Kuhl, 2004). In contrast, deficits in the processing of auditory stimuli have been associated with developmental language delay (Tallal, 2000; Tallal & Piercy, 1973a, 1973b, 1975; Tallal, Stark, & Curtiss, 1976; Wright, 2006; Wright et al., 1997). Wright et al. (1997) argued that it is the inability to perceive brief and successive sounds of speech that results in language impairment, while Friederici (2004) argued that a deficiency in the processing of prosodic/phonological information could be the underlying problem leading to language impairment.
Evidence that efficient processing of auditory stimuli in infancy is related to subsequent language development comes from a number of behavioral and psychophysical studies.
Trehub and Henderson (1996) tested infants at 6 and 12 months of age on their discrimination of tones of 1000 and 4000 Hz with varying durations of silent intervals (inter-stimulus intervals or ISIs) separating the tone pairs. The infants were tested using a head-turn paradigm. Those children who performed above the median on the auditory processing tasks in infancy were reported, based on the MacArthur-Bates Communicative Development Inventory (CDI: Fenson et al., 1993), to have larger productive vocabularies at 2 years of age and to be producing longer and more complex sentences. In a study with 6 month olds, Tsao, Liu, and Kuhl (2004) found a relationship between the infants' speech discrimination skills and their language at 13, 16 and 24 months.
Benasich and colleagues (Benasich & Tallal, 2002; Benasich, Thomas, Choudhury, & Leppanen, 2002) also investigated the relationship between infant auditory processing abilities and subsequent language development. They included an infant-control habituation and recognition task (of tones or synthesized syllables) as well as a conditioned head-turn task to test discrimination of tone pairs. Since family history has been found to be a good predictor of language impairment (e.g., Choudhury & Benasich, 2003; Rice, 2000), half the participants were selected from families with a positive history of language impairment and half from families with no history of language impairment. At 7.5 months of age, the difference in auditory processing thresholds between the two groups was significant: the at-risk group were less able to detect changes in frequency or ISI than were the control group. In addition, a significant association was found between the thresholds at 7.5 months and the infants' language expression and comprehension at 16, 24 and 36 months, as measured by the CDI and the Preschool Language Scales-3 (PLS-3: Zimmerman, Steiner, & Pond, 1992). Regardless of family history, those children with a low threshold for discriminating the stimuli had the best language outcome: their auditory processing at 7.5 months (on syllables and tones) was the single best predictor of language outcome at 36 months.
Event-related potentials (ERPs) have also been used to monitor early language development. They provide electrophysiological indices of language processing as it unfolds over time (Friederici, 2004). As argued by Friederici, a delay in using the available prosodic and phonological cues of the language could very well affect subsequent lexical and syntactic development. For example, German children delayed in word production at 24 months had shown, at 5 months of age, a reduced response to the language-specific stress pattern of words in their language (Weber, Hahne, Friedrich, & Friederici, 2004, 2005). A mismatch response to syllable length was also reported for infants from 2 months of age; infants with a familial risk for specific language impairment (SLI) showed a delayed mismatch response (Männel & Friederici, 2008).
Research with school-age children has focused mainly on the auditory processing of children with SLI or with dyslexia. Tallal and colleagues proposed that deficits in rapid auditory processing interfere with the perception of auditory stimuli characterized by rapid acoustic changes, leading to a delay in the formation of distinct phonological representations (Tallal, 2003). Children with SLI have been found to need a greater frequency difference in the vowel formants to identify syllables as different (McArthur & Bishop, 2004a). Other research has reported that individuals with SLI require a longer ISI to identify tones (e.g., Tallal & Piercy, 1973a). In the task typically used, children are trained to match two tones, presented in isolation, with specific keypress responses. Sequences of two tones are then presented and the child is required to press the corresponding keys in the correct sequence. Tallal and Piercy (1973a) found that children with SLI performed accurately when the tones were separated by an ISI of over 300 ms, but their performance deteriorated with shorter ISIs. In another study, Tallal and Piercy (1973b) manipulated tone duration rather than ISI; children with SLI responded accurately when the tones were longer, even with short ISIs. Based on these and similar findings, Tallal and her colleagues have argued that the auditory processing deficits of children with SLI are temporally based.
As McArthur and Bishop (2001) have pointed out, however, there are two dimensions to this task: a frequency discrimination aspect and a temporal aspect. Poor performance on the auditory perception task may therefore be due to deficiencies in either spectral or temporal processing of auditory signals. Bishop and McArthur (2005) argue that a spectral difficulty would hinder the discrimination of sounds that differ in frequency, regardless of duration or presentation rate. They present two arguments in support: in some studies children have failed to learn to discriminate the tones at all (e.g., Bishop, Carlyon, Deeks, & Bishop, 1999; Heath, Hogben, & Clark, 1999), and even when they do learn the initial discrimination, some children have problems distinguishing the tone sequences whether these are presented fast or slow (Bishop et al., 1999; Waber et al., 2001). Thus, McArthur and Bishop (2004a) argue that poor performance on tasks requiring children to repeat a sequence of tones is the result of poor frequency discrimination, and that rapid presentation simply makes the task more difficult.
There is a great deal of variability in auditory processing skills in both adult and child populations, and not all individuals with SLI show poor auditory processing (McArthur & Bishop, 2004a, 2004b). This has led Bishop et al. (1999) to suggest that poor auditory processing may be a non-causal risk factor for language impairment. A deficit in frequency discrimination could be one of a number of such risk factors (McArthur & Bishop, 2005). An alternative explanation is that frequency discrimination improves with age (Hill, Hogben, & Bishop, 2005), so although younger children with SLI may show poor frequency discrimination, they improve as they grow older.
Research on auditory processing, and on its relationship with concurrent language abilities, has focused mainly on infants, on school-age children (typically from age 9 years), and on adults. Thompson, Cranford, and Hoyer (1999) tested children aged 5-11 years on tones differing in frequency and duration, but the 5 year olds could not learn the task. In one commonly used method, respondents learn to associate two buttons with two different tones and, after hearing a sequence of these tones, are required to repeat the sequence. Bernstein and Stark (1985) tested 4-8-year-old children with SLI on a tone repetition task of this kind, but the younger children had trouble completing it.
The main objective of the current research was to develop a technique and materials, using tones and syllables, for testing the auditory processing abilities of 4-5-year-old children; for this reason we were also interested in whether children's discrimination thresholds at two test times were correlated. Based on Tallal's (2003) finding that children who have difficulty discriminating tones also have difficulty discriminating speech sounds, we predicted that there would be significant correlations between children's performance on the tone and syllable tasks included in the study. Our second objective was to investigate whether the children's auditory processing skills were significantly associated with their concurrent language. Based on the assumption of a relationship between auditory processing abilities and language development, we predicted that the children's auditory processing skills would account for a significant amount of variance in receptive and expressive language scores.
2 Method
2.1 Participants
Data are reported from 49 children aged between 51 and 67 months (mean age = 58.57 months, SD = 4.38). The sample included 26 males (mean age = 57.92 months, SD = 4.50 months) and 23 females (mean age = 59.30 months, SD = 4.22). Children were recruited from three preschools in Melbourne, Australia, and through advertisements in a freely distributed magazine on child issues. Six children were excluded because of existing auditory processing problems, auditory neuropathy problems or hearing deficits. Eight additional children participated in extensive pilot testing.
2.2 Materials
A task was included as a rough check of hearing. We used frequencies of 500 Hz, 1 kHz, 2 kHz, and 4 kHz. Each test frequency was presented from 60 dB hearing level (HL) down to 20 dB HL, with successful performance at 20 dB HL considered to be within the normal hearing range. We started at 60 dB HL so that children could become familiar with the task on easy trials. All children completed the task and were able to hear the sounds at 20 dB HL.
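For illustration only, the following minimal Python sketch outlines this screening sequence. The 10 dB step between levels is an assumption (only the endpoints are specified above), and play_tone and child_tapped are hypothetical placeholders for stimulus presentation and response recording.

# Sketch of the hearing screen; the 10 dB step between levels is an assumption.
# play_tone(freq_hz, level_db_hl) and child_tapped() are hypothetical placeholders.
FREQS_HZ = [500, 1000, 2000, 4000]
LEVELS_DB_HL = [60, 50, 40, 30, 20]   # descending, so easy trials come first

def hearing_screen(play_tone, child_tapped):
    heard = {}
    for freq in FREQS_HZ:
        for level in LEVELS_DB_HL:
            play_tone(freq, level)
            heard[(freq, level)] = child_tapped()
    # Pass criterion: a response at 20 dB HL for every test frequency.
    return all(heard[(freq, 20)] for freq in FREQS_HZ)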
2.2.1 Tone tasks
Complex tones were used at frequencies in the 100-300 Hz range because these are similar to the fundamental frequencies of speakers' voices. The tone stimuli were the same as those used by Benasich and Tallal (2002), except that in one task we changed the fundamental frequency of the "different" tone instead of altering the ISI, and in the other task we used a smaller frequency difference and varied the ISI. Some details of the tone tasks are presented in Table 1.
2.2.1.1 CT-Pitch
The first task (CT-Pitch), an adaptation of an auditory task used by Benasich and Tallal (2002), was a frequency discrimination task. The synthesized tones were 70 ms in duration and had 20 ms rise and fall times, 15 harmonics, and 6 dB roll-off per octave to represent vocalic sounds. Two pairs of tones were used: a 100 Hz-100 Hz ("same") complex tone pair and a 100 Hz plus higher-frequency ("different") complex tone pair. The higher frequency started at 200 Hz but varied throughout the task; the maximum frequency difference for the different pair was 200 Hz and the minimum difference was 2.5 Hz. The ISI remained constant at 500 ms; that is, there was a fixed gap between the complex tones in each pair. The CT-Pitch task is illustrated in Figure 1, which shows the complex tone pair at the starting point (100 Hz and 200 Hz) and the spectra of each of these tones (the spectra do not include the rise and fall times).
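As an illustration of the stimulus construction (not the synthesis software used in the study), the sketch below generates such a complex tone in Python; the 44.1 kHz sample rate and the linear shape of the rise and fall ramps are assumptions not specified above.

# Minimal sketch: a 70 ms complex tone with 15 harmonics, ~6 dB/octave roll-off,
# and 20 ms rise/fall ramps. Sample rate and linear ramps are assumptions.
import numpy as np

def complex_tone(f0, duration=0.070, ramp=0.020, n_harmonics=15, fs=44100):
    t = np.arange(int(duration * fs)) / fs
    tone = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        # 6 dB per octave roll-off: amplitude halves with each doubling of
        # frequency, i.e. amplitude proportional to 1/k for the k-th harmonic.
        tone += (1.0 / k) * np.sin(2 * np.pi * k * f0 * t)
    n_ramp = int(ramp * fs)
    envelope = np.ones_like(t)
    envelope[:n_ramp] = np.linspace(0.0, 1.0, n_ramp)    # 20 ms rise
    envelope[-n_ramp:] = np.linspace(1.0, 0.0, n_ramp)   # 20 ms fall
    return (tone * envelope) / np.max(np.abs(tone * envelope))

standard = complex_tone(100)   # 100 Hz member of each pair
deviant = complex_tone(200)    # starting "different" tone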
A one-up one-down adaptive two-alternative forced-choice procedure was used, modified from the transformed up-down procedure described by Levitt (1970). Every time a child answered a "different" item correctly, the difference between the stimuli was narrowed; no change was made if a "same" item was answered correctly.
If a child answered any item incorrectly, the difference between the stimuli was increased. Step sizes were 25 Hz for the first two reversals, 10 Hz for the next two reversals, and 2.5 Hz for the final six reversals, with a minimum frequency difference of 2.5 Hz.
A probe check was used for all of the auditory processing tasks. After two consecutive misses at any time in the procedure, a probe was inserted in order to check for loss of concentration. The initial contrasts were used as the probes. Failure to respond correctly to two consecutive probe stimuli terminated the task; if one probe was answered correctly the test resumed as normal.
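A simplified sketch of this adaptive logic and probe check is given below for illustration; it is not the testing software. present_pair and get_response are hypothetical placeholders for stimulus delivery and the experimenter keying in the child's judgment, the random mixing of "same" and "different" trials is simplified, and the exact bookkeeping of reversals is one reading of the description above.

# Simplified sketch of the one-up one-down track with probe check (illustrative only).
# present_pair(level, is_different) and get_response() are hypothetical placeholders;
# for CT-Pitch, `level` is the frequency difference in Hz.
import random

def probe_trial(present_pair, get_response, start_level):
    present_pair(start_level, True)
    return get_response() == "different"

def run_adaptive_track(present_pair, get_response, start_level=100.0,
                       max_level=200.0, min_level=2.5,
                       schedule=((25.0, 2), (10.0, 2), (2.5, 6))):
    level, reversals = start_level, []
    stage, stage_reversals, last_direction, misses = 0, 0, None, 0
    while stage < len(schedule):
        step, reversals_needed = schedule[stage]
        is_different = random.random() < 0.5
        present_pair(level, is_different)
        correct = get_response() == ("different" if is_different else "same")
        if correct:
            misses = 0
            direction = "down" if is_different else None  # narrow only on correct "different"
        else:
            misses += 1
            direction = "up"                              # widen after any error
            if misses >= 2:
                # Probe check at the initial contrast: resume after one correct
                # probe; two missed probes in a row end the task.
                if not probe_trial(present_pair, get_response, start_level):
                    if not probe_trial(present_pair, get_response, start_level):
                        break
                misses = 0
        if direction is not None:
            if last_direction is not None and direction != last_direction:
                reversals.append(level)
                stage_reversals += 1
                if stage_reversals == reversals_needed:
                    stage, stage_reversals = stage + 1, 0
            last_direction = direction
            change = -step if direction == "down" else step
            level = min(max(level + change, min_level), max_level)
    return reversals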
2.2.1.2 CT-ISI
A second tone task was included (CT-ISI).1 As in the CT-Pitch task, the tones were presented at 70 dB and had 20 ms rise and fall times, 15 harmonics, and 6 dB roll-off per octave. The "same" tone pair was 100 Hz-100 Hz and the "different" pair was 100 Hz-150 Hz; that is, the frequency difference in the different pair was always 50 Hz. The order of the low and high tones of the "different" pair varied randomly between trials: sometimes the high tone came first and sometimes second. The ISI started at 500 ms and changed in 200 ms steps for two reversals and then in 20 ms steps for six reversals, with a minimum ISI of 20 ms.
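In terms of the staircase sketch above, the CT-ISI track can be parameterised in the same way, with the adaptively varied quantity taken to be the within-pair ISI in milliseconds rather than a frequency difference (an assumed mapping, shown for illustration only):

# CT-ISI with the staircase sketch above: the tracked level is the ISI in ms.
isi_reversals = run_adaptive_track(present_pair, get_response,
                                   start_level=500.0, max_level=500.0,
                                   min_level=20.0,
                                   schedule=((200.0, 2), (20.0, 6)))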
2.2.2 Syllable Tasks
2.2.2.1 Syll-FOF
In the first of the two syllable tasks (Syll-FOF), a /ba/ syllable was randomly paired with an identical syllable ("same") or a syllable with a higher second formant onset ("different"), nominally /da/. A difference between /ba/ and /da/ is found in the transition of the second formant: for /ba/ it starts low and rises to the steady state; for /da/ it starts higher and falls. In this task, the consonant-vowel transition for the second syllable in a different pair was progressively altered by modifying the second formant frequency so that the difference between the two syllables was harder to detect. (The task is summarized in Tables 1 and 2.)
The second formant onset was 1075 Hz for /ba/ and 1540 Hz for /da/, with the /da/ syllable adaptively altered over the course of the task. The difference was changed in 60 Hz steps for two reversals, followed by 30 Hz steps for two reversals, and then 15 Hz steps for the final six reversals; the minimum distance between the two syllables was 15 Hz. The Klatt synthesizer (Klatt, 1980) was used to generate the stimuli using the cascade branch; the implementation used was "A Klatt-style Speech Synthesizer Implemented in C," version 3.0.4 (Iles & Ing-Simmons, 1994). The stimulus parameters are provided in Table 1 and representative stimuli are illustrated in Figure 2A. The formant frequencies were taken from Tallal, Stark, Kallman, and Mellitis (1981); the experimenters clearly perceived the stimuli to lie on the /ba/-/da/ continuum.
2.2.2.2 Syll-FTD
In the second syllable task, syllable transition (Syll-FTD), the stimuli were synthesized /ba/ and /da/ syllables that differed in the duration of the transition between consonant and vowel (see Tables 1 and 2). The task started with a 43 ms transition duration, which was progressively altered so that it became harder to detect whether the pairs of syllables were the same or different. The initial transition period for the second formant of /da/ was 43 ms; this was changed in 10 ms steps for two reversals, 5 ms steps for two reversals, and then 1 ms steps for the final six reversals. The minimum possible threshold was 1 ms. The same pair appeared first in some trials and the different pair first in others, with the order of presentation randomized. Each syllable was played at 72 dB sound pressure level (SPL) with a duration of 250 ms. Representative stimuli are illustrated in Figure 2B.
The actual Klatt parameters used are given in Table 3. The parameters were specified in 1 ms frames. "Start" refers to the first frame, "Onset" refers to the first frame of the syllable (at 50 ms), and "End" refers to the final frame (which varies because of different syllable durations in the Syll-FTD case).2
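As a rough illustration of how the two syllable manipulations alter the stimulus (not a reproduction of the Klatt parameter files in Table 3), the sketch below builds an F2 contour in 1 ms frames with the syllable onset at 50 ms. The vowel steady-state value and the way the duration manipulation is applied to the "different" syllable are assumptions made for illustration.

# Rough sketch of the F2 track for the "different" syllable in the two syllable
# tasks. F2_STEADY and the FTD mapping are assumptions; onsets are as reported.
import numpy as np

F2_BA_ONSET = 1075.0   # Hz, /ba/ second-formant onset (reported)
F2_DA_ONSET = 1540.0   # Hz, /da/ second-formant onset (reported)
F2_STEADY = 1200.0     # Hz, assumed vowel steady-state value (placeholder)

def f2_track(onset_hz, transition_ms=43, syllable_ms=250, onset_frame_ms=50):
    """One value per 1 ms frame: closure before onset, a linear transition to
    the steady state, then the steady vowel."""
    pre = np.zeros(onset_frame_ms)
    transition = np.linspace(onset_hz, F2_STEADY, transition_ms)
    steady = np.full(syllable_ms - transition_ms, F2_STEADY)
    return np.concatenate([pre, transition, steady])

# Syll-FOF: the /da/ F2 onset is stepped toward the /ba/ value (60, 30, 15 Hz steps).
closer_da = f2_track(F2_DA_ONSET - 60)
# Syll-FTD: the transition duration is reduced (10, 5, 1 ms steps).
shorter_transition = f2_track(F2_DA_ONSET, transition_ms=43 - 10)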
2.2.3 Language measure
The Clinical Evaluation of Language Fundamentals-Preschool (CELF-P; Wiig, Secord, & Semel, 1992) was used to obtain an expressive and a receptive language score.
2.3 Procedure
Testing took place over two sessions. The children were randomly assigned to one of two orders and completed the tone tasks in either session 1 or session 2. The testing schedule for the first session was: the hearing task, one tone or syllable task, the CELF-P, and a second auditory processing task (the second tone task if they had already completed one tone task, or the second syllable task if they had completed one syllable task). In the second session, approximately one week later, the remaining auditory processing tasks were administered (tone or syllable tasks, depending on which had been completed in session 1). Each of the auditory perception tasks lasted approximately five minutes and consisted of approximately 40 trials.
The stimuli for all tasks were administered through calibrated headphones; for the hearing task, presentation levels ranged from 60 dB HL down to 20 dB HL. The Pro 2 Dynamic headphones, used for this and all other auditory processing tasks, were calibrated using a Bruel and Kjaer sound level meter. The stimuli were presented from a laptop via a Creative Labs USB Sound Blaster (model: SB3000) external sound box to ensure consistency. For the hearing task the children were asked to tap on the table with a pencil only if they heard a sound; the right ear was tested first and then the left ear at each frequency and sound level. For the other tasks, the children heard a pair of sounds in both ears through the headphones and were asked to say whether the sounds were the same or different. The experimenter immediately entered the response into a computer and then presented the next pair of sounds to the child.
Familiarization trials preceded all auditory tasks and involved a minimum of eight trials with "same" and "different" tone pairs, included to ensure that the children understood the requirements of the tasks. All children were able to complete these trials. The average of the last four reversals was used as the child's threshold score, excluding any probe stimuli, as these occurred after the final minimum step size was reached. A lower threshold score indicates better auditory processing. To check task reliability, we repeated the CT-Pitch task and the syllable tasks immediately after the first presentation.
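For clarity, the threshold score is simply the mean of the final four reversal values from a child's track; the reversal values below are made-up numbers used only to show the calculation.

# Threshold = mean of the last four reversals; the values are illustrative only.
def threshold(reversal_levels):
    last_four = reversal_levels[-4:]
    return sum(last_four) / len(last_four)

threshold([100.0, 50.0, 27.5, 17.5, 12.5, 10.0, 7.5, 10.0])   # -> 10.0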
3 Results
3.1 Test-retest
Possible learning effects were examined with three matched-pair t-tests comparing the children's thresholds at time 1 and time 2 on the three auditory processing tasks that were repeated. The correlation between the two test times was high and significant for each task (all p values < .001): CT-Pitch, r = .68; Syll-FOF, r = .75; Syll-FTD, r = .65. No significant differences between testing times were found (CT-Pitch, t(48) = 1.90, p = .063; Syll-FOF, t(48) = 0.354, p = .725; Syll-FTD, t(48) = 1.39, p = .171). The significant high correlations indicate that the tasks were reliable measures of auditory processing skills for 4-5-year-olds. The results from the first testing were used in all analyses; lower thresholds represent better performance.
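The reliability analysis takes the following form, shown here with scipy and simulated placeholder data (the child-level thresholds themselves are not reproduced here):

# Illustrative test-retest analysis for one task; the data are simulated
# placeholders, not the actual CT-Pitch thresholds.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
time1 = rng.uniform(2.5, 200.0, size=49)      # thresholds at first presentation
time2 = time1 + rng.normal(0, 30.0, size=49)  # correlated values at the repeat

r, p_r = stats.pearsonr(time1, time2)   # test-retest correlation (reported r = .68)
t, p_t = stats.ttest_rel(time1, time2)  # learning-effect check (reported as non-significant)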
3.2 Descriptive data
The mean scores and SDs for the language and auditory processing tasks are presented in Table 4. The range of thresholds on the auditory processing tasks indicates diversity among participants: for CT-Pitch the range was 2.50-193.75 Hz, for CT-ISI 20-500 ms, for Syll-FOF 82.5-450 Hz, and for Syll-FTD 1-41 ms. The variables were normally distributed.
Preliminary analyses of the auditory processing scores showed no gender effects, so gender was not considered further. In addition, there were no effects of test order: the performance of children who completed the tone tasks in their first session did not differ from that of children who completed the syllable tasks first.
3.3 Testing the predictions
To test our predictions that there would be significant associations between the tone and syllable tasks and expressive language, and between the tone and syllable tasks and receptive language, we examined the Pearson correlation coefficients for the four auditory processing tasks and the two language measures. The coefficients are given in Table 5, with the p values in parentheses.
As can be seen, there were significant moderate correlations between Syll-FOF and the other three auditory processing tasks; the moderate size of these correlations suggests that the tasks tap related but distinct skills. Significant correlations were also found between Syll-FOF and both receptive and expressive language, and both tone tasks were significantly correlated with receptive language. For expressive language, CT-Pitch also correlated significantly.
A linear regression with receptive language as the outcome measure and Syll-FOF and the two tone tasks as predictors showed that these three tasks together accounted for 19.7% of the variance: R-square change = .197, F(3, 45) = 3.67, p = .019. The variance was shared, since none of the partial correlations was significant (CT-Pitch, t = 1.26, p = .215; CT-ISI, t = 1.05, p = .301; Syll-FOF, t = 1.83, p = .074).
A second linear regression analysis was conducted with expressive language as the outcome measure and Syll-FOF and CT-Pitch as predictors. The two predictors accounted for 19.0% of the variance: R-square change = .190, F(2, 46) = 5.38, p = .008. Of this variance, CT-Pitch uniquely contributed 12.4% (t = 2.66, p = .011, partial r = .353), whereas Syll-FOF contributed no significant unique variance (t = 1.01, p = .318).
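In form, both models are ordinary least-squares regressions of the CELF-P scores on the relevant thresholds. A sketch with statsmodels is shown below; the data frame and its column names are placeholders for the child-level data, which are not reproduced here.

# Sketch of the two regression models; 'df' and its columns are placeholders.
import statsmodels.formula.api as smf

def fit_language_models(df):
    receptive_model = smf.ols("receptive ~ ct_pitch + ct_isi + syll_fof", data=df).fit()
    expressive_model = smf.ols("expressive ~ ct_pitch + syll_fof", data=df).fit()
    # .rsquared corresponds to the proportions of variance reported above;
    # .tvalues and .pvalues give each predictor's unique contribution.
    return receptive_model, expressive_model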
4 Discussion
A main objective was to develop a method and stimuli for testing children in the 4-5 year age range. The significant correlations between the test results for the two test periods and the lack of any significant differences between thresholds at time 1 and time 2 suggest we have a reliable tool that can be used in future large-scale studies.
Our first prediction, that performance on the syllable tasks would be associated with performance on the tone tasks, was supported. There were significant correlations between Syll-FOF and the other three auditory processing tasks. Individual thresholds varied on all of these tasks, but the children who had higher thresholds when distinguishing syllables with different transition frequencies for the second formant of the vowel also had higher thresholds when distinguishing syllables with short formant transitions, when distinguishing tones that differed in frequency, and when distinguishing tones of 100 Hz and 150 Hz with varying ISIs. This finding supports Tallal's (2003) finding that children who have difficulty discriminating tones that differ in frequency also have difficulty discriminating speech sounds.
Although the Syll-FTD task was significantly correlated with the Syll-FOF task, it was not significantly correlated with the two tone tasks. The changes made in this task were in the transition time between consonant and vowel within the syllable; thus the "temporal" aspect of this manipulation cannot be equated with the manipulation of ISIs. Transitions between consonant and vowel in speech vary across contexts (different speakers and different rates of talking), and children need to learn to accept some variation in the linguistic input. Given the results obtained in the current study, it is probably not a task that we would include in the battery of tests in future research.
In the current study we did not examine whether 4-5 year olds' auditory processing skills predict later language development. However, given research findings that children with poor language (SLI) have poor auditory processing abilities, we predicted a concurrent relationship between auditory processing thresholds and language scores for children in this age range. This prediction was supported. Children who had better expressive language were better at distinguishing frequencies in the CT-Pitch task and also at distinguishing /da/ syllables with varying transitions for the second formant. That is, differences across children in expressive language scores were related to their frequency discrimination for non-speech stimuli as well as their detection of changing sounds in syllables. Frequency discrimination skills, which underlie performance on auditory perception tasks and so provide a link to language development, are necessary for identifying phonemes and the patterns in which these phonemes occur in a language. Poor frequency discrimination can be considered a risk factor for SLI (McArthur & Bishop, 2004a). Thus it is perhaps not surprising that in the current study good frequency discrimination skills were associated with superior syllable discrimination.
The study also showed that children who had better receptive language scores performed better on both tone tasks and the Syll-FOF task. These findings suggest that both frequency discrimination and temporal information may facilitate language. Efficient auditory processing systems are needed to discriminate frequencies, but the dynamic nature of language adds complexity: in acquiring a language a child must be able to distinguish changes in the dynamic language input in order to map meaning to form and so develop a language system. Receptive vocabulary develops before expressive vocabulary (Fenson et al., 1993); it represents a child's achievements in mapping sound sequences to meaning, and representations of linguistic forms are essential for the development of expressive language. It is children with both receptive and expressive language delay who are most at risk for persistent language problems. Given the results from the current study, it is possible that differing findings in previous studies of SLI groups are related to whether the participants were identified as impaired on the basis of weak expressive or weak receptive language skills. That is, differences in findings across studies may be due to different criteria for identifying SLI, as well as to the specific auditory perception tasks used.
4.1 Value of the study
In the adaptive procedure we reduced the difference between the two stimuli after only one correct response in order to reduce the number of trials needed; only correct responses on "different" trials counted towards reducing the difference. Had we required two consecutive correct responses before reducing the difference, the track would have converged on a higher level of performance, but the average number of presentations before each reduction would have been over 3.3. We also considered a method other than the adaptive staircase procedure, namely the use of a fixed number of trials. However, the adaptive staircase procedure enabled the trials to start easy and become progressively harder, which suited young children and kept testing brief.
A large percentage of the children tested reported that the syllable tasks were harder than the tone tasks, and the Syll-FTD task was generally judged to be the harder of the two syllable tasks. Adults are able to perceive speech even when some spectral cues are absent, and once language develops syllables are processed as parts of words (McArthur & Bishop, 2004b). At the age of 4-5 years, however, it is likely to be harder for children than for adults to make a judgment about modified syllables, even though children are experienced with variation in sounds within and across words and speakers. They need to separate the syllables from word knowledge to make the judgment; this is a metalinguistic task, and metalinguistic abilities are still developing in 4-5-year-old children. Tones can convey the complex frequency and temporal characteristics of speech sounds, but in English they do not convey semantic differences. In contrast, syllables such as /ba/ or /pa/ invite a possible semantic interpretation, although they have no meaning when used in auditory processing tasks.
The current study supports previous research in finding variability in auditory processing abilities. The studies cited in the introduction showed that later language scores can be predicted from infants' performance on auditory processing tasks, whereas the SLI-based research has related auditory processing to concurrent language. Our study investigated only concurrent associations between auditory processing abilities and language performance; it is unique in testing 4-5-year-olds with both tone and syllable tasks.
The findings from the study suggest that children with high auditory processing thresholds at age 4-5 years have poorer language skills. Auditory perception skills accounted for a substantial percentage of the variance in both receptive and expressive language. Frequency discrimination uniquely contributed more than 12% of the variance for expressive language. For receptive language, the temporal aspect (the CT-ISI tone task), in addition to frequency discrimination for tones and syllables, was important; together these three predictors contributed a higher proportion of variance (19.7%) than did frequency discrimination for expressive language. That is, the ISI task was linked to receptive language only.
This finding provides a link with Tallal's early research with children with receptive language impairment. As stated above, receptive language impairment is considered to be more severe than expressive language impairment: children with receptive language impairment are less likely to overcome their impairment than are children with expressive language impairment. Bishop and McArthur (2005) proposed that poor frequency discrimination is the underlying problem for individuals with SLI who show poor performance on auditory processing tasks, but that for the children most at risk, frequency discrimination is made more difficult by rapid presentation. We did not test a clinical group in the current study, but our results suggest that difficulty with rapid presentation has some impact on language, although not as much as difficulty with frequency discrimination.
Future studies using our stimuli might compare a group of children with language impairment with a group of non-impaired children to determine how their auditory processing differs, and whether there is as much variability in an SLI group as in a typical language group. Alternatively, a larger group with an even more diverse range of language skills than in the current study could be tested.
Patterns of language impairment change over time (Bishop & Edmondson, 1987; Conti-Ramsden & Botting, 1999), and, as discussed by Hill et al. (2005), frequency discrimination thresholds improve for many children. Longitudinal research with preschool age children would provide insights into the natural course of development of auditory processing skills for non-speech and speech stimuli. Research investigating how auditory processing thresholds change over time would also provide a basis for determining whether children whose language impairment resolves show different developmental patterns of auditory processing skills.
References
BENASICH, A. A., & TALLAL, P. (2002). Infant discrimination of rapid auditory cues predicts language impairment. Behavioral Brain Research, 136, 31-49.
BENASICH, A. A., THOMAS, J. J., CHOUDHURY, N., & LEPPANEN, P. H. T. (2002). The importance of rapid auditory processing abilities to early language development: Evidence from converging methodologies. Developmental Psychobiology, 40, 278-292.
BERNSTEIN, L. K., & STARK, R. E. (1985). Speech perception development in language-impaired children: A 4-year follow-up study. Journal of Speech and Hearing Disorders, 50, 21-30.
BISHOP, D. V. M., CARLYON, R. P., DEEKS, J. M., & BISHOP, S. J. (1999). Auditory temporal processing impairment: Neither necessary nor sufficient for causing language impairment in children. Journal of Speech, Language, and Hearing Research, 42, 1295-1310.
BISHOP, D. V. M., & EDMONDSON, A. (1987). Specific language impairment as a maturational lag: Evidence from longitudinal data on language and motor development. Developmental Medicine and Child Neurology, 29, 442-459.
BISHOP, D. V. M., & McARTHUR, G. M. (2005). Individual differences in auditory processing in specific language impairment: A follow-up study using event-related potentials and behavioural thresholds. Cortex, 41, 327-341.
CHOUDHURY, N., & BENASICH, A. A. (2003). A family aggregation study: The influence of family history and other risk factors on language development. Journal of Speech, Language and Hearing Research, 46, 261-272.
CONTI-RAMSDEN, G., & BOTTING, N. (1999). Classification of children with specific language impairment: Longitudinal considerations. Journal of Speech, Language, and Hearing Research, 42, 1195-1204.
FENSON, L., DALE, P., REZNICK, J. S., THAL, D., BATES, E., HARTUNG, J. P., et al. (1993). The MacArthur communicative development inventories. San Diego, CA: Singular Press.
FRIEDERICI, A. D. (2004). Event-related brain potential studies in language. Current Neurology and Neuroscience Reports, 4, 466-470.
HEATH, S. M., HOGBEN, J. H., & CLARK, C. D. (1999). Auditory temporal processing in disabled readers with and without oral language delay. Journal of Child Psychology and Psychiatry, 40, 637-647.
HILL, P. R., HOGBEN, J. H., & BISHOP, D. V. M. (2005). Auditory frequency discrimination in children with specific language impairment: A longitudinal study. Journal of Speech, Language and Hearing Research, 48, 1136-1146.
ILES, J., & ING-SIMMONS, N. (1994). Klatt: A Klatt-style speech synthesizer implemented in C (Version 3.0.4) [computer software]. CMU Artificial Intelligence Repository. Retrieved 5 May 2008, from http://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/areas/speech/systems/klatt/0.html
KLATT, D. H. (1980). Software for a cascade/parallel formant synthesizer. Journal of the Acoustical Society of America, 67, 971-995.
KUHL, P. K. (2004). Early language acquisition: Cracking the speech code. Nature Reviews: Neuroscience, 5, 831-843.
LEVITT, H. (1970). Transformed up-down methods in psychoacoustics. The Journal of the Acoustical Society of America, 61, 1337-1351.
Edith L. Bavin1, David B. Grayden2,3,Kim Scott1, Toni Stefanakis1
1 School of Psychological Science, La Trobe University, Australia
2 Department of Electrical & Electronic Engineering, The University of Melbourne, Australia
3 The Bionic Ear Institute, East Melbourne, Australia
Address for correspondence. Edith L. Bavin, School of Psychological Science, La Trobe University, Victoria 3083, Australia; e-mail: <[email protected]>