1. Introduction
The theory of grounded cognition proposes that cognition depends on the brain’s modal systems for perception, action and introspection [1]. This theory postulates that sensory and motor brain areas are activated not only during perception or action, but also by cognitive processes such as understanding words related to these modalities. Some studies support this, for example, for the motor system: reading hand- and foot-related action words activates areas belonging to the motor cortex and responsible for hand and foot movements, respectively [2,3,4,5,6,7,8]. Analogously, words implying acoustic features were shown to activate, among other areas, part of the same temporal brain area also recruited during sound perception [9]. Behavioural findings showed that reading auditory-related verbs improved the detection of subsequent hardly audible sounds in participants with high lexical decision performance [10]. So far, there is a lack of research on such cognitive simulation processes involving the auditory system during word processing, and even fewer studies have focussed on neural oscillations in this context. The power of brain oscillations can be used as an index of neural activation level. While synchronized beta oscillations (12–25 Hz) have been proposed to maintain the current cognitive or sensorimotor state, desynchronized beta oscillations have also been interpreted as local cortical activation, for example, related to movements or to auditory processing [11]. Synchronization in the alpha frequency range (8–12 Hz) is viewed as an idle state of the brain [12], while alpha desynchronization in the auditory cortex, for example, has been shown to accompany auditory stimulation [13]. Within the framework of the grounded cognition theory, it was found that visually presented words describing loud actions induced stronger beta frequency desynchronization in the left auditory cortex compared to words describing quiet actions [14].
Onomatopoetic words are especially interesting in this context as they tend to acoustically reproduce the sound (and sometimes the shape or even other semantic qualities) of the object or action they refer to [15,16]. In earlier studies, onomatopoetic words were shown to be accompanied by stronger activation in those areas that are usually activated by the related real-sound stimuli: for example, animal sound-related onomatopoetic words (e.g., the Japanese word “wanwan” indicating a dog’s barking) activated areas responsible for the perception of non-verbal sounds [17,18,19,20,21,22]. However, these studies exclusively focussed on interjections, that is, words that only imitate a sound (e.g., “kikeriki” for a rooster call); these are neither verbs, nor nouns, nor adjectives. Profiting from the strong onomatopoetic quality of interjections, most studies so far compared these to other non-onomatopoetic word classes to determine the effect of onomatopoeias on brain and behaviour [15,17,18,19,20,22,23,24,25,26]. Auditorily presented onomatopoetic interjections were shown to activate the auditory cortex and, specifically, the bilateral middle and anterior superior temporal sulcus (STS) more strongly than non-onomatopoetic nouns with the same reading frequency, auditory familiarity and auditory imageability [22]. Similarly, activation of the right posterior superior temporal sulcus (pSTS) following onomatopoetic word presentation was also found in another study [24]. Whereas these studies point to a specific effect of onomatopoetic words, the comparison of interjections with non-onomatopoetic words belonging to different grammatical classes is problematic. Since the grammatical class of the word stimuli also influences the localization and strength of brain activation [23,27], comparing interjections with verbs might yield effects going beyond onomatopoeias.
Few electroencephalography (EEG) studies applied onomatopoetic words instead of interjections: auditorily presented onomatopoetic adverbs (e.g., the Japanese “gatagata” for “rattling”) were found to elicit a larger late-positive sustained complex at about 400–800 ms than control adverbs, reflecting increased post-lexical processing [23]. In another study, processing visually presented onomatopoetic verbs resulted in a less negative-going N400 component and late-positive deflection compared to non-onomatopoetic control verbs [28]. The authors interpreted these findings as evidence that onomatopoeias are easier to process. However, results from an additional behavioural task in Peeters’ study showed that participants were not faster in differentiating onomatopoetic verbs from non-words than in differentiating non-onomatopoetic verbs from non-words. This behavioural finding thus does not support the notion of easier processing of onomatopoeias. Altogether, the literature is scarce and, to some extent, inconsistent.
In the current MEG study, we aimed at determining the oscillatory as well as evoked neurophysiological activation related to onomatopoeias by comparing German onomatopoetic verbs (e.g., “brummen”—to hum) to non-onomatopoetic verbs matched for frequency, length and implied loudness. The latter was meant to control for a dimension of acoustic relevance. We focussed on the temporal cortical areas because of their role in auditory processing and on the basis of the literature on onomatopoeias [22,24]. For the aim of the current analyses, we selected the MEG channels resulting from a previous auditory localizer paradigm from our work group [14]. Onomatopoetic verbs were expected to induce larger alpha and beta frequency desynchronization than non-onomatopoetic verbs as a consequence of the increased engagement of the auditory cortex. Regarding evoked fields, we expected an overall facilitated linguistic processing of onomatopoetic verbs to be reflected in a lower amplitude than for non-onomatopoetic verbs [28].
2. Materials and Methods
2.1. Participants
Twenty (10 females, 10 males, average age = 28.9 ± 6.9 years) right-handed (laterality quotient = 94.2 ± 9.6 [29]), monolingual German native speakers with no formal training in linguistics participated in the MEG study. Subjects had normal or corrected-to-normal vision, had no neurological or psychiatric disorder and were not using psychotropic medications. Left-handed people were excluded, as right- and left-handed participants show different cortical language dominance [30]. Linguists were excluded to avoid focussing on specific linguistic aspects of the presented words and an implicit advantage compared to non-linguists. Non-native speakers were not included in the study because different brain language areas have been found to be activated by foreign versus native languages [31]. Even if onomatopoetic foreign words may be intuitively easier to understand for non-native speakers than non-onomatopoetic ones [32], the related cortical activation might still be qualitatively different from that of native speakers. Participants were kept unaware of the purpose of the study to prevent interference with cognitive processes. After completion of the experiment, participants were asked to guess the study purpose, and they were debriefed.
2.2. Stimuli
An initial list of 136 German verbs describing sound-related actions was created and pre-grouped into onomatopoetic and non-onomatopoetic words. These verbs were then evaluated by means of an online questionnaire.
During the MEG measurement, the following task and trial design was applied (Figure 1): a grey fixation point was presented for 1 s, followed by a white fixation point lasting 1 s and indicating the upcoming verb. The word then appeared for 1 s, followed again by a fixation point lasting 500 to 750 ms, jittered in steps of 50 ms; the jitter was used to prevent response automatization. A prompt then displayed one of three possible symbols representing a glass of water, a mouth or an electric outlet with a plug (Figure 1). In order to induce the semantic processing of word stimuli and to keep the participant unaware of the study conditions and purpose, each symbol was associated with one of the following questions, respectively:
Has the process implied by the verb anything to do with liquids?
Is the process implied by the verb performed with the mouth?
Is the process implied by the verb performed with an electric tool?
The prompt was presented either on the right or on the left side of the screen. The participants were required to respond “yes” to the prompt by lifting the index finger of the hand positioned on the same side as the presented symbol and “no” by lifting the index finger of the opposite hand. Left- and right-hand responses were balanced pseudo-randomly to yield 50% right- and 50% left-hand responses. To reduce eye movement-related artefacts, participants were asked to avoid blinking until the end of the trial, when an eye symbol presented for 2 s prompted them to blink. All 68 verbs were presented 3 times across 3 blocks. Each word was always followed by one of the questions above (Table S1). Blocks were separated by pauses lasting as long as the participant needed. Words were presented in a randomized order within each block. The measurement lasted about 35 min, depending on participants’ reaction and pause times.
2.3. Procedures
After signing informed consent and data privacy forms, participants filled out the Edinburgh Handedness Inventory [29]. They were asked to remove metal belongings, and if needed, were offered metal-free cotton clothes as well as individually calibrated metal-free glasses with corrective lenses. For electrooculography (EOG), four electrodes were placed around the eyes: one above and one under the left eye for vertical EOG and two at about 1 cm from the left and the right eye for horizontal EOG. These bipolar electrodes were used to detect eye movements and blinks. Four coils were placed on the forehead and behind the ears. The positions of the coils were digitized (Polhemus Isotrak) for later estimation of the head position during MEG measurements. During the MEG measurement, the participants were seated comfortably with their hands resting on two pads and their index fingers on two photoelectric switches. Instructions and word stimuli were projected onto a screen in front of the participant. After three demonstration trials, participants performed three practice trials that could be repeated, if needed, before starting the measurement.
2.4. Data Acquisition and Analysis
Neuromagnetic brain activity was recorded with a 306-channel MEG system (Elekta Neuromag, Helsinki, Finland). The channels consisted of 102 magnetometers and 204 orthogonal planar gradiometers. MEG data were digitized at 1000 Hz, bandpass filtered from 0.03 to 330 Hz online and stored on a computer hard disk.
MEG data were analysed with Matlab R2017b and the FieldTrip toolbox [34]. Behavioural data were analysed with R version 3.5.2 [35].
2.5. MEG Data Pre-Processing
Epochs were cut from the continuous data, covering the time window from 1 s before to 1 s after word onset. Only correct trials entered the analysis. Trials with answers at wrong time points or double answers were excluded from analyses. Semiautomatic jump and muscle artefact rejection was applied to the selected epochs. A notch filter was used to filter out the frequency bands 49–51, 99–101 and 149–151 Hz. A high-pass filter of 2 Hz and a padding of 5 s were used as well. Heart- and eye-related artefacts were removed via independent component analysis [36]; this resulted in the elimination of, on average, 2.6 components per subject. Noisy or faulty channels were repaired by interpolating data from neighbouring channels; on average, 6 surrounding gradiometers of the same type were used for each faulty channel. Trials were visually inspected for residual artefacts and then assigned to the two conditions.
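For illustration, the notch and high-pass filtering steps described above can be sketched in Python with SciPy; the analysis itself was performed in FieldTrip, so the filter type, order and notch width used here are assumptions rather than the authors’ exact settings.

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

def preprocess(epoch, fs=1000.0):
    """Filter one epoch (channels x samples): notch filters around the
    50 Hz line frequency and its harmonics (49-51, 99-101, 149-151 Hz),
    followed by a 2 Hz high-pass. Filter order and notch quality factor
    are illustrative choices, not the original FieldTrip settings."""
    out = epoch.astype(float)
    for f0 in (50.0, 100.0, 150.0):
        # Q = f0 / bandwidth gives a ~2 Hz wide notch at each harmonic
        b, a = iirnotch(f0, Q=f0 / 2.0, fs=fs)
        out = filtfilt(b, a, out, axis=-1)
    # 4th-order Butterworth high-pass at 2 Hz
    b, a = butter(4, 2.0, btype="highpass", fs=fs)
    out = filtfilt(b, a, out, axis=-1)
    return out
```

`filtfilt` applies each filter forwards and backwards, so the filtering is zero-phase and does not shift component latencies, which matters when timing effects around 240 ms are of interest.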
2.6. Time–Frequency Representations and Event-Related Field Analysis
Time–frequency representations were calculated by using a fast Fourier transformation. An adaptive sliding time window including 5 cycles was shifted in steps of 50 ms from −1 s to 1 s after word onset. Data were padded up to 5 s. A single Hanning taper was applied, and power was estimated in steps of 1 Hz between 2 and 40 Hz. The time–frequency analysis was performed separately for horizontal and vertical planar gradiometers, and the pairs of planar gradiometers were combined afterwards. The time from 600 ms before word onset to 100 ms before word onset served as a baseline.
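The adaptive sliding-window analysis described above (a Hanning-tapered window covering 5 cycles at each frequency, shifted in 50 ms steps) can be sketched for a single channel as follows; this is a simplified NumPy analog of the FieldTrip computation, not the original code.

```python
import numpy as np

def tfr_hanning(sig, fs=1000.0, freqs=range(2, 41), tstep=0.05, n_cycles=5):
    """Single-channel time-frequency power with an adaptive sliding window:
    at each frequency f, a Hanning-tapered segment of length n_cycles/f
    seconds is Fourier-transformed and the power at f is kept.
    Returns (power: freqs x times, time axis in seconds)."""
    n = len(sig)
    step = int(tstep * fs)
    centers = np.arange(0, n, step)
    power = np.full((len(freqs), len(centers)), np.nan)
    for i, f in enumerate(freqs):
        wlen = int(round(n_cycles / f * fs))   # window shrinks with frequency
        taper = np.hanning(wlen)
        for j, c in enumerate(centers):
            a, b = c - wlen // 2, c - wlen // 2 + wlen
            if a < 0 or b > n:
                continue                        # window falls outside the epoch
            spec = np.fft.rfft(sig[a:b] * taper)
            k = int(round(f * wlen / fs))       # FFT bin closest to f
            power[i, j] = np.abs(spec[k]) ** 2
    return power, centers / fs
```

In the actual analysis this is computed per gradiometer, the planar pairs are combined, and power is expressed relative to the −600 to −100 ms baseline.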
For the computation of ERFs, data were filtered with a low-pass filter of 30 Hz. For each subject, epochs from −1 s to 1 s after word onset were averaged; the time interval from −200 ms to word onset (=0 ms) served as the baseline. Horizontal and vertical planar gradiometers were combined.
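The ERF computation reduces to averaging and baseline correction; a minimal sketch follows, with the planar-gradiometer combination assumed to be the usual root-sum-of-squares rule (the paper does not state the combination method explicitly).

```python
import numpy as np

def erf(epochs, fs=1000.0, t0=1.0, baseline=(-0.2, 0.0)):
    """Average epochs (trials x channels x samples) into an event-related
    field and subtract the mean of the pre-stimulus baseline window.
    t0 is the word onset in seconds from epoch start."""
    avg = epochs.mean(axis=0)
    b0 = int((t0 + baseline[0]) * fs)
    b1 = int((t0 + baseline[1]) * fs)
    return avg - avg[:, b0:b1].mean(axis=1, keepdims=True)

def combine_planar(horiz, vert):
    """Combine horizontal/vertical planar gradiometer pairs into one
    orientation-independent amplitude via the root sum of squares
    (a common convention, assumed here)."""
    return np.sqrt(horiz ** 2 + vert ** 2)
```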
2.7. Statistics
Differences in reaction time between word conditions and question types were tested with an ANOVA.
Considering the multidimensionality of MEG data, a procedure that effectively corrects for multiple comparisons, a non-parametric randomisation test, was used for the frequency and ERF analyses [37]. With regard to the frequency analysis, the contrast between onomatopoetic and non-onomatopoetic words was run in the alpha and beta range (8–25 Hz), across the time window between 0 and 1 s after word onset (no averaging over time) and on the average of the activity of 9 left hemispheric temporal channels (Figure S1) that were selected on the basis of the results of a previous MEG localizer study targeting the auditory cortex [14]. A one-sided t-test for dependent samples was used. T-values of the time–frequency samples passing the significance threshold (p < 0.05) were selected and clustered with adjacent time and frequency bins. A cluster-level statistic was then calculated by taking the sum of the t-values of the samples within every cluster. A non-parametric permutation test, which consisted of computing 1000 random permutations of the condition labels, was used to obtain a distribution of the cluster statistic; the significance level was set to p < 0.05.
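The cluster statistic and permutation scheme described above can be sketched for the simplified case of one dimension (time) on a single channel average; the sign-flip permutation of paired differences follows the standard non-parametric cluster procedure [37], but this is an illustrative reimplementation, not the FieldTrip code.

```python
import numpy as np
from scipy.stats import t as tdist

def cluster_perm_test(cond_a, cond_b, n_perm=1000, alpha=0.05, seed=0):
    """One-sided cluster-based permutation test for paired data
    (subjects x samples), simplified to one dimension (time).
    Returns the largest observed cluster mass and its permutation p-value."""
    rng = np.random.default_rng(seed)
    diff = cond_a - cond_b
    n_sub = diff.shape[0]

    def t_vals(d):
        # dependent-samples t-value at each time sample
        return d.mean(0) / (d.std(0, ddof=1) / np.sqrt(n_sub))

    def max_cluster_mass(t, thresh):
        # sum t-values over runs of adjacent supra-threshold samples,
        # keep the largest cluster mass
        best, run = 0.0, 0.0
        for v in t:
            run = run + v if v > thresh else 0.0
            best = max(best, run)
        return best

    thresh = tdist.ppf(1 - alpha, df=n_sub - 1)   # one-sided sample threshold
    obs = max_cluster_mass(t_vals(diff), thresh)

    null = np.empty(n_perm)
    for i in range(n_perm):
        # permute condition labels per subject = flip the sign of the difference
        flips = rng.choice([-1.0, 1.0], size=(n_sub, 1))
        null[i] = max_cluster_mass(t_vals(diff * flips), thresh)
    p = (np.sum(null >= obs) + 1) / (n_perm + 1)
    return obs, p
```

Summing t-values within clusters of adjacent significant bins, as done here over time, extends naturally to the time-by-frequency grid used in the actual analysis.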
The same procedure was applied to the statistical analysis of ERFs for the contrast between the onomatopoetic and non-onomatopoetic verb condition. The analysis included all channels. Considering the evidence for early semantic processes [38,39,40,41], we targeted the time window between 100 and 300 ms after word onset to detect semantically related components. Group differences in ERF amplitude were also tested with a one-sided t-test, as onomatopoetic verbs were expected to elicit larger amplitudes.
3. Results
3.1. Behavioural Results
The reaction time for onomatopoetic verbs (on average, 741 ms ± 266 ms) was significantly shorter than for non-onomatopoetic words (on average, 748 ms ± 326 ms; p < 0.001). The type of question did not have a significant effect on reaction times (p = 0.465). Missing responses amounted to, on average, 0.3% per subject.
Incorrect responses occurred in an average of 6.4% of trials per subject. No participant thus exceeded the 15% error cut-off, at which the participant’s data would have been discarded: this suggests that the task was not too difficult for the participants. As no participant was able to correctly guess the purpose of the study, correct trials of all subjects entered the analyses.
3.2. Time–Frequency Representations
A statistical analysis of alpha and beta power on the nine selected channels yielded no significant result; no negative cluster emerged. However, on a descriptive level, differences in alpha and beta power emerged mainly in the left temporal channel selection (Figure 2). Here, a desynchronization in both frequency ranges was visible starting at about 200 ms after word onset, both in the onomatopoetic and the non-onomatopoetic verb condition (Figure 2a,b). The onomatopoetic condition showed a slightly increased alpha desynchronization between 400 and 600 ms, and beta desynchronization between 0 and 200 ms as well as at about 700 ms after stimulus onset (Figure 2c). A descriptively stronger synchronization in the alpha range between 200 and 400 ms and in the beta range around 400–500 ms was also visible.
3.3. Event-Related Fields
ERFs analyses showed a statistically significant difference (p = 0.033) between the onomatopoetic and non-onomatopoetic condition around 240 ms after word onset with larger amplitudes for onomatopoetic words (Figure 3 and Figure 4). The difference emerged on centro-parietal channels and then shifted to slightly right lateralised sites.
4. Discussion
Accuracy results showed that the participants did semantically process the words in the given time. Reaction time was shorter for onomatopoetic in comparison to non-onomatopoetic verbs, even though familiarity was significantly lower for onomatopoetic verbs and should thus have increased reaction time. This suggests that onomatopoetic words are easier to understand, possibly owing to the non-arbitrary link between the word sound and its meaning. In contrast, the oscillatory and ERF patterns of activation seem to indicate a more effortful processing of onomatopoetic verbs. In a behavioural study also applying auditory onomatopoetic versus control verbs, no difference in reaction time emerged [28]. Since in that study the task consisted of distinguishing words from pseudo-words, a possible difference in processing ease was suggested to be obscured by task-related decision making and motor processes, which might require more time than the lexical processing. This suggests that semantic versus lexical processing, which reflects the depth of linguistic processing, may be responsible for the emergence of behavioural effects. A role of the depth of semantic processes in the emergence of embodiment effects was indeed shown in a previous study of our group, where semantic discrimination impacted the modulation of verb processing as induced by electrical stimulation [42]. However, differences in reaction time in the current study should be interpreted with caution, since our task was not a simple reaction time task as in Peeters’ study.
Both onomatopoetic and non-onomatopoetic words showed alpha and beta desynchronization starting at about 200 ms after word onset in the left temporal lobe: this result adds evidence to the role of alpha and beta desynchronization as a marker of semantic processing. Although not reaching statistical significance, the slightly decreased alpha and beta power accompanying onomatopoetic verbs in the selected left temporal channels suggests that this linguistically predominant hemisphere might be sensitive to onomatopoeias. Similarly, increased left temporal beta desynchronization accompanies words implying loud vs. quiet actions [14]. On the basis of these results, onomatopoetic verbs were expected to cause a stronger recruitment of the auditory cortex due to their linking function between semantics and phonetics. The synchronization visible in the alpha band around 200–400 ms and in the low beta band around 400–500 ms is more difficult to explain. It was not expected as a marker of increased cortical engagement in the context of embodied semantics, but considering its latency, we cannot exclude a relation to particular diverging semantic aspects between the two conditions. Beta oscillations in particular are also related to complex linguistic sub-processes, to expectancy violation and attention as well as to working memory [43]. Whether familiarity, which was rated higher for non-onomatopoetic words, might be responsible for this effect remains unclear. One limitation of the current study is that additional word-related parameters such as imageability, age of acquisition and emotional valence were not rated and controlled for. Possibly, even more linguistic parameters might affect ERF amplitude or brain oscillations; this needs to be determined with studies specifically designed for this purpose.
To our knowledge, this is the first study addressing oscillatory correlates of onomatopoetic versus non-onomatopoetic verb processing, and we cannot report a significant difference in brain oscillations. Previous studies using interjections compared to verbs point to stronger onomatopoetic qualities of these words and to a stronger activation of the auditory cortex. This might explain why our word stimuli, with weaker onomatopoetic qualities, did not engage the auditory cortex as much as previously used stimuli. Although previous studies have matched interjections and control words for imageability, familiarity and age of acquisition [24], the two conditions included different grammatical categories. The use of verbs in the present study allowed a better control of grammatical aspects as well as of other related parameters such as length, word frequency and loudness. By controlling for linguistically confounding effects, we improved the comparability between conditions. Increasing semantic task difficulty might help determine a neurophysiological effect of this subtle semantic quality that is the onomatopoeia. It is worth noting that half of the words used in our study described events that were not primarily associated with human actions, but rather with environmental events (e.g., “surren”—to whir, “zischen”—to hiss and “plaetschern”—to splash). Since environmental events and human actions were balanced between conditions, the sound source should not have affected results. Still, it might have impaired simulation processes by moving the attentional focus to an extra-personal space. Verbs related to actions in which participants can envision themselves as actors are likely to induce stronger simulation.
ERF analysis showed a significant effect emerging at about 240 ms after word onset in the centro-parietal sensors, suggesting increased cortical activation related to onomatopoetic verbs. This hints at a more effortful processing of onomatopoetic verbs: as proposed in a previous study [28], onomatopoetic verbs have a duality of lexical and sound components, which creates a processing conflict. Peeters [28] argued that this is compensated by an easier understanding due to the link between the word content and the way the word is pronounced. While this was not confirmed by Peeters’ behavioural results, the current findings point in that direction, showing faster reaction times following onomatopoetic verbs despite the jittered time interval between word and prompt onset.
The current results are in line with those of EEG studies showing differences in the ERPs when comparing acoustically presented onomatopoetic verbs to control verbs [28] as well as when comparing visually presented ideophones (which are regarded as either very similar to or the same as interjections) to control adverbs [23]. Peeters [28] found a significant amplitude decrease of the N2 component, a less negative-going N400 and a late-positive deflection, distributed over all cortical areas, compared to the control words. Lockwood and Tuomainen [23] found ERP effects at roughly the same time points as Peeters [28], but with a more negative-going N400 for ideophones than for control words. We found significant differences in ERFs at about 240 ms after stimulus onset. This result might depend on mechanisms similar to those related to the P2 modulation in Lockwood and Tuomainen’s [23] study, that is, the load of sensory (auditory) information embedded in onomatopoetic words. Our data showed no significant late-positivity effect as in the two mentioned studies [23,28]; however, the interpretation of more effortful retrieval might depend on the use of ideophones, and the enhanced difficulty of making meta-lexical decisions [28] is fairly task-specific.
Clinical Applications
Possible clinical applications of the grounded cognition framework have been previously proposed [44]. It was suggested that patients with aphasia and lesions in motor areas could benefit from cognitive training with words that imply movement. This might add to conventional movement therapies and is supposed to induce neuroplasticity and regeneration in the affected areas. The effects of linguistic cognitive training on neural plasticity have been shown in healthy volunteers, thus delivering encouraging results [45]. First clinical tests have also been performed, but only as proofs of concept and not in large cohorts of patients [46]. A similar cognitive improvement might be aimed at in patients with aphasia and lesions in auditory areas by applying linguistic training with sound-related words. The current ERF results suggest that onomatopoetic verbs might suit such cognitive therapy programs.
Author Contributions: Conceptualization, D.R., A.K., A.S., K.B.-R. and V.N.; methodology, D.R., A.K., K.B.-R. and V.N.; software, D.R., A.K. and V.N.; validation, D.R., A.K. and V.N.; formal analysis, D.R. and V.N.; investigation, D.R. and V.N.; resources, A.S. and K.B.-R.; data curation, D.R. and V.N.; writing—original draft preparation, D.R.; writing—review and editing, D.R., A.K., A.S., K.B.-R. and V.N.; visualization, D.R. and V.N.; supervision, K.B.-R. and V.N.; project administration, A.S. and K.B.-R.; funding acquisition, D.R., A.S. and K.B.-R. All authors have read and agreed to the published version of the manuscript.
Funding: This work was supported by the German Research Foundation (DFG project number 192776181-SFB991-B03), and the APC was funded by Universitäts- und Landesbibliothek Düsseldorf.
Institutional Review Board Statement: The study was in accordance with the Declaration of Helsinki and was approved by the local Ethics Committee of the Medical Faculty of the Heinrich Heine University, Duesseldorf (study number 4814R). Participants received financial compensation for their participation.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Conflicts of Interest: The authors declare no conflict of interest.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Figure 2. Grand average time–frequency representations of the averaged selected left temporal channels for (a) the onomatopoetic verb condition, (b) the non-onomatopoetic verb condition and (c) the difference between the onomatopoetic and non-onomatopoetic verb condition.
Figure 3. Statistical results of ERFs analysis: channels showing a significant effect (*) in the shown time interval.
Figure 4. Averaged ERF amplitudes for onomatopoetic verbs and non-onomatopoetic verbs until 600 ms after word onset across all channels showing a significant effect (see Figure 3).
Supplementary Materials
The following supporting information can be downloaded at:
References
1. Barsalou, L.W. Grounded Cognition. Annu. Rev. Psychol.; 2008; 59, pp. 617-645. [DOI: https://dx.doi.org/10.1146/annurev.psych.59.103006.093639]
2. Aziz-Zadeh, L.; Wilson, S.M.; Rizzolatti, G.; Iacoboni, M. Congruent Embodied Representations for Visually Presented Actions and Linguistic Phrases Describing Actions. Curr. Biol.; 2006; 16, pp. 1818-1823. [DOI: https://dx.doi.org/10.1016/j.cub.2006.07.060]
3. Boulenger, V.; Hauk, O.; Pulvermüller, F. Grasping Ideas with the Motor System: Semantic Somatotopy in Idiom Comprehension. Cereb. Cortex; 2009; 19, pp. 1905-1914. [DOI: https://dx.doi.org/10.1093/cercor/bhn217]
4. Kemmerer, D.; Castillo, J.G.; Talavage, T.; Patterson, S.; Wiley, C. Neuroanatomical distribution of five semantic components of verbs: Evidence from fMRI. Brain Lang.; 2008; 107, pp. 16-43. [DOI: https://dx.doi.org/10.1016/j.bandl.2007.09.003]
5. Klepp, A.; Weissler, H.; Niccolai, V.; Terhalle, A.; Geisler, H.; Schnitzler, A.; Biermann-Ruben, K. Neuromagnetic hand and foot motor sources recruited during action verb processing. Brain Lang.; 2014; 128, pp. 41-52. [DOI: https://dx.doi.org/10.1016/j.bandl.2013.12.001]
6. Niccolai, V.; Klepp, A.; Weissler, H.; Hoogenboom, N.; Schnitzler, A.; Biermann-Ruben, K. Grasping Hand Verbs: Oscillatory Beta and Alpha Correlates of Action-Word Processing. PLoS ONE; 2014; 9, e108059. [DOI: https://dx.doi.org/10.1371/journal.pone.0108059]
7. Rüschemeyer, S.-A.; Brass, M.; Friederici, A.D. Comprehending Prehending: Neural Correlates of Processing Verbs with Motor Stems. J. Cogn. Neurosci.; 2007; 19, pp. 855-865. [DOI: https://dx.doi.org/10.1162/jocn.2007.19.5.855]
8. Tettamanti, M.; Buccino, G.; Saccuman, M.C.; Gallese, V.; Danna, M.; Scifo, P.; Fazio, F.; Rizzolatti, G.; Cappa, S.F.; Perani, D. Listening to Action-related Sentences Activates Fronto-parietal Motor Circuits. J. Cogn. Neurosci.; 2005; 17, pp. 273-281. [DOI: https://dx.doi.org/10.1162/0898929053124965]
9. Kiefer, M.; Sim, E.-J.; Herrnberger, B.; Grothe, J.; Hoenig, K. The Sound of Concepts: Four Markers for a Link between Auditory and Conceptual Brain Systems. J. Neurosci.; 2008; 28, pp. 12224-12230. [DOI: https://dx.doi.org/10.1523/JNEUROSCI.3579-08.2008]
10. Cao, L.; Klepp, A.; Schnitzler, A.; Gross, J.; Biermann-Ruben, K. Auditory perception modulated by word reading. Exp. Brain Res.; 2016; 234, pp. 3049-3057. [DOI: https://dx.doi.org/10.1007/s00221-016-4706-5]
11. Engel, A.K.; Fries, P. Beta-band oscillations—signalling the status quo?. Curr. Opin. Neurobiol.; 2010; 20, pp. 156-165. [DOI: https://dx.doi.org/10.1016/j.conb.2010.02.015]
12. Pfurtscheller, G.; Stancák, A.; Neuper, C. Event-related synchronization (ERS) in the alpha band—An electrophysiological correlate of cortical idling: A review. Int. J. Psychophysiol.; 1996; 24, pp. 39-46. [DOI: https://dx.doi.org/10.1016/S0167-8760(96)00066-9]
13. Weisz, N.; Hartmann, T.; Müller, N.; Lorenz, I.; Obleser, J. Alpha Rhythms in Audition: Cognitive and Clinical Perspectives. Front. Psychol.; 2011; 2, 73. [DOI: https://dx.doi.org/10.3389/fpsyg.2011.00073]
14. Niccolai, V.; Klepp, A.; van Dijk, H.; Schnitzler, A.; Biermann-Ruben, K. Auditory cortex sensitivity to the loudness attribute of verbs. Brain Lang.; 2020; 202, 104726. [DOI: https://dx.doi.org/10.1016/j.bandl.2019.104726]
15. Han, J.-H.; Choi, W.; Chang, Y.; Jeong, O.-R.; Nam, K. Neuroanatomical Analysis for Onomatopoeia and Phainomime Words: fMRI Study. Advances in Natural Computation. ICNC 2005. Lecture Notes in Computer Science; Wang, L.; Chen, K.; Ong, Y.S., Eds.; Springer: Berlin/Heidelberg, Germany, 2005; Volume 3610. [DOI: https://dx.doi.org/10.1007/11539087_115]
16. Hinton, L. Sound Symbolism; Cambridge University Press: Cambridge, UK, 1997.
17. Osaka, N. Walk-related mimic word activates the extrastriate visual cortex in the human brain: An fMRI study. Behav. Brain Res.; 2009; 198, pp. 186-189. [DOI: https://dx.doi.org/10.1016/j.bbr.2008.10.042]
18. Osaka, N. Ideomotor response and the neural representation of implied crying in the human brain: An fMRI study using onomatopoeia. Jpn. Psychol. Res.; 2011; 53, pp. 372-378. [DOI: https://dx.doi.org/10.1111/j.1468-5884.2011.00489.x]
19. Osaka, N.; Osaka, M. Gaze-related mimic word activates the frontal eye field and related network in the human brain: An fMRI study. Neurosci. Lett.; 2009; 461, pp. 65-68. [DOI: https://dx.doi.org/10.1016/j.neulet.2009.06.023]
20. Osaka, N.; Osaka, M.; Kondo, H.; Morishita, M.; Fukuyama, H.; Shibasaki, H. An emotion-based facial expression word activates laughter module in the human brain: A functional magnetic resonance imaging study. Neurosci. Lett.; 2003; 340, pp. 127-130. [DOI: https://dx.doi.org/10.1016/S0304-3940(03)00093-4]
21. Osaka, N.; Osaka, M.; Morishita, M.; Kondo, H.; Fukuyama, H. A word expressing affective pain activates the anterior cingulate cortex in the human brain: An fMRI study. Behav. Brain Res.; 2004; 153, pp. 123-127. [DOI: https://dx.doi.org/10.1016/j.bbr.2003.11.013]
22. Hashimoto, T.; Usui, N.; Taira, M.; Nose, I.; Haji, T.; Kojima, S. The neural mechanism associated with the processing of onomatopoeic sounds. NeuroImage; 2006; 31, pp. 1762-1770. [DOI: https://dx.doi.org/10.1016/j.neuroimage.2006.02.019]
23. Lockwood, G.; Tuomainen, J. Ideophones in Japanese modulate the P2 and late positive complex responses. Front. Psychol.; 2015; 6, 933. [DOI: https://dx.doi.org/10.3389/fpsyg.2015.00933] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/26191031]
24. Kanero, J.; Imai, M.; Okuda, J.; Okada, H.; Matsuda, T. How Sound Symbolism Is Processed in the Brain: A Study on Japanese Mimetic Words. PLoS ONE; 2014; 9, e97905. [DOI: https://dx.doi.org/10.1371/journal.pone.0097905] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/24840874]
25. Manfredi, M.; Cohn, N.; Kutas, M. When a hit sounds like a kiss: An electrophysiological exploration of semantic processing in visual narrative. Brain Lang.; 2017; 169, pp. 28-38. [DOI: https://dx.doi.org/10.1016/j.bandl.2017.02.001] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28242517]
26. Egashira, Y.; Choi, D.; Motoi, M.; Nishimura, T.; Watanuki, S. Differences in Event-Related Potential Responses to Japanese Onomatopoeias and Common Words. Psychology; 2015; 6, pp. 1653-1660. [DOI: https://dx.doi.org/10.4236/psych.2015.613161]
27. Cummings, A.; Čeponienė, R.; Koyama, A.; Saygin, A.; Townsend, J.; Dick, F. Auditory semantic networks for words and natural sounds. Brain Res.; 2006; 1115, pp. 92-107. [DOI: https://dx.doi.org/10.1016/j.brainres.2006.07.050] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/16962567]
28. Peeters, D. Processing consequences of onomatopoeic iconicity in spoken language comprehension. Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016); Cognitive Science Society: Philadelphia, PA, USA, 10–13 August 2016; pp. 1632-1647.
29. Oldfield, R.C. The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia; 1971; 9, pp. 97-113. [DOI: https://dx.doi.org/10.1016/0028-3932(71)90067-4]
30. Knecht, S.; Dräger, B.; Deppe, M.; Bobe, L.; Lohmann, H.; Flöel, A.; Ringelstein, E.-B.; Henningsen, H. Handedness and hemispheric language dominance in healthy humans. Brain; 2000; 123, pp. 2512-2518. [DOI: https://dx.doi.org/10.1093/brain/123.12.2512]
31. Perani, D.; Dehaene, S.; Grassi, F.; Cohen, L.; Cappa, S.F.; Dupoux, E.; Fazio, F.; Mehler, J. Brain processing of native and foreign languages. NeuroReport; 1996; 7, pp. 2439-2444. [DOI: https://dx.doi.org/10.1097/00001756-199611040-00007]
32. Sakamoto, M.; Ueda, Y.; Doizaki, R.; Shimizu, Y. Communication Support System Between Japanese Patients and Foreign Doctors Using Onomatopoeia to Express Pain Symptoms. J. Adv. Comput. Intell. Intell. Inform.; 2014; 18, pp. 1020-1025. [DOI: https://dx.doi.org/10.20965/jaciii.2014.p1020]
33. Van Casteren, M.; Davis, M.H. Match: A program to assist in matching the conditions of factorial experiments. Behav. Res. Methods; 2007; 39, pp. 973-978. [DOI: https://dx.doi.org/10.3758/BF03192992]
34. Oostenveld, R.; Fries, P.; Maris, E.; Schoffelen, J.-M. FieldTrip: Open Source Software for Advanced Analysis of MEG, EEG, and Invasive Electrophysiological Data. Comput. Intell. Neurosci.; 2011; 2011, 156869. [DOI: https://dx.doi.org/10.1155/2011/156869] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/21253357]
35. R Development Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2013; Available online: https://www.R-project.org/ (accessed on 28 January 2019).
36. Jung, T.-P.; Makeig, S.; Westerfield, M.; Townsend, J.; Courchesne, E.; Sejnowski, T.J. Removal of eye activity artifacts from visual event-related potentials in normal and clinical subjects. Clin. Neurophysiol.; 2000; 111, pp. 1745-1758. [DOI: https://dx.doi.org/10.1016/S1388-2457(00)00386-2]
37. Maris, E.; Oostenveld, R. Nonparametric statistical testing of EEG- and MEG-data. J. Neurosci. Methods; 2007; 164, pp. 177-190. [DOI: https://dx.doi.org/10.1016/j.jneumeth.2007.03.024] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/17517438]
38. Shtyrov, Y.; Hauk, O.; Pulvermüller, F. Distributed neuronal networks for encoding category-specific semantic information: The mismatch negativity to action words. Eur. J. Neurosci.; 2004; 19, pp. 1083-1092. [DOI: https://dx.doi.org/10.1111/j.0953-816X.2004.03126.x]
39. Assadollahi, R.; Rockstroh, B. Neuromagnetic brain responses to words from semantic sub-and supercategories. BMC Neurosci.; 2005; 6, 57. [DOI: https://dx.doi.org/10.1186/1471-2202-6-57]
40. Ortigue, S.; Michel, C.M.; Murray, M.M.; Mohr, C.; Carbonnel, S.; Landis, T. Electrical neuroimaging reveals early generator modulation to emotional words. NeuroImage; 2004; 21, pp. 1242-1251. [DOI: https://dx.doi.org/10.1016/j.neuroimage.2003.11.007]
41. Kelly, A.C.; Uddin, L.Q.; Biswal, B.B.; Castellanos, F.X.; Milham, M.P. Competition between functional brain networks mediates behavioral variability. Neuroimage; 2008; 39, pp. 527-537. [DOI: https://dx.doi.org/10.1016/j.neuroimage.2007.08.008]
42. Niccolai, V.; Klepp, A.; Indefrey, P.; Schnitzler, A.; Biermann-Ruben, K. Semantic discrimination impacts tDCS modulation of verb processing. Sci. Rep.; 2017; 7, 17162. [DOI: https://dx.doi.org/10.1038/s41598-017-17326-w]
43. Weiss, S.; Mueller, H.M. “Too Many betas do not Spoil the Broth”: The Role of Beta Brain Oscillations in Language Processing. Front. Psychol.; 2012; 3, 201. [DOI: https://dx.doi.org/10.3389/fpsyg.2012.00201]
44. Pulvermüller, F.; Berthier, M.L. Aphasia therapy on a neuroscience basis. Aphasiology; 2008; 22, pp. 563-599. [DOI: https://dx.doi.org/10.1080/02687030701612213]
45. Ghio, M.; Locatelli, M.; Tettamanti, A.; Perani, D.; Gatti, R.; Tettamanti, M. Cognitive training with action-related verbs induces neural plasticity in the action representation system as assessed by gray matter brain morphometry. Neuropsychologia; 2018; 114, pp. 186-194. [DOI: https://dx.doi.org/10.1016/j.neuropsychologia.2018.04.036] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/29723600]
46. Durand, E.; Berroir, P.; Ansaldo, A.I. The Neural and Behavioral Correlates of Anomia Recovery following Personalized Observation, Execution, and Mental Imagery Therapy: A Proof of Concept. Neural Plast.; 2018; 2018, 5943759. [DOI: https://dx.doi.org/10.1155/2018/5943759] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/30154837]
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Grounded cognition theory postulates that cognitive processes with motor or sensory content are processed by the brain networks involved in motor execution and perception, respectively. Processing words with auditory features has been shown to activate the auditory cortex. Our study aimed at determining whether onomatopoetic verbs (e.g., “tröpfeln”—to drip), whose articulation reproduces the sound of the respective actions, engage the auditory cortex more than non-onomatopoetic verbs. Alpha and beta brain frequencies as well as event-related fields (ERFs) were targeted as potential neurophysiological correlates of this linguistic auditory quality. Twenty participants were measured with magnetoencephalography (MEG) while semantically processing visually presented onomatopoetic and non-onomatopoetic German verbs. While a descriptively stronger left temporal alpha desynchronization for onomatopoetic verbs did not reach statistical significance, a larger ERF for onomatopoetic verbs emerged at about 240 ms in the centro-parietal area. Findings suggest increased cortical activation related to onomatopoeias in linguistically relevant areas.
Details
1,2 Institute of Clinical Neuroscience and Medical Psychology, Medical Faculty, Heinrich-Heine University, 40225 Duesseldorf, Germany