1. Introduction
Emotion recognition is omnipresent in social interactions [1] and represents an important social competence [2]. Faces provide relevant cues for the recognition of emotions [2,3]. One theoretical explanation of how emotions are recognised from faces is provided by the Facial Feedback Hypothesis (FFH) [4]. The present study therefore compares stroke patients with vs. without unilateral central facial paresis, i.e., a partial inability to perform facial movements [5], in order to test the FFH prediction of a specific deficit of visual facial emotion recognition in individuals with central facial paresis.
Emotion Processing and the Role of Facial Feedback
Facial emotion expressions are part of nonverbal communication [3] and are regarded as some of the most important nonverbal features for the identification of emotions [6]. Facial expression can be highly variable due to the precise control of the different facial muscles [1], which can be driven voluntarily or affectively [7], although the basic emotions framework considers a set of emotions to be highly elementary, unique and independent of culture, time and place [8]. These basic emotions are anger, disgust, fear, joy, sadness and surprise [9,10], and each is characterised by specific patterns of facial muscle activity [8,11]. These congenital, ubiquitous basic emotions [12] are typically used to assess (facial) emotion recognition [13].
The accuracy of emotion recognition varies depending on the particular emotion presented. Joy is detected significantly more accurately and quickly than all other basic emotions, whereas fear is detected significantly less accurately and more slowly than the other emotions [14]. The basic emotions of surprise and anger, as well as disgust and sadness, are identified with similar accuracy (performance listed in descending order) [14]. Besides differences per emotion, emotion recognition depends on sex and age: women are faster at facial emotion recognition than men [15], and emotion recognition performance decreases with increasing age [16]. It has not yet been conclusively clarified whether the processing of emotions is innate [4,17] or whether a concept of emotions must first be learned [18]. A combination is also conceivable: basic emotions may be biologically anchored [12] and innate [17], while all other, more complex emotions [8] have to be learned [12]. The localisation of emotion processing is also a matter of controversy, with evidence for right, left, or bilateral hemispheric activation [19]. Dominance of the right hemisphere was described historically [20], whereas recent evidence has highlighted a combination of different neuronal networks with different lateralisation [19].
In emotion processing, the importance of afferent information from the body, e.g., facial expression, is emphasised [18]. In this sense, the FFH provides a theoretical account of the process of facial emotion recognition. It postulates that other persons’ emotions are recognised via one’s own facial information [4]. The decoding requires the imitation of the facial expression of the other person and the corresponding proprioceptive facial feedback [21,22] (‘facial reflex’ is a synonym for ‘facial feedback’ [11]). Neal and Chartrand [22] summarised the working steps of the FFH: (1) imitation of the facial expression of the communication partner (discrete, unconscious, fast, automated and specific to the emotion); (2) transmission of afferent information from the face to the brain; and (3) experience and recognition of the emotion.
Whereas spontaneous, quick and unobtrusive imitation with one’s own face is basically unproblematic in healthy persons [23], pathological conditions affecting facial integrity may impair the ability to initiate or imitate the facial expressions corresponding to the basic emotions. Such conditions include, for example, facial paresis, a unilateral or bilateral palsy of the facial musculature following a peripheral or central lesion [24]. The central form of facial paresis considered in this study typically presents unilaterally, contralateral to the central lesion [25], after stroke [26].
Whether and precisely what role facial feedback plays in emotion recognition has not yet been conclusively clarified. Different research results provide evidence both for and against the FFH in cases of limited facial feedback (whether due to illness or artificially induced).
Deficits in facial emotion recognition were reported by Konnerth et al. [27] and Storbeck et al. [28] in patients with peripheral facial paresis/paralysis. Konnerth et al. [27] reported that patients achieved lower accuracy values than healthy controls, although the difference was not significant. Storbeck et al. [28] likewise found that accuracy in facial emotion recognition did not differ significantly between patients with facial paresis and healthy controls. However, in both studies, visual emotion recognition was significantly slower than in the control subjects [27,28]. More specifically, Korb et al. [29] reported differences depending on the paralysed side of the face, with facial emotion recognition being more affected in patients with left-sided rather than right-sided facial palsy. Such findings might be taken as supportive evidence for the FFH, as persons with intact feedback show faster facial emotion recognition times [22,30,31,32,33]. This slowed emotion recognition in patients with peripheral facial palsy could be explained by Niedenthal et al. [33], according to whom self-experienced emotions are recognised earlier than those that are not self-perceived. In contrast, Keillor et al. [34] did not report differences in the accuracy of emotion naming, discrimination or matching tasks in their single-case study of a patient with bilateral facial paralysis in Guillain–Barré syndrome, nor did Bogart and Matsumoto [35] report facial emotion recognition deficits in patients with congenital bilateral facial paresis in Moebius syndrome. However, Calder et al. [36] did observe differences in the accuracy of recognition of at least one basic emotion in patients with Moebius syndrome.
A different way of investigating facial feedback in healthy participants is to inject botulinum toxin into the facial muscles to induce temporary paralysis. Studies using this method showed changes in emotion recognition in terms of accuracy and time [22,32]. These results may point to a direct link between facial feedback and emotion processing [32].
Besides experimentally induced limitation of facial movements and peripheral facial palsy, other disorders can also affect (1) facial movements and (2) facial emotion recognition, for instance, central facial palsy after stroke and Parkinson’s disease. Stroke occurs suddenly due to disturbed blood flow and oxygen deficiency (ischemic stroke) or bleeding (hemorrhagic stroke) in the brain and leads to individual disabilities [37], whereas Parkinson’s disease is a neurodegenerative disorder involving the loss of dopamine in the substantia nigra, resulting in the typical symptoms of rigidity, tremor and bradykinesia [38]. Both central facial palsy after stroke [26,39] and Parkinson’s disease [40,41,42] can have similar effects, i.e., reduced facial expression and therefore reduced facial feedback. According to the FFH, facial feedback, and hence facial integrity, is needed for facial emotion recognition [23]. Both in stroke [43] and in Parkinson’s disease [41], facial emotion recognition can be impaired. However, there is not necessarily a direct correlation between limitations in facial expression and facial emotion recognition, at least in Parkinson’s disease [41].
In summary, there is evidence that patients with limited facial feedback and facial mimicry abilities (e.g., in peripheral facial paresis) are potentially affected by limited facial emotion recognition. To date, to the best of our knowledge, patients with peripheral facial palsy have been studied, whereas patients with central facial palsy have been overlooked.
The care of patients with central facial palsy is insufficient, and rehabilitation guidelines are required [44]. To improve treatment and establish guidelines, deficits and remaining abilities must first be identified. To this end, we designed a study to assess facial emotion recognition abilities in patients with central facial palsy.
Consequently, the aim of the study was to test facial emotion recognition in patients with central facial paresis after stroke, in terms of accuracy and time, with visually presented (i.e., facial) stimuli produced by healthy subjects. Testing different modalities (facial and auditory) in two patient groups (with or without facial paresis after stroke) allows assessment of whether there is a general deficit in emotion recognition (a possibility after stroke [43]) or whether only one particular modality is (more) affected. If there are no deficits in emotion recognition at all, i.e., if performance is comparable to that of healthy control subjects, emotion recognition can be assumed to be intact. Accordingly, the primary research question was: can patients with central facial paresis after stroke recognise facial emotions?
2. Materials and Methods
2.1. Participants
Three groups of participants were considered for this study: (1) patients with unilateral central facial paresis after stroke, (2) patients without facial paresis after stroke and (3) healthy subjects. The data for patient groups (1) and (2) were collected within the study (data are available from the authors on request), whereas the reference values for the healthy subject group (3) were already available [45,46,47] and served as an additional comparison.
The inclusion and exclusion criteria are summarised in Table 1. The patients were referred by various cooperation partners, hospitals and local practices for speech–language therapy. Recruitment and data collection took place in the period from 22 February until 14 May 2019 in Germany.
A total of 67 patients were recruited. Four of these were drop-out cases (one case: disorientation; one case: suspected bucco–facial apraxia with no possibility of assessing facial paresis; two cases: antidepressant medication with suspected altered emotional regulation). The remaining 63 patients were assigned to the study group (patients with central facial paresis, n = 34) or the control group (patients without facial paresis, n = 29) according to their diagnosis of facial paresis. Sociodemographic data and information on lesions, facial paresis, general mental capacities and aphasia for the study and control groups are given in Table A1, Table A2, Table A3, Table A5 and Table A6 (Appendix A).
The study was approved by the local ethics committee (key: EK 271/18) of the Medical Faculty at RWTH Aachen University, and all regulations of the ethics committee were implemented. All experiments were performed in accordance with the relevant guidelines and regulations. All participants signed an informed consent form after receiving detailed information.
2.2. Materials
For both facial emotion recognition and auditory emotion recognition, the same conditions applied: an item was presented (visually or auditorily), the patients had ten seconds to respond and a set of answer options was available. The respective software recorded accuracy and time. For both modalities, a pre-test with ten items (initially randomised, later presented in the same order) was performed; the pre-test ensured that the task was understood [48] (see, also, Appendix B).
2.2.1. Visual Facial Emotion Recognition
In our study, we used the Myfacetraining (MFT) Program (CRAFTA Cranio Facial Therapy Academy, Hamburg, Germany) [47,49], a standardised test of accuracy and time taken in facial emotion recognition [47,49]. Forty-two images of persons, each showing a basic emotion with their face, were presented on a screen. Each person was first shown in a neutral position before changing to an emotional facial expression (basic emotion). Six answer options, one per basic emotion, were displayed on the screen [47] (see, also, Appendix B).
2.2.2. Auditory Emotion Recognition
In addition to faces, voices are the most important modality in emotional communication [1]. A sub-portion of the Montreal Affective Voices (MAVs) [45] was used for the assessment. These are emotional, non-linguistic vocal expressions of /a/ (comparable to the a in apple, British English). Sixty items covering the six basic emotions [45] were used. The MAVs were presented in a purpose-built experiment programmed in the software PsychoPy, version 3.0.0b9 [50] (see, also, Appendix B).
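To illustrate how such a trial can be implemented, the following minimal PsychoPy sketch plays one MAV item, waits up to ten seconds for a response and records accuracy and reaction time. It is a sketch under assumptions, not the study’s actual script: the stimulus file name, the keyboard response mapping and the on-screen layout are hypothetical (in the study, responses were given by selecting or pointing rather than by key presses).

```python
# Minimal sketch of one auditory emotion-recognition trial in PsychoPy.
# Assumptions: keyboard responses and hypothetical file names; not the study script.
from psychopy import core, event, sound, visual

EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]
KEYS = ["1", "2", "3", "4", "5", "6"]  # one response key per basic emotion

win = visual.Window(color="grey")
menu = visual.TextStim(win, text="\n".join(f"{k}: {e}" for k, e in zip(KEYS, EMOTIONS)))
clock = core.Clock()

def run_trial(wav_path, target_emotion, timeout=10.0):
    """Play one MAV item; return (correct, reaction time). correct=None: unanswered."""
    stimulus = sound.Sound(wav_path)
    menu.draw()
    win.flip()
    clock.reset()
    stimulus.play()
    keys = event.waitKeys(maxWait=timeout, keyList=KEYS, timeStamped=clock)
    if keys is None:            # time limit of 10 s exceeded -> item unanswered
        return None, timeout
    key, rt = keys[0]
    return EMOTIONS[KEYS.index(key)] == target_emotion, rt

# e.g.: correct, rt = run_trial("mav_anger.wav", "anger")  # hypothetical file name
```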
2.2.3. Subjective Facial Emotion Recognition: Self-Assessment Questionnaires, Emotion Recognition
Coulson et al. [51] asked relatives of patients with facial paresis for their assessments of the patients’ emotion recognition. Based on this, two standardised questionnaires were designed for the present study to enable the systematic collection of subjective facial emotion recognition data. The Self-Assessment Questionnaires Emotion Recognition Accuracy and Time were used to document self-assessed facial emotion recognition of the six basic emotions (anger, disgust, fear, joy, sadness and surprise) [51]. To allow a differentiated evaluation, one questionnaire was developed to assess accuracy and another to assess the time taken for facial emotion recognition. The questionnaires assess possible changes between pre-morbid and current abilities per basic emotion. In each case, the question featured in the questionnaire was: How well do you recognise the following feelings in other people’s faces? One of three answer options could be selected per item. For Accuracy, the patient evaluated whether the basic emotion in question was recognised with more difficulty, just as well or more easily than before the stroke. For Time, the patient indicated whether the basic emotion was detected more slowly, just as fast or faster than before the stroke. For deteriorations (response options more difficult or slower), a score of −1 was assigned. If the patient did not notice any changes (response options just as well or just as fast), zero points (0) were recorded. For improvements (response options more easily or faster), a score of +1 was assigned, resulting in a total between −6 and +6 per questionnaire.
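As a worked example of this scoring scheme, the sketch below computes the total score of one questionnaire; the response labels are simplified stand-ins for the answer options described above.

```python
# Illustrative scoring of one Self-Assessment Questionnaire (Accuracy or Time).
# "worse" stands for "more difficult"/"slower", "better" for "more easily"/"faster".
BASIC_EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]
POINTS = {"worse": -1, "unchanged": 0, "better": +1}

def questionnaire_score(responses):
    """Sum the -1/0/+1 ratings over the six basic emotions: total in [-6, +6]."""
    return sum(POINTS[responses[emotion]] for emotion in BASIC_EMOTIONS)

example = {"anger": "worse", "disgust": "unchanged", "fear": "worse",
           "joy": "unchanged", "sadness": "worse", "surprise": "unchanged"}
print(questionnaire_score(example))  # -> -3 (perceived deterioration)
```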
2.2.4. Sunnybrook Facial Grading System for Diagnosing Facial Paresis
In order to answer the main research question, all patients were examined in a standardised way to identify possible facial paresis. Only this allowed the patients to be assigned to the study group (participants with central facial paresis) or the control group (participants without central facial paresis). The Sunnybrook Facial Grading System [52,53] was used for the standardised assessment and diagnosis of facial paresis or paralysis. This measurement method is explicitly recommended [54], is considered the current standard in the evaluation of facial paresis [55] and has been used in various studies (e.g., [54,56,57,58,59,60,61,62]). Ross et al. [52] published the original version of the Sunnybrook Facial Grading System in 1996, which was implemented in the present study (German version [53]). For this purpose, a video was made of each patient with an Apple iPod touch (camera at right angles, at the individual height of the chewing plane, 150 cm from the patient’s chin), in which the patients were asked in a standardised manner to show their face at rest and to perform voluntary facial movements (raise eyebrows, close eyes gently, smile with open mouth, show teeth, pucker lips). The videos were evaluated by a speech–language therapist (see, also, Appendix B).
2.3. Statistical Analysis
Two-factorial ANOVAs with post-hoc t-tests were performed, with group (with vs. without facial paresis) as the between-subject factor and modality (facial vs. auditory emotion recognition) as the within-subject factor. Accuracy and time taken for emotion recognition were the dependent variables. To compare the empirical data obtained in the present study with the already available normative data for healthy controls (without stroke and without facial paresis), series of t-tests were subsequently performed separately for accuracy and time: one-sample t-tests compared the patients’ facial and auditory emotion recognition with the healthy subjects’ values; two-factorial ANOVAs and post-hoc t-tests for independent samples compared patients with and without facial paresis; and t-tests for dependent samples compared facial with auditory emotion recognition within patients with and without facial paresis. To analyse subjective emotion recognition in terms of accuracy and time, one-sample t-tests were conducted, and t-tests for dependent samples compared accuracy with time.
Benjamini–Hochberg correction was applied if more than one t-test was conducted.
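For illustration, a minimal sketch of this pipeline is given below: a two-factorial mixed ANOVA (here via the pingouin package) followed by Benjamini–Hochberg adjustment of a family of t-test p-values (via statsmodels). The column names and all values are simulated assumptions, not the study data.

```python
# Sketch of the analysis pipeline on simulated data; not the study data.
import numpy as np
import pandas as pd
import pingouin as pg
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
rows = []
for pid in range(63):  # 34 with and 29 without facial paresis, as in the study
    group = "paresis" if pid < 34 else "no_paresis"
    for modality in ("facial", "auditory"):
        rows.append({"participant": pid, "group": group,
                     "modality": modality, "accuracy": rng.normal(50, 10)})
df = pd.DataFrame(rows)

# Two-factorial ANOVA: group between subjects, modality within subjects
aov = pg.mixed_anova(data=df, dv="accuracy", within="modality",
                     subject="participant", between="group")
print(aov[["Source", "F", "p-unc"]])

# Benjamini-Hochberg adjustment over a family of post-hoc t-test p-values;
# the adjusted values correspond to the reported "after correction" p-values
pvals = [0.001, 0.020, 0.540]  # illustrative raw p-values
_, p_adjusted, _, _ = multipletests(pvals, method="fdr_bh")
print(p_adjusted)
```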
3. Results
The results for objective (accuracy and time) and subjectively perceived success in emotion recognition are summarised in Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Table A4 (Appendix A).
3.1. Accuracy of Facial Emotion Recognition
The ANOVA for accuracy revealed a main effect of group (F(1;61) = 6.620; p = 0.013), a main effect of modality (F(1;61) = 96.535; p < 0.001) and a group × modality interaction (F(1;61) = 18.330; p < 0.001): participants with central facial paresis recognised visually presented basic emotions significantly worse (reduced accuracy) compared to participants without facial paresis (t(49.425) = −3.767; p < 0.001; after correction p = 0.002) and compared to healthy controls (t(33) = −22.888; p < 0.001; after correction p = 0.002). Participants without facial paresis recognised visually presented basic emotions significantly worse (reduced accuracy) compared to healthy controls (t(28) = −10.476; p < 0.001; after correction p = 0.002) (Figure 1).
3.2. Accuracy of Auditory Emotion Recognition
Participants with central facial paresis recognised auditorily presented basic emotions significantly worse (reduced accuracy) compared to healthy controls (t(33) = −13.258; p < 0.001; after correction p = 0.002). Participants without facial paresis recognised auditorily presented basic emotions significantly worse (reduced accuracy) compared to healthy controls (t(28) = −11.259; p < 0.001; after correction p = 0.002). Participants with vs. without central facial paresis did not differ significantly in auditory emotion recognition accuracy (t(61) = 0.616; p = 0.540; after correction p = 0.540) (Figure 2).
3.3. Comparison of Accuracy of Facial and Auditory Emotion Recognition
Participants with central facial paresis recognised visually presented basic emotions significantly worse (reduced accuracy) than auditorily presented basic emotions (t(33) = −11.252; p < 0.001; after correction p = 0.002). Participants without facial paresis recognised visually presented basic emotions significantly worse (reduced accuracy) than auditorily presented basic emotions (t(28) = −3.485; p = 0.002; after correction p = 0.002).
3.4. Time Taken for Facial Emotion Recognition
The ANOVA for time taken revealed no significant main effect of group (F(1;61) = 2.797; p = 0.100), no significant main effect of modality (F(1;61) = 3.311; p = 0.074) and no significant group × modality interaction (F(1;61) = 3.148; p = 0.081): participants with central facial paresis did not recognise visually presented basic emotions significantly more slowly compared to participants without facial paresis (t(61) = 0.414; p = 0.680; after correction p = 0.680). Participants with central facial paresis recognised visually presented basic emotions faster (shorter times) compared to healthy controls, although not significantly after correction (t(33) = −2.442; p = 0.020; after correction p = 0.060). Participants without facial paresis recognised visually presented basic emotions significantly faster (shorter times) compared to healthy controls (t(28) = −2.390; p = 0.024; after correction p = 0.036) (Figure 3).
3.5. Time Taken for Auditory Emotion Recognition
Participants with vs. without central facial paresis did not differ significantly with respect to the average time taken for auditory emotion recognition (t(61) = −1.851; p = 0.069) (Figure 4).
3.6. Comparison of Time Taken for Facial and Auditory Emotion Recognition
Participants with central facial paresis recognised visually presented basic emotions faster (shorter times) than auditorily presented basic emotions, although not significantly after correction (t(33) = −2.269; p = 0.030; after correction p = 0.060). Participants without facial paresis did not recognise visually presented basic emotions significantly differently from auditorily presented basic emotions (t(28) = −0.041; p = 0.968; after correction p = 0.968).
3.7. Subjective Judgement of Emotion Recognition from the Perspective of Participants with Central Facial Paresis
Participants with central facial paresis perceived both the accuracy of facial emotion recognition (mean = −0.71 ± 1.90) and the time taken for facial emotion recognition (mean = −1.91 ± 2.90) as significantly limited (accuracy: t(33) = −2.167; p = 0.038; after correction p = 0.038; time: t(33) = −3.849; p = 0.001; after correction p = 0.003). They judged themselves to be significantly more restricted in terms of the time taken for facial emotion recognition than in terms of accuracy (t(33) = 2.689; p = 0.011; after correction p = 0.017) (Figure 5).
3.8. Further Analysis
To verify the plausibility of the identified pattern, the following control analyses were performed.
A correlation analysis (Pearson’s product-moment correlation) between objective accuracy and objective time taken for facial emotion recognition was performed in patients with and without central facial paresis. In patients with central facial paresis, the accuracy of and the time taken for facial emotion recognition were positively correlated (r = 0.729; p < 0.001); in patients without facial paresis, they were not significantly correlated (r = 0.291; p = 0.126).
Furthermore, the correlation (Pearson’s product-moment correlation) between objective facial emotion recognition accuracy and the severity of facial paresis according to the Sunnybrook Facial Grading System was computed across all patients (with and without facial paresis). The average accuracy of facial emotion recognition and the Sunnybrook score (lower scores indicating more severe paresis) were significantly positively correlated (r = 0.31; p = 0.014).
Moreover, a one-tailed t-test for independent samples on facial emotion recognition accuracy showed no significant difference between patients with left-sided facial paresis (mean = 26.44 ± 11.49) and right-sided facial paresis (mean = 29.25 ± 10.69) (t(32) = −0.734; p = 0.234). Another one-tailed t-test for independent samples on facial emotion recognition time showed no significant difference between patients with left-sided facial paresis (mean = 3.12 ± 0.48) and right-sided facial paresis (mean = 3.17 ± 0.47) (t(32) = −0.322; p = 0.375).
Furthermore, a chi-squared test was performed to compare the number of patients with limitations in general mental capacity in the two groups (Table A5, Appendix A). The groups were comparable (χ2(1, n = 63) = 0.204; p = 0.651). Another chi-squared test compared the number of patients with aphasia in the two groups (Table A6, Appendix A); the groups were again comparable (χ2(1, n = 63) = 1.546; p = 0.214).
Additionally, univariate and multivariate regressions were conducted, with emotion recognition (facial and auditory, accuracy and time taken) as the dependent variable and diagnosis of facial paresis, sex, age, subjective judgement, general mental capacity and time post-onset as independent variables (Table A7 and Table A8, Appendix A). Patients with facial paresis recognised visually presented basic emotions significantly worse (reduced accuracy) compared to patients without facial paresis, both in the univariate regression (beta = −0.444; p < 0.001) and in the multivariate regression (beta = −0.353; p = 0.003).
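For illustration, the sketch below reproduces the types of control analyses described in this section (Pearson correlation, chi-squared comparability check and a regression with a standardised beta) on simulated data. The column names, and z-scoring both sides of the regression to obtain the standardised beta, are our assumptions.

```python
# Sketch of the control analyses on simulated data; not the study data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2_contingency, pearsonr

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "paresis": rng.integers(0, 2, 63),        # 1 = central facial paresis
    "aphasia": rng.integers(0, 2, 63),
    "fer_accuracy": rng.normal(35, 12, 63),   # facial emotion recognition, %
    "fer_time": rng.normal(3.2, 0.5, 63),     # seconds
})

# Pearson product-moment correlation between accuracy and time taken
r, p_r = pearsonr(df["fer_accuracy"], df["fer_time"])

# Chi-squared comparability check on a 2x2 contingency table (group x aphasia)
chi2, p_chi, dof, _ = chi2_contingency(
    pd.crosstab(df["paresis"], df["aphasia"]), correction=False)

# Univariate regression: z-scoring predictor and outcome yields the standardised beta
def z(s):
    return (s - s.mean()) / s.std(ddof=0)

fit = sm.OLS(z(df["fer_accuracy"]), sm.add_constant(z(df["paresis"]))).fit()
print(r, p_r, chi2, p_chi, fit.params.iloc[1])
```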
4. Discussion
This study investigated visual facial emotion recognition (VFER) in patients with and without central facial paresis vs. healthy individuals. The results showed that participants with central facial paresis were significantly less accurate in the facial modality than in the auditory modality. The less accurate VFER in cases of facial paresis, with auditory emotion recognition unaffected, may be due to changes in the facial feedback mechanism. Clinically, this means that VFER in persons with limited facial mimicry abilities, as in central facial paresis, does appear to be affected, in contrast to auditory recognition [36]. Taking into account that we did not test facial mimicry itself (i.e., facial muscle activity was not measured during the emotion-recognition task) but facial emotion recognition, facial paresis can be inferred to be one factor influencing the accuracy of objective facial emotion recognition, possibly via changes in the facial feedback mechanism. This may indicate that the accuracy of objective facial emotion recognition is especially limited when facial feedback is altered by facial paresis. Auditory performance does not appear to be affected by facial paresis (for a similar finding, cf. [36]). Besides facial paresis, the stroke itself could be a factor influencing the accuracy of objective facial emotion recognition in our sample. All participants (with and without facial paresis) had had at least one stroke, and since stroke may also cause deficits in emotion recognition [43], both of our patient groups may be affected. These two potential factors (altered facial feedback and altered central processing due to stroke) underline the relevance of, and need for, studies of patients without stroke but with limited facial feedback, for example, patients with peripheral facial palsy.
Our results reveal significant deficits in the accuracy of facial emotion recognition, in contrast with other studies that did not report any differences, e.g., [27,28,34]. This may be due to the large sample size (participants with facial paresis: n = 34; participants without facial paresis: n = 29) and the inclusion of different phases post-onset, with a wide range of times since stroke (day 5 up to day 6361 post-onset). Conversely, previous studies reported significant limitations in the average time taken for facial emotion recognition, e.g., [27,28], while the participants in the present study showed faster reaction times. This, in turn, could indicate that the participants after stroke replied ‘quick and dirty’ [63] while suffering from other impairments, such as deficits in attention, concentration and memory [64], in addition to the facial paresis after stroke. Regarding a possible systematic connection between the fast, inaccurate responses, the significant positive correlation between objective accuracy and objective time taken for facial emotion recognition in patients with facial paresis provides further insight: the faster a patient with facial paresis responded, the less accurate the response, whereas no such correlation was found in patients without facial paresis. This could indicate that the patients with facial paresis were themselves aware of their deficit in the time taken for facial emotion recognition (as reported in the Self-Assessment Questionnaires Emotion Recognition) but wanted to show their best performance in the test situation and therefore answered as quickly as possible.
The participants with facial paresis subjectively felt limited in VFER in terms of both accuracy and time. They stated that they were more impaired with respect to time than accuracy: they felt that facial emotion recognition had slowed down considerably since the stroke and had become somewhat less accurate. These results provide a new insight into subjective emotion recognition, which was not considered in previous studies. However, the clinical measurement showed the opposite pattern: the patients were clearly less accurate but faster. Thus, the measured performance appears to contradict the subjectively perceived performance.
In the present study, we considered the difference between facial and auditory emotion recognition shown in the results. This may support, for example, the FFH, as mentioned before. Nevertheless, it should be noted that a large part of human emotion is communicated via the face and the voice, as discussed in the literature. To the best of our knowledge, this is the first clinical study to combine these two modalities in a clinical setting [65]. The factors mentioned above (limitations such as deficits in attention, concentration and memory [64], in addition to facial paresis and impaired emotion recognition) influence both study results and everyday communication in the patient groups. Although survival is of primary importance for stroke patients [66], participation is also highly relevant, particularly in the post-acute and chronic phases [67]. Since both groups of patients showed significantly reduced accuracy of facial and auditory emotion recognition compared to healthy subjects, intervention recommendations for both groups and both modalities are required. Although the evidence for the FFH is limited [68], it can be used as an explanatory model for assessment and rehabilitation [69].
4.1. The Relevance of Assessment of Emotion Recognition
The described results not only provide evidence for the FFH and certain effects of stroke but also have implications for the treatment of patients with central facial paresis after stroke. As early as 2013, Dobel et al. [69] called for the examination of facial emotion recognition in patients with facial paresis using basic emotions. The present study supports this demand and once again advocates it.
Since the accuracy of facial emotion recognition can be impaired, especially in patients with facial paresis after stroke, appropriate assessment and therapy are recommended for this patient group. Deficits should be assessed because the performance limitations may have negative consequences for communication and may increase over time. If emotion recognition remains impaired, this can contribute to disorders such as alexithymia (the inability to recognise or describe one’s own emotions) [11,70]. For example, if sadness is not adequately interpreted, a patient may react defensively and thus inappropriately to a situation [6]. The effects of facial emotion recognition are therefore far-reaching and decisive for adequate social contact. The somewhat contradictory results for the objective measurement and subjective assessment of facial emotion recognition in participants with facial paresis call for detailed, individual examination in clinical practice. It is not sufficient either to ask patients for their opinion or to carry out an objective diagnosis alone; both should be done and the results compared.
In addition, the lack of insight into the disorder suggested by the present results (the discrepancy between clinical measurement and subjective assessment) must become a focus of treatment in order to show the patient the relevance of facial emotion recognition therapy. This should not detract from the importance of considering the individual wishes and goals of the patient and including them in the sense of joint decision making [71]. The basis for this is tripartite evidence-based practice [71,72], which ensures not only the effectiveness and efficiency of therapy but also motivation for therapy and transfer into the patient’s everyday life [71].
4.2. Limitations of the Study
The composition of the sample may be considered a limiting factor of the study. A larger and more representative, homogeneous sample, tested at the same time post-onset after stroke and subdivided according to the subtypes of central facial paresis (voluntary and involuntary central facial paresis [73]), would therefore be desirable for future studies. For a more precise observation of lesion localisation and for the comparability of patients, imaging with a detailed description of the affected brain areas would be useful. In addition, statistical adjustment for different stroke locations and lesion sizes would be beneficial, as differences in emotion recognition could depend on the hemisphere affected [43]. Despite the possibility of different lesion locations and sizes, the results for facial emotion recognition showed significant differences between the patient groups. Since significant effects can already be observed in our sample, we expect similar or stronger effects with more carefully selected samples and stricter inclusion criteria in further studies. Furthermore, a strong and reliable test battery to assess cognitive capacity (see [74]) is needed to differentiate deficits in emotion recognition from limitations in general mental capacity after stroke. Since emotion perception depends on general mental capacity [74,75,76], any emotion perception test measures general mental capacity to some degree. In the present study, the numbers of patients with limitations in mental capacity and with aphasia were comparable between groups, as shown by chi-squared tests. In future studies, comparability should be extended and improved by standardised diagnostics.
However, the significant positive correlation observed between objective facial emotion recognition accuracy and the Sunnybrook Facial Grading System score across all patients points to facial paresis as the main differentiator between the two patient groups: the higher the score on the Sunnybrook Facial Grading System (i.e., the better the facial competence), the higher the accuracy of facial emotion recognition. Moreover, significant univariate and multivariate regressions documented the relation between facial emotion recognition accuracy and facial paresis. These results demonstrate the influence of facial paresis on facial emotion recognition once more, but only in terms of accuracy. No significant differences were detected in objective facial emotion recognition accuracy or time taken between patients with left- and right-sided facial paresis. If one hemisphere were dominant in emotion processing [43], patients with lesions in this hemisphere and contralateral facial paresis [25] could be expected to be more affected. We cannot confirm this hypothesis, nor previous research on facial palsy reporting that patients with left-sided facial palsy showed lower facial emotion recognition performance than patients with right-sided facial palsy [29]. However, our results are in line with findings in Parkinson’s disease, where facial asymmetry is not related to hemispheric dominance for emotion processing [77]. Further evidence is therefore needed to inspect possible differences in facial emotion recognition and expression depending on the affected side and hemisphere.
Perfect comparability of the normative data with the sample data cannot be guaranteed, for instance, due to the age of the participants (e.g., the Montreal Affective Voices validation sample with an average age of 23.3 ± 3 years [45] vs. the patients with facial paresis with an average age of 62.6 ± 9.3 years and the patients without facial paresis with an average age of 58.4 ± 10.7 years). It must also be noted that only a small normative sample (n = 29) was available for the auditory emotion recognition assessment (Montreal Affective Voices) [45]. Furthermore, the measurements of auditory and facial emotion recognition are not completely comparable. Especially with regard to the time taken for emotion recognition, it should be noted that the response modes differed (selecting an option on screen vs. pointing to a surface) and that the numbers of items and response options were not identical. For further research, normative data from healthy individuals should therefore be freshly collected, with comparability to the patient groups ensured, and the facial and auditory emotion recognition tasks should be made more comparable.
The separate presentation of facial and auditory items in emotion recognition assessments should also be critically questioned. Facial and auditory expressions are not necessarily independent, as they can mutually influence their recognition; for example, a facial expression can be generated by moving the mouth while a vocal expression is also made [1]. However, a separation of the modalities, i.e., just visual or just auditory impressions, made sense in this study in order to differentiate and compare performances, and it seems unavoidable for answering the research question reliably. At the same time, this separated type of emotion recognition is far removed from everyday life and thus reduces external validity. Likewise adapted to optimal experimental conditions, static photographs were used instead of everyday situations [78]. A person is able to show up to 8000 different emotional facial expressions [17]. It should therefore be critically noted that our study only examined the recognition of basic emotions and thus minimised the requirements compared to non-verbal communication in everyday life. Basic emotions can be regarded as the basis for far more complex emotions or emotional states [8]; since the recognition of the comparatively primitive basic emotions [8] was found to be limited in the present study, an even worse performance can be expected for more complex emotions.
5. Conclusions
From this study, it may be concluded that:
- After stroke, participants with central facial paresis were significantly less accurate in visually recognising basic emotions compared with stroke patients without facial paresis and compared with a sample of healthy controls;
- Auditory emotion recognition in both stroke groups was less accurate than in the healthy control sample;
- The facial emotion recognition accuracy of participants with central facial paresis was significantly worse than their auditory emotion recognition accuracy;
- Since visual emotion recognition was clearly worse than auditory emotion recognition in participants with facial paresis after stroke, facial mimicry probably plays an important role in communication with patients after stroke;
- The results of our observational study may indicate overall effects of stroke on emotion recognition and support the FFH, which is a practical and appropriate model for clinical assessments and interventions;
- Future research should investigate patients with facial palsy without stroke to further explore the impact of facial feedback on emotion recognition.
Author Contributions: Conceptualization, A.-M.K., H.v.P. and S.H.; methodology, A.-M.K., H.v.P. and S.H.; software, A.-M.K.; validation, A.-M.K.; formal analysis, A.-M.K. and S.H.; investigation, A.-M.K.; resources, A.-M.K., H.v.P. and S.H.; data curation, A.-M.K. and S.H.; writing—original draft preparation, A.-M.K.; writing—review and editing, A.-M.K., H.v.P. and S.H.; visualization, A.-M.K.; supervision, H.v.P. and S.H.; project administration, A.-M.K.; funding acquisition, A.-M.K. All authors have read and agreed to the published version of the manuscript.
Institutional Review Board Statement: All subjects gave their informed consent for inclusion before they participated in the study. The study was conducted according to the guidelines of the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of the Medical Faculty at RWTH Aachen University, Germany (protocol code: EK 271/18; 11 December 2018).
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to their having been collected as part of a larger research project that has not yet been completed.
Acknowledgments: Many thanks to all of the study participants and cooperation partners: Berufsfachschule für Logopädie an der staatlichen berufsbildenden Schule für Gesundheit und Soziales Jena, Klinikum Ingolstadt GmbH, Logopädie Sprechfreude, Dasing, Moritz Klinik GmbH & Co. KG, Bad Klosterlausnitz, Praxis für Sprach- und Stimmtherapie Hermine Gascho, Ingolstadt, Selbsthilfegruppe Aphasiker und Schlaganfall Jena des Landesverbandes Thüringen für die Rehabilitation der Aphasiker e. V., Beratungszentrum nach Schlaganfall und Hirnschädigung ZAMOR e. V. Ingolstadt and Uniklinik RWTH Aachen AöR.
Conflicts of Interest: The authors declare no conflict of interest.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Figure 1. Accuracy of facial emotion recognition (mean, median, interquartile range). Participants after stroke with facial paresis performed significantly worse than healthy controls (p < 0.001) and than participants after stroke without facial paresis (p < 0.001). The data for healthy controls were not collected in this study but were taken from [46,47], so only the mean is available as an indicator of central tendency, with no information on the distribution. The figure therefore contains only two box plots, not three.
Figure 2. Accuracy of auditory emotion recognition (mean, median, interquartile range). Participants after stroke with facial paresis performed significantly worse than healthy controls (p < 0.001) but did not differ significantly from participants after stroke without facial paresis (p = 0.540). The data for healthy controls were not collected in this study but were taken from [45], so only the mean is available as an indicator of central tendency, with no information on the distribution. The figure therefore contains only two box plots, not three.
Figure 3. Average time taken for facial emotion recognition (mean, median, interquartile range). Participants after stroke with facial paresis performed significantly faster than healthy controls (p = 0.02) but did not differ significantly from participants after stroke without facial paresis (p = 0.68). The data for healthy controls were not collected in this study but were taken from [46,47], so only the mean is available as an indicator of central tendency, with no information on the distribution. The figure therefore contains only two box plots, not three.
Figure 4. Average time taken for auditory emotion recognition (mean, median, interquartile range). Participants after stroke with facial paresis did not differ significantly compared to participants after stroke without facial paresis (p = 0.069).
Figure 5. Accuracy and time taken in subjective facial emotion recognition (mean, median, interquartile range) in participants after stroke with facial paresis. Participants felt significantly more restricted in terms of time compared to accuracy (p = 0.011).
Table 1. Inclusion and exclusion criteria.
| Inclusion Criteria | Exclusion Criteria |
|---|---|
| Adult persons (≥18 years) with or without unilateral central facial paresis after stroke (ischemic or hemorrhagic) | Children and adults with peripheral facial paresis |
| Acute, post-acute or chronic phase of stroke | Other neurological or psychological diseases |
| Normal or corrected visual and hearing ability | |
| Ability to consent | No ability to consent |
Appendix A
Table A1. Sociodemographic information on gender, age, education and handedness in the study and control groups.
| Sociodemographic Information | Study Group (n = 34) | Control Group (n = 29) |
|---|---|---|
| Gender: male | n = 18; 53% | n = 20; 69% |
| Gender: female | n = 16; 47% | n = 9; 31% |
| Age in years | Mean = 62.65 ± 9.26 | Mean = 58.38 ± 10.72 |
| | Min. = 39; Max. = 81 | Min. = 35; Max. = 83 |
| Education: no school degree | n = 4; 11.77% | n = 0 |
| Education: secondary school certificate | n = 9; 26.47% | n = 6; 20.69% |
| Education: intermediate school certificate | n = 12; 35.29% | n = 15; 51.72% |
| Education: high school | n = 9; 26.47% | n = 8; 27.59% |
| Handedness: left | n = 0 | n = 1; 3.45% |
| Handedness: right | n = 33; 97.06% | n = 27; 93.10% |
| Handedness: left and right | n = 1; 2.94% | n = 1; 3.45% |
Note: n = number of participants.
Table A2. Lesion information: time post-onset at the examinations in this study, type of lesion (ischemic, hemorrhagic or both), affected hemisphere, quantity (number of lesions), limitations in general mental capacity after stroke and aphasia.
| Lesion | Study Group (n = 34) | Control Group (n = 29) |
|---|---|---|
| Time post-onset in days (in years;months) | Mean = 1558 (4;3) ± 2112 (5;9) | Mean = 1359 (3;9) ± 2702 (7;5) |
| | Min. = 5 | Min. = 13 |
| Phase post-onset: acute (≤6 weeks) | n = 11; 32.35% | n = 11; 37.93% |
| Phase post-onset: post-acute (<1 year) | n = 6; 17.65% | n = 3; 10.34% |
| Phase post-onset: chronic (≥1 year) | n = 17; 50.00% | n = 15; 51.72% |
| Type: ischemic | n = 27; 79.41% | n = 21; 72.41% |
| Type: hemorrhagic | n = 5; 14.71% | n = 6; 20.69% |
| Type: ischemic and hemorrhagic | n = 1; 2.94% | n = 1; 3.45% |
| Type: n.a. | n = 1; 2.94% | n = 1; 3.45% |
| Hemisphere: left | n = 12; 35.29% | n = 15; 51.72% |
| Hemisphere: right | n = 13; 38.24% | n = 6; 20.69% |
| Hemisphere: left and right | n = 0 | n = 2; 6.90% |
| Hemisphere: n.a. | n = 9; 26.47% | n = 6; 20.69% |
| Quantity: 1 lesion | n = 22; 64.71% | n = 25; 86.21% |
| Quantity: 2 lesions | n = 8; 23.53% | n = 2; 6.90% |
| Quantity: 3 lesions | n = 1; 2.94% | n = 1; 3.45% |
| Quantity: 4 lesions | n = 1; 2.94% | n = 0 |
| Quantity: n.a. | n = 2; 5.88% | n = 1; 3.45% |
| Limitations in general mental capacity after stroke | n = 16; 47.06% | n = 12; 41.38% |
| Aphasia | n = 6; 17.65% | n = 9; 31.03% |
Note: n.a. means no information was given. n = number of participants.
Table A3. Facial paresis information: diagnosis from the patients’ perspective and from their therapists’ perspective (as reported by the participants), diagnosis via the Sunnybrook Facial Grading System [52,53] and non-pharmaceutical therapy.
| Facial Paresis | Study Group (n = 34) | Control Group (n = 29) |
|---|---|---|
| Diagnosis of facial paresis from the patient’s perspective | Facial paresis: n = 21; 61.76% | Facial paresis: n = 10; 34.48% |
| Diagnosis of facial paresis from the therapist’s perspective (physiotherapy or speech and language therapy) | Facial paresis: n = 11; 32.35% | Facial paresis: n = 0 |
| Diagnosis via Sunnybrook Facial Grading System (score) | Mean = 73.12 ± 8.34 | Mean = 91.21 ± 3.46 |
| Time post-onset in days (in years;months) | Mean = 827 (2;3) ± 1606 (4;5) | Mean = 2207 (6;1) ± 3709 (10;2) |
| Phase post-onset | Acute: n = 14; 41.18% | Acute: n = 3; 10.35% |
| Non-pharmaceutical therapy | Yes: n = 9; 26.47% | Yes: n = 0; No: n = 29 |
| Start | From the stroke to latest post-acute phase | From the stroke to latest post-acute phase |
| Frequency | Isolated therapy units up to 1–3x/week | Individual therapy units up to 2x/week |
| Duration | Max.: 3.5 months | Max.: 6 months |
| Therapist | 12x speech and language therapy | 5x speech and language therapy |
| Content | Exercises for facial expression, oral motor skills, articulation, proprioceptive neuromuscular facilitation, massage | Exercises for facial expression, oral motor skills, articulation, stretching M. buccinator |
| Self-exercises | Exercises for facial expression, oral motor skills, articulation, massage, sensitivity training | Exercises for facial expressions, oral motor skills |
Note: n.a. means no information was given. n = number of participants.
Table A4. Results for objective (accuracy and time) and subjectively perceived success in emotion recognition.
| Emotion Recognition | Study Group (n = 34) | Control Group (n = 29) | Healthy Controls |
|---|---|---|---|
| Objective facial emotion recognition via Myfacetraining Program, accuracy in % | Mean = 27.77 | Mean = 40.79 | Mean = 71.11 |
| Objective facial emotion recognition via Myfacetraining Program, time in sec. | Mean = 3.14 | Mean = 3.19 | Mean = 3.34 |
| Objective auditory emotion recognition via MAVs, accuracy in % | Mean = 46.23 | Mean = 48.05 | Mean = 72.67 |
| Objective auditory emotion recognition via MAVs, time in sec. | Mean = 3.69 | Mean = 3.20 | n.a. [45] |
| Subjective facial emotion recognition via Self-Assessment Questionnaire Emotion Recognition Accuracy | Mean = −0.71 | Mean = −0.03 | n.a. |
| Subjective facial emotion recognition via Self-Assessment Questionnaire Emotion Recognition Time | Mean = −1.91 | Mean = −1.00 | n.a. |
Note: n.a. means no information was given. n = number of participants.
Table A5. Summary of facial paresis and general mental capacity information.
| | Study Group (n = 34) | Control Group (n = 29) |
|---|---|---|
| With limitations in general mental capacity | n = 16 | n = 12 |
| Without limitations in general mental capacity | n = 18 | n = 17 |
| Type of limitation: memory | n = 10 | n = 8 |
| Type of limitation: concentration | n = 9 | n = 5 |
| Type of limitation: slowdown | n = 3 | n = 1 |
| Type of limitation: fatigue | n = 2 | n = 2 |
| Type of limitation: complex thinking | n = 1 | n = 0 |
| Type of limitation: suspected neglect | n = 1 | n = 0 |
| Type of limitation: orientation in time | n = 1 | n = 0 |
| Type of limitation: orientation in place | n = 1 | n = 0 |
| Type of limitation: overall deterioration | n = 1 | n = 0 |
| Type of limitation: acalculia | n = 0 | n = 1 |
| Type of limitation: arousal | n = 0 | n = 1 |
| Type of limitation: inner unrest | n = 0 | n = 1 |
Note: n = number of participants. For limitations in general mental capacity, multiple deficit types per participant are possible. For this, n describes the number of limitations per group.
Table A6. Summary of facial paresis and aphasia information.
| | Study Group (n = 34) | Control Group (n = 29) |
|---|---|---|
| With aphasia | n = 6 | n = 9 |
| Without aphasia | n = 28 | n = 20 |
Note: n = number of participants.
Table A7. Univariate regression analysis.
| | Standardised Beta | 95% CI Lower Bound | 95% CI Upper Bound | p-Value |
|---|---|---|---|---|
| Accuracy of facial emotion recognition | | | | |
| Diagnosis of facial paresis | −0.444 | −19.762 | −6.295 | <0.001 |
| Time taken for facial emotion recognition | | | | |
| Diagnosis of facial paresis | −0.053 | −0.253 | 0.166 | 0.680 |
| Accuracy of auditory emotion recognition | | | | |
| Diagnosis of facial paresis | −0.079 | −7.733 | 4.091 | 0.540 |
| Time taken for auditory emotion recognition | | | | |
| Diagnosis of facial paresis | 0.231 | −0.040 | 1.033 | 0.069 |
Table A8. Multivariate regression analysis.
| | Standardised Beta | 95% CI Lower Bound | 95% CI Upper Bound | p-Value |
|---|---|---|---|---|
| Accuracy of facial emotion recognition | | | | |
| Diagnosis of facial paresis | −0.353 | −16.920 | −3.787 | 0.003 |
| Sex | 0.022 | −6.306 | 7.615 | 0.851 |
| Age | −0.393 | −0.891 | −0.256 | <0.001 |
| Subjective judgement of accuracy | −0.014 | −2.359 | 2.110 | 0.911 |
| Subjective judgement of time taken | 0.032 | −1.197 | 1.542 | 0.802 |
| Limitations in general mental capacity | 0.054 | −5.213 | 8.392 | 0.641 |
| Time post-onset (acute, post-acute, chronic) | −0.227 | −7.417 | 0.128 | 0.058 |
| Time taken for facial emotion recognition | | | | |
| Diagnosis of facial paresis | −0.029 | −0.248 | 0.201 | 0.834 |
| Sex | −0.173 | −0.383 | 0.093 | 0.228 |
| Age | −0.186 | −0.018 | 0.003 | 0.167 |
| Subjective judgement of accuracy | 0.013 | −0.073 | 0.080 | 0.935 |
| Subjective judgement of time taken | 0.057 | −0.038 | 0.055 | 0.715 |
| Limitations in general mental capacity | 0.076 | −0.170 | 0.295 | 0.593 |
| Time post-onset (acute, post-acute, chronic) | −0.252 | −0.242 | 0.016 | 0.085 |
| Accuracy of auditory emotion recognition | | | | |
| Diagnosis of facial paresis | 0.015 | −4.900 | 5.596 | 0.895 |
| Sex | 0.082 | −3.638 | 7.488 | 0.491 |
| Age | −0.428 | −0.747 | −0.239 | <0.001 |
| Subjective judgement of accuracy | −0.160 | −2.894 | 0.678 | 0.219 |
| Subjective judgement of time taken | 0.106 | −0.646 | 1.542 | 0.416 |
| Limitations in general mental capacity | 0.068 | −3.859 | 7.015 | 0.563 |
| Time post-onset (acute, post-acute, chronic) | −0.374 | −7.750 | −1.720 | 0.003 |
| Time taken for auditory emotion recognition | | | | |
| Diagnosis of facial paresis | 0.227 | −0.074 | 1.052 | 0.088 |
| Sex | −0.050 | −0.706 | 0.489 | 0.717 |
| Age | 0.153 | −0.011 | 0.044 | 0.232 |
| Subjective judgement of accuracy | 0.184 | −0.073 | 0.310 | 0.220 |
| Subjective judgement of time taken | −0.033 | −0.131 | 0.104 | 0.825 |
| Limitations in general mental capacity | −0.173 | −0.959 | 0.209 | 0.203 |
| Time post-onset (acute, post-acute, chronic) | 0.205 | −0.083 | 0.565 | 0.141 |
Appendix B
Appendix B.1. Additional Information on Data Collection
Each patient was examined once. The patient was first informed about the study and about data privacy. After the declaration of informed consent, an anamnesis took place (see
Appendix B.2. Facial Emotion Recognition: Myfacetraining (MFT) Program
The Myfacetraining (MFT) Program (CRAFTA Cranio Facial Therapy Academy, Hamburg, Germany) [47,49] was used to assess visual facial emotion recognition.
By selecting an answer option (in 85% (n = 54) of cases via touchscreen, in 6.35% (n = 4) of cases via touch-pen due to hemiparesis and in 7.95% (n = 5) of cases via mouse due to hemiparesis), the program recorded the accuracy (right or wrong answer) as well as the reaction time (in seconds). Immediately afterwards, the next screen appeared. In the standardised test, a total of 42 images of three different adult women and three different men (one person per picture) were presented in the same order. Each basic emotion was shown seven times (six basic emotions × seven images = 42 images). The time limit to respond was 10 s. If there was no response within this time, the response time was considered to have been exceeded, the item was marked unanswered and the next emotion was presented. Objective facial emotion recognition was measured with respect to accuracy and time [47,49].
A pre-test with ten items was performed. The pre-test ensured that the task was understood [48].
With the Myfacetraining Program, normal values for 147 healthy subjects are available. Accuracy in percentages: mean = 71.11 ± 7.53; min. = 45.00; max. = 88.00. Time in seconds: mean = 3.34 ± 0.66; min. = 1.94; max. = 5.58 [46,47].
Appendix B.3. Auditory Emotion Recognition: Montreal Affective Voices
As stimuli for auditory emotion recognition, part of the Montreal Affective Voices (MAVs) [45] was used.
For the presentation of the MAVs, software was available which, in addition to the accuracy of emotion recognition, also checks the intensity of the emotion but neglects the time taken [45].
Each participant was asked to assess an emotion by selecting a response option.
As in objective facial emotion recognition, a pre-test with ten items (initially randomised, later presented in the same order) was also performed. In addition, the examiner checked that the headphones were comfortably fitted. The volume was adjusted individually.
Standard values are available for the accuracy (in percentages) of emotion recognition: mean = 72.67 ± 11.66; min. = 56.00; max. = 86.00 (see, also, Table A4, Appendix A) [45].
Appendix B.4. Sunnybrook Facial Grading System for Diagnosing Facial Palsy
With the Sunnybrook Facial Grading System, each face was rated in three areas by comparing the affected side of the face with the intact side. This resulted in three values: (1) the Resting Symmetry Score (symmetry at rest), (2) the Voluntary Movement Score (symmetry of voluntary movements) and (3) the Synkinesis Score (synkinesis). From these three scores, a total score (0–100 points) was calculated. The lower the total score, the more pronounced the facial paresis or paralysis. The authors did not give any recommendation for a further classification according to degree of severity, nor a cut-off value for the diagnosis of facial palsy [52].
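As a worked example, the sketch below computes the composite score as we understand the published scoring scheme [52] (resting symmetry sum × 5, voluntary movement sum × 4, synkinesis sum unweighted); the example ratings are purely illustrative.

```python
# Sketch of the Sunnybrook composite score; weights follow our reading of [52].
def sunnybrook_composite(resting, movement, synkinesis):
    """resting: 3 items (eye, cheek, mouth), higher = more asymmetry at rest;
    movement: 5 voluntary movements, each rated 1-5 (5 = movement complete);
    synkinesis: 5 ratings, each 0-3 (3 = severe synkinesis)."""
    resting_score = 5 * sum(resting)       # Resting Symmetry Score
    movement_score = 4 * sum(movement)     # Voluntary Movement Score
    synkinesis_score = sum(synkinesis)     # Synkinesis Score
    return movement_score - resting_score - synkinesis_score

# Mildly affected face: slight asymmetry at rest, mildly reduced movements
print(sunnybrook_composite([0, 1, 0], [4, 4, 3, 4, 4], [1, 0, 1, 0, 0]))  # -> 69
```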
References
1. Schirmer, A.; Adolphs, R. Emotion Perception from Face, Voice, and Touch: Comparisons and Convergence. Trends Cogn. Sci.; 2017; 21, pp. 216-228. [DOI: https://dx.doi.org/10.1016/j.tics.2017.01.001] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28173998]
2. Young, A.; Perrett, D.; Calder, A.; Sprengelmeyer, R.; Ekman, P. Facial Expressions of Emotion—Stimuli and Tests (FEEST); Thames Valley Test Company: Suffolk, UK, 2002.
3. Knapp, M.L.; Hall, J.A.; Horgan, T.G. Nonverbal Communication in Human Interaction; Cengage Learning: Boston, MA, USA, 2013.
4. Ekman, P.; Oster, H. Facial Expression of Emotion. Annu. Rev. Psychol.; 1979; 30, pp. 527-554. [DOI: https://dx.doi.org/10.1146/annurev.ps.30.020179.002523]
5. Diener, H.C. Parese. 2016; Available online: https://www.pschyrembel.de/Parese/K0GCP/doc/ (accessed on 25 June 2020).
6. Radice-Neumann, D.; Zupan, B.; Tomita, M.; Willer, B. Training Emotional Processing in Persons With Brain Injury. J. Head Trauma Rehabil.; 2009; 24, pp. 313-323. [DOI: https://dx.doi.org/10.1097/HTR.0b013e3181b09160] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/19858965]
7. Cattaneo, L.; Pavesi, G. The facial motor system. Neurosci. Biobehav. Rev.; 2014; 38, pp. 135-159. [DOI: https://dx.doi.org/10.1016/j.neubiorev.2013.11.002]
8. Levenson, R.W. Basic Emotion Questions. Emot. Rev.; 2011; 3, pp. 379-386. [DOI: https://dx.doi.org/10.1177/1754073911410743]
9. Ekman, P. Universal Facial Expressions of Emotion. Calif. Ment. Health Res. Dig.; 1970; 8, pp. 151-158.
10. Ekman, P. An argument for basic emotions. Cogn. Emot.; 1992; 6, pp. 169-200. [DOI: https://dx.doi.org/10.1080/02699939208411068]
11. Dimberg, U.; Thunberg, M.; Elmehed, K. Unconscious Facial Reactions to Emotional Facial Expressions. Psychol. Sci.; 2000; 11, pp. 86-89. [DOI: https://dx.doi.org/10.1111/1467-9280.00221]
12. Boloorizadeh, P.; Tojari, F. Facial expression recognition: Age, gender and exposure duration impact. Procedia Soc. Behav. Sci.; 2013; 84, pp. 1369-1375. [DOI: https://dx.doi.org/10.1016/j.sbspro.2013.06.758]
13. Williams, L.M.; Mathersul, D.; Palmer, D.M.; Gur, R.C.; Gur, R.E.; Gordon, E. Explicit identification and implicit recognition of facial emotions: I. Age effects in males and females across 10 decades. J. Clin. Exp. Neuropsychol.; 2009; 31, pp. 257-277. [DOI: https://dx.doi.org/10.1080/13803390802255635]
14. Palermo, R.; Coltheart, M. Photographs of facial expression: Accuracy, response times, and ratings of intensity. Behav. Res. Methods Instrum. Comput.; 2004; 36, pp. 634-638. [DOI: https://dx.doi.org/10.3758/BF03206544] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/15641409]
15. Hampson, E.; van Anders, S.M.; Mullin, L.I. A female advantage in the recognition of emotional facial expressions: Test of an evolutionary hypothesis. Evol. Hum. Behav.; 2006; 27, pp. 401-416. [DOI: https://dx.doi.org/10.1016/j.evolhumbehav.2006.05.002]
16. Ruffman, T.; Henry, J.D.; Livingstone, V.; Phillips, L.H. A meta-analytic review of emotion recognition and aging: Implications for neuropsychological models of aging. Neurosci. Biobehav. Rev.; 2008; 32, pp. 863-881. [DOI: https://dx.doi.org/10.1016/j.neubiorev.2008.01.001] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/18276008]
17. Von Piekartz, H.; Mohr, G. Reduction of head and face pain by challenging lateralization and basic emotions: A proposal for future assessment and rehabilitation strategies. J. Man. Manip. Ther.; 2014; 22, pp. 24-35. [DOI: https://dx.doi.org/10.1179/2042618613Y.0000000063]
18. Lindquist, K.A. Emotions emerge from more basic psychological ingredients: A modern psychological constructionist model. Emot. Rev.; 2013; 5, pp. 356-368. [DOI: https://dx.doi.org/10.1177/1754073913489750]
19. Palomero-Gallagher, N.; Amunts, K. A short review on emotion processing: A lateralized network of neuronal networks. Brain Struct. Funct.; 2022; 227, pp. 673-684. [DOI: https://dx.doi.org/10.1007/s00429-021-02331-7]
20. Gainotti, G. A historical review of investigations on laterality of emotions in the human brain. J. Hist. Neurosci.; 2019; 28, pp. 23-41. [DOI: https://dx.doi.org/10.1080/0964704X.2018.1524683]
21. Mohr, G.; Konnerth, V.; von Piekartz, H.J.M. Lateralitätserkennung und (emotionale) Expressionen des Gesichts—Beurteilung und Behandlung. Kiefer, Gesichts-und Zervikalregion; Thieme: Stuttgart, Germany, 2015; pp. 494-512.
22. Neal, D.T.; Chartrand, T.L. Embodied Emotion Perception: Amplifying and Dampening Facial Feedback modulates Emotion Perception Accuracy. Soc. Psychol. Personal. Sci.; 2011; 2, pp. 673-678. [DOI: https://dx.doi.org/10.1177/1948550611406138]
23. Goldman, A.I.; Sripada, C.S. Simulationist models of face-based emotion recognition. Cognition; 2005; 94, pp. 193-213. [DOI: https://dx.doi.org/10.1016/j.cognition.2004.01.005]
24. Bartolome, G. Grundlagen der Funktionellen Dysphagietherapie (FDT): Restituierende Therapieverfahren. Schluckstörungen: Diagnostik und Rehabilitation; Urban & Fischer: München, Germany, 2010; pp. 245-370.
25. Neely, J.G. Central Causes of Facial Paralysis. The Facial Nerve; Thieme: New York, NY, USA, 2014; pp. 129-136.
26. Klingner, C.M.; Witte, O.W. Central Facial Palsy. Facial Nerve Disorders and Diseases: Diagnosis and Management; Thieme: Stuttgart, Germany, 2016; pp. 358-369.
27. Konnerth, V.; Mohr, G.; von Piekartz, H. Fähigkeit von Patienten mit einer peripheren Fazialisparese zur Erkennung von Emotionen—Eine Pilotstudie. Rehabilitation; 2016; 55, pp. 19-25. [DOI: https://dx.doi.org/10.1055/s-0042-100228]
28. Storbeck, F.; Schlegelmilch, K.; Streitberger, K.-J.; Sommer, W.; Ploner, C.J. Delayed recognition of emotional facial expressions in Bell’s palsy. Cortex; 2019; 120, pp. 524-531. [DOI: https://dx.doi.org/10.1016/j.cortex.2019.07.015]
29. Korb, S.; Wood, A.; Banks, C.A.; Agoulnik, D.; Hadlock, T.A.; Niedenthal, P.M. Asymmetry of Facial Mimicry and Emotion Perception in Patients With Unilateral Facial Paralysis. JAMA Facial Plast. Surg.; 2016; 18, pp. 222-227. [DOI: https://dx.doi.org/10.1001/jamafacial.2015.2347] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/26892786]
30. Kim, M.J.; Neta, M.; Davis, F.C.; Ruberry, E.J.; Dinescu, D.; Heatherton, T.F.; Stotland, M.A.; Whalen, P.J. Botulinum toxin-induced facial muscle paralysis affects amygdala responses to the perception of emotional expressions: Preliminary findings from an A-B-A design. Biol. Mood Anxiety Disord.; 2014; 4, pp. 1-8. [DOI: https://dx.doi.org/10.1186/2045-5380-4-11] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/25694806]
31. Strack, F.; Martin, L.L.; Stepper, S. Inhibiting and facilitating conditions of the human smile: A nonobtrusive test of the facial feedback hypothesis. J. Personal. Soc. Psychol.; 1988; 54, pp. 768-777. [DOI: https://dx.doi.org/10.1037/0022-3514.54.5.768]
32. Havas, D.A.; Glenberg, A.M.; Gutowski, K.A.; Lucarelli, M.J.; Davidson, R.J. Cosmetic Use of Botulinum Toxin-A Affects Processing of Emotional Language. Psychol. Sci.; 2010; 21, pp. 895-900. [DOI: https://dx.doi.org/10.1177/0956797610374742]
33. Niedenthal, P.M.; Brauer, M.; Halberstadt, J.B.; Innes-Ker, A.H. When did her smile drop? Facial mimicry and the influences of emotional state on the detection of change in emotional expression. Cogn. Emot.; 2001; 15, pp. 853-864. [DOI: https://dx.doi.org/10.1080/02699930143000194]
34. Keillor, J.M.; Barrett, A.M.; Crucian, G.P.; Kortenkamp, S.; Heilman, K.M. Emotional experience and perception in the absence of facial feedback. J. Int. Neuropsychol. Soc.; 2002; 8, pp. 130-135. [DOI: https://dx.doi.org/10.1017/S1355617701020136]
35. Bogart, K.R.; Matsumoto, D. Facial mimicry is not necessary to recognize emotion: Facial expression recognition by people with Moebius syndrome. Soc. Neurosci.; 2010; 5, pp. 241-251. [DOI: https://dx.doi.org/10.1080/17470910903395692]
36. Calder, A.J.; Keane, J.; Cole, J.; Campbell, R.; Young, A.W. Facial Expression Recognition by People with Möbius Syndrome. Cogn. Neuropsychol.; 2000; 17, pp. 73-87. [DOI: https://dx.doi.org/10.1080/026432900380490]
37. Kuriakose, D.; Xiao, Z. Pathophysiology and Treatment of Stroke: Present Status and Future Perspectives. Int. J. Mol. Sci.; 2020; 21, 7609. [DOI: https://dx.doi.org/10.3390/ijms21207609]
38. Armstrong, M.J.; Okun, M.S. Diagnosis and Treatment of Parkinson Disease: A Review. JAMA; 2020; 323, pp. 548-560. [DOI: https://dx.doi.org/10.1001/jama.2019.22360] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32044947]
39. Finkensieper, M.; Volk, G.F.; Guntinas-Lichius, O. Erkrankungen des Nervus facialis. Laryngo-Rhino-Otologie; 2012; 91, pp. 121-142. [DOI: https://dx.doi.org/10.1055/s-0031-1300965] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/22318445]
40. Bologna, M.; Fabbrini, G.; Marsili, L.; Defazio, G.; Thompson, P.D.; Berardelli, A. Facial bradykinesia. J. Neurol. Neurosurg. Psychiatry; 2013; 84, pp. 681-685. [DOI: https://dx.doi.org/10.1136/jnnp-2012-303993] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/23236012]
41. Bologna, M.; Berardelli, I.; Paparella, G.; Marsili, L.; Ricciardi, L.; Fabbrini, G.; Berardelli, A. Altered Kinematics of Facial Emotion Expression and Emotion Recognition Deficits Are Unrelated in Parkinson’s Disease. Front. Neurol.; 2016; 7, 230. [DOI: https://dx.doi.org/10.3389/fneur.2016.00230] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28018287]
42. Marsili, L.; Agostino, R.; Bologna, M.; Belvisi, D.; Palma, A.; Fabbrini, G.; Berardelli, A. Bradykinesia of posed smiling and voluntary movement of the lower face in Parkinson’s disease. Parkinsonism Relat. Disord.; 2014; 20, pp. 370-375. [DOI: https://dx.doi.org/10.1016/j.parkreldis.2014.01.013]
43. Yuvaraj, R.; Murugappan, M.; Norlinah, M.I.; Sundaraj, K.; Khairiyah, M. Review of Emotion Recognition in Stroke Patients. Dement. Geriatr. Cogn. Disord.; 2013; 36, pp. 179-196. [DOI: https://dx.doi.org/10.1159/000353440]
44. Vaughan, A.; Copley, A.; Miles, A. Physical rehabilitation of central facial palsy: A survey of current multi-disciplinary practice. Int. J. Speech-Lang. Pathol.; 2021; pp. 1-10. [DOI: https://dx.doi.org/10.1080/17549507.2021.2013533]
45. Belin, P.; Fillion-Bilodeau, S.; Gosselin, F. The Montreal Affective Voices: A validated set of nonverbal affect bursts for research on auditory affective processing. Behav. Res. Methods; 2008; 40, pp. 531-539. [DOI: https://dx.doi.org/10.3758/BRM.40.2.531]
46. Herzer, S.; Maigler, A. Eine Revision der Referenzwerte der sechs Basisemotionen des CRAFTA Face-Mirroring Programms. Eine Querschnittstudie; Hochschule Osnabrück: Osnabrück, Germany, 2016.
47. CRAFTA Cranio Facial Therapy Academy. Operating Guidelines CRAFTA Facemirroring Assessment and Treatment. Available online: https://www.myfacetraining.com/downloads/CRAFTA%20Operating%20Guidelines.pdf (accessed on 28 January 2019).
48. Von Piekartz, H.; Wallwork, S.B.; Mohr, G.; Butler, D.S.; Moseley, G.L. People with chronic facial pain perform worse than controls at a facial emotion recognition task, but it is not all about the emotion. J. Oral Rehabil.; 2015; 42, pp. 243-250. [DOI: https://dx.doi.org/10.1111/joor.12249]
49. Myfacetraining. Available online: https://www.myfacetraining.com/ (accessed on 27 August 2019).
50. Peirce, J.W.; MacAskill, M.R. Building Experiments in PsychoPy; Sage: London, UK, 2018.
51. Coulson, S.E.; O’Dwyer, N.J.; Adams, R.D.; Croxson, G.R. Expression of Emotion and Quality of Life After Facial Nerve Paralysis. Otol. Neurotol.; 2004; 25, pp. 1014-1019. [DOI: https://dx.doi.org/10.1097/00129492-200411000-00026]
52. Ross, B.G.; Fradet, G.; Nedzelski, J.M. Development of a sensitive clinical facial grading system. Otolaryngol. Head Neck Surg.; 1996; 114, pp. 380-386. [DOI: https://dx.doi.org/10.1016/S0194-5998(96)70206-1]
53. Neumann, T.; Lorenz, A.; Volk, G.F.; Hamzei, F.; Schulz, S.; Guntinas-Lichius, O. Validierung einer Deutschen Version des Sunnybrook Facial Grading Systems. Laryngo-Rhino-Otologie; 2017; 96, pp. 168-174. [DOI: https://dx.doi.org/10.1055/s-0042-111512] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/27832680]
54. Guntinas-Lichius, O.; Finkensieper, M. Grading. Facial Nerve Disorders and Diseases: Diagnosis and Management; Thieme: Stuttgart, Germany, 2016; pp. 94-111.
55. Fattah, A.; Gurusinghe, A.; Gavilan, J.; Hadlock, T.; Marcus, J.; Marres, H.; Nduka, C.; Slattery, W.; Snyder-Warwick, A. Facial Nerve Grading Instruments: Systematic Review of the Literature and Suggestion for Uniformity. Plast. Reconstr. Surg.; 2015; 135, pp. 569-579. [DOI: https://dx.doi.org/10.1097/PRS.0000000000000905] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/25357164]
56. Akulov, M.A.; Orlova, A.S.; Usachev, D.J.; Shimansky, V.N.; Tanjashin, S.V.; Khatkova, S.E.; Yunosha-Shanyavskaya, A.V. IncobotulinumtoxinA treatment of facial nerve palsy after neurosurgery. J. Neurol. Sci.; 2017; 381, pp. 130-134. [DOI: https://dx.doi.org/10.1016/j.jns.2017.08.3244]
57. Beurskens, C.H.; Heymans, P.G. Mime therapy improves facial symmetry in people with long-term facial nerve paresis: A randomised controlled trial. Aust. J. Physiother.; 2006; 52, pp. 177-183. [DOI: https://dx.doi.org/10.1016/S0004-9514(06)70026-5]
58. Goo, B.; Jeong, S.M.; Kim, J.U.; Park, Y.C.; Seo, B.K.; Baek, Y.H.; Yook, T.H.; Nam, S.S. Clinical efficacy and safety of thread-embedding acupuncture for treatment of the sequelae of Bell’s palsy: A protocol for a patient-assessor blinded, randomized, controlled, parallel clinical trial. Medicine; 2019; 98, e14508. [DOI: https://dx.doi.org/10.1097/MD.0000000000014508]
59. Kim, J.; Choi, J.Y. The effect of subthreshold continuous electrical stimulation on the facial function of patients with Bell’s palsy. Acta Oto-Laryngol.; 2016; 136, pp. 100-105. [DOI: https://dx.doi.org/10.3109/00016489.2015.1083121]
60. Kuttenreich, A.-M.; Rethfeldt, W.S.; von Piekartz, H. Autobiografische Erinnerungen bei Behandlung zentraler Fazialisparesen. Forum Logopädie; 2018; 32, pp. 6-13.
61. Kwon, H.-J.; Choi, J.-Y.; Lee, M.S.; Kim, Y.-S.; Shin, B.-C.; Kim, J.-I. Acupuncture for the sequelae of Bell’s palsy: A randomized controlled trial. Trials; 2015; 16, pp. 246-253. [DOI: https://dx.doi.org/10.1186/s13063-015-0777-z]
62. Ton, G.; Lee, L.W.; Ng, H.P.; Liao, H.Y.; Chen, Y.H.; Tu, C.H.; Tseng, C.H.; Ho, W.C.; Lee, Y.C. Efficacy of laser acupuncture for patients with chronic Bell’s palsy: A study protocol for a randomized, double-blind, sham-controlled pilot trial. Medicine; 2019; 98, e15120. [DOI: https://dx.doi.org/10.1097/MD.0000000000015120]
63. Cambridge Dictionary. Quick and Dirty. 2020; Available online: https://dictionary.cambridge.org/de/worterbuch/englisch/quick-and-dirty (accessed on 9 September 2020).
64. Deutsche Gesellschaft für Allgemeinmedizin und Familienmedizin (DEGAM). DEGAM-Leitlinie Nr. 8 Schlaganfall. 2012; Available online: https://www.awmf.org/uploads/tx_szleitlinien/053-011l_S3_Schlaganfall_2012-abgelaufen.pdf (accessed on 23 June 2018).
65. Sánchez-Lozano, E.; Lopez-Otero, P.; Docio-Fernandez, L.; Argones-Rúa, E.; Alba-Castro, J.L. Audiovisual Three-Level Fusion for Continuous Estimation of Russell’s Emotion Circumplex. Proceedings of the 3rd ACM International Workshop on Audio/Visual Emotion Challenge; Barcelona, Spain, 21 October 2013; pp. 31-40.
66. Bundesarbeitsgemeinschaft für Rehabilitation. Arbeitshilfe für die Rehabilitation von Schlaganfallpatienten. 1998; Available online: https://www.bar-frankfurt.de/service/publikationen/produktdetails/produkt/65.html (accessed on 23 September 2019).
67. World Health Organization (WHO). Internationale Klassifikation der Funktionsfähigkeit, Behinderung und Gesundheit (ICF). 2005; Available online: https://www.dimdi.de/dynamic/de/klassifikationen/downloads/?dir=icf (accessed on 18 September 2019).
68. Coles, N.A.; Larsen, J.T.; Lench, H.C. A Meta-Analysis of the Facial Feedback Literature: Effects of Facial Feedback on Emotion Experience Are Small and Variable. Psychol. Bull.; 2019; 145, pp. 610-651. [DOI: https://dx.doi.org/10.1037/bul0000194] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/30973236]
69. Dobel, C.; Miltner, W.H.R.; Witte, O.W.; Volk, G.F.; Guntinas-Lichius, O. Emotionale Auswirkung einer Fazialisparese. Laryngo-Rhino-Otologie; 2013; 92, pp. 9-23. [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/23065673]
70. Taylor, G.J.; Bagby, R.M. An overview of the alexithymia construct. The Handbook of Emotional Intelligence: Theory, Development, Assessment, and Application at Home, School, and in the Workplace; Jossey-Bass: San Francisco, CA, USA, 2000; pp. 40-67.
71. Beushausen, U.; Grötzbach, H. Evidenzbasierte Sprachtherapie: Grundlagen und Praxis; Elsevier: München, Germany, 2011.
72. Dollaghan, C. The Handbook for Evidence-Based Practice in Communication Disorders; Paul H. Brookes: Baltimore, MD, USA, 2007.
73. Gilden, D.H. Clinical Practice: Bell’s Palsy. N. Engl. J. Med.; 2004; 351, pp. 1323-1331. [DOI: https://dx.doi.org/10.1056/NEJMcp041120] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/15385659]
74. Hildebrandt, A.; Sommer, W.; Schacht, A.; Wilhelm, O. Perceiving and remembering emotional facial expressions—A basic facet of emotional intelligence. Intelligence; 2015; 50, pp. 52-67. [DOI: https://dx.doi.org/10.1016/j.intell.2015.02.003]
75. Olderbak, S.; Semmler, M.; Doebler, P. Four-branch model of ability emotion intelligence with fluid and crystallized intelligence: A meta-analysis of relations. Emot. Rev.; 2019; 11, pp. 166-183. [DOI: https://dx.doi.org/10.1177/1754073918776776]
76. Schlegel, K.; Palese, T.; Mast, M.S.; Rammsayer, T.H.; Hall, J.A.; Murphy, N.A. A meta-analysis of the relationship between emotion recognition ability and intelligence. Cogn. Emot.; 2020; 2, pp. 329-351. [DOI: https://dx.doi.org/10.1080/02699931.2019.1632801]
77. Ricciardi, L.; Visco-Comandini, F.; Erro, R.; Morgante, F.; Volpe, D.; Kilner, J.; Edwards, M.J.; Bologna, M. Emotional facedness in Parkinson’s disease. J. Neural Transm.; 2018; 125, pp. 1819-1827. [DOI: https://dx.doi.org/10.1007/s00702-018-1945-6]
78. Rosenberg, H.; McDonald, S.; Rosenberg, J.; Westbrook, R.F. Measuring emotion perception following traumatic brain injury: The Complex Audio Visual Emotion Assessment Task (CAVEAT). Neuropsychol. Rehabil.; 2019; 29, pp. 232-250. [DOI: https://dx.doi.org/10.1080/09602011.2016.1273118]
79. House, J.W.; Brackmann, D.E. Facial nerve grading system. Otolaryngol.-Head Neck Surg.; 1985; 93, pp. 146-147. [DOI: https://dx.doi.org/10.1177/019459988509300202]
80. Online Psychology Research. Montreal Affective Voices. Available online: https://experiments.psy.gla.ac.uk//index.php (accessed on 11 June 2018).
81. Paquette, S.; Peretz, I.; Belin, P. The “Musical Emotional Bursts”: A validated set of musical affect bursts to investigate auditory affective processing. Front. Psychol.; 2013; 4, 509. [DOI: https://dx.doi.org/10.3389/fpsyg.2013.00509]
82. Miller, M.Q.; Hadlock, T.A.; Fortier, E.; Guarin, D.L. The Auto-eFACE: Machine Learning-Enhanced Program Yields Automated Facial Palsy Assessment Tool. Plast. Reconstr. Surg.; 2021; 147, pp. 467-474. [DOI: https://dx.doi.org/10.1097/PRS.0000000000007572] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33235050]
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
The Facial Feedback Hypothesis (FFH) states that facial emotion recognition is based on the imitation of facial emotional expressions and the processing of the resulting physiological feedback. In the light of limited and contradictory evidence, this hypothesis is still being debated. Therefore, in the present study, emotion recognition was tested in patients with central facial paresis after stroke. Performance in facial vs. auditory emotion recognition was assessed in patients with vs. without facial paresis. The accuracy of objective facial emotion recognition was significantly lower in patients with facial paresis than in patients without facial paresis and also in comparison to healthy controls. Moreover, for patients with facial paresis, the accuracy measure for facial emotion recognition was significantly worse than that for auditory emotion recognition. Finally, in patients with facial paresis, the subjective judgements of their own facial emotion recognition abilities differed strongly from their objective performances. This pattern of results demonstrates a specific deficit in facial emotion recognition in central facial paresis, thereby supporting the FFH and highlighting specific consequences of stroke.
Details

1 Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany
2 Department of Physical Therapy and Rehabilitation Science, Osnabrück University of Applied Sciences, Albrechtstr. 30, 49076 Osnabrück, Germany
3 Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany