Content area
ABSTRACT
Background: Sense of competence in dementia care staff (SCIDS) may be associated with more positive attitudes to dementia among care staff and better outcomes for those being cared for. There is a need for a reliable and valid measure of sense of competence specific to dementia care staff. This study describes the development and evaluation of a measure to assess "sense of competence" in dementia care staff and reports on its psychometric properties.
Methods: The systematic measure development process involved care staff and experts. For item selection and assessment of psychometric properties, a pilot study (N = 37) and a large-scale study (N = 211) with a test-retest reliability (N = 58) sub-study were undertaken.
Results: The final measure consists of 17 items across four subscales with acceptable to good internal consistency and moderate to substantial test-retest reliability. As predicted, the measure was positively associated with work experience, job satisfaction, and person-centered approaches to dementia care, giving a first indication for its validity.
Conclusions: The SCIDS scale provides a useful and user-friendly means of measuring sense of competence in care staff. It has been developed using a robust process and has adequate psychometric properties. Further exploration of the construct and the scale's validity is warranted. It may be useful to assess the impact of training and perceived abilities and skills in dementia care.
Introduction
There is a global trend toward an increase in people living with dementia. For example, recent predictions for the UK estimate that the number of people living with dementia will double from 700,000 to 1,400,000 by 2038 (Department of Health, 2009). Dementia is the most common reason for older people to enter residential care (Clare et al., 2003).
Paid caregivers often play a pivotal role in meeting the needs of the person with dementia. Kitwood's person-centered approach highlighted the responsibility paid caregivers have for the well-being of their clients in that their interactions can either improve a person's quality of life or, through a form of "malignant social psychology", exacerbate the effects of dementia and increase disability (Kitwood, 1997; Clare et al., 2003).
Despite this, research has continued to focus mainly on family caregivers (e.g. Brodaty et al., 2003) and the lack of research exploring the impact of staff factors on the person with dementia has been emphasized repeatedly (e.g. Middleton et al., 1999; Zimmerman et al., 2005; Moniz-Cook et al., 2008). The scarcity of research has partly been attributed to a shortage of appropriate measures and the absence of a consensus on published measures (Brodaty et al., 2003). For example, Moniz-Cook et al. (2008), working on a European consensus of outcome measures in dementia research, identified only five staff measures in the "care staff morale" domain. They decided that only two of these were fit to be included in their framework. However, both measures were generic and not specific to dementia care staff and their work. They concluded that there was a particular dearth of instruments measuring positive capacity in staff (Moniz-Cook et al., 2008). A literature review of measures (Schepers, 2010) identified that there is a particular lack of measures that (1) are well developed, providing information on psychometric properties; (2) are appropriate for frontline care staff including unqualified staff and staff from minority ethnic backgrounds; and (3) measure positive capacity in staff and the person with dementia.
One important factor exemplifying positive capacity is a caregiver's sense of competence, i.e. how well they feel that they can do what is needed. This has been linked to positive outcomes in the caregiver, such as less burden, greater job satisfaction, and positive attitude. Such outcomes can also be linked to positive outcomes for the person living with dementia, e.g. less institutionalization. Thus, sense of competence in family caregivers has been defined as a personal strength in the specific caregiving situation and has been associated with reduced burden and increased physical and mental health in the family caregiver (Vernooij-Dassen et al., 1997). Also, perceived competence in providing dementia care in staff has been associated with greater job satisfaction and positive attitudes to dementia (Zimmerman et al., 2005). A further link has been made between sense of competence and burden. While a higher sense of competence is expected to be linked to less burden, a higher level of burden in community-based care staff and family caregivers can lead to more institutionalization of care recipients (McCallion et al., 2005).
Furthermore, it seems that sense of competence increases with training and level of knowledge. Vernooij-Dassen et al. (2000) demonstrated that sense of competence in family caregivers is open to change and can be improved using support and training programs. Hughes et al. (2008) report that confidence in dealing with challenging behavior in staff is positively related to the amount of training and level of knowledge.
An instrument measuring sense of caregiving competence in professional caregivers would be useful both to evaluate intervention studies and to enable future researchers to explore the impact of staff factors on the care recipient. The aim of this study was to develop a "sense of competence" questionnaire that has good psychometric properties and is easy for untrained frontline dementia care staff to complete. Based on the reported literature, it was hypothesized that sense of competence would be positively associated with staff job satisfaction, attitude toward dementia, work experience, and level of dementia knowledge.
Method
Measure development
In order to maximize content and face validity, the measure development process went through multiple iterative stages as suggested by Streiner and Norman (2008), including a literature review, focus groups, expert feedback, pilot study, expert judgments, item analysis, and a process to simplify the language used in the questionnaire. Reliability was tested with regard to the internal consistency and consistency over time. Construct validity was explored by analyzing the structure of the measure (factorial validity) and its association with other constructs and variables based on the literature (convergent validity). The study was approved by University College London Research Ethics Committee.
A literature review was undertaken regarding the concept of sense of competence or similar constructs within dementia care staff, identifying and analyzing existing instruments. A keyword search on "dementia", "caregiver", "competence", "self-efficacy", and "confidence", together with cross-referencing, identified eight papers using sense of competence/self-efficacy measures in the dementia care setting (Peterson et al., 2002; Mackenzie and Peragine, 2003; Finnema et al., 2005; Teri et al., 2005; Zimmerman et al., 2005; Davison et al., 2007; Hasson and Arnetz, 2008; Hughes et al., 2008). Based on a review of these instruments and considering Bandura's guide for measuring self-efficacy (Bandura, 2006), a framework for the development of a new measure was established.
The term and construct of sense of competence was chosen over self-efficacy to set the measure apart from existing measures that focus on stress management skills rather than competencies (Schepers, 2010). Furthermore, questionnaire items needed to be aimed at specific dementia care roles and responsibilities. In order to extend applicability across dementia care settings (residential and community), setting-specific scenarios or vignettes were avoided. The measure was to be based on a clear definition of sense of competence that was easy to grasp for the target population. This definition was then to be extended with the help of care staff and experts in the field to cover all relevant aspects of current person-centered dementia care. The formulation and presentation of the measure needed to be neutral toward the challenges of dementia care and non-stigmatizing of the work field and the person with dementia. Finally, the language of the measure had to reflect this and take the mixed educational and ethnic backgrounds of untrained frontline care staff in the UK into account.
Item generation and reduction
Two multi-ethnic groups of 7 and 13 frontline dementia care staff were recruited from a dementia care staff training organization, at the end of a six-month dementia care training course. Group 1 was presented with a definition based on dictionary entries for the terms "sense of" and "competence": "a feeling that one has the ability to do what is needed". In a brainstorming session, the group then identified the different roles and responsibilities essential to providing person-centered care. The group produced 41 different roles and responsibilities that they felt were relevant for their day-to-day work. These were presented to group 2, who were asked to add any missing roles and responsibilities and finally to sort them into domains. This increased the number of items to 52. Both groups were also consulted on design and wording of the measure. In the next step, a document setting out the results of the focus groups was sent electronically to 11 experts in the field (senior researchers, clinical psychologists, dementia trainers, and senior nurses) for feedback on the domains, roles and responsibilities, and language and to judge the relevance of these for person-centered care in dementia. The item pool was further refined and the number of items was reduced to 49 roles and responsibilities based on expert replies (N = 6). From these, a pilot questionnaire was designed using a four-point scale, asking participants to indicate how competent they feel in each of 49 roles and responsibilities: not at all competent, not really competent, somewhat competent and very competent. Two questions were added at the end of the survey: "How easy was it to answer the questions?" (1 = not at all, 5 = very easy); and "How well did you feel the questionnaire enabled you to show a true and encompassing image of how competent you feel in your work?" (1 = not at all, 5 = very much so). 
Data were collected at a dementia care residential home, a dementia care training organization, and a dementia conference in London. Thirty-one direct care staff (including nurses, care assistants, care workers, and support workers) and six other health care professionals (including mental capacity advisors, an assistant care home manager, a day center manager, and clinical psychologists) completed the pilot questionnaire. In addition, six experts at the dementia conference (including senior researchers, healthcare professionals, and home managers) were asked to mark the 20 most relevant items on the questionnaire and comment on the measure.
Final item selection and refinements
An item analysis based on results of the pilot study led to further pruning of the item pool. Items with an average inter-item correlation below r = 0.30, a standard deviation below 0.4, and fewer than three expert votes were excluded from the questionnaire, reducing the item number to 18. One more item was removed at a later stage in the main study due to little shared variance identified in the Principal Component Analysis (described below). Thus, the finalized Sense of Competence in Dementia Care Staff (SCIDS) scale consists of 17 items (Figure 1).
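The exclusion rule described above can be sketched as a simple filter. The code below is an illustrative reconstruction, not the authors' analysis script: the function and variable names are hypothetical, and the assumption that an item must satisfy all three criteria to be retained is ours.

```python
import pandas as pd

def select_items(responses: pd.DataFrame, expert_votes: dict,
                 min_r: float = 0.30, min_sd: float = 0.4,
                 min_votes: int = 3) -> list:
    """Apply the pilot-study retention criteria to a set of items.

    responses: rows = participants, columns = items (scored 1-4).
    expert_votes: item name -> number of experts marking it relevant.
    An item is kept only if it passes all three thresholds (assumed).
    """
    corr = responses.corr()
    keep = []
    for item in responses.columns:
        # Average correlation with the other items (self-correlation excluded).
        avg_r = corr[item].drop(item).mean()
        sd = responses[item].std()
        votes = expert_votes.get(item, 0)
        if avg_r >= min_r and sd >= min_sd and votes >= min_votes:
            keep.append(item)
    return keep
```

A constant item (zero variance) or one that barely correlates with the rest of the pool is filtered out regardless of expert votes.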
Figure 1.
Finalized Sense of Competence in Dementia Care Staff (SCIDS) measure.
Participants of the pilot study were also invited to comment on the language of the measure. In a final step, this feedback was analyzed and implemented together with a process of simplifying the language following recommendations by the Plain English Campaign (2001). Thus, the term "competent" was replaced with the easier formulation "how well do you feel you can. . ." - (1) Not at all, (2) A little bit, (3) Quite a lot, (4) Very much. Several items were reworded and in some cases examples were added to increase clarity. For example, item 16 was changed from "How competent do you feel in providing opportunities for choice in your everyday interactions with a person with dementia?" to "How well do you feel you can offer choice to a person with dementia in everyday care (such as what to wear, or what to do)?"
Reliability and validity study
The new measure was administered to a sample of 316 staff in dementia care. Recruitment sources were a training organization specializing in training dementia care staff, four Cognitive Stimulation Therapy training events, and seven residential dementia care homes in the private and public sectors in Greater London. For the training organization and events, a researcher attended to hand out questionnaires at the start and collect completed forms at the end. For care homes, the researcher visited to present and explain the study to managers and available care staff. A box with questionnaires and a sealed box for completed forms were left. After a one-week interval, test-retest data were collected from a subsample of 80 participants.
SCIDS was administered with three other questionnaires and a set of demographic questions. First, the Approaches to Dementia Questionnaire (ADQ; Lintern, 2009) was chosen to measure person-centered attitudes in staff. It is a 19-item questionnaire aimed at dementia care staff that uses a 5-point Likert scale, (1) strongly agree to (5) strongly disagree, with two subscales (Personhood and Hope). Higher scores indicate a higher level of person-centered and hopeful attitudes toward dementia. For the current study, internal consistency was indicated by Cronbach's α = 0.78 overall, α = 0.73 for the Hope and α = 0.74 for the Personhood subscales. Second, the Job Satisfaction Index (JSI; Firth-Cozens and Hardy, 1992) was included to assess staff job satisfaction. It is an 18-item questionnaire with a 7-point Likert scale, (1) extremely dissatisfying to (7) extremely satisfying, measuring staff satisfaction with aspects such as pay, relationships with colleagues and supervisors, physical conditions, recognition from supervisors, and career opportunities. Higher scores indicate higher job satisfaction. For the current study, overall internal consistency was Cronbach's α = 0.93 and test-retest reliability was ICC = 0.64. Third, in order to capture participants' knowledge of dementia, the Dementia Knowledge-20 (DK-20; Shanahan, 2010) was administered. It is a 20-item criterion-referenced multiple-choice test measuring specific dementia care knowledge across two domains (Dementia Core Knowledge and Dementia Care Knowledge). Correct answers are scored "1" and incorrect "0"; higher scores indicate more knowledge. In the current study the DK-20 had an overall internal consistency of Cronbach's α = 0.63 and test-retest reliability of ICC = 0.73. Finally, demographic data included gender, age, ethnic background, job role, and years and months of experience.
Statistical analysis was undertaken using SPSS version 17.1. Due to data characteristics, an exploratory non-orthogonal Principal Component Analysis method (oblimin rotation) was chosen to explore the structure of the scale and identify subscales. Cronbach's α was calculated as an indicator of internal consistency. Intra-class correlation (ICC, two-way mixed model) was used for test-retest reliability, as suggested by Streiner and Norman (2008). Correlations (Spearman's ρ, Pearson's r) were undertaken to explore construct validity.
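For reference, Cronbach's α can be computed directly from a participants-by-items score matrix. This is a generic sketch of the standard formula, not a reproduction of the SPSS output reported below; the function name is ours.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a participants x items score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                              # number of items
    item_vars = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)
```

Perfectly redundant items yield α = 1, while items that do not covary at all yield α = 0, which is why α is read as an index of internal consistency.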
Results
Two hundred and eleven questionnaires were completed and returned, giving a 67% response rate for the main sample. For the test-retest reliability study, 58 completed retest questionnaires were returned, a response rate of 73%. The main sample and subsample consisted mainly of women (83%, 86%) with a mean age of 44 years (SD 11 years). Ethnicity of the main sample (subsample) was 31% (24%) White-British, 35% (43%) Black-British/African/Caribbean, 17% (21%) Asian, 12% (11%) European, and 5% (2%) other. Participants worked in dementia care homes and dementia community care settings. The job roles of participants were 72% (78%) direct care staff, 32% (2%) allied health and social care professionals, 10% (12%) management staff, and 3% (9%) non-clinical staff.
Content validity
The thorough and systematic measure development process reduced the item pool from 52 roles and responsibilities across seven domains to 17 final items across four subscales, optimizing item properties. Good item properties were also reflected in the pilot study results. Participants felt that the questions were relatively easy to answer (1 = not at all, 5 = very easy), mean = 4.3 (SD = 0.89), and enabled them to give a true account of how competent they felt in their work (1 = not at all, 5 = very much so), mean = 4.2 (SD = 0.05), indicating good face validity and utility of the measure.
Psychometric item properties
A missing value analysis showed no patterns of missing values. The distribution of participant answers for all items was skewed toward the upper end of the scale (Table 1). Thus, answer options at the lower end of the scale (not at all and a little bit) were used less frequently than answer options at the upper end of the scale (quite a lot and very much). Sixteen out of the final 17 items had an ICC of 0.40 and above (Table 1), indicating fair to good test-retest reliability of individual items.
Table 1.
Item characteristics of the final 17 items: distribution of participant answers and test-retest reliability
* p < 0.05. ** p < 0.001.
ICC = intra-class correlation; SEM = standard error of measurement.
Construct validity
As the questionnaire was aimed at untrained frontline care staff, other participants (management staff, allied professionals, and non-clinical staff) were excluded from this analysis, leaving a sample size of N = 148. The Kaiser-Meyer-Olkin measure verified the sampling adequacy for the analysis (KMO = 0.87). Bartlett's test of sphericity indicated that the correlations between items were sufficiently large (χ²(153) = 1241.112, p < 0.001). At this point one more item was removed from the item pool - "ask a member of staff for help" - as it showed low communalities and thus little shared variance with the rest of the scale. Kaiser's criterion indicated four components explaining 64.4% of the variance. The scree plot supported a two- or a four-component solution. As all four components contributed to the variance (38%, 12%, 7%, and 7%) and were nontrivial (each containing four or five items; Brown, 2009), as well as demonstrating good heuristic explanatory value, this solution was preferred over a two-component solution (Table 2).
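Kaiser's criterion (retain components whose eigenvalue exceeds 1) can be illustrated with a minimal eigen-decomposition of the item correlation matrix. This is a toy sketch on assumed data, not a reproduction of the reported analysis, which additionally used oblimin rotation and a scree plot.

```python
import numpy as np

def kaiser_components(data: np.ndarray):
    """Eigen-decompose the item correlation matrix and apply Kaiser's
    criterion. Rows = participants, columns = items. Returns the number
    of retained components and the proportion of variance they explain."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)[::-1]          # descending order
    retained = int((eigvals > 1).sum())               # Kaiser's criterion
    explained = eigvals[:retained].sum() / eigvals.sum()
    return retained, explained
```

With two perfectly redundant pairs of items, for example, the decomposition recovers exactly two components that together account for all the variance.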
Table 2.
Factor loadings (>0.4) following Principal Components Analysis with oblimin rotation and a priori questionnaire domains (N = 148)
The bold typeface highlights the primary component loadings that were used to compose the components, whereas secondary but substantial loadings are included in normal type face.
The four components were interpreted as: (1) Professionalism, including roles and responsibilities such as keeping a positive attitude/motivation and being an active team member. It also included the responsibility of dealing with personal care, where a professional attitude is crucial for dealing with the intimacy and, at times, difficult feelings accompanying these types of responsibilities (van Dongen, 2001). (2) Building Relationships, identical to the same domain defined by staff and experts in the measure development phase. (3) Care Challenges, which combined items from the "Dealing with Challenges" domain defined in the measure development process with two items from the initial "Applying Helping Skills" domain. These items represented roles and responsibilities that are potentially challenging for untrained care staff, as they require specific training to be accomplished appropriately. In many services, allied health professionals, such as clinical psychologists, occupational therapists, or specialist nurses, provide these activities, e.g. Cognitive Stimulation Therapy groups. However, considering the person-centered approach to care, there is a strong argument for frontline staff to be competent in these skills. Finally, (4) Sustaining Personhood reflected the initial domain "Providing for Individual Needs". In addition, it included one item from the "Applying Helping Skills" domain. These roles and responsibilities aim at sustaining the personhood of the person with dementia by respecting their individuality, e.g. meeting individual needs, offering choice, and protecting the individual's dignity.
Scale and subscale properties
The new SCIDS scale had a range of 51 points (17-68 points), and participant scores ranged from 32 to 68, covering two-thirds of the scale. The Kolmogorov-Smirnov test indicated a normal distribution (M = 55.63, SD = 7.48) for the overall scale. The subscales had ranges from 12 (Building Relationships, Care Challenges, and Sustaining Personhood) to 15 (Professionalism) points and were not normally distributed due to significant skewness toward the upper end of the scale.
Internal consistency
Cronbach's α indicated good internal consistency for the full scale (α = 0.91) and the subscales "Professionalism" (α = 0.82) and "Building Relationships" (α = 0.83). The subscales "Care Challenges" (α = 0.78) and "Sustaining Personhood" (α = 0.70) had acceptable internal consistency.
Test-retest reliability
The intra-class correlations indicated substantial test-retest reliability for the full scale (ICC(50) = 0.74, p < 0.001, SEM = 6.44) and the subscales "Professionalism" (ICC(57) = 0.70, p < 0.001, SEM = 2.26), "Care Challenges" (ICC(54) = 0.63, p < 0.001, SEM = 2.10), and "Sustaining Personhood" (ICC(55) = 0.78, p < 0.001, SEM = 1.55), and moderate reliability for the subscale "Building Relationships" (ICC(57) = 0.58, p = 0.001, SEM = 2.14).
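The two-way mixed-model ICC used here can be sketched from first principles. The implementation below is a generic illustration of the consistency form, often labeled ICC(3,1); which exact variant the authors computed in SPSS is an assumption on our part.

```python
import numpy as np

def icc_3_1(scores: np.ndarray) -> float:
    """ICC(3,1): two-way mixed model, single measures, consistency.

    Rows = subjects, columns = repeated measurements (e.g. test, retest).
    Computed from the two-way ANOVA mean squares:
    ICC = (MS_subjects - MS_error) / (MS_subjects + (k-1) * MS_error).
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)                      # per-subject means
    col_means = scores.mean(axis=0)                      # per-session means
    ss_rows = k * ((row_means - grand) ** 2).sum()       # between subjects
    ss_cols = n * ((col_means - grand) ** 2).sum()       # between sessions
    ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
```

Because the session effect is partialled out, a uniform shift between test and retest (e.g. every score one point higher at retest) still yields perfect consistency.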
Convergent validity
Participants' sense of competence in dementia care correlated positively with their amount of experience in working with people with dementia (ρ(195) = 0.24, p = 0.001).
No support was found for a positive association of sense of competence with dementia knowledge, as correlations of the DK-20 with the SCIDS (r(189) = -0.09, p = 0.23) and its subscales (Building Relationships: r(196) = 0.01, p = 0.93; Care Challenges: ρ(196) = -0.05, p = 0.52; Sustaining Personhood: ρ(195) = -0.06, p = 0.42; Professionalism: ρ(196) = -0.19, p = 0.009) were non-significant or opposed to the hypothesis.
Some support was found for a positive association of sense of competence with job satisfaction, as the Job Satisfaction Index scores correlated positively with the SCIDS scores (r(174) = 0.16, p = 0.018).
Finally, there was support for a positive association of sense of competence with person-centered approaches to dementia care, as the SCIDS correlated positively with the ADQ "Personhood" subscale (r(174) = 0.13, p = 0.04). This correlation increased when looking at the SCIDS "Sustaining Personhood" subscale in isolation (ρ(180) = 0.18, p = 0.006). However, correlations with the ADQ "Hope" subscale were not significant.
Discussion
This study described the development and evaluation of the 17-item Sense of Competence in Dementia Care Staff (SCIDS) questionnaire, comprising four subscales (Professionalism, Building Relationships, Care Challenges, and Sustaining Personhood). Internal consistency and test-retest reliability for the full scale and its subscales were acceptable to good for the current sample. Furthermore, some evidence was presented for the scale's predictive and convergent validity.
One major challenge for the study was that previous research into sense of competence or similar constructs in dementia care staff was scarce. Previous research literature lacked a shared definition of the construct. For example, the Inventory of Geriatric Nursing Self-Efficacy (IGNS, Mackenzie and Peragine, 2003) and the Five Vignettes (Hughes et al., 2008) were specifically developed for care staff, but seemed to measure nurses' confidence in dealing with stressful care situations, rather than exploring competence across central care responsibilities. Another approach (Zimmerman et al., 2005) was to ask staff how well they felt trained in certain aspects of dementia care (e.g. behavioral symptoms). Their very broad questions might have reflected satisfaction with training, but not necessarily task specific sense of competence. Therefore, a clear definition of the construct had to be developed which does not necessarily match earlier approaches, making validity testing of the SCIDS challenging.
The current study revealed some evidence for content validity, as principal components analysis identified four components reflecting the domains developed by staff and experts as well as current developments in dementia care, such as person-centered care and cognitive stimulation. Interestingly, the component explaining most of the variance was Professionalism, a dimension specific to staff caregivers. This highlights the importance of care staff specific tools, and questions the adaptation of existing family caregiver questionnaires for use with staff. For example, Teri et al. (2005) used the Short Sense of Competence Questionnaire (Vernooij-Dassen et al., 1999) aimed at informal caregivers, with formal care staff. The measure was adjusted by dropping items deemed inappropriate for staff. However, sense of competence is likely to be experienced differently between formal and informal caregivers, as their relationship with the cared for person is different (Middleton et al., 1999). This measurement approach bears the risk that the measure will not cover all aspects of sense of competence in formal caregivers, and content validity might be jeopardized.
The next important component, Building Relationships, echoes the significance of person-centered care (Kitwood, 1997), which has become a basic prerequisite for good care over the last decade. Similarly, the component Sustaining Personhood reflects the growing importance of approaches supporting a continuous feeling of personhood and identity in the person with dementia, e.g. reminiscence groups. Finally, the component Care Challenges comprises roles and responsibilities that staff may have to deal with but may not have the appropriate training for. These roles and responsibilities have been at the center of recent developments in dementia care, generating alternative and complementary approaches to medication in addressing challenging behavior and cognitive decline. For example, Cognitive Stimulation Therapy (Spector et al., 2003) has been shown to improve different areas of cognitive functioning in a person with dementia.
Results regarding the construct validity of the SCIDS largely strengthened the "nomological network" of the new construct. However, while most associations for the SCIDS were statistically significant, they were only of mild to moderate strength. Also, the negative association between the SCIDS subscale "Professionalism" and the dementia knowledge measure contradicted expectations. Furthermore, no association was found between the ADQ Hope subscale and the SCIDS. As Streiner and Norman (2008) point out, interpreting ambiguous results for construct validity is complex, as there is often no clear indicator of whether the results are due to weaknesses in the measure, the study, the theoretical assumptions, or all three. Clearly, the SCIDS needs further investigation to make sense of these results.
Strengths and limitations
A strength of the SCIDS is its flexibility across various settings, including residential care and hospitals. In contrast to measures such as the IGNS (Mackenzie and Peragine, 2003) or the Five Vignettes (Hughes et al., 2008), SCIDS items are specific to certain roles and responsibilities rather than situation specific. This is particularly important, as the new financial and epidemiological challenges of dementia within the UK population (Department of Health, 2009) will require new approaches to meeting the need for care provision in a variety of settings. Furthermore, the SCIDS is also easy to use, comprising 17 short items, and participants agreed that the questions were easy to answer. This was reflected in low numbers of missing values in the large-scale study.
The current study has several limitations. First, the response rate varied across recruitment sites. Despite this, an appropriate overall response rate of 67% was achieved. However, as no non-responder data were collected, it remains unknown whether there were systematic differences. Second, participants did not use the full item scales of the SCIDS, but predominantly used the top two answer options, indicating a ceiling effect for the tool and its subscales and limiting variance within the data. Third, social desirability is likely to have biased the results of this study. Participants might have engaged in impression management to maintain a positive self-image (Goffman, 1959) when completing the questionnaire. They might also have tried to avoid cognitive dissonance (Festinger, 1957) between the fact that they work as care workers and a possible lack of competence in their work. This effect might have been increased because data were collected at work or training places, the environment in which work-related self-esteem is most salient. As a result, participants might have under-reported feelings of a lack of competence. Finally, a confirmatory factor analysis to validate the four-factor structure is yet to be undertaken.
Implications for future practice and research
Training dementia care staff in order to improve quality of care and life of people with dementia has become a priority (Department of Health, 2009). However, doubts have been raised about the translation of training and better dementia knowledge into the work place, particularly when reinforcing and enabling factors are not present for staff (Kuske et al., 2007). With an increasing financial burden on public and private resources it is paramount to evaluate carefully what works best in improving dementia care. The SCIDS can help deconstruct and analyze the effects of training on staff and how these relate to care behavior and, in turn, to the quality of life of the person with dementia. The measure might also have potential for monitoring trainees' progress through a training program and for evaluating whether training programs affect the areas that are most needed in dementia care. For example, does a training program really help staff feel more competent in sustaining personhood, or does the trainees' increased knowledge about sustaining personhood lead to a more critical self-appraisal and thus reduce sense of competence? And what else needs to be offered to close the gap between knowledge and sense of competence? Thus, the SCIDS is a promising research tool for exploring different aspects of sense of competence in dementia care staff and in training processes.
Further research is needed to explore the construct of sense of competence in dementia care staff and to accumulate information about construct validity and responsiveness to change. Future research might seek to explore environmental, personality-related or situation specific factors that could influence sense of competence in dementia care staff in addition to experience and training.
Conclusion
Overall, the SCIDS is promising and an attractive alternative to existing measures, as it covers an encompassing construct of sense of competence in dementia care staff with acceptable psychometric properties. The measure can be applied in its current format to accumulate further information regarding its reliability and validity. It is hoped that the SCIDS will open new opportunities to explore a field that has not been given much research attention in the past, but could have significant impact on the quality of care and life of people with dementia.
Conflict of interest
None.
Description of authors' roles
Astrid K. Schepers was the main researcher responsible for all parts of the study, from design to writing of the paper. Niamh Shanahan was involved in study design, data collection, and write-up. Aimee Spector and Martin Orrell provided expert advice and practical support in study design and planning, data collection, data analysis, and write-up.
Acknowledgments
We thank all participants for their help with this study. In particular, we thank all the care homes, care providers and their employees who invested time and effort in completing the questionnaires and assisting through all stages of the study.
References
1. A. Bandura (2006). Guide for constructing self-efficacy scales. In F. Pajares and T. Urdan (eds.), Self-Efficacy Beliefs of Adolescents (pp. 307-337). Greenwich, CT: Information Age Publishing.
2. H. Brodaty, A. Green and A. Koschera (2003). Meta-analysis of psychosocial interventions for caregivers of people with dementia. Journal of the American Geriatrics Society, 51, 657-664.
3. J. D. Brown (2009). Statistics corner: questions and answers about language testing statistics: choosing the right number of components or factors in PCA and EFA. Shiken: JALT Testing and Evaluation SIG Newsletter, 13, 19-23.
4. L. Clare, A. Baddeley, E. Moniz-Cook and B. Woods (2003). A quiet revolution. The Psychologist, 16, 250-254.
5. T. E. Davison, M. P. McCabe, S. Visser, C. Hudgson, G. Buchanan and K. George (2007). Controlled trial of dementia training with a peer support group for aged care staff. International Journal of Geriatric Psychiatry, 22, 868-873.
6. Department of Health (2009). Living Well with Dementia: A National Dementia Strategy. London: Department of Health.
7. L. Festinger (1957). A Theory of Cognitive Dissonance. Stanford, CA: Stanford University Press.
8. E. Finnema (2005). The effect of integrated emotion-oriented care versus usual care on elderly persons with dementia in the nursing home and on nursing assistants: a randomized clinical trial. International Journal of Geriatric Psychiatry, 20, 330-343.
9. J. Firth-Cozens and G. E. Hardy (1992). Occupational stress, clinical treatment and changes in job perceptions. Journal of Occupational and Organisational Psychology, 65, 81-88.
10. E. Goffman (1959). The Presentation of Self in Everyday Life. New York: Doubleday Anchor Books.
11. H. Hasson and J. E. Arnetz (2008). Nursing staff competence, work strain, stress and satisfaction in elderly care: a comparison of home-based care and nursing homes. Journal of Clinical Nursing, 17, 468-481.
12. J. Hughes, H. Bagley, S. Reilly, A. Burns and D. Challis (2008). Care staff working with people with dementia: training, knowledge and confidence. Dementia, 7, 227-238.
13. T. Kitwood (1997). Dementia Reconsidered: The Person Comes First. Buckingham: Open University Press.
14. B. Kuske, S. Hanns, T. Luck, M. C. Angermeyer, J. Behrens and S. G. Riedel-Heller (2007). Nursing home staff training in dementia care: a systematic review of evaluated programs. International Psychogeriatrics, 19, 818-841.
15. T. Lintern (2009). Improving Quality in Dementia Care: Relationships Between Care Staff Attitudes, Behaviour and Resident Quality of Life. Saarbruecken: VDM Verlag Dr Mueller Aktiengesellschaft & Co. KG.
16. C. S. Mackenzie and G. Peragine (2003). Measuring and enhancing self-efficacy among professional caregivers of individuals with dementia. American Journal of Alzheimer's Disease & Other Dementias, 18, 291-299.
17. P. McCallion, M. McCarron and L. T. Force (2005). A measure of subjective burden for dementia care: the caregiving difficulty scale - intellectual disability. Journal of Intellectual Disability Research, 49, 365-371.
18. J. I. Middleton, N. J. Stewart and J. S. Richardson (1999). Caregiver distress related to disruptive behaviors on special care units versus traditional long-term care units. Journal of Gerontological Nursing, 25, 11-19.
19. E. Moniz-Cook (2008). A European consensus on outcome measures for psychosocial intervention research in dementia care. Aging and Mental Health, 12, 14-29.
20. D. Peterson, M. Berg-Weger, J. McGillick and L. Schwartz (2002). Basic care I: the effect of dementia-specific training on certified nursing assistants and other staff. American Journal of Alzheimer's Disease & Other Dementias, 17, 154-164.
21. Plain English Campaign (2001). How to Write in Plain English. Available at: http://www.plainenglish.co.uk/files/howto.pdf.
22. A. Schepers (2010). The Sense of Competence in Dementia Care Questionnaire for Staff (SOCID-S): Development, reliability, and validity. D. Clin. Psy. thesis (Volume 1), University College London.
23. N. Shanahan (2010). The development of a knowledge of dementia measure for residential care staff: from conceptual development to empirical evaluation. Unpublished PhD thesis, University College London.
24. A. Spector (2003). Efficacy of an evidence-based cognitive stimulation therapy programme for people with dementia. British Journal of Psychiatry, 183, 248-254.
25. D. L. Streiner and G. R. Norman (2008). Health Measurement Scales: A Practical Guide to Their Development and Use, 3rd edn. New York: Oxford University Press.
26. L. Teri, P. Huda, L. Gibbons, H. Young and J. van Leynseele (2005). STAR: a dementia-specific training program for staff in assisted living residences. The Gerontologist, 45, 686-693.
27. E. van Dongen (2001). It isn't something to yodel about, but it exists! Faeces, nurses, social relations and status within a mental hospital. Aging and Mental Health, 5, 205-215.
28. M. Vernooij-Dassen, A. Felling and J. Persoon (1997). Predictors of change and continuity in home care for dementia patients. International Journal of Geriatric Psychiatry, 12, 671-677.
29. M. J. F. J. Vernooij-Dassen, A. J. A. Felling, E. Brummelkamp, M. G. H. Dauzenberg, G. A. M. van den Bos and R. Grol (1999). Assessment of caregiver's competence in dealing with the burden of caregiving for a dementia patient: a Short Sense of Competence Questionnaire (SSCQ) suitable for clinical practice. Journal of the American Geriatrics Society, 47, 256-257.
30. M. Vernooij-Dassen, C. Lamers, J. Bor, A. Felling and R. Grol (2000). Prognostic factors of effectiveness of a support program for caregivers of dementia patients. International Journal of Aging and Human Development, 51, 159-274.
31. S. Zimmerman (2005). Attitudes, stress, and satisfaction of staff who care for residents with dementia. The Gerontologist, 45, 96-105.
1 Research Department of Clinical, Educational and Health Psychology, University College London, London, UK
2 Department of Mental Health Sciences, University College London, London, UK
3 R&D, North East London Foundation Trust, Goodmayes Hospital, Goodmayes, Essex, UK
Copyright © International Psychogeriatric Association 2012