1. Introduction
For decades, the literature has examined the relationship between student satisfaction and the learning environment, because students’ opinions are among the elements to be considered when identifying situations that promote or hinder learning and that determine the success or failure of a course of study [1]. The learning environment is considered to be the social and organizational atmosphere in which interactions and communications between members of a learning group take place [2]. Learning environment, educational climate, and educational environment are used as synonyms in the literature [3,4,5,6,7,8]. The educational environment influences student behavior and has a strong effect on students’ results, satisfaction, and success [4]. Therefore, identifying the elements operating in the educational environment of a given path of study, and evaluating how students perceive them, makes it possible to modify those elements and improve the learning experience in relation to teaching objectives [7]. Nursing education consists of theory and practice [8]; the learning environment therefore includes both educational and clinical aspects. The educational environment, in the strict sense, is a space, a physical structure (often identified as a classroom), where students develop knowledge, skills, attitudes, and professional values through lectures and case-study discussions [9]. The clinical environment, on the other hand, is the setting in which nursing students apply knowledge and skills, integrating theory and practice while caring for patients. Learning environments that satisfy students enable them to achieve better and more promising learning outcomes [10]. The elements that contribute to an optimal learning environment are the pedagogical atmosphere, teaching, relationships with educators, clinical tutors, and nursing staff, educational equipment, and the physical environment [11,12,13].
Over the years, various tools have been developed to assess nursing students’ perceptions of their clinical learning experience, and two reviews have examined the clinical environment assessment tools published up until 2016 [14,15]. The first review [14], conducted on the PubMed, CINAHL, and ProQuest databases, identified the tools available up until 2014 for assessing the clinical learning environment. The second review [15], conducted on two databases (PubMed and CINAHL) following the 2010 COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) guideline [16,17], evaluated the measurement properties of the clinical environment assessment tools published up until 2016.
No systematic review has been found in the literature of tools to evaluate the educational sphere, which, alongside the clinical sphere, appears to be a fundamental part of the learning environment.
Therefore, this study aimed (1) to identify the currently available tools for measuring the learning environments, both clinical and educational, of nursing students and (2) to evaluate their measurement properties in order to provide solid evidence for researchers, educators, and clinical tutors to use in the selection of tools.
2. Methods
2.1. Methodology and Search Strategy
We conducted a systematic review to evaluate the psychometric properties of self-reported tools measuring the learning environment, in accordance with the 2018 COSMIN guidelines. The search was conducted on the following databases: PubMed, CINAHL, APA PsycInfo, and ERIC, up until 13 February 2023. The search phases followed the PRISMA statement [18]. The search strategy used the search filters suggested by Terwee and colleagues [19], in addition to the key elements of the construct of interest (construct, population, and type of tool), combining them with the Boolean operators AND and NOT. Appendix A gives an example of the search strategy used on PubMed. EndNote version 8.2 [20] was used to manage the systematic review process. We included development studies of tools evaluating the educational or clinical learning environment, as well as validation studies of tools already developed. The included articles were written in English and published in academic, peer-reviewed journals. Studies whose main objective was not to evaluate the measurement properties of a learning environment tool (e.g., cross-sectional studies that reported only Cronbach’s α) were excluded. We also excluded discussion papers and review protocols because this literature provides only limited information. Furthermore, articles that did not report the tool within the article were excluded because, according to the COSMIN guidelines, the full tool is necessary for its evaluation by reviewers. The review protocol was registered in PROSPERO (CDR42023408271).
2.2. Data Synthesis and Quality Assessment Tool
COSMIN guidelines were adopted during the data synthesis process. These guidelines were initially developed for systematic reviews of Patient-Reported Outcome Measures (PROMs) and have more recently been adapted to outcome measures reported by healthy individuals or caregivers [21]. In accordance with the guideline, two reviewers independently evaluated the content validity of each instrument in three steps. First, the quality of the development study was evaluated with COSMIN Box 1, which examines the relevance of the new tool’s items and the comprehensiveness and comprehensibility of the pilot study or cognitive interview. Second, the quality of the validation studies was evaluated with COSMIN Box 2, divided into five sections (2a to 2e), which examine relevance, comprehensiveness, and comprehensibility. Here, the reviewer group can choose which sections to complete (e.g., if professionals were not consulted in the content validity study, sections 2d and 2e can be skipped). Third, all the evidence from the development and validation studies is summarized, the reviewers rate the tool, and an overall score for relevance, comprehensiveness, comprehensibility, and content validity is determined (from sufficient to indeterminate). Finally, confidence in the trustworthiness of the overall ratings (high, moderate, low, or very low) is determined using the modified Grading of Recommendations Assessment, Development, and Evaluation (GRADE) approach. The quality of the evidence is considered high when one or more studies present very good methodological quality and consistent results; moderate when imprecision or inconsistency is observed; and low or very low when confidence in the results is limited or very limited. According to the COSMIN 2018 guidelines, a level A rating is assigned when there is evidence of sufficient content validity and at least low-quality evidence of sufficient internal consistency. Level B is assigned when the scale cannot be classified as level A or C. Level C is assigned when there is high-quality evidence of an insufficient measurement property.
Subsequently, two reviewers independently evaluated the psychometric properties of the tools in a three-step process. First, the methodological quality of each study was assessed with the COSMIN Risk of Bias checklist. Second, each measurement property was rated against the criteria for good measurement properties. Third, the evidence for each instrument was summarized with a rating of its psychometric properties (from sufficient to indeterminate) and of the quality of the evidence (high, moderate, low, or very low) using the GRADE approach.
In accordance with COSMIN guidelines, at the end of these procedures, recommendations can be made on the use of the instruments: level A, recommended for use; level B, potentially recommendable but requiring further study; and level C, not recommended for use.
To carry out the evaluations of content validity and psychometric properties, the review team used the Excel file downloadable from the COSMIN website.
2.3. Data Extraction
During the evaluation process, two researchers extracted data from the studies, including instrument title; author, year, and country of publication of the study; type of study (development or validation study); definition of the measured concept; sample characteristics; number of items; response system; and psychometric properties investigated.
3. Results
3.1. Results of the Studies Included in the Review
A total of 45 articles (11 development studies and 34 validation studies) covering 14 measurement tools were included in the review (see Figure 1). One of the included articles [22] is both a validation study (for the CLES-T) and a development study (for the CALD). These studies were conducted on different continents: Africa (Morocco: 1 study), Asia (China: 3 studies; Turkey: 3 studies; Hong Kong: 1 study; Iran: 1 study; Japan: 1 study; and Nepal: 1 study), Europe (Italy: 5 studies; Finland: 4 studies; Spain: 3 studies; Norway: 2 studies; Greece: 2 studies; Croatia: 2 studies; Germany: 2 studies; Austria: 1 study; Belgium: 1 study; Sweden: 1 study; Slovenia: 1 study; and Portugal: 1 study), Oceania (Australia: 5 studies and New Zealand: 1 study), and America (USA: 2 studies). The instruments assessed the traditional clinical learning environment (9 instruments: CLE, SECEE, CLES, CLES-T, CALD, CLEQEI, CLEI, CLEI-19, and CLEDI), the traditional and simulated clinical environment (2 instruments: ESECS and CLECS), the clinical placement environment (1 instrument: CEF), and the educational learning environment (2 instruments: EAPAP and DREEM). The descriptions of the studies and of the instruments with their psychometric properties are presented in Table 1.
Note: Reason 1: instruments not included in article; Reason 2: not validation studies (e.g., survey); Reason 3: studies evaluating only one psychometric property (e.g., Cronbach Alpha); (*) Notice that the CALD instrument development study also includes a validation of the CLES-T, so it should not be summarized together with the other validation studies.
3.2. Methodological Quality, Overall Rating, and GRADE Quality of Evidence
In the evaluation of the quality of the evidence, 9 instruments were rated Moderate (CALD, CLECS, CLEI, CLEI-19, CLES, CLES-T, DREEM, ESECS, and SECEE), 3 Low (CEF, CLEDI, and CLEQEI), and 2 Very Low (CLE and EAPAP). These ratings were determined by the quality and quantity of the development and validation studies reviewed. However, as indicated by the COSMIN guideline, studies that scored Low or Very Low were not excluded from further evaluation. In addition, in the determination of relevance, comprehensiveness, and comprehensibility, and consequently of content validity, some biases in study design resulted in low scores (mostly rated as doubtful). The most frequent sources of bias lay in the instrument development procedures (the qualitative methodology used to identify relevant items; the doubtful presence of a trained moderator or interviewer; no interview guidelines included in the article; a doubtful process of recording and transcribing participants’ responses; doubtful independence of the data coding process; doubtful attainment of data saturation) and in the pilot tests (items not assessed with respondents for relevance, comprehensiveness, or comprehensibility; an insufficient number of people enrolled in the pilot test or expert panel). See Table 2.
3.3. Psychometric Properties, Overall Rating, and GRADE Quality of the Evidence
The next stage of evaluation focused on the psychometric properties of the instruments tested in the included articles. Five instruments were rated High quality (CEF, CLEI-19, CLEQEI, EAPAP, and SECEE), 2 Moderate (CLE and CLEDI), 4 Low (CALD, CLECS, CLES, and CLES-T), and 3 Very Low (CLEI, DREEM, and ESECS). These ratings were determined by the procedures used to test the psychometric properties and were affected by some biases; for example, low scores were given for structural validity when the sample size of the analysis was inadequate. Based on the psychometric properties investigated in the studies and reported in Table 1, we assessed whether they met the criteria for good measurement properties reported in the COSMIN guidelines. Finally, based on the quality of the studies and the psychometric properties of the instruments, we assigned recommendations according to the modified GRADE method indicated by the COSMIN guidelines.
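Many of these ratings turn on the internal-consistency criterion applied throughout this review (Cronbach’s α of at least 0.70 per scale or subscale). As a purely illustrative sketch of the statistic itself, the coefficient can be computed from a respondents-by-items score matrix as follows; the function name and the Likert-scale data below are invented for the example and are not drawn from any included study.

```python
# Illustrative only: Cronbach's alpha, the internal-consistency statistic
# against which the COSMIN criterion (alpha >= 0.70) is applied.
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)

def cronbach_alpha(scores):
    """scores: list of respondents, each a list of item ratings."""
    k = len(scores[0])  # number of items in the (sub)scale

    def var(xs):  # sample variance (denominator n - 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Five respondents rating a hypothetical 4-item subscale on a 1-5 Likert scale
ratings = [
    [4, 5, 4, 4],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
]
alpha = cronbach_alpha(ratings)
print(f"alpha = {alpha:.2f}")  # -> alpha = 0.93, above the 0.70 criterion
```

Values below 0.70 for any factor, as reported for some instruments above, count as insufficient internal consistency under the COSMIN criteria for good measurement properties.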
3.4. Learning Environment Instruments
All the instruments included in the review were developed and validated to measure the nature of the learning environment, whether clinical or educational. We present here a brief narrative overview of the instruments. For a complete overview of the instruments and the procedures adopted in their development and validation, see Table 1.
The first tool developed to assess the clinical learning environment was the Clinical Learning Environment (CLE) tool. It was based on the work of Orton (1981) [66], who surveyed the learning environment in hospital wards and generated a scale of 124 items. Dunn and Burnett, with a panel of 12 experienced clinical educators, retained only 55 items as valid and then, through factor analysis, arrived at an instrument of 23 items and 5 subscales: staff-student relationships, nurse-manager commitment, patient relationships, interpersonal relationships, and student satisfaction. Only one instrument development study meeting the inclusion criteria was identified by the review, and it was rated “inadequate” for methodological quality because of the doubtful description of the expert panel’s assessment procedures and the absence of a pilot test on nursing students [24]. The GRADE recommendation was level C because of inconsistent content validity, very low methodological quality of the studies, and insufficient internal consistency (Cronbach’s alpha below 0.70 for some factors in the PCA and CFA).
The Dundee Ready Education Environment Measure (DREEM) was developed by Roff in 1997 to assess the educational environment of health professional trainees [67]. It originated from a grounded theory study and a subsequent panel of nearly 100 health educators from around the world, with later validation by over 1000 students in countries as diverse as Scotland, Argentina, Bangladesh, and Ethiopia, to measure and diagnose educational environments in the health professions. It has been used internationally in different contexts, mainly with medical students but also with other health professionals. The instrument consists of 50 items and 5 subscales: perception of learning, perception of teachers, social self-perception, perception of atmosphere, and academic self-perception. Three validation studies were included in the review, all of which reported sufficient content validity of moderate quality (+/M) and sufficient internal consistency of low quality (+/L), earning the instrument a level A recommendation [58,59,60].
The Student Evaluation of Clinical Education Environment (SECEE) evaluates the clinical learning environment and was developed and validated by Sand-Jecklin in 1998 [64]. This instrument is based on the theoretical framework of cognitive apprenticeship, which states that students apply conceptual knowledge tools in a real-world environment while being guided by experienced professionals. Versions of the SECEE have evolved over time. Currently, the latest version is SECEE version 3, consisting of 32 items and 3 subscales: instructor facilitation, preceptor facilitation, and learning opportunities. Two validation studies were included in the review [65,68], and based on these, a grade of recommendation A was given for high quality of evidence, high internal consistency of the instrument, and sufficient content validity of moderate quality.
The Clinical Learning Environment Inventory (CLEI), which assesses the clinical learning environment, was developed and validated by Chan in 2001 [32,33,34]. It has been evaluated in four published journal articles, including three development articles and one validation article [32,33,34,35]. The instrument was developed from a literature review and by modifying the College and University Classroom Environment Inventory (CUCEI) by Fraser and colleagues [69] (Assessment of Classroom Psychological Environment; Perth, Australia: Curtin University of Technology). Nearly 10 years later, Newton and colleagues (2010) modified 10 items of the “Actual” CLEI version, replacing the words “clinical teacher” with “preceptor,” and conducted a PCA for the first time [33]. The instrument contains 35 items and 5 subscales (each of 7 items): individualization, innovation, involvement, personalization, and task orientation. It has two formats: the “Actual” form, which measures the current clinical environment, and the “Preferred” form, which measures the preferred clinical environment. The instrument is not recommended for use (GRADE level C) because the studies showed only moderate-quality evidence, the instrument has inconsistent content validity (±/M), its internal consistency is insufficient, and the quality of evidence for the psychometric properties assessed is very low (-/VL).
In 2002, Saarikoski and Leino-Kilpi developed the Clinical Learning Environment and Supervision Instrument (CLES) [37]. The instrument originates from the theories of Quinn (1995), Wilson-Barnett et al. (1995), and Moss and Rowles (1997). From a review of the literature on clinical learning environments and the supervisory relationship [31,32], the authors categorized and summarized the items that could reflect the construct, and these were then tested in a pilot study. Subsequently, the number and type of items were revised by a group of experienced clinical teachers [37]. The final version of the CLES scale consists of 27 items and 5 subscales: ward atmosphere, leadership style of the ward manager, premises of nursing care on the ward, premises of learning on the ward, and supervisory relationship. The CLES has been translated and validated in several countries, including Belgium [39], Cyprus [47], and Italy [13,38], and used in international comparative validation studies (Finland and the United Kingdom) [39]. Four articles were included in the review: one development study [37] and three validation studies [13,38,39]. The recommendation grade of the instrument is B, since it requires further study owing to sufficient but low-quality evidence of internal consistency (+/L) and inconsistent content validity of moderate quality (±/M).
In 2006, Hosoda [29] developed the Clinical Learning Environment Diagnostic Inventory (CLEDI) based on Kolb’s 1984 theory of experiential learning, which emphasizes that the learning process occurs only after the student is able to integrate concrete emotional experiences with cognitive processes [70]. The CLEDI contains 35 items and 5 subscales: affective CLE, perceptual CLE, symbolic CLE, behavioral CLE, and reflective CLE. Only Hosoda’s instrument development study was included in the review; owing to the lack of a pilot study assessing face validity, comprehensiveness, and comprehensibility with students, it scored low and showed inconsistent content validity, earning a grade C recommendation.
In 2008, Saarikoski and colleagues modified the original CLES by adding a new subscale on the role of the nurse teacher to emphasize and define the importance of the nurse teacher in the clinical setting. The new scale, titled the Clinical Learning Environment, Supervision, and Nurse Teacher (CLES-T) Scale, was validated in the same year [40]. A total of 19 studies were included: 1 development study [40] and 18 validation studies [39,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59]. The CLES-T also received a grade B recommendation, needing further study. This is due to some older studies with methodological and measurement-property biases, which resulted in sufficient but low-quality evidence of internal consistency (+/L) and inconsistent content validity of moderate quality (±/M).
In 2011, Salamonson and colleagues modified the CLEI, reducing the items from 35 to 19. The CLEI-19 assesses two generic domains common to clinical learning environments: clinical facilitator support of learning and satisfaction with clinical placement. We included two studies in this review: one development study [34] and one validation study [35]. The instrument received a grade B recommendation, given the high-quality evidence of sufficient internal consistency (+/H) but inconsistent content validity of moderate quality (±/M), due to the absence of pilot testing procedures and of content and face validity assessment by an expert panel.
In 2011, Porter and colleagues [23] developed an instrument to assess the support received by students during clinical internships, with the overall goal of improving the quality of the students’ clinical experience. The Clinical Evaluation Form (CEF) consists of 21 items and 5 subscales: orientation, clinical educator/teacher, ward staff/preceptor and ward environment, final assessment/clinical hurdles, and university. Only the internal consistency of this instrument was assessed, and it was rated sufficient with high-quality evidence. However, other important psychometric properties were not evaluated. In addition, the item validation stage (e.g., whether it was undertaken by two researchers independently) and whether the items had been evaluated for relevance, comprehensiveness, and comprehensibility by nursing students were not clearly described. Therefore, the instrument was given a level B recommendation, requiring further study.
In 2014, Baptista and colleagues [62] developed an instrument to assess nursing students’ perceptions and satisfaction during simulated clinical experiences. The Escala de Satisfação com as Experiências Clínicas Simuladas (ESECS) was developed from the results of a literature review and a phenomenological study describing students’ experiences in high-fidelity simulated practice using manikins. These studies resulted in a list of 17 items and 3 subscales: practical dimension, realism dimension, and cognitive dimension. Two studies were included in the review: one development study [62] and one validation study [63]. The studies demonstrate sufficient content validity of moderate quality (+/M) but insufficient internal consistency with low-quality evidence, and therefore the instrument achieved a level B recommendation, needing further psychometric studies.
The Clinical Learning Environment Comparison Survey (CLECS) was developed by Leighton in 2015 [25] through a literature review whose results were evaluated and used by a panel of 12 academics with experience in manikin-based simulation and clinical environments to generate the items and subscales. The instrument was used in two pilot studies to assess clarity. The final instrument consists of 27 items and 6 subscales: communication, nursing process, holism, critical thinking, self-efficacy, and teaching-learning dyad. Four studies were included in this review: one development study [63] and three validation studies [66,67,68]. The content validity of the instrument was inconsistent and of moderate quality (±/M), owing to the unclear description of the procedures for students’ assessment of the comprehensiveness and comprehensibility of the instrument. The internal consistency of the instrument was rated sufficient, but the quality of the evidence was low; the recommendation level of the instrument was therefore B.
One of the studies on the CLES-T documented the development of a new instrument, the Cultural and Linguistic Diversity (CALD) scale, which assesses the clinical learning environment. The theoretical framework for the development of the CALD originates from two systematic reviews conducted by Mikkonen and colleagues [22]. From the synthesis of the data from the two reviews, following Thomas and Harden’s 3-step analysis process, 101 descriptive themes emerged and were compared with each item of the original CLES-T scale. Those without corresponding items in the CLES-T scale were operationalized into measurable items to be used in the development of the CALD. The final scale includes 21 items and 4 subscales: orientation into clinical placement, role of the student, cultural diversity in the clinical learning environment, and linguistic diversity in the clinical learning environment. On the basis of its methodological quality and psychometric results, Mikkonen’s study was one of the best studies conducted; therefore, even though only one instrument development study meeting the inclusion criteria was included in the review, a level A recommendation was given.
The Clinical Learning Environment Quality Evaluation Index (CLEQEI) is an instrument developed in Italy by a group of researchers at the University of Udine to assess students’ perceived quality of clinical learning [36]. It is composed of 22 items investigating the quality of tutoring strategies, learning opportunities, safety and quality of care, self-learning, and the quality of the learning environment. The one study on it included in this review investigated several psychometric properties of the CLEQEI with good results, although the methodology used during development to assess relevance, comprehensiveness, and comprehensibility was described unclearly and too briefly. Since only this development study was included in the review, the recommendation achieved was level B.
The Escala de Apoyo Académico en el Prácticum (EAPAP) was developed in Spanish by Arribas-Marìn in 2017 to assess students’ perceptions of academic support during internships [61]. The EAPAP consists of 23 items and 4 subscales: peer support, academic institution support, preceptor support, and clinical facilitator support. The study demonstrated inconsistent content validity with very low-quality evidence (±/VL) but sufficient internal consistency with high methodological quality; therefore, although there is only one instrument development study, the EAPAP can be recommended at level B, needing further psychometric validation studies to be strongly recommended.
As highlighted in the results, these instruments are not all comparable with each other because, although they all assess the learning environment of nursing students, they focus on specific aspects: the traditional clinical learning environment (9 instruments: CLE, SECEE, CLES, CLES-T, CALD, CLEQEI, CLEI, CLEI-19, and CLEDI), the traditional and simulated clinical environment (2 instruments: ESECS and CLECS), the clinical placement environment (1 instrument: CEF), and the educational learning environment (2 instruments: EAPAP and DREEM).
To make the results of this review even more comprehensive, we conducted a qualitative analysis of the items belonging to all identified instruments to identify common and uncommon categories investigated by each instrument (see Table 3). Twenty-three categories were identified. Among the most common categories, “Quality of tutoring strategies” was explored by 11 instruments, followed by “Learning opportunities”, which was explored by 9 instruments including DREEM. “Quality of relationship with tutors”, “Quality of clinical learning environment”, and “Safety and quality of care” were each explored by 8 instruments. The most notable differences are found in the categories exploring “Self-efficacy in theoretical learning,” “Quality of relationship with tutors,” and “Quality of teaching strategies,” which are each explored by only two instruments: the DREEM and the EAPAP.
4. Discussion
In our systematic review, a total of 45 studies emerged that estimated the reliability and validity of 14 instruments in 22 countries across 5 continents. Most were conducted in Europe (24 studies). The first instrument to be validated was the CLE scale, and the most recent was the CLEQEI in 2017 [36]. This field of research thus spans more than 30 years, during which a tremendous amount of change has occurred in nursing programs, internship environments, and student profiles [71]. In agreement with Mansutti and colleagues [15], the instruments can be divided by their development into first- and second-generation instruments. First-generation instruments, such as the CLE scale, CLEDI, CLES, CLES-T, DREEM, and SECEE, originated from major theories of learning established mainly in the 1980s and 1990s, while second-generation instruments started either from instruments previously established in clinical settings (such as the CALD and CLEI-19) or from validation by expert panels of findings that emerged from literature reviews (see the CLECS). Development and validation studies of second-generation instruments also appear to describe the procedures adopted more fully, thus allowing a better evaluation of the evidence on methodological quality. In addition, a trend has emerged in recent years to evaluate the validity and reliability of established instruments in different countries (e.g., the CLES-T), gather evidence on instrument validity, and compare data. The instruments comprised from two (CLEI-19) to six (CLECS) factors or subscales and from 19 (CLEI-19) to 50 (DREEM) items.
Comparing results between different studies that used the same instruments was not always easy, for several reasons. First, the methodological quality of the studies was heterogeneous. Second, the validation studies were conducted at different times, and some analyses may not have been known at the time or may have become obsolete since. Another common problem was that few studies estimated reliability: although test-retest procedures should be easy to perform in an academic setting, given the availability of students, the duration and frequency of clinical rotations may have made a second assessment of the same person impossible. Internal consistency and structural validity were estimated for most of the instruments, but with methodological approaches of varying quality, which also compromised the quality of the results. Finally, convergent and criterion validity were assessed only on a few occasions, especially for the first-generation instruments, owing to the lack of field knowledge and of instruments that could serve as a gold standard for comparison.
Limitations
One limitation of this review is that it included only peer-reviewed studies in English and Italian; this may have introduced a publication selection bias, because other instruments may have been developed and disseminated as gray literature or in other languages. The evaluation of the studies was based on the 2018 COSMIN guidelines, and some criteria required for a “very good” or “adequate” rating may not have been considered by the authors of older studies, which may have influenced the final evaluation of the instruments. Finally, it was not possible to assess the responsiveness of the instruments, that is, their ability to detect change in the measured construct over time (as required by the COSMIN procedure), owing to the absence of longitudinal studies among those included.
5. Conclusions
Fourteen tools that assess the quality of learning environments, both clinical and educational, have so far gone through a validation process. First-generation instruments were developed from different learning theories, while second-generation instruments were developed from the first generation by mixing, revising, and integrating several already-validated instruments. Not all relevant psychometric properties have been evaluated for these instruments, and the methodological approaches used are often doubtful or inadequate. In addition, a lack of homogeneity emerged in the procedures both for assessing instrument relevance, comprehensiveness, and comprehensibility and for assessing psychometric properties, thus threatening the external validity of the instruments. Future research must complete the validation processes undertaken for newly developed and already established instruments, using higher-quality methods and estimating all psychometric properties.
Author Contributions: Conceptualization, M.L. and D.I.; methodology, M.L., L.G., D.I., G.P., G.C. and S.R.; software, G.P. and I.N.; validation, A.D.B., R.L., D.T., R.G. and G.C.; formal analysis, M.D.M., M.L., D.I., L.S. and I.N.; investigation, D.G. and G.P.; resources, G.R. and A.S.; data curation, R.G., A.S., L.S. and M.D.M.; writing—original draft preparation, M.L., D.I., A.D.B. and D.G.; writing—review and editing, M.L., R.G., A.S. and L.G.; visualization, I.N., G.C. and S.R.; supervision, R.G. and R.L.; project administration, G.R., R.L. and D.T. All authors have read and agreed to the published version of the manuscript.
The authors declare no conflict of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 1. PRISMA 2020 flow diagram for new systematic reviews, which included searches of databases and registers only.
Studies included in the review and psychometric properties of the instruments evaluated.
| Tools | Author/Year | Sample | No. of Items/Subscale/Response System | Structural Validity | Internal Consistency | Other Psychometric Properties |
|---|---|---|---|---|---|---|
| CALD | Mikkonen et al., 2017 | 329 nursing students in 1st, 2nd, and 3rd-year courses | 21 items | EFA, 5-factor solution, 68% variance explained | Total 0.88 | Cross-cultural validity (forward and backward translation) |
| CEF | Porter et al., 2011 | 178 nursing students in 1st and 2nd-year courses | 21 items | Content and face validity, panel of 3 experts (relevance, comprehensiveness, and comprehensibility) | Total 0.90 | |
| CLE | Dunn and Burnett, 1995 | 340 nursing students in 1st, 2nd, and 3rd-year courses | 23 items | PCA, orthogonal rotation, 4-factor solution, 34.6% variance explained | Subscales 0.60–0.83 (PCA) | |
| CLECS | Leighton, 2015 | 422 nursing students from 4 colleges | 27 items | PCA, varimax rotation, 6-factor solution, 69.97% variance explained | Total 0.94 | Test-retest (recall period 2 weeks): r = 0.55, p < 0.05 (traditional environment); r = 0.58, p < 0.05 |
| CLECS | Gu et al., 2018 | 179 nursing students in 1st, 2nd, and 3rd-year courses | 27 items | PCA, varimax rotation, 5-factor solution, 61.43% variance explained (traditional environment) and 4-factor solution, 60.11% variance explained (simulated environment) | Total 0.75 | Cross-cultural validity (forward and backward translation) |
| CLECS | Olaussen et al., 2020 | 122 nursing students in 1st, 2nd, and 3rd-year courses | 27 items of the simulated form of the CLECS | CFA, 6-factor solution | Subscales 0.69–0.89 | Cross-cultural validity (WHO 2018 guideline) |
| CLECS | Riahi et al., 2022 | 118 nursing students in 1st, 2nd, and 3rd-year courses | 27 items of the traditional form of the CLECS | CFA, 6-factor solution | Total 0.94 | Cross-cultural validity (forward and backward translation) |
| CLEDI | Hosoda, 2006 | 312 nursing students | 21 items | PCA, promax rotation, 5-factor solution, 52.45% variance explained | Total 0.84 | Test-retest r = 0.76, p < 0.01 |
| CLEI | Chan, 2001 | 108 nursing students in a 2nd-year course (quantitative phase) | Two forms: Actual CLEI and Preferred CLEI | | Subscales, Actual form 0.73–0.84 | Hypothesis testing (convergent validity): Actual form with Preferred form (r = 0.39–0.47) |
| CLEI | Newton et al., 2010 | 513 nursing students in 2nd and 3rd-year courses | Actual CLEI form | PCA, varimax rotation, 6-factor solution, 51% variance explained | Subscales 0.50–0.88 | |
| CLEI-19 | Salamonson et al., 2011 | 231 nursing students in 1st, 2nd, and 3rd-year courses | 19 items | PCA, varimax rotation, 2-factor solution, 63.37% variance explained | Total 0.93 | Hypothesis testing (known-groups technique: working and non-working students): non-working students and clinical facilitator r = 0.037, p < 0.05; working students and satisfaction with clinical placement r = 0.038, p < 0.05 |
| CLEI-19 | Leone et al., 2022 | 1095 nursing students in 1st, 2nd, and 3rd-year courses | 19 items | ESEM, 2-factor solution | Total 0.90 (alpha); subscales 0.85–0.86 (alpha) | |
| CLEQEI | Palese et al., 2017 | 9606 nursing students in 1st, 2nd, and 3rd-year courses | 22 items | EFA, 5-factor solution, 57.9% variance explained | Total 0.95 | Reliability: ICC (0.866 consistency and 0.864 concordance) |
| CLES | Saarikoski and Leino-Kilpi, 2002 | 416 nursing students in 2nd and 3rd-year courses | 27 items | EFA, 5-factor solution, 64% variance explained | Subscales 0.73–0.94 | Hypothesis testing (convergent validity) of CLES subscales: correlation between "premises of nursing care" and "ward atmosphere", r = 0.50, p < 0.005; between "premises of learning" and "premises of nursing care", r = 0.46, p < 0.05 |
| CLES | Tomietto et al., 2009 | 117 nursing students in 2nd and 3rd-year courses | 27 items | | Total 0.96 | Cross-cultural validity (forward and backward translation) |
| CLES | De Witte et al., 2011 | 768 nursing students in 1st, 2nd, and 3rd-year courses | 27 items | EFA, varimax rotation, 5-factor solution, 71.28% variance explained | Total 0.97 | Cross-cultural validity (forward and backward translation) |
| CLES | Burrai et al., 2012 | 59 nursing students in 2nd-year courses | 27 items | PCA, promax rotation, 5-factor solution, 76.9% variance explained | Total 0.96 | |
| CLES-T | Saarikoski et al., 2008 | 965 nursing students in 1st, 2nd, and 3rd-year courses | 34 items | EFA, varimax rotation, 5-factor solution, 67% variance explained | Total 0.90 | |
| CLES-T | Johansson et al., 2010 | 177 nursing students in 1st, 2nd, and 3rd-year courses | 34 items | EFA, varimax rotation, 5-factor solution, 60.2% variance explained | Total 0.95 | Cross-cultural validity (forward and backward translation) |
| CLES-T | Henriksen et al., 2012 | 407 nursing students in 1st, 2nd, and 3rd-year courses | 34 items | PCA, varimax rotation, 5-factor solution, 64% variance explained | Total 0.95 | Cross-cultural validity (forward and backward translation) |
| CLES-T | Tomietto et al., 2012 | 855 nursing students in 1st, 2nd, and 3rd-year courses | 34 items | EFA, oblimin rotation, 7-factor solution, 67.27% variance explained | Total 0.95 | Cross-cultural validity (forward and backward translation) |
| CLES-T | Bergjan et al., 2013 | 178 nursing students in 1st, 2nd, and 3rd-year courses | 34 items | EFA, oblimin rotation, 5-factor solution, 72.85% variance explained | Subscales 0.82–0.96 | Cross-cultural validity (forward and backward translation) |
| CLES-T | Watson et al., 2014 | 416 nursing students in 1st, 2nd, and 3rd-year courses | 34 items | EFA, 4-factor solution, 58.28% variance explained | Subscales 0.82–0.93 | |
| CLES-T | Vizcaya-Moreno et al., 2015 | 370 nursing students in 1st, 2nd, and 3rd-year courses | 34 items | EFA, 5-factor solution, 66.4% variance explained | Total 0.95 | Cross-cultural validity (modified direct translation method) |
| CLES-T | Papastavrou et al., 2016 | 463 nursing students in 1st, 2nd, and 3rd-year courses | 34 items | EFA, varimax rotation, 5-factor solution, 67.4% variance explained | Total 0.95 | Cross-cultural validity (forward and backward translation) |
| CLES-T | Nepal et al., 2016 | 263 nursing students in 1st, 2nd, and 4th-year courses | 34 items | EFA, 5-factor solution, 85.7% variance explained | Total 0.93 | |
| CLES-T | Lovric et al., 2016 | 136 nursing students in 1st, 2nd, and 3rd-year courses | 34 items | EFA, 4-factor solution, 71.5% variance explained | Total 0.97 | Cross-cultural validity (forward and backward translation) |
| CLES-T | Mikkonen et al., 2017 | 329 nursing students in 1st, 2nd, and 3rd-year courses | 34 items | EFA, 8-factor solution, 78% variance explained | Total 0.88 | Hypothesis testing (convergent validity) with the CALD: positive correlation between CLES-T Factor 1 and CALD Factor 3, r = 0.62, p < 0.01; between CLES-T Factor 2 and CALD Factor 4, r = 0.64, p < 0.01 |
| CLES-T | Iyigun et al., 2018 | 190 nursing students in 3rd and 4th-year courses | 34 items | PCA, promax rotation, 5-factor solution, 62% variance explained | Subscales 0.76–0.93 | Cross-cultural validity (forward and backward translation) |
| CLES-T | Atay et al., 2018 | 602 nursing students in 1st, 2nd, and 3rd-year courses | 34 items | EFA, 6-factor solution, 64% variance explained | Total 0.95 | Cross-cultural validity (forward and backward translation) |
| CLES-T | Zvanut et al., 2018 | 232 nursing students in 1st, 2nd, 3rd, and 5th-year courses | 34 items | PCA, varimax rotation, 5-factor solution, 67.69% variance explained | Total 0.96 | Cross-cultural validity (forward and backward translation) |
| CLES-T | Mueller et al., 2018 | 385 nursing students in 1st, 2nd, and 3rd-year courses | 34 items | PCA, promax rotation, 4-factor solution, 73.3% variance explained | Total 0.95 | |
| CLES-T | Wong and Bressington, 2021 | 385 nursing students in 1st, 2nd, and 3rd-year courses | 34 items | EFA, oblique rotation, 6-factor solution | Total 0.94 | Test-retest (recall period 2 weeks), ICC 0.85 (95% CI) |
| CLES-T | Zhao et al., 2021 | 694 nursing students in 1st, 2nd, and 3rd-year courses | 27 items | PCA, oblimin rotation, 3-factor solution, 60.01% variance explained | Total 0.82 | |
| CLES-T | Ozbicakci et al., 2022 | 135 junior and senior nursing students | 34 items | CFA, 5-factor solution | Total 0.86 | |
| CLES-T | Guejdad et al., 2022 | 1550 nursing students in 1st, 2nd, and 3rd-year courses | 34 items | EFA, promax rotation, 5-factor solution, 55% variance explained | Total 0.93 | Cross-cultural validity (forward and backward translation) |
| DREEM | Wang et al., 2009 | 214 nursing students in 1st, 2nd, and 3rd-year courses | 50 items | PCA, oblimin rotation, 5-factor solution, 52.19% variance explained | Total 0.95 | Cross-cultural validity (forward and backward translation) |
| DREEM | Rotthoff et al., 2011 | 1119 nursing students in 1st, 2nd, and 3rd-year courses | 50 items | EFA, orthogonal rotation, 5-factor solution, 41.3% variance explained | Total 0.92 | Cross-cultural validity (forward and backward translation) |
| DREEM | Gosak et al., 2021 | 174 nursing students in 1st, 2nd, and 3rd-year courses | 50 items | Content validity, panel of 6 experts, CVI 1.0 except for item 20 | Total 0.95 | Cross-cultural validity (reverse translation technique) |
| EAPAP | Arribas-Marín et al., 2017 | 710 nursing students in 2nd-year courses | 23 items | PCA, promax rotation, 4-factor solution, 74.77% variance explained | Total 0.92 | |
| ESECS | Baptista et al., 2014 | 181 nursing students in 4th and 5th-year courses | 17 items | PCA, orthogonal varimax rotation, 3-factor solution (practical, realism, and cognitive dimensions) | Total 0.91 | |
| ESECS | Montejano Lozoya et al., 2019 | 174 nursing students in 2nd, 3rd, and 4th-year courses | 17 items | PCA, varimax rotation, 4-factor solution, 66.6% variance explained | Total 0.91 | |
| SECEE | Sand-Jecklin, 2009 | 2768 inventories from sophomore, junior, and baccalaureate nursing students | 32 items | EFA, 4-factor solution | Total 0.94 | Hypothesis testing by student level (sophomore, junior, senior), p = 0.05; seniors rated the environment more positively than sophomores |
| SECEE | Govina et al., 2016 | 130 senior nursing students | 32 items | CFA, 3-factor solution | Total 0.92 | Cross-cultural validity (forward and backward translation) |
Note: PCA—principal component analysis; EFA—exploratory factor analysis; CFA—confirmatory factor analysis; ESEM—exploratory structural equation modeling; ICC—intraclass correlation coefficient; * same study sample; CALD—Cultural and Linguistic Diversity scale; CEF—Clinical Evaluation Form; CLE—Clinical Learning Environment scale; CLECS—Clinical Learning Environment Comparison Survey; CLEDI—Clinical Learning Environment Diagnostic Inventory; CLEI—Clinical Learning Environment Inventory; CLEI-19—Clinical Learning Environment Inventory 19 items; CLEQEI—Clinical Learning Environment Quality Evaluation Index; CLES—Clinical Learning Environment and Supervision Instrument; CLES-T—Clinical Learning Environment, Supervision, and Nurse Teacher; DREEM—Dundee Ready Education Environment Measure; EAPAP—Escala de Apoyo Académico en el Prácticum in Spanish; ESECS—Escala de Satisfação com as Experiências Clínicas Simuladas; SECEE—Student Evaluation of Clinical Education Environment.
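Many rows above report the percentage of variance explained by an EFA or PCA solution. As an illustrative aside (on hypothetical data, not taken from any of the reviewed studies), that figure is the share of total item variance carried by the retained components, computed from the eigenvalues of the item correlation matrix:

```python
import numpy as np

# Hypothetical data: 100 respondents answering a 10-item scale.
rng = np.random.default_rng(42)
scores = rng.normal(size=(100, 10))

corr = np.corrcoef(scores, rowvar=False)           # item correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]  # eigenvalues, largest first

k = 4  # number of retained components
explained = eigvals[:k].sum() / eigvals.sum()
print(f"{k}-component solution, {explained:.1%} variance explained")
```

Because the correlation matrix has ones on its diagonal, its eigenvalues sum to the number of items, so "variance explained" is simply the top-k eigenvalue sum divided by the item count.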
Evaluation of content validity and psychometric properties, and recommendations developed for the instruments.
| Tool | Relevance | Comprehensiveness | Comprehensibility | Overall Content Validity | Structural Validity | Internal Consistency | Other Measurement | Recommendation |
|---|---|---|---|---|---|---|---|---|
| CALD | +/M | +/M | +/M | +/M | −/L | +/L | Hypothesis testing +/L | A |
| CEF | +/L | ±/L | ±/L | ±/L | | +/H | | B |
| CLE | +/VL | ±/VL | ±/VL | ±/VL | −/M | −/M | | C |
| CLECS | +/M | ±/M | ±/M | ±/M | −/L | +/L | Cross-cultural validity +/L | B |
| CLEDI | +/L | ±/L | ±/L | ±/L | ?/M | +/M | Criterion validity +/M | B |
| CLEI | +/M | ±/M | ±/M | ±/M | ?/VL | −/VL | Hypothesis testing +/VL | C |
| CLEI-19 | +/M | ±/M | ±/M | ±/M | +/H | +/H | Hypothesis testing +/H | B |
| CLEQEI | +/L | ±/L | ±/L | ±/L | +/H | +/H | Reliability +/H | B |
| CLES | ±/M | ±/M | ±/M | ±/M | ?/L | +/L | Cross-cultural validity +/L | B |
| CLES-T | ±/M | ±/M | ±/M | ±/M | −/L | +/L | Reliability −/VL | B |
| DREEM | +/M | +/M | +/M | +/M | −/L | +/L | Hypothesis testing +/L | A |
| EAPAP | +/VL | ±/VL | ±/VL | ±/VL | +/H | +/H | | B |
| ESECS | +/M | +/M | +/M | +/M | −/VL | −/VL | | B |
| SECEE | +/M | +/M | +/M | +/M | ?/H | +/H | Cross-cultural validity +/H | A |
Note: +—sufficient; −—insufficient; ±—inconsistent; ?—indeterminate; H—High; M—Moderate; L—Low; VL—Very low; A—sufficient content validity (any level) and at least low-quality evidence for sufficient internal consistency; B—neither A nor C; C—high-quality evidence for an insufficient measurement property; CALD—Cultural and Linguistic Diversity scale; CEF—Clinical Evaluation Form; CLE—Clinical Learning Environment scale; CLECS—Clinical Learning Environment Comparison Survey; CLEDI—Clinical Learning Environment Diagnostic Inventory; CLEI—Clinical Learning Environment Inventory; CLEI-19—Clinical Learning Environment Inventory 19 items; CLEQEI—Clinical Learning Environment Quality Evaluation Index; CLES—Clinical Learning Environment and Supervision Instrument; CLES-T—Clinical Learning Environment, Supervision, and Nurse Teacher; DREEM—Dundee Ready Education Environment Measure; EAPAP—Escala de Apoyo Académico en el Prácticum in Spanish; ESECS—Escala de Satisfação com as Experiências Clínicas Simuladas; SECEE—Student Evaluation of Clinical Education Environment.
Categories associated with instruments.
| Categories | CALD | CEF | CLE | CLECS | CLEDI | CLEI | CLEI-19 | CLEQEI | CLES | CLES-T | DREEM | EAPAP | ESECS | SECEE | F |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Learning the nursing process | X | 1 | |||||||||||||
| Self-learning | X | X | 2 | ||||||||||||
| Self-efficacy in practical learning | X | X | 2 | ||||||||||||
| Self-efficacy in theoretical learning | X | X | 2 | ||||||||||||
| Students’ motivation | X | X | X | X | X | 5 | |||||||||
| Learning opportunities | X | X | X | X | X | X | X | X | X | 9 | |||||
| Learning barriers | X | X | X | X | 4 | ||||||||||
| Quality of relationship with teachers | X | X | 2 | ||||||||||||
| Quality of relationship with tutors | X | X | X | X | X | X | X | X | 8 | ||||||
| Quality of the clinical learning environment | X | X | X | X | X | X | X | X | 8 | ||||||
| Quality of the classroom learning environment | X | 1 | |||||||||||||
| Quality of the teaching strategies | X | X | 2 | ||||||||||||
| Quality of the tutoring strategies | X | X | X | X | X | X | X | X | X | X | X | 11 | |||
| Quality of relationship with Staff nurse | X | X | X | X | X | 5 | |||||||||
| Quality of relationship with patients and relatives | X | 1 | |||||||||||||
| Safety and quality of care | X | X | X | X | X | X | X | X | 8 | ||||||
| Satisfaction with the practical training experience | X | X | X | X | X | X | X | 7 | |||||||
| Satisfaction with theoretical learning | X | X | 2 | ||||||||||||
| Academic support (access to resources) | X | X | X | X | X | 5 | |||||||||
| Academic support (information received) | X | X | 2 | ||||||||||||
| Academic support (student support) | X | 1 | |||||||||||||
| Support from the staff nurse | X | X | X | X | 4 | ||||||||||
| Support from fellow students | X | X | X | 3 | |||||||||||
Note: CALD—Cultural and Linguistic Diversity scale; CEF—Clinical Evaluation Form; CLE—Clinical Learning Environment scale; CLECS—Clinical Learning Environment Comparison Survey; CLEDI—Clinical Learning Environment Diagnostic Inventory; CLEI—Clinical Learning Environment Inventory; CLEI-19—Clinical Learning Environment Inventory 19 items; CLEQEI—Clinical Learning Environment Quality Evaluation Index; CLES—Clinical Learning Environment and Supervision Instrument; CLES-T—Clinical Learning Environment, Supervision, and Nurse Teacher; DREEM—Dundee Ready Education Environment Measure; EAPAP—Escala de Apoyo Académico en el Prácticum in Spanish; ESECS—Escala de Satisfação com as Experiências Clínicas Simuladas; SECEE—Student Evaluation of Clinical Education Environment; F—frequency of appearance of the category on scales.
Appendix A
Multimedia Appendix 1: PubMed search filter
References
1. Lizzio, A.; Wilson, K.; Simons, R. University Students’ Perceptions of the Learning Environment and Academic Outcomes: Implications for Theory and Practice. Stud. High. Educ.; 2002; 27, pp. 27-52. [DOI: https://dx.doi.org/10.1080/03075070120099359]
2. Letizia, M.; Jennrich, J. A review of preceptorship in undergraduate nursing education: Implications for staff development. J. Contin. Educ. Nurs.; 1998; 29, pp. 211-216. [DOI: https://dx.doi.org/10.3928/0022-0124-19980901-06]
3. Till, H. Climate studies: Can students’ perceptions of the ideal educational environment be of use for institutional planning and resource utilization? Med. Teach.; 2005; 27, pp. 332-337. [DOI: https://dx.doi.org/10.1080/01421590400029723]
4. Roff, S. The Dundee Ready Educational Environment Measure (DREEM)—A generic instrument for measuring students’ perceptions of undergraduate health professions curricula. Med. Teach.; 2005; 27, pp. 322-325. [DOI: https://dx.doi.org/10.1080/01421590500151054]
5. Al-Hazimi, A.; Zaini, R.; Al-Hyiani, A.; Hassan, N.; Gunaid, A.; Ponnamperuma, G.; Karunathilake, I.; Roff, S.; McAleer, S.; Davis, M. Educational environment in traditional and innovative medical schools: A study in four undergraduate medical schools. Educ. Health; 2004; 17, pp. 192-203. [DOI: https://dx.doi.org/10.1080/13576280410001711003] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/15763762]
6. Pimparyon, P.; Roff, S.; McAleer, S.; Poonchai, B.; Pemba, S. Educational environment, student approaches to learning and academic achievement in a Thai nursing school. Med. Teach.; 2000; 22, pp. 359-364. [DOI: https://dx.doi.org/10.1080/014215900409456]
7. Roff, S.; McAleer, S.; Ifere, O.S.; Bhattacharya, S. A global diagnostic tool for measuring educational environment: Comparing Nigeria and Nepal. Med. Teach.; 2001; 23, pp. 378-382. [DOI: https://dx.doi.org/10.1080/01421590120043080]
8. Arkan, B.; Ordin, Y.; Yılmaz, D. Undergraduate nursing students’ experience related to their clinical learning environment and factors affecting to their clinical learning process. Nurse Educ. Pract.; 2018; 29, pp. 127-132. [DOI: https://dx.doi.org/10.1016/j.nepr.2017.12.005] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/29274524]
9. Fego, M.W.; Olani, A.; Tesfaye, T. Nursing students’ perception towards educational environment in governmental Universities of Southwest Ethiopia: A qualitative study. PloS ONE; 2022; 17, e0263169. [DOI: https://dx.doi.org/10.1371/journal.pone.0263169]
10. Irfan, F.; Faris, E.A.; Maflehi, N.A.; Karim, S.I.; Ponnamperuma, G.; Saad, H.; Ahmed, A. The learning environment of four undergraduate health professional schools: Lessons learned. Pakistan J. Med. Sci.; 2019; 35, pp. 598-604. [DOI: https://dx.doi.org/10.12669/pjms.35.3.712]
11. Bhurtun, H.D.; Azimirad, M.; Saaranen, T.; Turunen, H. Stress and Coping Among Nursing Students During Clinical Training: An Integrative Review. J. Nurs. Educ.; 2019; 58, pp. 266-272. [DOI: https://dx.doi.org/10.3928/01484834-20190422-04] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31039260]
12. Aghamolaei, T.; Shirazi, M.; Dadgaran, I.; Shahsavari, H.; Ghanbarnejad, A. Health students’ expectations of the ideal educational environment: A qualitative research. J. Adv. Med. Educ. Prof.; 2014; 2, pp. 151-157. [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/25512939]
13. Burrai, F.; Cenerelli, D.; Sebastiani, S.; Arcoleo, F. Analisi di affidabilità ed esplorazione fattoriale del questionario Clinical Learning Enviroment of Supervision (CLES). Scenario; 2012; 29, pp. 41-47.
14. Hooven, K. Evaluation of instruments developed to measure the clinical learning environment: An integrative review. Nurse Educ.; 2014; 39, pp. 316-320. [DOI: https://dx.doi.org/10.1097/NNE.0000000000000076]
15. Mansutti, I.; Saiani, L.; Grassetti, L.; Palese, A. Instruments evaluating the quality of the clinical learning environment in nursing education: A systematic review of psychometric properties. Int. J. Nurs Stud.; 2017; 68, pp. 60-72. [DOI: https://dx.doi.org/10.1016/j.ijnurstu.2017.01.001]
16. Mokkink, L.B.; de Vet, H.C.W.; Prinsen, C.; Patrick, D.L.; Alonso, J.; Bouter, L.M.; Terwee, C.B. COSMIN Risk of Bias checklist for systematic reviews of Patient-Reported Outcome Measures. Qual. Life Res. Int J. Qual. Life Asp. Treat. Care Rehabil.; 2018; 27, pp. 1171-1179. [DOI: https://dx.doi.org/10.1007/s11136-017-1765-4]
17. Prinsen, C.; Mokkink, L.B.; Bouter, L.M.; Alonso, J.; Patrick, D.L.; de Vet, H.C.W.; Terwee, C.B. COSMIN guideline for systematic reviews of patient-reported outcome measures. Qual. Life Res. Int J. Qual. Life Asp. Treat. Care Rehabil.; 2018; 27, pp. 1147-1157. [DOI: https://dx.doi.org/10.1007/s11136-018-1798-3]
18. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Moher, D. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Int. J. Surg.; 2021; 88, 105906. [DOI: https://dx.doi.org/10.1016/j.ijsu.2021.105906]
19. Terwee, C.B.; Jansma, E.P.; Riphagen, I.I.; de Vet, H.C.W. Development of a methodological PubMed search filter for finding studies on measurement properties of measurement instruments. Qual. Life Res. Int J. Qual. Life Asp. Treat. Care Rehabil.; 2009; 18, pp. 1115-1123. [DOI: https://dx.doi.org/10.1007/s11136-009-9528-5]
20. Thomson, R. EndNote®. 2020; Available online: www.endnote.com (accessed on 4 April 2023).
21. Chang, Y.C.; Chang, H.Y.; Feng, J.Y. Appraisal and evaluation of the instruments measuring the nursing work environment: A systematic review. J. Nurs. Manag.; 2022; 30, pp. 670-683. [DOI: https://dx.doi.org/10.1111/jonm.13559] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35146825]
22. Mikkonen, K.; Elo, S.; Miettunen, J.; Saarikoski, M.; Kääriäinen, M. Development and testing of the CALDs and CLES+T scales for international nursing students’ clinical learning environments. J. Adv. Nurs.; 2017; 73, pp. 1997-2011. [DOI: https://dx.doi.org/10.1111/jan.13268] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28152229]
23. Porter, J.; Al-Motlaq, M.; Hutchinson, C.; Sellick, K.; Burns, V.; James, A. Development of an undergraduate nursing Clinical Evaluation Form (CEF). Nurse Educ. Today; 2011; 31, pp. e58-e62. [DOI: https://dx.doi.org/10.1016/j.nedt.2010.12.016] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/21295894]
24. Dunn, S.V.; Burnett, P. The development of a clinical learning environment scale. J. Adv. Nurs.; 1995; 22, pp. 1166-1173. [DOI: https://dx.doi.org/10.1111/j.1365-2648.1995.tb03119.x] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/8675872]
25. Leighton, K. Development of the Clinical Learning Environment Comparison Survey. Clin. Simul. Nurs.; 2015; 11, pp. 44-51. [DOI: https://dx.doi.org/10.1016/j.ecns.2014.11.002]
26. Gu, Y.H.; Xiong, L.; Bai, J.B.; Hu, J.; Tan, X.D. Chinese version of the clinical learning environment comparison survey: Assessment of reliability and validity. Nurse Educ. Today; 2018; 71, pp. 121-128. [DOI: https://dx.doi.org/10.1016/j.nedt.2018.09.026]
27. Olaussen, C.; Jelsness-Jørgensen, L.P.; Tvedt, C.R.; Hofoss, D.; Aase, I.; Steindal, S.A. Psychometric properties of the Norwegian version of the clinical learning environment comparison survey. Nurs. Open.; 2021; 8, pp. 1254-1261. [DOI: https://dx.doi.org/10.1002/nop2.742]
28. Riahi, S.; Abolfazlie, M.; Arabi, M. Psychometric Properties of Clinical Learning Environment Comparison Survey Questionnaire in Nursing Students. J. Adv. Med. Educ Prof.; 2022; 10, pp. 267-273.
29. Hosoda, Y. Development and testing of a Clinical Learning Environment Diagnostic Inventory for baccalaureate nursing students. J. Adv. Nurs.; 2006; 56, pp. 480-490. [DOI: https://dx.doi.org/10.1111/j.1365-2648.2006.04048.x]
30. Chan, D. Development of an innovative tool to assess hospital learning environments. Nurse Educ. Today; 2001; 21, pp. 624-631. [DOI: https://dx.doi.org/10.1054/nedt.2001.0595]
31. Chan, D.S. Combining qualitative and quantitative methods in assessing hospital learning environments. Int J. Nurs. Stud.; 2001; 38, pp. 447-459. [DOI: https://dx.doi.org/10.1016/S0020-7489(00)00082-1] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/11470103]
32. Chan, D. Development of the Clinical Learning Environment Inventory: Using the theoretical framework of learning environment studies to assess nursing students’ perceptions of the hospital as a learning environment. J. Nurs. Educ.; 2002; 41, pp. 69-75. [DOI: https://dx.doi.org/10.3928/0148-4834-20020201-06]
33. Newton, J.M.; Jolly, B.C.; Ockerby, C.M.; Cross, W.M. Clinical learning environment inventory: Factor analysis. J. Adv. Nurs.; 2010; 66, pp. 1371-1381. [DOI: https://dx.doi.org/10.1111/j.1365-2648.2010.05303.x]
34. Salamonson, Y.; Bourgeois, S.; Everett, B.; Weaver, R.; Peters, K.; Jackson, D. Psychometric testing of the abbreviated Clinical Learning Environment Inventory (CLEI-19). J. Adv. Nurs.; 2011; 67, pp. 2668-2676. [DOI: https://dx.doi.org/10.1111/j.1365-2648.2011.05704.x] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/21722165]
35. Leone, M.; Maria, M.D.; Alberio, M.; Colombo, N.T.; Ongaro, C.; Sala, M.; Luciani, M.; Ausili, D.; Di Mauro, S. Proprietà psicometriche della scala CLEI-19 nella valutazione dell’apprendimento clinico degli studenti infermieri: Studio osservazionale multicentrico. Prof. Inferm.; 2022; 2, pp. 86-92.
36. Palese, A.; Grassetti, L.; Mansutti, I.; Destrebecq, A.; Terzoni, S.; Altini, P.; Bevilacqua, A.; Brugnolli, A.; Benaglio, C.; Ponte, A.D. et al. The Italian instrument evaluating the nursing students clinical learning quality. Assist. Inferm. Ric. AIR; 2017; 36, pp. 41-50.
37. Saarikoski, M.; Leino-Kilpi, H. The clinical learning environment and supervision by staff nurses: Developing the instrument. Int. J. Nurs. Stud.; 2002; 39, pp. 259-267. [DOI: https://dx.doi.org/10.1016/S0020-7489(01)00031-1]
38. Tomietto, M.; Saiani, L.; Saarikoski, M.; Fabris, S.; Cunico, L.; Campagna, V.; Palese, A. Assessing quality in clinical educational setting: Italian validation of the clinical learning environment and supervision (CLES) scale. G. Ital. Med. Lav. Ergon.; 2009; 31, (Suppl. 3), pp. B49-B55.
39. De Witte, N.; Labeau, S.; De Keyzer, W. The clinical learning environment and supervision instrument (CLES): Validity and reliability of the Dutch version (CLES+NL). Int. J. Nurs. Stud.; 2011; 48, pp. 568-572. [DOI: https://dx.doi.org/10.1016/j.ijnurstu.2010.09.009]
40. Saarikoski, M.; Isoaho, H.; Warne, T.; Leino-Kilpi, H. The nurse teacher in clinical practice: Developing the new sub-dimension to the Clinical Learning Environment and Supervision (CLES) Scale. Int. J. Nurs. Stud.; 2008; 45, pp. 1233-1237. [DOI: https://dx.doi.org/10.1016/j.ijnurstu.2007.07.009] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/17803996]
41. Johansson, U.B.; Kaila, P.; Ahlner-Elmqvist, M.; Leksell, J.; Isoaho, H.; Saarikoski, M. Clinical learning environment, supervision and nurse teacher evaluation scale: Psychometric evaluation of the Swedish version. J. Adv. Nurs.; 2010; 66, pp. 2085-2093. [DOI: https://dx.doi.org/10.1111/j.1365-2648.2010.05370.x] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/20626485]
42. Henriksen, N.; Normann, H.K.; Skaalvik, M.W. Development and testing of the Norwegian version of the Clinical Learning Environment, Supervision and Nurse Teacher (CLES+T) evaluation scale. Int. J. Nurs. Educ. Scholarsh.; 2012; 9. [DOI: https://dx.doi.org/10.1515/1548-923X.2239] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/22992287]
43. Tomietto, M.; Saiani, L.; Palese, A.; Cunico, L.; Cicolini, G.; Watson, P.; Saarikoski, M. Clinical learning environment and supervision plus nurse teacher (CLES+T) scale: Testing the psychometric characteristics of the Italian version. G. Ital. Med. Lav. Ergon.; 2012; 34, (Suppl. 2), pp. B72-B80.
44. Bergjan, M.; Hertel, F. Evaluating students’ perception of their clinical placements—Testing the clinical learning environment and supervision and nurse teacher scale (CLES + T scale) in Germany. Nurse Educ. Today; 2013; 33, pp. 1393-1398. [DOI: https://dx.doi.org/10.1016/j.nedt.2012.11.002] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/23200088]
45. Watson, P.B.; Seaton, P.; Sims, D.; Jamieson, I.; Mountier, J.; Whittle, R.; Saarikoski, M. Exploratory factor analysis of the Clinical Learning Environment, Supervision and Nurse Teacher Scale (CLES+T). J. Nurs. Meas.; 2014; 22, pp. 164-180. [DOI: https://dx.doi.org/10.1891/1061-3749.22.1.164] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/24851671]
46. Vizcaya-Moreno, M.F.; Pérez-Cañaveras, R.M.; De Juan, J.; Saarikoski, M. Development and psychometric testing of the Clinical Learning Environment, Supervision and Nurse Teacher evaluation scale (CLES+T): The Spanish version. Int. J. Nurs. Stud.; 2015; 52, pp. 361-367. [DOI: https://dx.doi.org/10.1016/j.ijnurstu.2014.08.008]
47. Papastavrou, E.; Dimitriadou, M.; Tsangari, H.; Andreou, C. Nursing students’ satisfaction of the clinical learning environment: A research study. BMC Nurs.; 2016; 15, 44. [DOI: https://dx.doi.org/10.1186/s12912-016-0164-4]
48. Nepal, B.; Taketomi, K.; Ito, Y.M.; Kohanawa, M.; Kawabata, H.; Tanaka, M.; Otaki, J. Nepalese undergraduate nursing students’ perceptions of the clinical learning environment, supervision and nurse teachers: A questionnaire survey. Nurse Educ. Today; 2016; 39, pp. 181-188. [DOI: https://dx.doi.org/10.1016/j.nedt.2016.01.006]
49. Lovrić, R.; Piškorjanac, S.; Pekić, V.; Vujanić, J.; Ratković, K.K.; Luketić, S.; Plužarić, J.; Matijašić-Bodalec, D.; Barać, I.; Žvanut, B. Translation and validation of the clinical learning environment, supervision and nurse teacher scale (CLES+T) in Croatian language. Nurse Educ. Pract.; 2016; 19, pp. 48-53. [DOI: https://dx.doi.org/10.1016/j.nepr.2016.05.001]
50. Iyigun, E.; Tastan, S.; Ayhan, H.; Pazar, B.; Tekin, Y.E.; Coskun, H.; Saarikoski, M. The Clinical Learning Environment, Supervision and the Nurse Teacher Evaluation Scale: Turkish Version. Int. J. Nurs Pract.; 2020; 26, e12795. [DOI: https://dx.doi.org/10.1111/ijn.12795]
51. Atay, S.; Kurt, F.Y.; Aslan, G.K.; Saarikoski, M.; Yılmaz, H.; Ekinci, V. Validity and reliability of the Clinical Learning Environment, Supervision and Nurse Teacher (CLES+T), Turkish version1. Rev. Lat. Am. Enfermagem.; 2018; 26, e3037. [DOI: https://dx.doi.org/10.1590/1518-8345.2413.3037]
52. Žvanut, B.; Lovrić, R.; Kolnik, T.Š.; Šavle, M.; Pucer, P. A Slovenian version of the “clinical learning environment, supervision and nurse teacher scale (CLES+T)” and its comparison with the Croatian version. Nurse Educ. Pract.; 2018; 30, pp. 27-34. [DOI: https://dx.doi.org/10.1016/j.nepr.2018.02.009]
53. Mueller, G.; Mylonas, D.; Schumacher, P. Quality assurance of the clinical learning environment in Austria: Construct validity of the Clinical Learning Environment, Supervision and Nurse Teacher Scale (CLES+T scale). Nurse Educ. Today; 2018; 66, pp. 158-165. [DOI: https://dx.doi.org/10.1016/j.nedt.2018.04.022] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/29704703]
54. Wong, W.K.; Bressington, D.T. Psychometric properties of the clinical learning environment, Supervision and Nurse Teacher scale (CLES+T) for undergraduate nursing students in Hong Kong. Nurse Educ. Pract.; 2021; 52, 103007. [DOI: https://dx.doi.org/10.1016/j.nepr.2021.103007]
55. Zhao, R.; Xiao, L.; Watson, R.; Chen, Y. Clinical learning environment, supervision and nurse teacher scale (CLES+T): Psychometric evaluation of the Chinese version. Nurse Educ. Today; 2021; 106, 105058. [DOI: https://dx.doi.org/10.1016/j.nedt.2021.105058] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34274749]
56. Ozbicakci, S.; Yesiltepe, A. The CLES+T Scale in Primary Health Care Settings: Methodological Study. Int. J. Caring Sci.; 2022; 15, pp. 1211-1217.
57. Guejdad, K.; Ikrou, A.; Strandell-Laine, C.; Abouqal, R.; Belayachi, J. Clinical learning environment, supervision and nurse teacher (CLES+T) scale: Translation and validation of the Arabic version. Nurse Educ. Pract.; 2022; 63, 103374. [DOI: https://dx.doi.org/10.1016/j.nepr.2022.103374] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35690005]
58. Wang, J.; Zang, S.; Shan, T. Dundee Ready Education Environment Measure: Psychometric testing with Chinese nursing students. J. Adv. Nurs.; 2009; 65, pp. 2701-2709. [DOI: https://dx.doi.org/10.1111/j.1365-2648.2009.05154.x]
59. Rotthoff, T.; Ostapczuk, M.S.; De Bruin, J.; Decking, U.; Schneider, M.; Ritz-Timme, S. Assessing the learning environment of a faculty: Psychometric validation of the German version of the Dundee Ready Education Environment Measure with students and teachers. Med. Teach.; 2011; 33, pp. e624-e636. [DOI: https://dx.doi.org/10.3109/0142159X.2011.610841]
60. Gosak, L.; Fijačko, N.; Chabrera, C.; Cabrera, E.; Štiglic, G. Perception of the Online Learning Environment of Nursing Students in Slovenia: Validation of the DREEM Questionnaire. Healthcare; 2021; 9, 998. [DOI: https://dx.doi.org/10.3390/healthcare9080998]
61. Arribas-Marín, J.; Hernández-Franco, V.; Plumed-Moreno, C. Nursing students’ perception of academic support in the practicum: Development of a reliable and valid measurement instrument. J. Prof. Nurs. Off. J. Am. Assoc. Coll. Nurs.; 2017; 33, pp. 387-395. [DOI: https://dx.doi.org/10.1016/j.profnurs.2017.03.001]
62. Baptista, R.C.N.; Martins, J.C.A.; Pereira, M.F.C.R.; Mazzo, A. Students’ satisfaction with simulated clinical experiences: Validation of an assessment scale. Rev. Lat. Am. Enfermagem.; 2014; 22, pp. 709-715. [DOI: https://dx.doi.org/10.1590/0104-1169.3295.2471] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/25493664]
63. Montejano-Lozoya, R.; Gea-Caballero, V.; Miguel-Montoya, I.; Juárez-Vela, R.; Sanjuán-Quiles, Á.; Ferrer-Ferrandiz, E. Validation of a satisfaction questionnaire on the practical training of nursing students [Validación de un cuestionario de satisfacción sobre la formación práctica de estudiantes de Enfermería]. Rev. Lat. Am. Enfermagem; 2019; 27, pp. 1-9.
64. Sand-Jecklin, K.E. Student Evaluation of Clinical Education Environment (SECEE): Instrument Development and Validation; West Virginia University: Morgantown, WV, USA, 1998.
65. Govina, O.; Vlachou, E.; Lavdaniti, M.; Kalemikerakis, I.; Margari, N.; Galanos, A.; Kavga, A. Psychometric Testing of the Student Evaluation of Clinical Educational Environment Inventory in Greek Nursing Students. Glob. J. Health Sci.; 2017; 9, 241. [DOI: https://dx.doi.org/10.5539/gjhs.v9n5p241]
66. Orton, H.D. Ward learning climate and student nurse response. Nurs. Times; 1981; 77, (Suppl. 17), pp. 65-68.
67. Roff, S.; McAleer, S.; Harden, R.M.; Al-Qahtani, M.; Ahmed, A.U.; Deza, H.; Groenen, G.; Primparyon, P. Development and validation of the Dundee ready education environment measure (DREEM). Med. Teach.; 1997; 19, pp. 295-299. [DOI: https://dx.doi.org/10.3109/01421599709034208]
68. Sand-Jecklin, K. Assessing nursing student perceptions of the clinical learning environment: Refinement and testing of the SECEE inventory. J. Nurs. Meas.; 2009; 17, pp. 232-246. [DOI: https://dx.doi.org/10.1891/1061-3749.17.3.232]
69. Fraser, B.J.; Fisher, D.L. Assessment of Classroom Psychosocial Environment. Workshop Manual. 1983; Available online: https://eric.ed.gov/?id=ED228296 (accessed on 27 February 2023).
70. Kolb, D.A. Experiential Learning: Experience as the Source of Learning and Development; FT Press: Upper Saddle River, NJ, USA, 2014.
71. Anderson, B. A perspective on changing dynamics in nursing over the past 20 years. Br. J. Nurs.; 2010; 19, pp. 1190-1191. [DOI: https://dx.doi.org/10.12968/bjon.2010.19.18.79055] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/20948477]
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Background: Nursing education consists of theory and practice, and student nurses’ perception of the learning environment, both educational and clinical, is one of the elements that determines the success or failure of their university study path. This study aimed to identify the currently available tools for measuring the clinical and educational learning environments of student nurses and to evaluate their measurement properties in order to provide solid evidence for researchers, educators, and clinical tutors to use in the selection of tools. Methods: We conducted a systematic review to evaluate the psychometric properties of self-reported learning environment tools in accordance with the Consensus-based Standards for the Selection of Health Measurement Instruments (COSMIN) Guidelines of 2018. The search was conducted on the following databases: PubMed, CINAHL, APA PsycInfo, and ERIC. Results: In the literature, 14 instruments were found that evaluate both the traditional and simulated clinical learning environments and the educational learning environments of student nurses. These tools can broadly be divided into first-generation tools, developed from different learning theories, and second-generation tools, developed by mixing, reviewing, and integrating different already-validated tools. Conclusion: Not all the relevant psychometric properties of the instruments were evaluated, and the methodological approaches used were often doubtful or inadequate, thus threatening the instruments’ external validity. Further research is needed to complete the validation processes undertaken for both new and already developed instruments, using higher-quality methods and evaluating all psychometric properties.
Details
Ricci, Simona 1; Guarente, Luca 3; Latina, Roberto 4; Covelli, Giuliana 1; Pozzuoli, Gianluca 1; De Maria, Maddalena 3; Giovanniello, Dominique 5; Rocco, Gennaro 6; Stievano, Alessandro 7; Sabatino, Laura 6; Notarnicola, Ippolito 6; Gualandi, Raffaella 8; Tartaglini, Daniela 8; Ivziku, Dhurata 8
1 UOC Care to the Person, Local Health Authority Roma 2, 00159 Rome, Italy
2 Clinical Direction, Fondazione Policlinico Universitario Campus Bio-Medico, 00128 Rome, Italy
3 Department of Biomedicine and Prevention, University Tor Vergata, 00133 Rome, Italy
4 Department of Health Promotion, Mother and Child Care, Internal Medicine and Medical Specialities, University of Palermo, 90127 Palermo, Italy
5 Department of Translational Medical Sciences, University of Campania “Luigi Vanvitelli”, 81100 Caserta, Italy
6 Centre of Excellence for Nursing Scholarship, Order of Nurses of Rome, 00136 Rome, Italy
7 Department of Clinical and Experimental Medicine, University of Messina, 98100 Messina, Italy
8 Department of Health Professions, Fondazione Policlinico Universitario Campus Bio-Medico, 00128 Rome, Italy