About the Authors:
Mariano González-Pérez
Contributed equally to this work with: Mariano González-Pérez, Rosario Susi, Ana Barrio, Beatriz Antona
Roles: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing
* E-mail: [email protected]
Affiliation: Faculty of Optics and Optometry, Universidad Complutense de Madrid, Madrid, Spain
ORCID: http://orcid.org/0000-0001-5967-5000
Rosario Susi
Contributed equally to this work with: Mariano González-Pérez, Rosario Susi, Ana Barrio, Beatriz Antona
Roles: Conceptualization, Data curation, Formal analysis, Methodology, Project administration, Supervision, Visualization, Writing – review & editing
Affiliation: Faculty of Statistical Studies, Universidad Complutense de Madrid, Madrid, Spain
Ana Barrio
Contributed equally to this work with: Mariano González-Pérez, Rosario Susi, Ana Barrio, Beatriz Antona
Roles: Conceptualization, Investigation, Methodology, Project administration, Supervision, Validation, Visualization, Writing – review & editing
Affiliation: Faculty of Optics and Optometry, Universidad Complutense de Madrid, Madrid, Spain
Beatriz Antona
Contributed equally to this work with: Mariano González-Pérez, Rosario Susi, Ana Barrio, Beatriz Antona
Roles: Conceptualization, Investigation, Methodology, Project administration, Supervision, Validation, Visualization, Writing – review & editing
Affiliation: Faculty of Optics and Optometry, Universidad Complutense de Madrid, Madrid, Spain
Introduction
Many of today’s jobs involve prolonged computer use. This increases workers' visual demands and may give rise to an array of computer-related visual and ocular symptoms (CRVOS) that adversely affect both quality of life[1] and productivity[2]. These symptoms, also referred to as computer vision syndrome, digital eyestrain, or occupational asthenopia, may be divided into two main groups[3–5]: internal (visual) symptoms, such as blurred vision and diplopia, and external (ocular) symptoms, such as burning and dry eyes.
Until 2014, research into CRVOS was hampered by the lack of a standardized, objective, reproducible and validated instrument for measuring these symptoms[6]. Many studies used the questionnaire developed by Hayes et al.[1], whose test–retest behavior has been examined[7] but whose other psychometric properties, such as validity and measurement precision, remain unknown. In 2014, we presented the first scale for this purpose, the computer-vision symptom scale (CVSS17)[8], and in 2015 another Spanish group reported the computer vision syndrome questionnaire (CVS-Q)[9], whose factor structure and levels of severity are unknown. A comparison of the two questionnaires' psychometric properties shows that the CVSS17 overcomes the main limitation of the CVS-Q (suboptimal item–person targeting) and offers higher measurement precision.
The CVSS17, developed, validated and scored through Rasch analysis, was designed to provide a patient-reported measure of CRVOS among video display terminal (VDT) workers. The scale contains 17 items, with scores ranging from 17 to 53 (a higher score indicates a greater level of symptoms), and is available in Spanish, English and Italian at http://www.cvss17.com, where anyone can freely complete the questionnaire and obtain a score right away. The CVSS17 ensures construct validity and provides measures on a linear interval scale, quantifying CRVOS without the main limitations of previously developed instruments[8]. Besides construct validity, we have examined[8, 10] other types of validity of the CVSS17 by assessing the association between its scores and those of closely related questionnaires, such as the Visual Discomfort Scale (VDS)[11] and the Ocular Surface Disease Index (OSDI)[12]. In addition, we tested its convergent validity[10], finding that CVSS17 scores rise as the amplitude of accommodation decreases and as the difference between the expected accommodative amplitude and the amplitude measured by the push-down method increases.
To be useful, the data provided by the CVSS17 should be easy to interpret and actionable[13], i.e. scores should guide diagnostic or therapeutic decision making[14]. In this respect, providing evidence-based thresholds that identify levels of symptoms is useful for the clinical management of patients[15], as it aids score interpretation[16].
Rasch analysis provides the measure and standard error corresponding to every possible raw score and can be used to determine how many statistically different levels exist across the score range[17]. In questionnaires assessing symptoms, these levels of performance represent the grades of symptom severity measured. This analysis therefore provides proper cutoff values to transform CVSS17 scores into severity categories.
The main factors of the CVSS17 were identified by conventional factor analysis in the initial validation of the scale[8]. Accordingly, the CVSS17 shows a two-factor structure similar to other models proposed for CRVOS measured either by the Hayes questionnaire[3] or by a checklist composed of some common CRVOS[18]: one factor is related to ocular dryness and the other is generally caused by refractive, accommodative or vergence anomalies[5]. However, to the best of our knowledge, the domain structure of these prior models has not been examined to verify whether their factors represent independent latent traits[19]. This is an important point in CRVOS analysis because the strategies for diagnosis, management and/or research of these problems may differ depending on the relative importance of each main factor. In addition, grouping scale items within domains is essential because they can form subscales that allow CRVOS to be assessed at more specific levels[20].
The present study was therefore designed to: 1) determine the number of different performance levels of the CVSS17 using the method proposed by Wright[17], a sample-independent method better suited to clinical samples, which are typically skewed towards healthy or “sick” people, than the separation ratio provided by Rasch analysis, which assumes the test is targeted on the sample; 2) confirm through discriminant analysis whether the main factors described in our prior work can, on their own, classify subjects according to these performance levels; and 3) validate through Rasch analysis the CVSS17 factors as independent latent traits able to measure the specific principal components of CRVOS.
Methods
The study protocol adhered to the tenets of the Declaration of Helsinki and was approved by the Research Ethics Committee of the Hospital Clínico San Carlos (Madrid, Spain). We obtained electronic informed consent from the participants before they accessed the questionnaire’s webpage and before they sent their responses.
All p-values provided are two-tailed. Significance was set at p<0.05.
Participants
The following subjects were invited to complete the CVSS17 online over the period May 2012 to November 2013: the members of a trade union (Unión General de Trabajadores), the partners of a health and safety at work organization (Grupo OTP- Prevención de Riesgos Laborales), the workers of a private company (SIEMENS) and of a public entity (Spanish National Institute of Statistics, INE).
Subjects were 18 to 65 years old, spoke Spanish, and fulfilled the definition of a VDT worker established by the Instituto Nacional de Seguridad e Higiene en el Trabajo (INSHT, Spanish Institute of Health and Safety at Work)[21]. Further inclusion criteria were no ocular disease or medication that could affect vision. Questionnaires were excluded from the analysis when fewer than 12 items (two-thirds) were answered and/or person outfit was over 2.5[8, 20]. Subjects indicating they were over 39 years of age were considered presbyopes. Furthermore, participant recruitment took into account neither refractive status nor any accommodative or binocular problems.
A total of 822 subjects agreed to complete the CVSS17. After applying the inclusion and exclusion criteria, 26 questionnaires with outfit > 2.5 were excluded, so 796 were finally analyzed (age: 43.9 ± 10 years; 58.04% female; 35.66% non-presbyopes).
Rasch analysis
The Rasch model is an item response theory (IRT) model that transforms raw scores so that the distance between the locations of two persons is preserved regardless of the particular items administered. The main IRT concept is that a mathematical model predicts the probability of a person successfully replying to an item according to the person’s ability and the item’s difficulty[22]. Since the selected items were polytomous, for Rasch analysis we had to choose between the partial credit model (PCM, which considers a different rating scale for each item) and the Andrich rating scale model (RSM, which assumes equal category thresholds across items). We chose the PCM for the reasons provided in our previous paper[8].
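As an illustration of the PCM, the category probabilities for a single polytomous item can be computed directly from a person measure and the item's step difficulties. This is a minimal sketch, not the WINSTEPS implementation; the example measure and thresholds are invented:

```python
import math

def pcm_probs(theta, thresholds):
    """Category probabilities for one polytomous item under the
    partial credit model (PCM).

    theta      -- person measure in logits
    thresholds -- step difficulties (delta_1..delta_m) in logits
    Returns probabilities for categories 0..m (they sum to 1).
    """
    # Cumulative sums of (theta - delta_j); category 0 has sum 0 by convention.
    cums = [0.0]
    for delta in thresholds:
        cums.append(cums[-1] + (theta - delta))
    expv = [math.exp(c) for c in cums]
    total = sum(expv)
    return [e / total for e in expv]

# Example: a 3-category item with two (invented) thresholds.
probs = pcm_probs(theta=0.5, thresholds=[-1.0, 1.2])
```

With this person measure between the two thresholds, the middle category is the most probable, as the PCM predicts.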
Levels of performance
We used the PCM results provided by WINSTEPS software (ver. 3.92.1, Chicago, IL) to estimate the CVSS17 measures (in logits) and standard errors corresponding to every possible raw score. We used these data to compute the number of significantly different levels of performance according to the methods proposed by Wright[17].
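Wright's procedure can be sketched as follows: a separation ratio G is formed from the "true" spread of the person measures (observed variance minus mean error variance) and the root mean square error, and the number of statistically distinct strata is (4G + 1)/3. A minimal pure-Python sketch, assuming population variance and invented data:

```python
import math

def wright_strata(measures, errors):
    """Number of statistically distinct performance levels (strata)
    from person measures and their standard errors, following
    Wright's (4G + 1)/3 formula, where G is the separation ratio."""
    n = len(measures)
    mean = sum(measures) / n
    sd_obs_sq = sum((m - mean) ** 2 for m in measures) / n  # observed variance
    mse = sum(e ** 2 for e in errors) / n                   # mean error variance
    sd_true_sq = max(sd_obs_sq - mse, 0.0)                  # "true" variance
    g = math.sqrt(sd_true_sq / mse)                         # separation ratio G
    reliability = g ** 2 / (1 + g ** 2)                     # Rasch reliability
    strata = (4 * g + 1) / 3
    return strata, reliability

# Invented person measures (logits) and standard errors for illustration.
strata, reliability = wright_strata(
    measures=[-2.0, -1.0, 0.0, 1.0, 2.0],
    errors=[0.5] * 5)
```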
Confirmatory factor analysis
We used the IBM SPSS Statistics package version 22.0 (Statistical Package for Social Sciences) for factor analysis of data from those subjects who answered every item to confirm the factorial structure of the items in the scale.
In the exploratory factor analysis of the CVSS17, published in 2014[8], the structure of the factor model was unknown; rather, the data served to reveal the simplified structure underlying the CVSS17 items. In the factor analysis presented in this paper, on the other hand, the precise structure of the factor model is hypothesized in advance and is tested by the analysis[23].
Discriminant analysis (DA) is a multivariate statistical technique used to classify observations into different groups. The objective of DA is to identify the minimum number of discriminant functions that provide most of the discrimination among the groups[23]. Given a set of independent variables, these functions discriminate between individuals and allocate each of them to a group defined by a dependent categorical variable. The basic problem in DA is therefore to assign an unknown subject to one of two or more groups on the basis of a multivariate observation[24]. DA differs from group-building techniques in that the groups must be known in advance. It is the appropriate method when the independent variables are metric and the dependent variable is non-metric[23]. Moreover, it is possible to determine how successful the classification is.
Thus, we used DA to examine whether the two factors of the CVSS17 identified in the exploratory factor analysis could accurately classify subjects according to the previously defined levels of performance. To this end, subjects were first grouped according to their level of symptom severity. The DA was then run on the obtained factors (independent variables) to confirm whether they discriminate between the different severity levels (dependent variable). A large proportion of subjects correctly assigned to each group was taken to indicate high discriminating power of the CVSS17.
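As a sketch of the underlying computation, a two-group Fisher discriminant for two-dimensional observations (e.g. a pair of factor scores) can be written in a few lines. This is an illustrative simplification of the multi-group DA actually run in SPSS, and the data below are invented:

```python
def fisher_lda_2d(group_a, group_b):
    """Two-group Fisher discriminant for 2-D observations: returns a
    weight vector w and a cutoff such that w.x > cutoff assigns x to
    group_a.  Pure-Python sketch using the closed-form 2x2 inverse."""
    def mean(g):
        n = len(g)
        return [sum(x[0] for x in g) / n, sum(x[1] for x in g) / n]

    def scatter(g, m):
        # Within-group scatter matrix entries (unnormalized covariance).
        sxx = sum((x[0] - m[0]) ** 2 for x in g)
        syy = sum((x[1] - m[1]) ** 2 for x in g)
        sxy = sum((x[0] - m[0]) * (x[1] - m[1]) for x in g)
        return sxx, syy, sxy

    ma, mb = mean(group_a), mean(group_b)
    axx, ayy, axy = scatter(group_a, ma)
    bxx, byy, bxy = scatter(group_b, mb)
    sxx, syy, sxy = axx + bxx, ayy + byy, axy + bxy   # pooled scatter Sw
    det = sxx * syy - sxy * sxy
    d = [ma[0] - mb[0], ma[1] - mb[1]]                # mean difference
    # w = Sw^-1 (ma - mb), via the closed-form 2x2 matrix inverse.
    w = [(syy * d[0] - sxy * d[1]) / det,
         (-sxy * d[0] + sxx * d[1]) / det]
    # Cutoff midway between the projected group means.
    cutoff = (w[0] * (ma[0] + mb[0]) + w[1] * (ma[1] + mb[1])) / 2
    return w, cutoff

# Two invented, well-separated groups of (factor 1, factor 2) scores.
high = [(2.0, 2.0), (3.0, 2.0), (2.0, 3.0)]
low = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
w, cutoff = fisher_lda_2d(high, low)
```

Projecting each observation onto w and comparing against the cutoff reproduces the group labels, which is the classification-accuracy check reported for the DA.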
Domain structure assessment
For PCM analysis of each of the two CVSS17 domains (main factors), we used WINSTEPS software to assess the following properties for each domain:
1. Item fit statistics. Both Infit and Outfit mean square fit statistics show the extent to which the items in the domain comply with Rasch model expectation[25].
2. Dimensionality. The scale is considered unidimensional when there is one latent variable of interest, and the level of this latent variable is the focus of measurement[22]. Two parameters derived from principal component analysis (PCA) of standardized residuals are used for this assessment: the amount of raw variance explained by the measure and the eigenvalue of the unexplained variance in the first contrast[25].
3. Person separation index (PSI) and levels of performance. Rasch-based PSI is a reliability indicator, analogous to Cronbach’s α of traditional test theory in both values and construction[26]. This index was obtained through WINSTEPS. Levels of performance were computed as described above.
4. Targeting. This was established as the difference between the average difficulty of the items and subjects’ mean level of symptoms [25].
5. Differential item functioning (DIF). We examined each main factor's items to check that there was no difference in the way subgroups (male–female; presbyopes–non-presbyopes) answered each item (i.e., no DIF). We used the DIF analysis implemented in WINSTEPS, based on two methods:
1. The Mantel–Haenszel method, which estimates (log-)odds DIF size and significance from cross-tabulations of the observations in the two groups.
2. The logit-difference (logistic regression) method, which estimates the difference between the Rasch item difficulties for the two groups, holding everything else constant[27].
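The Mantel–Haenszel step can be sketched as a common log-odds ratio pooled over total-score strata. This is a simplified dichotomous version for illustration, not the WINSTEPS implementation, and the tables below are invented:

```python
import math

def mh_dif_logodds(strata):
    """Mantel-Haenszel common log-odds ratio across score strata.

    strata -- list of 2x2 tables (a, b, c, d) where, within one
    total-score stratum, a/b are reference-group successes/failures on
    the item and c/d are focal-group successes/failures.  The result
    approximates DIF size in log-odds units; values near 0 mean no DIF.
    """
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        if n == 0:
            continue  # skip empty strata
        num += a * d / n
        den += b * c / n
    return math.log(num / den)

# Invented example: two strata with identical group behavior (no DIF)...
no_dif = mh_dif_logodds([(10, 10, 10, 10), (20, 10, 20, 10)])
# ...and one stratum where the reference group is clearly favored.
dif = mh_dif_logodds([(20, 10, 10, 20)])
```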
The quality of the data obtained in the domain structure assessment stage (except levels of performance) was assessed according to the criteria of the guidelines proposed by Khadka et al.[25] for quality assessment of ophthalmologic questionnaires.
Results
CVSS17 scores were mean 31.31, median 31.0, minimum 17.0, maximum 50.0 and their standard deviation was 7.65. The 95% confidence interval for the population mean was 30.78 to 31.84. PCM summary statistics are provided in S1 Appendix.
The two main factors described by the CVSS17 factor analysis were named ESF and ISF, in accordance with our previous paper[8].
We used the responses of the 600 subjects who answered every item to analyze differences in CVSS17, ESF and ISF scores across gender and age groups (non-presbyopic women, non-presbyopic men, presbyopic women and presbyopic men) by the Kruskal–Wallis test followed by Dunn's multiple comparisons test, because the Kolmogorov–Smirnov test indicated that neither the CVSS17 scores nor the main factors' scores were normally distributed. The Kruskal–Wallis test showed a significant difference between the analyzed groups for CVSS17 (H: 37.01, p<0.001), ESF (H: 34.08, p<0.001) and ISF (H: 33.51, p<0.001). According to the median values (Table 1) and Dunn's test results (Table 2), presbyopic women showed significantly higher CVSS17, ESF and ISF scores. No other significant differences were found.
[Figure omitted. See PDF.]
Table 1. Descriptive statistics by age and gender group for CVSS17, External Symptom Factor (ESF) and Internal Symptom Factor (ISF).
https://doi.org/10.1371/journal.pone.0202173.t001
[Figure omitted. See PDF.]
Table 2. p-values obtained in Dunn’s post hoc tests for pairwise comparisons.
https://doi.org/10.1371/journal.pone.0202173.t002
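The omnibus test applied above can be sketched in a few lines; this pure-Python version of the Kruskal–Wallis H statistic (with tie correction) is for illustration, not the SPSS implementation, and the sample data are invented:

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic (with tie correction) for k groups.
    Compare H against a chi-square distribution with k-1 degrees of
    freedom to obtain the p-value."""
    # Pool and rank all observations, giving ties their average rank.
    pooled = sorted(x for g in groups for x in g)
    n = len(pooled)
    ranks = {}
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + j + 1) / 2  # average of ranks i+1..j
        i = j
    # H from the rank sum of each group.
    h = 0.0
    for g in groups:
        r = sum(ranks[x] for x in g)
        h += r * r / len(g)
    h = 12 / (n * (n + 1)) * h - 3 * (n + 1)
    # Tie correction factor.
    tie = sum(t ** 3 - t for t in (pooled.count(v) for v in set(pooled)))
    if tie:
        h /= 1 - tie / (n ** 3 - n)
    return h

# Three fully separated (invented) groups give the textbook value H = 7.2.
h = kruskal_wallis_h([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
```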
Levels of performance
Rasch analysis revealed 5.8 different levels of performance and a level reliability of 0.97 (see S2 Appendix for details). CVSS17 performance levels (symptom severity grades) and subject distributions across these levels are detailed in Table 3.
[Figure omitted. See PDF.]
Table 3. CVSS17 levels of performance (symptoms severity).
https://doi.org/10.1371/journal.pone.0202173.t003
As only seven subjects were allocated to the top level (level 6), levels 5 and 6 were collapsed so five levels were finally defined (Fig 1).
[Figure omitted. See PDF.]
Fig 1. Plot of the estimated measure for any CVSS17 raw score.
Plot of the estimated measure (x-axis) for any raw CVSS17 score (y-axis). Different symbols represent distinct levels of performance as indicated in the figure inset.
https://doi.org/10.1371/journal.pone.0202173.g001
Confirmatory factor analysis
For this analysis, we selected questionnaires without missing responses corresponding to 600 subjects (age: 44.4 ± 10; 59.0% female; 33.2% non-presbyopes). CVSS17 scores for these subjects were mean 30.87, median 30.0, minimum 17.0, and maximum 50.0; standard deviation was 7.82.
First, as Bartlett's sphericity test was significant, the Kaiser–Meyer–Olkin (KMO) index was used to verify that our data were suitable for factor analysis. As the KMO was 0.94, factor analysis was performed and the number of principal components was determined by selecting factors with eigenvalues over one. A two-factor structure (the rotated component matrix is shown in Table 4 and the factor loadings for each principal component in Fig 2) accounted for 53.87% of the total variability.
[Figure omitted. See PDF.]
Fig 2. Factor loadings for CVSS17 principal components.
Plot of the factor loadings for Factor 1 (external symptom factor, horizontal axis) against Factor 2 (internal symptom factor, vertical axis) for each of the CVSS17 items. Item descriptors are shown in Table 4.
https://doi.org/10.1371/journal.pone.0202173.g002
[Figure omitted. See PDF.]
Table 4. Rotated components matrix.
https://doi.org/10.1371/journal.pone.0202173.t004
Once we had identified the principal components, a univariate descriptive analysis was conducted by calculating the means and standard deviations of the scale's main factors separately for each performance group (Table 5).
[Figure omitted. See PDF.]
Table 5. Univariate descriptive analysis results.
https://doi.org/10.1371/journal.pone.0202173.t005
As shown in Table 5, mean factor values differed across performance levels and their variability was less than 1 in most cases, indicating that the scale's main factors could be good at discriminating between groups. This was confirmed by the DA, which concluded that the means of at least one pair of groups differ significantly (p<0.05 for both factors). Fig 3 shows the factor loadings for each subject according to the subject's level of performance.
[Figure omitted. See PDF.]
Fig 3. Discriminant analysis scatter plot of the two factors model.
Discriminant analysis scatter plot of the two factors model. Recoded external factor scores (x-axis) are plotted against recoded internal factor scores (y-axis). Different grey intensities represent distinct subject levels of performance as indicated in the figure inset.
https://doi.org/10.1371/journal.pone.0202173.g003
In addition, the discriminant functions obtained from the main factors were able to correctly classify 98.3% of the cases examined.
Domain structure assessment
According to previously used nomenclature[4, 18], items with a Factor 1 loading over 0.5 (see Table 4) are hereafter referred to as the external symptom factor (ESF) and items with a Factor 2 loading over 0.5 as the internal symptom factor (ISF). Rasch analysis results are provided separately for the ESF and ISF:
Rasch analysis results for ESF.
Infit and outfit mean squares were 0.99 and 1.00, respectively, for persons, and 1.00 and 1.01, respectively, for items. The eigenvalue of the unexplained variance in the first PCA contrast was 1.71, and the raw variance explained by the measures was 53%. The PSI was 2.61 and there were 4.7 statistically different levels of performance (Fig 4). The difference between the average difficulty of the items and the subjects' mean level of symptoms was -0.41 logits. DIF for gender and age group was under 0.5 logits for all items. Table 6 shows our quality assessment of these results.
[Figure omitted. See PDF.]
Fig 4. Plot of the estimated measure for any ESF raw score.
Plot of the estimated measure (x-axis) for any ESF raw score (y-axis). Different symbols represent distinct levels of performance as indicated in the figure inset.
https://doi.org/10.1371/journal.pone.0202173.g004
[Figure omitted. See PDF.]
Table 6. Quality evaluation of data obtained in the External Symptom Factor (ESF) domain assessment.
https://doi.org/10.1371/journal.pone.0202173.t006
Rasch analysis results for ISF.
Infit and outfit mean squares were 0.99 and 0.99, respectively, for persons, and 0.99 and 0.99, respectively, for items. The eigenvalue of the unexplained variance in the first PCA contrast was 1.78, and the raw variance explained by the measures was 56.1%. The PSI was 1.63 and there were 3.3 statistically different levels of performance (Fig 5). The difference between the average difficulty of the items and the subjects' mean level of symptoms was -1.27 logits. DIF for gender and age group was under 0.50 logits for all items. Table 7 shows our quality assessment of these results.
[Figure omitted. See PDF.]
Fig 5. Plot of the estimated measure for any ISF raw score.
Plot of the estimated measure (x-axis) for any ISF raw score (y-axis). Different symbols represent distinct levels of performance as indicated in the figure inset.
https://doi.org/10.1371/journal.pone.0202173.g005
[Figure omitted. See PDF.]
Table 7. Quality evaluation of data obtained in the Internal Symptom Factor (ISF) domain assessment.
https://doi.org/10.1371/journal.pone.0202173.t007
Discussion
This study identified five significantly different levels of symptoms across the CVSS17 score range and confirmed the two-factor structure (ESF and ISF) of this scale. Rasch analysis also showed that the ESF and ISF perform well as separate scales.
Because the CVSS17 is a PRO instrument without DIF for gender or age group, we could directly compare CRVOS among these subgroups. Our results describing a higher level of CRVOS in women and in presbyopes, although previously reported[4, 5], are therefore worthy of note; future studies should consider these differences in their analyses and look more deeply into the reasons for the higher level of CRVOS in presbyopic women.
Our analysis detected 5.8 levels of performance, or symptom strata, across the scale's score range, corresponding to a sample-independent reliability of 0.97. For easy interpretation of CVSS17 scores, we propose five levels of symptom severity, whereby categories four and five indicate a higher level of subject symptoms than scale difficulty. This means that VDT workers assigned to these levels warrant priority attention. To our knowledge, no similar CRVOS scale shows such good grading power, which was comparable to that calculated using the same methods for the Chinese Impact of Vision Impairment (IVI) questionnaire[28].
By discriminant analysis, we confirmed that the CVSS17’s main factors (ESF and ISF) can correctly classify subjects’ CRVOS according to their level of symptom severity. A reduced blink rate associated with computer use, a high cognitive load and low-contrast reading conditions lead to ESF symptoms[29–31], while refractive errors, glare, accommodation system stress[29] and increased convergence[31] may cause ISF symptoms. As mentioned previously[8], other authors[3, 18] have proposed similar bifactorial models with differences in the factors assigned to eyestrain and photophobia. To explain these differences, it should be noted that the items included in a questionnaire determine factor composition. Sheedy et al.[3] selected nine symptoms measured using visual analogue scales (VAS) in a study sample of twenty students and university staff members, while Portello et al.[3] measured symptoms among VDT workers with the questionnaire developed by Hayes et al.[1], based on clinical findings and questionnaires used in the care of computer-using patients. The items included in the CVSS17 were selected through Rasch analysis conducted on a population of VDT workers. Item A33, related to light-induced ocular discomfort, showed similar ESF and ISF factor loadings; thus, the pathophysiological mechanism that produces ocular discomfort associated with bright light may have components of both the ESF and the ISF.
ESF items can assess dry-eye symptoms related to computer use among VDT workers. In fact, when comparing its measurement properties against those recently described for the Ocular Comfort Index (OCI), Ocular Surface Disease Index (OSDI) and McMonnies questionnaire[19], the ESF showed several benefits, including better fit statistics, a lower eigenvalue of the first PCA contrast, higher person separation and better item–person targeting. Accordingly, the ESF could be the best option to assess dry-eye symptoms in VDT workers. However, clinical research is still needed to confirm its performance as a separate scale by assessing other properties such as repeatability and convergent validity. In addition, to help clinicians manage CRVOS, more research is needed to specify the relationship between clinical findings and the levels of severity and/or the subscales described in the present paper, beyond the associations between CVSS17 scores and some clinical measures that we described in a previous work, summarized in Table 8.
[Figure omitted. See PDF.]
Table 8. Coefficients of correlation between CVSS17 and some clinical measures, previously reported [10].
https://doi.org/10.1371/journal.pone.0202173.t008
The results displayed in Table 8 provide some evidence of the CVSS17’s concurrent validity, but we still need to determine how CRVOS vary as certain clinical measures, such as uncorrected refractive errors or tear film osmolarity, change. To do so, researchers need to compare valid and reliable clinical data against a valid and reliable model of CRVOS, such as the one provided by the CVSS17.
According to the PSI, ISF measurement precision was below the acceptable limit, given that our participant distribution was skewed towards the less symptomatic part of the scale. However, by assessing reliability using the Wright method, we confirmed that the ISF can distinguish three strata, so it can at least discriminate between high and low performers. We therefore propose that this subscale could be useful to assess internal symptoms among VDT workers. Notwithstanding, its reliability and person–item targeting could be improved by adding more low-difficulty items.
Although more work is needed to determine the change in CVSS17 score that indicates a clinically meaningful variation[32], our data suggest that a change in CVSS17 performance level may be perceived by a subject. The results shown in the present paper indicate that, besides the CRVOS level provided by the CVSS17, the ESF and ISF are valid measures when defining the optimal assessment strategy and/or treatment for any patient. Table 9 depicts an example of subscale-guided clinical decisions based on real CVSS17 scores: Persons 1 and 2 have the same total score (37), but on the basis of their subscale scores alone the clinician would focus on dry eye when dealing with Person 1 and would consider ocular refraction, accommodation and/or vergence anomalies when caring for Person 2.
[Figure omitted. See PDF.]
Table 9. Example for subscales’ scores interpretation.
https://doi.org/10.1371/journal.pone.0202173.t009
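The subscale-guided interpretation illustrated by Table 9 amounts to summing each subject's responses over the items belonging to each factor. In the sketch below, the item groupings and responses are hypothetical placeholders, not the published Table 4 assignments or real scores:

```python
# Hypothetical item groupings for illustration only; the actual
# ESF/ISF assignments are given by the Table 4 factor loadings.
ESF_ITEMS = ["i1", "i2", "i3"]   # dry-eye-related items (assumed)
ISF_ITEMS = ["i4", "i5", "i6"]   # accommodation/vergence items (assumed)

def subscale_scores(responses):
    """Split one subject's item responses into ESF and ISF raw scores."""
    esf = sum(responses[i] for i in ESF_ITEMS)
    isf = sum(responses[i] for i in ISF_ITEMS)
    return esf, isf

# Two hypothetical subjects with equal totals but different profiles:
person1 = {"i1": 3, "i2": 3, "i3": 3, "i4": 1, "i5": 1, "i6": 1}  # ESF-heavy
person2 = {"i1": 1, "i2": 1, "i3": 1, "i4": 3, "i5": 3, "i6": 3}  # ISF-heavy
```

The totals match, yet the subscale profiles point the clinician in opposite directions, which is exactly the situation Table 9 depicts.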
We must point out that we assessed neither the participants’ refractive status nor their accommodative or binocular anomalies, so an under- or overrepresentation of these anomalies in our sample could lead to an under- or overestimation of values such as the CVSS17 population mean, the median values, or the number of subjects in each level of performance. This has no effect, however, on the main findings of the study, such as the factor structure of the CVSS17 or the cutoff scores for the levels of performance.
The CVSS17 was originally developed and validated in Spanish. Since its development, other research groups have started its cross-cultural adaptation into English, Italian and Portuguese. For a better understanding of the CVSS17’s items, we provide printable English versions of the CVSS17, ESF and ISF along with the original Spanish versions and their scoring charts (S3–S11 Appendices). To help clinicians and researchers willing to use the CVSS17, we include spreadsheets in English (S12 Appendix) and Spanish (S13 Appendix) that automatically provide the CVSS17, ESF and ISF scores once the answers to the questionnaire have been entered manually. In addition, the data used for this research are provided in S1 Dataset.
In conclusion, the CVSS17 is a highly reliable patient-reported outcome (PRO) tool to assess CRVOS in VDT workers, with scores defining five different levels of performance. In addition, two main factors (ESF and ISF) were identified through factor analysis; these main factors, or subscales, were confirmed by discriminant analysis and are consistent with our previous findings. Accordingly, clinicians and/or researchers could use the ESF and ISF subscales separately to assess the specific components of CRVOS.
Supporting information
[Figure omitted. See PDF.]
S1 Appendix. Summary statistics for Rasch analysis.
https://doi.org/10.1371/journal.pone.0202173.s001
(PDF)
S2 Appendix. CVSS17 levels of severity calculation.
https://doi.org/10.1371/journal.pone.0202173.s002
(PDF)
S3 Appendix. Printable English version of the CVSS17.
https://doi.org/10.1371/journal.pone.0202173.s003
(PDF)
S4 Appendix. Printable Spanish version of the CVSS17.
https://doi.org/10.1371/journal.pone.0202173.s004
(PDF)
S5 Appendix. Scoring chart of the CVSS17.
https://doi.org/10.1371/journal.pone.0202173.s005
(PDF)
S6 Appendix. Printable English version of the CVSS17-ESF.
https://doi.org/10.1371/journal.pone.0202173.s006
(PDF)
S7 Appendix. Printable Spanish version of the CVSS17-ESF.
https://doi.org/10.1371/journal.pone.0202173.s007
(PDF)
S8 Appendix. Scoring chart of the CVSS17-ESF.
https://doi.org/10.1371/journal.pone.0202173.s008
(PDF)
S9 Appendix. Printable English version of the CVSS17-ISF.
https://doi.org/10.1371/journal.pone.0202173.s009
(PDF)
S10 Appendix. Printable Spanish version of the CVSS17-ISF.
https://doi.org/10.1371/journal.pone.0202173.s010
(PDF)
S11 Appendix. Scoring chart of the CVSS17-ISF.
https://doi.org/10.1371/journal.pone.0202173.s011
(PDF)
S12 Appendix. Spreadsheet for entering and scoring CVSS17 responses (English).
https://doi.org/10.1371/journal.pone.0202173.s012
(XLSX)
S13 Appendix. Spreadsheet for entering and scoring CVSS17 responses (Spanish).
https://doi.org/10.1371/journal.pone.0202173.s013
(XLSX)
S1 Dataset. Spreadsheet showing each subject’s responses to each CVSS17 item.
https://doi.org/10.1371/journal.pone.0202173.s014
(XLSX)
Acknowledgments
The authors thank UGT (Unión General de Trabajadores), INE (Instituto Nacional de Estadística), Grupo OTP-Prevención de Riesgos Laborales, Fraternidad-Muprespa and Siemens España for the cooperation of their VDT workers.
Citation: González-Pérez M, Susi R, Barrio A, Antona B (2018) Five levels of performance and two subscales identified in the computer-vision symptom scale (CVSS17) by Rasch, factor, and discriminant analysis. PLoS ONE 13(8): e0202173. https://doi.org/10.1371/journal.pone.0202173
1. Hayes J, Sheedy J, Stelmack J, Heaney C. Computer use, symptoms, and quality of life. Optom Vis Sci. 2007;84(8):738–44. pmid:17700327
2. Daum KM, Clore KA, Simms SS, Vesely JW, Wilczek DD, Spittle BM, et al. Productivity associated with visual status of computer users. Optometry. 2004;75(1):33–47. Epub 2004/01/14. pmid:14717279.
3. Portello J, Rosenfield M, Bababekova Y, Estrada J, Leon A. Computer-related visual symptoms in office workers. Ophthalmic Physiol Opt. 2012;32:375–82. pmid:22775070
4. Gowrisankaran S, Sheedy JE. Computer vision syndrome: A review. Work. 2015;52(2):303–14. pmid:26519133.
5. Rosenfield M. Computer vision syndrome: a review of ocular causes and potential treatments. Ophthalmic Physiol Opt. 2011;31(5):502–15. Epub 2011/04/13. pmid:21480937.
6. Vilela M, Pellanda L, Cesa C, Castagno V. Asthenopia Prevalence and Risk Factors Associated with Professional Computer Use-A Systematic Review. International Journal of Advance in Medical Science. 2015;3(2):51–60.
7. Rosenfield M, Hue JE, Huang RR, Bababekova Y. The effects of induced oblique astigmatism on symptoms and reading performance while viewing a computer screen. Ophthalmic Physiol Opt. 2012;32(2):142–8. pmid:22150631.
8. Gonzalez-Perez M, Susi R, Antona B, Barrio A, Gonzalez E. The Computer-Vision Symptom Scale (CVSS17): development and initial validation. Invest Ophthalmol Vis Sci. 2014;55(7):4504–11. pmid:24938516.
9. Seguí MdM, Cabrero-García J, Crespo A, Verdú J, Ronda E. A reliable and valid questionnaire was developed to measure computer vision syndrome at the workplace. J Clin Epidemiol. 2015;68(6):662–73. pmid:25744132
10. González Pérez M. Desarrollo y validación de una escala para medir la sintomatología visual asociada al uso de videoterminales en el trabajo. Ph.D. Thesis, Universidad Complutense de Madrid. 2015. Available from http://eprints.ucm.es/33579/
11. Conlon E, Lovegrove W, Chekaluk E, Pattison P. Measuring visual discomfort. Vis Cogn. 1999;6:637–66.
12. Dougherty BE, Nichols JJ, Nichols KK. Rasch analysis of the ocular surface disease index (OSDI). Invest Ophthalmol Vis Sci. 2011;52(12):8630–5.
13. Ahmed S, Ware P, Gardner W, Witter J, Bingham CO, Kairy D, et al. Montreal Accord on Patient-Reported Outcomes (PROs) use series–Paper 8: patient-reported outcomes in electronic health records can inform clinical and policy decisions. J Clin Epidemiol. 2017;89:160–7. pmid:28433675
14. Kroenke K, Monahan PO, Kean J. Pragmatic characteristics of patient-reported outcome measures are important for use in clinical practice. J Clin Epidemiol. 2015;68(9):1085–92. pmid:25962972; PubMed Central PMCID: PMCPMC4540688.
15. Bingham CO, Noonan VK, Auger C, Feldman DE, Ahmed S, Bartlett SJ. Montreal Accord on Patient-Reported Outcomes (PROs) use series–Paper 4: patient-reported outcomes can inform clinical decision making in chronic care. J Clin Epidemiol. 2017;89:136–41. pmid:28433678
16. Snyder CF, Aaronson NK, Choucair AK, Elliott TE, Greenhalgh J, Halyard MY, et al. Implementing patient-reported outcomes assessment in clinical practice: a review of the options and considerations. Qual Life Res. 2012;21(8):1305–14. pmid:22048932.
17. Wright B. Separation, Reliability and Skewed Distributions: Statistically Different Levels of Performance. Rasch Meas Trans. 2001;14(4):786.
18. Sheedy JE, Hayes JN, Engle J. Is all asthenopia the same? Optom Vis Sci. 2003;80(11):732–9. Epub 2003/11/25. pmid:14627938.
19. McAlinden C, Gao R, Wang Q, Zhu S, Yang J, Yu A, et al. Rasch analysis of three dry eye questionnaires and correlates with objective clinical tests. Ocul Surf. 2017;15(2):202–10. pmid:28179131.
20. Lamoureux EL, Pallant JF, Pesudovs K, Hassell JB, Keeffe JE. The Impact of Vision Impairment Questionnaire: an evaluation of its measurement properties using Rasch analysis. Invest Ophthalmol Vis Sci. 2007;47(11):4732–41. pmid:17065481.
21. Instituto Nacional de Seguridad e Higiene en el Trabajo. Guía Técnica para la evaluación y prevención de los riesgos relativos a la utilización de equipos con Pantallas de visualización. Madrid: Ministerio de Trabajo e Inmigración; 2012.
22. Wu M, Adams R. Applying the Rasch model to psycho-social measurement: A practical approach. Melbourne: Educational Measurement Solutions; 2007. 87 p.
23. Sharma S. Applied multivariate techniques: John Wiley & Sons, Inc.; 1995.
24. Lachenbruch PA, Goldstein M. Discriminant analysis. Biometrics. 1979:69–85.
25. Khadka J, McAlinden C, Pesudovs K. Quality Assessment of Ophthalmic Questionnaires: Review and Recommendation. Optom Vis Sci. 2013;90(8):720–44.
26. Marais I, Andrich D. Formalizing dimension and response violations of local independence in the unidimensional Rasch model. J Appl Meas. 2008;9(3):200–15. pmid:18753691.
27. Linacre J. Winsteps® Rasch measurement computer program User's Guide. Beaverton, Oregon: Winsteps.com; 2014.
28. Fenwick EK, Ong PG, Sabanayagam C, Rees G, Xie J, Holloway E, et al. Assessment of the psychometric properties of the Chinese Impact of Vision Impairment questionnaire in a population-based study: findings from the Singapore Chinese Eye Study. Qual Life Res. 2016;25(4):871–80. pmid:26420045.
29. Gowrisankaran S, Nahar NK, Hayes JR, Sheedy JE. Asthenopia and blink rate under visual and cognitive loads. Optom Vis Sci. 2012;89(1):97–104. pmid:22051780.
30. Nahar NK, Gowrisankaran S, Hayes JR, Sheedy JE. Interactions of visual and cognitive stress. Optometry. 2011;82(11):689–96. pmid:21885351.
31. Nahar NK, Sheedy JE, Hayes J, Tai YC. Objective measurements of lower-level visual stress. Optom Vis Sci. 2007;84(7):620–9. pmid:17632311.
32. de Vet HC, Terwee CB, Ostelo RW, Beckerman H, Knol DL, Bouter LM. Minimal changes in health status questionnaires: distinction between minimally detectable change and minimally important change. Health Qual Life Outcomes. 2006;4:54. pmid:16925807; PubMed Central PMCID: PMCPMC1560110.
© 2018 González-Pérez et al. This is an open access article distributed under the terms of the Creative Commons Attribution License: http://creativecommons.org/licenses/by/4.0/ (the “License”), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Abstract
Purpose
To quantify the levels of performance (symptom severity) of the computer-vision symptom scale (CVSS17), confirm its bifactorial structure as detected in an exploratory factor analysis, and validate its factors as subscales.
Methods
Using the partial credit model (PCM), we estimated CVSS17 measures and their standard errors for every possible raw score, and used these data to determine the number of statistically distinct performance levels the CVSS17 can resolve. In addition, through discriminant analysis we checked that the scale's two main factors could classify subjects into these performance levels. Finally, a separate Rasch analysis was performed for each CVSS17 factor to assess its measurement properties when used as an isolated scale.
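The number of distinct performance levels can be derived from Rasch person statistics via Wright's strata formula (reference 17). The sketch below illustrates the arithmetic only; the separation value and standard deviations are hypothetical placeholders, not the study's actual estimates.

```python
# Hedged sketch: Wright's (2001) strata formula for counting statistically
# distinct performance levels from Rasch person statistics.
# All numeric inputs here are illustrative, not the study's estimates.

def person_separation(sd_measures: float, rmse: float) -> float:
    """Separation G = "true" SD of person measures / average error (RMSE)."""
    true_sd = (sd_measures**2 - rmse**2) ** 0.5
    return true_sd / rmse

def strata(G: float) -> float:
    """Number of statistically distinct levels: H = (4G + 1) / 3."""
    return (4 * G + 1) / 3

# A separation of about 4.1 yields roughly 5.8 strata, i.e. five to six
# distinguishable levels of symptom severity.
G = 4.1
print(round(strata(G), 1))  # 5.8
```

Strata are usually reported to one decimal and interpreted as the nearest whole number of levels the instrument can reliably tell apart.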
Results
We identified 5.8 statistically distinct levels of performance. Discriminant functions obtained from the sample data showed that the scale's two main factors correctly classified 98.4% of cases. These factors, the internal symptom factor (ISF) and the external symptom factor (ESF), showed good measurement properties and can be considered subscales.
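The classification step can be pictured with a minimal linear discriminant analysis on two factor scores. This is a hedged sketch, not the authors' code: the data are synthetic, and the ISF/ESF scores and five severity levels are simulated for illustration.

```python
# Hedged sketch (synthetic data): linear discriminant functions classifying
# subjects into severity levels from two factor scores (ISF-like, ESF-like).
import numpy as np

def lda_fit(X, y):
    """Fit pooled-covariance LDA; return per-class linear discriminants."""
    classes = np.unique(y)
    means = np.array([X[y == k].mean(axis=0) for k in classes])
    # Pooled within-class covariance matrix.
    cov = sum(np.cov(X[y == k].T) * (np.sum(y == k) - 1)
              for k in classes) / (len(y) - len(classes))
    priors = np.array([np.mean(y == k) for k in classes])
    inv = np.linalg.inv(cov)
    W = means @ inv                                   # linear weights
    b = -0.5 * np.einsum('ij,ij->i', W, means) + np.log(priors)
    return classes, W, b

def lda_predict(X, params):
    """Assign each row to the class with the highest discriminant score."""
    classes, W, b = params
    return classes[np.argmax(X @ W.T + b, axis=1)]

rng = np.random.default_rng(0)
y = rng.integers(0, 5, size=500)                      # five severity levels
X = np.column_stack([y + rng.normal(0, 0.3, 500),     # ISF-like score
                     y + rng.normal(0, 0.3, 500)])    # ESF-like score
params = lda_fit(X, y)
acc = np.mean(lda_predict(X, params) == y)
print(f"classification accuracy: {acc:.1%}")
```

With well-separated levels, resubstitution accuracy lands in the high 90s, the same regime as the 98.4% reported for the CVSS17 factors.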
Conclusion
CVSS17 scores defined five distinct levels of performance. In addition, the two main factors (ISF and ESF) were confirmed by discriminant analysis; these subscales serve to assess, respectively, the visual and the ocular symptoms attributable to computer use.
Neither ProQuest nor its licensors make any representations or warranties with respect to the translations. The translations are automatically generated "AS IS" and "AS AVAILABLE" and are not retained in our systems. PROQUEST AND ITS LICENSORS SPECIFICALLY DISCLAIM ANY AND ALL EXPRESS OR IMPLIED WARRANTIES, INCLUDING WITHOUT LIMITATION, ANY WARRANTIES FOR AVAILABILITY, ACCURACY, TIMELINESS, COMPLETENESS, NON-INFRINGMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Your use of the translations is subject to all use restrictions contained in your Electronic Products License Agreement and by using the translation functionality you agree to forgo any and all claims against ProQuest or its licensors for your use of the translation functionality and any output derived there from. Hide full disclaimer