1. Introduction
The aim of this research is to investigate the consistency between criterion validity and unidimensional validity, using a case study in management research. Testing instrument validity is essential for ensuring that the data produced by an instrument or questionnaire are unbiased; if the data are biased, the results of the research will be biased as well. An instrument tryout should therefore be conducted, and the tryout data used to investigate the instrument's validity. No valid conclusions exist without valid measurement. Validity can be defined as the agreement between a test score or measure and the quality it is believed to measure. Validity is sometimes defined as the answer to the question: “Does the test measure what it is supposed to measure?” To address this question, we use systematic studies to determine whether the conclusions drawn from test results are justified by evidence. Definitions of validity have blossomed, making it hard to determine whether psychologists who referred to different types of validity were really talking about different things. Although validity defined the meaning of tests and measures, the term itself was beginning to lose its meaning (Kaplan and Saccuzzo, 2009; Bollen, 2001).
Validity is the evidence for inferences made about a test score. There are three types of evidence: construct-related, criterion-related and content-related (Raykov, 2008). Many other names exist for different aspects of validity, but most can be understood in terms of these three categories. The most recent standards emphasize that validity is a unitary concept representing all of the evidence that supports the intended interpretation of a measure. The consensus document cautions against separating validity into subcategories such as content validity, predictive validity and criterion validity. Although categories for grouping different types of validity are convenient, their use does not imply that there are distinct forms of validity. Psychologists have sometimes been overly rigorous about drawing distinctions among categories when, in fact, the categories overlap (Dunn-Rankin, 1983).
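In practice, criterion-related evidence is often quantified as the correlation between scores on the instrument and scores on an external criterion. The following is a minimal sketch of that computation in plain Python; the test scores and criterion values are hypothetical data invented purely for illustration, not drawn from the study described here.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical illustration: instrument scores for eight respondents
# and a matching external criterion measure.
test_scores = [12, 15, 11, 18, 14, 16, 13, 17]
criterion = [2.8, 3.4, 2.5, 3.9, 3.0, 3.6, 2.9, 3.7]

r = pearson_r(test_scores, criterion)
print(round(r, 3))
```

A correlation near 1 would be read as strong criterion-related evidence, while a value near 0 would suggest the instrument does not track the criterion; substantive interpretation still depends on the sample and the quality of the criterion measure itself.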
There are many different types of validity, including content validity, face validity, criterion-related validity (or predictive validity), construct validity, factorial validity, concurrent validity, convergent validity and divergent (or discriminant) validity (Jarvis et al., 2003). Content validity pertains to the degree to which the instrument fully assesses...