Combining/weighting subscores into an aggregate score involves issues that apply to many fields in the organizational sciences (e.g., weighting predictors in selection, weighting multiple performance appraisal indicators, overall evaluation of organizations). The weights used in practice can be different (differential weights) or equal (unit weights). Relevant literature across multiple disciplines and multiple decades is reviewed. The literature indicates that unit weights have substantial predictive validity when compared with regression weights, but there is a lack of data on how other differential weighting strategies (e.g., weights generated by subject matter experts) compare to unit weights. In response, a primary and a meta-analytic study are provided here. The recent literature also contains some potential criticisms of unit weights in regard to personnel selection and content validation, and those statements are evaluated. The data and findings indicate that unit weights can be a highly appropriate approach for weighting under many circumstances.
Keywords: unit weights; selection; composites; content validity
The task of combining multiple pieces of information in order to make predictions or arrive at aggregate/composite scores has confronted statisticians, researchers, and applied professionals for decades (e.g., Dawes & Corrigan, 1974; Einhorn & Hogarth, 1975; Meehl, 1965; Schmidt, 1971; Seashore, 1957; Wilks, 1938). The need to generate such combinations occurs in a variety of subdisciplines in the organizational sciences.
For example, in a context that we develop in more detail below, human resource selection systems often involve the collection of multiple pieces of information about applicants (e.g., multiple test scores). There is then a need to combine all of this information to make a selection recommendation about each applicant (cf. Gatewood & Feild, 2001; Guion, 1991). Similar issues apply even within a single selection instrument (e.g., how to weight items/exercises within a test). In addition, the assessment of eventual overall job performance also generally requires the aggregation/combination of scores and data. That is, a supervisor might have information about a subordinate's quantity of task performance, quality of task performance, and a perception about the individual's organizational citizenship. The organization may wish to provide an overall evaluation of the subordinate, which would require weighting these criterion components (see Gatewood & Feild, 2001, for a discussion of what has traditionally been called "the multiple criterion" problem). As another example of...
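The arithmetic behind these two weighting strategies is straightforward and can be illustrated with a brief sketch. The following Python fragment (an illustration of the general technique, not any procedure from the studies reported here; the function and variable names are hypothetical) standardizes each subscore so the components share a common scale, then forms a composite either with unit weights (all weights equal to 1) or with differential weights such as those a subject matter expert might supply:

```python
import statistics

def standardize(scores):
    """Convert one predictor's raw scores (across applicants) to z-scores,
    so that subscores measured on different scales are comparable."""
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    return [(s - mean) / sd for s in scores]

def composite(subscores, weights=None):
    """Weighted sum of one applicant's (standardized) subscores.
    With weights=None, every subscore receives a unit weight of 1."""
    if weights is None:
        weights = [1.0] * len(subscores)
    return sum(w * s for w, s in zip(weights, subscores))

# Hypothetical example: three applicants, two predictors.
quantity = standardize([3, 5, 7])     # -> [-1.0, 0.0, 1.0]
quality = standardize([10, 20, 60])

for i in range(3):
    unit = composite([quantity[i], quality[i]])           # unit weights
    sme = composite([quantity[i], quality[i]], [2.0, 1.0])  # differential
    print(f"Applicant {i + 1}: unit = {unit:.2f}, SME-weighted = {sme:.2f}")
```

Note that standardizing before weighting matters: without it, the subscore with the largest raw-score variance implicitly dominates the composite regardless of the nominal weights, which is one reason the unit-weighting literature typically assumes standardized components.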