An augmented framework for training criteria based on Kirkpatrick's (1959a, 1959b, 1960a, 1960b) model divides training reactions into affective and utility reactions, and learning into post-training measures of learning, retention, and behavior/skill demonstration. A total of 34 studies yielding 115 correlations were meta-analyzed. Results included substantial reliabilities across training criteria and reasonable convergence among subdivisions of criteria within a larger level. Utility-type reaction measures were more strongly related to learning or on-the-job performance (transfer) than were affective-type reaction measures. Moreover, utility-type reaction measures were stronger correlates of transfer than were measures of immediate or retained learning. These latter findings support current thinking regarding the use of reactions in training (e.g., Warr & Bunce, 1995). Implications for choosing and developing training criteria are discussed.
Training researchers agree on the importance of evaluating training (e.g., Goldstein, 1993). There is equally strong agreement among training practitioners on the difficulty of doing so (Carnevale & Schulz, 1990). For any training evaluation to be valuable, however, training criteria must be psychometrically sound, meaningful to decision makers, and collectible within typical organizational constraints (Tannenbaum & Woods, 1992). Research has revealed that by far the most commonly collected training criteria are trainee reactions (Bassi, Benson, & Cheney, 1996; Saari, Johnson, McLaughlin, & Zimmerle, 1988), which, although easy to collect, may or may not be related to other, often more meaningful, indicators for training evaluation.
Perhaps unsurprisingly, Kirkpatrick's four-level model (1959a, 1959b, 1960a, 1960b) continues to be the most prevalent framework for categorizing training criteria. This simple taxonomy became popular in both business and academia because it addressed the need to understand training evaluation simply yet systematically (Shelton & Alliger, 1993). The model's simplicity is appealing, but, as more recent work has revealed, it is also a liability. Alliger and Janak (1989) conducted a meta-analytic review of the literature based on Kirkpatrick's model. They concluded that:
[Kirkpatrick's model] provides a vocabulary and rough taxonomy for criteria. At the same time, Kirkpatrick's model, through its easily adopted vocabulary and a number of (often implicit) assumptions, can lead to misunderstandings and overgeneralizations (pp. 331-332).
Although there are problems with Kirkpatrick's model, it is not clear just how best to think about training criteria. Perhaps Kirkpatrick's taxonomy requires...