Multiple regression is a widely used technique for data analysis in social and behavioral research. The complexity of interpreting such results increases when correlated predictor variables are involved. Commonality analysis provides a method of determining the variance accounted for by respective predictor variables and is especially useful in the presence of correlated predictors. However, computing commonality coefficients is laborious. To make commonality analysis accessible to more researchers, a program was developed to automate the calculation of unique and common elements in commonality analysis, using the statistical package R. The program is described, and a heuristic example using data from the Holzinger and Swineford (1939) study, readily available in the MBESS R package, is presented.
Multiple regression is a widely used technique for data analysis in social and behavioral research (Fox, 1991; Huberty, 1989). It is a method for determining the amount of variance in a criterion variable that is accounted for by two or more predictor variables. These predictor variables are often correlated, increasing the complexity of interpreting results (Pedhazur, 1997; Zientek & Thompson, 2006).
Stepwise regression is often used in educational and psychological research to evaluate the order of importance of variables and to select useful subsets of variables (Huberty, 1989; Thompson, 1995). Pedhazur (1997) suggested that stepwise regression methods provide researchers with a methodology with which to determine a predictor's individual meaningfulness as it is introduced into the regression model. However, stepwise regression can lead to serious Type I errors (Thompson, 1995), and the selection/entry order into the model can "drastically" misrepresent a variable's usefulness (Kerlinger, 1986, p. 543).
Commonality analysis provides an effective alternative for determining the variance accounted for by respective predictor variables (Onwuegbuzie & Daniel, 2003; Rowell, 1996). Also called element analysis, commonality analysis was developed in the 1960s as a method of partitioning variance (R²) into unique and nonunique parts (Mayeske et al., 1969; Mood, 1969, 1971; Newton & Spurrell, 1967). This has important implications, because theory advancement and research findings' usefulness depend not only on establishing that a relationship exists among predictors and the criterion, but also upon determining the extent to which those independent variables, singly and in all combinations, share variance with the dependent variable. Only then can we fully know the relative importance of...
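For the two-predictor case, the partition described above reduces to simple arithmetic on the R² values of the full model and of each one-predictor model: the variance unique to the first predictor is R²(x1, x2) − R²(x2), the variance unique to the second is R²(x1, x2) − R²(x1), and the variance common to both is R²(x1) + R²(x2) − R²(x1, x2). The program the article describes is written in R; the following is only a minimal Python sketch of that arithmetic (not the package's actual code), using synthetic correlated predictors rather than the Holzinger and Swineford measures:

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an ordinary least-squares fit of y on X (intercept added)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def commonality_two_predictors(x1, x2, y):
    """Partition the full-model R^2 into unique and common components."""
    r_full = r_squared(np.column_stack([x1, x2]), y)
    r1 = r_squared(x1.reshape(-1, 1), y)
    r2 = r_squared(x2.reshape(-1, 1), y)
    u1 = r_full - r2          # variance unique to x1
    u2 = r_full - r1          # variance unique to x2
    c12 = r1 + r2 - r_full    # variance common to x1 and x2
    return {"unique_x1": u1, "unique_x2": u2, "common": c12, "total": r_full}

# Synthetic example with deliberately correlated predictors
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = 0.7 * x1 + 0.3 * rng.normal(size=200)   # correlated with x1
y = x1 + x2 + rng.normal(size=200)
parts = commonality_two_predictors(x1, x2, y)
```

By construction, the unique and common components sum to the full-model R², which is the defining property of the partition; with more predictors the number of common elements grows rapidly (2^k − 1 components for k predictors), which is why automating the computation is worthwhile.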