[…] I’m talking about a specific, extra type of integrity that is not lying, but bending over backwards to show how you’re maybe wrong, that you ought to have when acting as a scientist. And this is our responsibility as scientists, certainly to other scientists, and I think to laymen. (Richard Feynman, 1974)
The progress of a scientific discipline rests upon an iterative process whereby accumulating empirical evidence is used to inform judgments about whether a particular hypothesis, theory or model should be accepted, modified or rejected. This process works best – and progress in the field proceeds most rapidly – when researchers are exposed to both confirming and disconfirming evidence for any particular hypothesis, theory or model. Disconfirming evidence is particularly useful because it allows for the potential refinement or even falsification of hypotheses, theories and models, and also because it allows for the development of boundary conditions – an important element of any scientific theory (Yammarino and Dubinsky, 1994). Unfortunately, perceived or actual journal norms have resulted in a tendency for researchers in the organizational sciences to hide potentially disconfirming evidence and primarily present evidence that is supportive of a theory, model or hypothesis (Yong, 2012), even when such apparently supportive evidence is the result of questionable analytic decisions.
In this paper, we describe seven analytic and reporting practices relating to the testing of measurement models via confirmatory factor analysis (CFA) that reduce the degree to which readers are exposed to disconfirming evidence. Following the terminology used by John et al. (2012) and later by Banks et al. (2016), we use the umbrella term questionable research practices (QRPs) to refer to this set of practices, although it is important to acknowledge that these practices range from those that are widely engaged in and simply reflect an unintentional failure to present potentially disconfirming evidence, to those that are unambiguously problematic because they result in the presentation of results that are mathematically impossible. Some of the practices that we describe here have been discussed elsewhere (e.g. Cortina et al., 2017; Green et al., 2016), but others have not been previously discussed, and our own reading of the CFA-based literature in management journals suggests that these practices are remarkably widespread. Indeed,...