J Bus Psychol (2012) 27:99–114 DOI 10.1007/s10869-011-9231-8
Detecting and Deterring Insufficient Effort Responding to Surveys
Jason L. Huang · Paul G. Curran · Jessica Keeney · Elizabeth M. Poposki · Richard P. DeShon
Published online: 31 May 2011
© Springer Science+Business Media, LLC 2011
Abstract
Purpose Responses provided by unmotivated survey participants in a careless, haphazard, or random fashion can threaten the quality of data in psychological and organizational research. The purpose of this study was to summarize existing approaches to detect insufficient effort responding (IER) to low-stakes surveys and to comprehensively evaluate these approaches.
Design/Methodology/Approach In an experiment (Study 1) and a nonexperimental survey (Study 2), 725 undergraduates responded to a personality survey online.
Findings Study 1 examined the presentation of warnings to respondents as a means of deterrence and showed the relative effectiveness of four indices for detecting IE responses: response time, long string, psychometric antonyms, and individual reliability coefficients. Study 2 demonstrated that the detection indices measured the same underlying construct and showed the improvement of psychometric properties (item interrelatedness, facet dimensionality, and factor structure) after removing IE respondents identified by each index. Three approaches (response time, psychometric antonyms, and individual reliability) with high specificity and moderate sensitivity were recommended as candidates for future application in survey research.
Implications The identification of effective IER indices may help researchers ensure the quality of their low-stakes survey data.
Originality/value This study is a first attempt to comprehensively evaluate IER detection methods using both experimental and nonexperimental designs. Results from both studies corroborated each other in suggesting the three more effective approaches. This study also provided convergent validity evidence regarding various indices for IER.
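To make the screening indices named in the abstract concrete, the following is a minimal sketch of two of them, the long-string index and the psychometric antonyms score. This is illustrative Python/NumPy, not the authors' code; the function names, the n_pairs cutoff, and the assumption that Likert responses sit in a respondents-by-items array are all assumptions of the sketch.

    import numpy as np

    def long_string_index(responses):
        """Longest run of identical consecutive answers per respondent.

        responses: (n_respondents, n_items) array of Likert answers.
        Unusually long runs suggest straight-line (IE) responding.
        """
        runs = np.ones(responses.shape[0], dtype=int)
        for r, row in enumerate(responses):
            best = current = 1
            for prev, cur in zip(row[:-1], row[1:]):
                current = current + 1 if cur == prev else 1
                best = max(best, current)
            runs[r] = best
        return runs

    def psychometric_antonyms(responses, n_pairs=30):
        """Within-person correlation over the most negatively
        correlated item pairs in the sample. Attentive respondents
        score strongly negative; IE respondents drift toward zero.
        """
        corr = np.corrcoef(responses, rowvar=False)   # item-by-item correlations
        iu = np.triu_indices(corr.shape[0], k=1)
        order = np.argsort(corr[iu])                  # most negative pairs first
        pairs = [(iu[0][k], iu[1][k]) for k in order[:n_pairs]]
        left = responses[:, [i for i, _ in pairs]].astype(float)
        right = responses[:, [j for _, j in pairs]].astype(float)
        lc = left - left.mean(axis=1, keepdims=True)  # center within person
        rc = right - right.mean(axis=1, keepdims=True)
        num = (lc * rc).sum(axis=1)
        den = np.sqrt((lc ** 2).sum(axis=1) * (rc ** 2).sum(axis=1))
        with np.errstate(invalid="ignore", divide="ignore"):
            return num / den                          # nan if a half is constant

Under this sketch, a respondent might be flagged when the long-string value exceeds a scale-specific cutoff or when the antonyms score sits near zero rather than clearly negative; the cutoffs themselves (like the choice of 30 pairs) are analysis decisions, not values prescribed by the study.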
Keywords Careless responding · Random responding · Inconsistent responding · Online surveys · Data screening
Datasets in social science research are prone to contain numerous errors representing inaccurate responses, inaccurate coding, or inaccurate computation. It is widely recommended that researchers screen the data to identify and correct these inaccurate observations before their use in modeling and hypothesis testing procedures (Babbie 2001; Hartwig and Dearing 1979; Kline 2009; Smith et al. 1986; Tukey 1977; Wilkinson and the Task Force on Statistical Inference 1999). Fortunately, a number of techniques exist to facilitate the treatment of simple data errors,
An earlier version of this manuscript was presented at the Annual Conference of Academy of Management,...