Abstract
Common indicator-based approaches to identifying careless and insufficient effort responding (C/IER) in survey data scan response vectors or timing data for aberrances, such as patterns signaling straightlining, multivariate outliers, or signals that respondents rushed through the administered items. Each of these approaches is susceptible to unique types of misidentifications. We developed a C/IER indicator that requires agreement on C/IER identification from multiple behavioral sources, thereby alleviating the effect of each source's standalone C/IER misidentifications and increasing the robustness of C/IER identification. To this end, we combined a response-pattern-based multiple-hurdle approach with a recently developed screen-time-based mixture decomposition approach. In an application of the proposed multiple-source indicator to PISA 2022 field trial data, we (a) showcase how the indicator hedges against (presumed) C/IER overidentification by its constituent components, (b) replicate associations with commonly reported external correlates of C/IER, namely agreement with self-reported effort and C/IER position effects, and (c) employ the indicator to study the effects of changes in scale characteristics on C/IER occurrence. To this end, we leverage a large-scale survey experiment implemented in the PISA 2022 field trial and investigate the effects of using frequency instead of agreement scales as well as approximate instead of abstract frequency scale labels. We conclude that neither scale format manipulation has the potential to curb C/IER occurrence.
Details
; Buchholz, Janine 2; Shin, Hyo Jeong 3; Bertling, Jonas 4; Lüdtke, Oliver 5
1 University of Oslo, Centre for Educational Measurement, Oslo, Norway (GRID:grid.5510.1) (ISNI:0000 0004 1936 8921); University of Oslo, Centre for Research on Equality in Education, Oslo, Norway (GRID:grid.5510.1) (ISNI:0000 0004 1936 8921); IPN-Leibniz Institute for Science and Mathematics Education, Kiel, Germany (GRID:grid.461789.5)
2 Institute for Educational Quality Improvement (IQB), Berlin, Germany (GRID:grid.461789.5) (ISNI:0000 0001 0279 2505)
3 Sogang University, Seoul, South Korea (GRID:grid.263736.5) (ISNI:0000 0001 0286 5954)
4 Educational Testing Service (ETS), Princeton, USA (GRID:grid.286674.9) (ISNI:0000 0004 1936 9051)
5 IPN-Leibniz Institute for Science and Mathematics Education, Kiel, Germany (GRID:grid.461789.5); Centre for International Student Assessment (ZIB), Munich, Germany (GRID:grid.6936.a) (ISNI:0000000123222966)