Purpose: The Stuttering Severity Instrument (SSI) is a tool used to measure the severity of stuttering. Previous versions of the instrument have known limitations (e.g., Lewis, 1995). The present study examined the intra- and interjudge reliability of the newest version, the Stuttering Severity Instrument-Fourth Edition (SSI-4; Riley, 2009).
Method: Twelve judges who were trained on the SSI-4 protocol participated. Judges collected SSI-4 data while viewing 4 videos of adults who stutter at Time 1 and, 4 weeks later, at Time 2. Data were analyzed for intra- and interjudge reliability of the SSI-4 subscores (for Frequency, Duration, and Physical Concomitants), total score, and final severity rating.
Results: Intra- and interjudge reliability across the subscores and total score concurred with the manual's reported reliability when reliability was calculated using the methods described in the manual. New calculations of judge agreement produced values that differed from those in the manual for the 3 subscores, total score, and final severity rating, and provided data absent from the manual.
Conclusions: Clinicians and researchers who use the SSI-4 should carefully consider the limitations of the instrument. Investigating the multitasking demands of the instrument may indicate whether separating the collection of data for specific variables would improve intra- and interjudge reliability for those variables.
The reliability of any given measurement tool is critical, regardless of whether the instrument is being used for clinical or research purposes. In its broadest sense, reliability refers to the consistency, or agreement, of scores. With respect to measuring stuttering variables, an important aspect of reliability is whether the resulting scores are affected by differences among the judges (clinicians, scorers, or observers) who observe and record various aspects of stuttered speech (percentage of syllables stuttered [%SS], duration of stuttering, etc.). Identifying whether a judge can consistently measure such aspects over time (do judges agree with themselves from Time 1 to Time 2?), called intrajudge reliability, and ascertaining whether two different judges return the same outcome or score on a measure, called interjudge reliability, are two ways in which the reliability of an instrument is determined. This article will review various types of reliability discussed in the literature and examine the intra- and interjudge reliability of the Stuttering Severity Instrument-Fourth Edition (SSI-4; Riley, 2009), a commonly used test in stuttering...
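To make the distinction concrete, the brief Python sketch below computes a correlation-based consistency estimate and a simple percent-agreement figure for hypothetical judge scores. The judges, scores, and the ±1-point agreement window are invented for illustration only; they are not the calculation methods prescribed by the SSI-4 manual or used in the study reported here.

```python
# Minimal, hypothetical sketch of intra- and interjudge reliability calculations.
# Judge names, scores, and the 1-point agreement window are illustrative only.

from statistics import mean

def pearson_r(x, y):
    """Pearson correlation: consistency of two sets of paired scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

def percent_agreement(x, y, tolerance=0):
    """Share of paired scores that agree within a chosen tolerance."""
    hits = sum(abs(a - b) <= tolerance for a, b in zip(x, y))
    return 100 * hits / len(x)

# Hypothetical total scores for 4 video samples.
judge_a_time1 = [27, 18, 32, 24]   # Judge A, Time 1
judge_a_time2 = [26, 19, 31, 24]   # Judge A, Time 2 (4 weeks later)
judge_b_time1 = [29, 17, 33, 22]   # Judge B, Time 1

# Intrajudge reliability: does Judge A agree with herself across time?
print("Intrajudge r:", round(pearson_r(judge_a_time1, judge_a_time2), 3))
print("Intrajudge agreement (within 1 point):",
      percent_agreement(judge_a_time1, judge_a_time2, tolerance=1), "%")

# Interjudge reliability: do Judges A and B agree at Time 1?
print("Interjudge r:", round(pearson_r(judge_a_time1, judge_b_time1), 3))
print("Interjudge agreement (within 1 point):",
      percent_agreement(judge_a_time1, judge_b_time1, tolerance=1), "%")
```

Note that a correlation can be high even when two judges differ systematically in their absolute scores, which is why agreement-based indices are reported alongside correlations in the sketch above.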