Abstract

The question of equivalence between two or more groups is frequently of interest to applied researchers. Equivalence testing is a statistical method designed to provide evidence that groups are comparable by demonstrating that the mean differences found between groups are small enough to be considered practically unimportant. Few recommendations exist regarding the appropriate use of these tests under varying data conditions. A simulation study was conducted to examine the power and Type I error rates of the confidence interval approach to equivalence testing under conditions of equal and non-equal sample sizes and variability when comparing two and three groups. Equivalence testing was found to perform best when sample sizes are equal. The overall power of the test is strongly influenced by the size of the sample, the amount of variability in the sample, and the size of the difference in the population. Guidelines are provided regarding the use of equivalence tests.
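The confidence interval approach described in the abstract can be illustrated with a minimal sketch: build a (1 − 2α) confidence interval for the mean difference between two groups and declare equivalence only if the whole interval falls inside a pre-specified margin of practical unimportance. This is an assumed illustration, not the authors' simulation code; for simplicity it uses a normal (z) approximation via the standard library rather than a t distribution, and the function name and margin are hypothetical.

```python
from statistics import NormalDist, mean, stdev
import math

def equivalence_ci(x, y, margin, alpha=0.05):
    """Confidence interval approach to equivalence testing (two groups).

    Builds a (1 - 2*alpha) CI for the difference in group means using a
    normal approximation (z rather than t, for brevity). The groups are
    declared equivalent if the entire interval lies within (-margin, margin).
    """
    nx, ny = len(x), len(y)
    diff = mean(x) - mean(y)
    # Welch-style standard error: no assumption of equal variances
    se = math.sqrt(stdev(x) ** 2 / nx + stdev(y) ** 2 / ny)
    z = NormalDist().inv_cdf(1 - alpha)  # 90% CI when alpha = 0.05
    lo, hi = diff - z * se, diff + z * se
    equivalent = (-margin < lo) and (hi < margin)
    return lo, hi, equivalent
```

Note that the interval is (1 − 2α), not (1 − α): this makes the CI decision rule operate at level α, matching the two one-sided tests (TOST) formulation of equivalence testing.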

Details

Title
Impact of Sample Size and Variability on the Power and Type I Error Rates of Equivalence Tests: A Simulation Study
Author
Rusticus, Shayna A; Lovato, Chris Y
First page
11
Publication year
2014
Publication date
2014
Publisher
Practical Assessment, Research and Evaluation, Inc.
e-ISSN
1531-7714
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2366794162
Copyright
© 2014. Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the associated terms available at https://scholarworks.umass.edu/pare/policies.html