We have thoroughly reviewed the article titled “Analysis of the Use of Sample Size and Effect Size Calculations in a Temporomandibular Disorders Randomised Controlled Trial—Short Narrative Review” by Zieliński G and Gawda P, recently published in the Journal of Personalized Medicine (2024; 14: 655) [1]. We commend the authors on their insightful review and would like to contribute further to the discussion.
The authors highlight a significant trend in dental research publications and emphasize the critical importance of enhancing research quality. They underscore the pivotal role of statistical analysis, particularly in sample size calculation and effect size estimation, which are crucial for determining both statistical and clinical significance. Their short narrative review specifically analyzes these aspects in randomized controlled trials of temporomandibular disorders (TMDs).
We are writing to address concerns regarding Table 2 [1], which outlines the findings on the reporting of sample size calculations and effect sizes in the reviewed studies. We would like to draw attention to a specific point regarding the study “Education-Enhanced Conventional Care versus Conventional Care Alone for Temporomandibular Disorders: A Randomized Controlled Trial” by Aguiar et al. (2023) [2], which received a score of 0 for sample size calculation. However, a careful reading of the methods section of that study reveals a detailed description of how the sample size calculation was obtained. Moreover, the protocol for that trial, published in Trials, also described how the sample size was calculated [3]. Hence, we respectfully request a correction of the error concerning the study by Aguiar et al. [2].
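For readers less familiar with a priori sample size calculation, a minimal sketch may be useful. The example below uses the generic normal-approximation formula for a two-group comparison of means; it is purely illustrative and is not the specific procedure reported by Aguiar et al. [2,3], whose calculation should be consulted in the original publications.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-group comparison
    of means, using the standard normal-approximation formula:
        n = 2 * ((z_{1 - alpha/2} + z_{power}) / d) ** 2
    where d is the expected standardized effect size (Cohen's d)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Assuming a medium expected effect (d = 0.5), alpha = 0.05, 80% power
print(sample_size_per_group(0.5))  # → 63 per group
```

Note that this approximation slightly underestimates the sample size required for a t-test; dedicated software applies a small-sample correction on top of this formula.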
Additionally, the authors discussed the relationship between effect size estimation, statistical significance, and clinical application, drawing a direct link between effect size and clinically meaningful change. Nevertheless, they did not discuss that Cohen’s effect size benchmarks, while widely used, pose challenges due to their arbitrary thresholds [4]. The terms “small”, “medium”, and “large” are relative, not only to each other but also to the area of behavioral science, and even to the specific field and research method employed in any given investigation [5]. Moreover, alternative metrics exist for assessing the effects of interventions, including correlations between variables, regression coefficients, mean differences, and the risk of specific events occurring [6]. Unfortunately, these approaches were not discussed as valid means of evaluating outcomes in clinical trials.
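To make the arbitrariness of these thresholds concrete, the sketch below computes Cohen’s d from two hypothetical samples and then maps it to the conventional labels. The data and label cut-offs are illustrative only (the cut-offs follow Cohen’s conventions [5]); the point is that the labeling step is a context-free lookup, carrying no information about clinical meaningfulness.

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(a, b):
    """Cohen's d for two independent samples,
    using the pooled sample standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                     / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled_sd

def cohen_label(d):
    """Conventional (and arbitrary) threshold labels for |d|."""
    d = abs(d)
    if d < 0.2:
        return "trivial"
    if d < 0.5:
        return "small"
    if d < 0.8:
        return "medium"
    return "large"

treatment = [1, 2, 3, 4, 5]  # hypothetical post-treatment pain scores
control = [2, 3, 4, 5, 6]    # hypothetical control pain scores
d = cohens_d(treatment, control)
print(round(d, 2), cohen_label(d))  # → -0.63 medium
```

Whether a d of this magnitude matters to patients depends entirely on the outcome, the costs and risks of the intervention, and the clinical context, none of which enter the calculation above.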
To assess the practical significance of a result, it is not enough to know the effect size [7]. Effect magnitude must be interpreted to extract meaning [7]. Effects by themselves are meaningless unless they can be contextualized against some frame of reference [7]. Whether an effect size should be interpreted as small, medium, or large depends on its substantive context and its operational definition [7]. Researchers should interpret the substantive significance of their results by grounding them in a meaningful context or by quantifying their contribution to knowledge, and Cohen’s effect size descriptions can be helpful but only as a starting point [7]. It is important to consider that an effect is influenced by when it occurs, where it occurs, and for whom it occurs [7].
Another important aspect to be considered is that to extract meaning from results, scientists need to look beyond p values and effect sizes and make informed judgments about what they see [7]. No one is better placed to do this than the researcher who collected and analyzed the data [8]. The fact that most published effect sizes go uninterpreted shows that many researchers are either unable or reluctant to take this final step [7]. Most are far more comfortable with the pseudo-objectivity of null hypothesis significance testing than with making subjective yet informed judgments about the meaning of results [7].
It has been highlighted that assessing the magnitude of reported differences in clinical trials also involves considering clinical relevance and patient impact [4]. Determining whether differences are clinically meaningful requires awareness of the current evidence in the research field [9], which serves as a practical guide for evaluating study outcomes [4].
To enhance study quality, clarity in statistical analysis must be coupled with efforts to facilitate the clinical application of research findings [5]. This includes integrating probable effect sizes in a meaningful context, encompassing treatment costs, duration, expectations, and potential risks [4]. An interesting proposal that combines some of those elements to assess meaningful clinical effect is the smallest worthwhile effect which can be defined as the minimum benefit of an intervention that patients consider worthwhile given the costs, risks, and inconveniences [10,11].
We appreciate your attention to these matters and are confident that addressing them will benefit the scientific community.
Thank you for your consideration.
Conceptualization, T.C.C. and J.H.P.S.; writing—original draft preparation, T.C.d.L.; writing—review and editing, A.C.d.J.C., R.B.R.P. and L.R.G. All authors have read and agreed to the published version of the manuscript.
The authors declare no conflicts of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
References
1. Zieliński, G.; Gawda, P. Analysis of the Use of Sample Size and Effect Size Calculations in a Temporomandibular Disorders Randomised Controlled Trial—Short Narrative Review. J. Pers. Med.; 2024; 14, 655. [DOI: https://dx.doi.org/10.3390/jpm14060655]
2. Aguiar, A.D.S.; Moseley, G.L.; Bataglion, C.; Azevedo, B.; Chaves, T.C. Education-Enhanced Conventional Care versus Conventional Care Alone for Temporomandibular Disorders: A Randomized Controlled Trial. J. Pain; 2023; 24, pp. 251-263. [DOI: https://dx.doi.org/10.1016/j.jpain.2022.09.012] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/36220481]
3. Dos Santos Aguiar, A.; Bataglion, C.; Felício, L.R.; Azevedo, B.; Chaves, T.C. Additional effect of pain neuroscience education to craniocervical manual therapy and exercises for pain intensity and disability in temporomandibular disorders: A study protocol for a randomized controlled trial. Trials; 2021; 22, 596. [DOI: https://dx.doi.org/10.1186/s13063-021-05532-x] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34488856]
4. Kamper, S.J. Interpreting Outcomes 3-Clinical Meaningfulness: Linking Evidence to Practice. J. Orthop. Sports Phys. Ther.; 2019; 49, pp. 677-678. [DOI: https://dx.doi.org/10.2519/jospt.2019.0705] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31475627]
5. Cohen, J. Statistical Power Analysis for the Behavioral Sciences; Routledge: New York, NY, USA, 1988; ISBN 978-1-134-74270-7
6. Israel, H.; Richter, R.R. A Guide to Understanding Meta-analysis. J. Orthop. Sports Phys. Ther.; 2011; 41, pp. 496-504. [DOI: https://dx.doi.org/10.2519/jospt.2011.3333] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/21725192]
7. Ellis, P.D. The Essential Guide to Effect Sizes: Statistical Power, Meta-Analysis, and the Interpretation of Research Results; Cambridge University Press: Cambridge, UK, 2010; ISBN 978-0-521-14246-5
8. Kirk, R.E. Promoting good statistical practices: Some suggestions. Educ. Psychol. Meas.; 2001; 61, pp. 213-218. [DOI: https://dx.doi.org/10.1177/00131640121971185]
9. Moher, D.; Schulz, K.F.; Altman, D.G. The CONSORT statement: Revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet; 2001; 357, pp. 1191-1194. [DOI: https://dx.doi.org/10.1016/S0140-6736(00)04337-3] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/11323066]
10. Ferreira, M.L.; Herbert, R.D.; Ferreira, P.H.; Latimer, J.; Ostelo, R.W.; Grotle, M.; Barrett, B. The smallest worthwhile effect of nonsteroidal anti-inflammatory drugs and physiotherapy for chronic low back pain: A benefit-harm trade-off study. J. Clin. Epidemiol.; 2013; 66, pp. 1397-1404. [DOI: https://dx.doi.org/10.1016/j.jclinepi.2013.02.018] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/24021611]
11. Ferreira, M. Research Note: The smallest worthwhile effect of a health intervention. J. Physiother.; 2018; 64, pp. 272-274. [DOI: https://dx.doi.org/10.1016/j.jphys.2018.07.008]
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
1 Graduate Program on Physical Therapy, Department of Physical Therapy, Federal University of São Carlos—UFSCar, São Carlos 13.565-905, SP, Brazil;
2 Department of Physical Therapy, Federal University of São Carlos—UFSCar, Rodovia Washington Luiz, Km 235, São Carlos 13.565-905, SP, Brazil