
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.

Abstract

This study presents a novel approach to classifying lower limb disorders of the knee, hip, and ankle. The method combines gait analysis with PoseNet features extracted from video data to identify and categorize these disorders. PoseNet extracts key body joint positions and movements from video in a non-invasive, user-friendly manner, providing a comprehensive representation of lower limb movement. The extracted features are standardized and used as inputs to several machine learning algorithms, including Random Forest, Extra Tree Classifier, Multilayer Perceptron, Artificial Neural Networks, and Convolutional Neural Networks. The models are trained and tested on a dataset of 174 real patients and healthy individuals collected at the Tehsil Headquarter Hospital, Sadiq Abad, and their performance is evaluated using K-fold cross-validation. The results show high accuracy and precision in classifying the various lower limb disorders, with the Artificial Neural Networks model achieving the highest accuracy of 98.84%. The proposed methodology shows promise for improving the diagnosis and treatment planning of lower limb disorders, offering a non-invasive and efficient way to analyze gait patterns and identify specific conditions.
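The pipeline described above (standardized pose features, a classifier, and K-fold cross-validation) can be sketched in scikit-learn. This is an illustrative example with synthetic stand-in data, not the authors' actual code or dataset; the feature dimensions and class labels are assumptions for demonstration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for PoseNet keypoint features: each row is one gait
# sample (e.g. flattened x/y coordinates of 17 keypoints), each label one
# of four hypothetical classes (normal, knee, hip, ankle disorder).
rng = np.random.default_rng(0)
X = rng.normal(size=(174, 34))      # 174 samples, 17 keypoints x (x, y)
y = rng.integers(0, 4, size=174)    # 4 class labels

# Standardize the features, then classify; evaluate with K-fold
# cross-validation as the study does (here K = 5).
model = make_pipeline(StandardScaler(),
                      RandomForestClassifier(random_state=0))
scores = cross_val_score(model, X, y, cv=5)
print(round(scores.mean(), 3))
```

With real, labeled gait data in place of the random arrays, the same pattern would apply to any of the classifiers the study compares; only the estimator inside the pipeline changes.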

Details

Title
Empowering Lower Limb Disorder Identification through PoseNet and Artificial Intelligence
Author
Hafeez Ur Rehman Siddiqui 1; Adil Ali Saleem 1; Muhammad Amjad Raza 1; Santos Gracia Villar 2; Luis Alonso Dzul Lopez 3; Isabel de la Torre Diez 4; Furqan Rustam 5; Sandra Dudley 6

1 Institute of Computer Science, Khwaja Fareed University of Engineering and Information Technology, Abu Dhabi Road, Rahim Yar Khan 64200, Punjab, Pakistan; [email protected] (H.U.R.S.); [email protected] (A.A.S.); [email protected] (M.A.R.)
2 Universidad Europea del Atlántico, Isabel Torres 21, 39011 Santander, Spain; [email protected] (S.G.V.); [email protected] (L.A.D.L.); Universidad Internacional Iberoamericana, Campeche 24560, Mexico; Department of Extension, Universidade Internacional do Cuanza, Cuito EN250, Bié, Angola
3 Universidad Europea del Atlántico, Isabel Torres 21, 39011 Santander, Spain; Universidad Internacional Iberoamericana, Campeche 24560, Mexico; Department of Project Management, Universidad Internacional Iberoamericana, Arecibo, PR 00613, USA
4 Department of Signal Theory and Communications and Telematic Engineering, University of Valladolid, Paseo de Belén 15, 47011 Valladolid, Spain
5 School of Computer Science, University College Dublin, D04 V1W8 Dublin, Ireland
6 Bioengineering Research Centre, School of Engineering, London South Bank University, 103 Borough Road, London SE1 0AA, UK; [email protected]
First page
2881
Publication year
2023
Publication date
2023
Publisher
MDPI AG
e-ISSN
2075-4418
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2869304764