Full text

Abstract

Artificial intelligence has grown rapidly across many fields of knowledge, especially in computer science. Distributed computing has become essential for storing and processing the large volumes of data required to train artificial intelligence models and the algorithms that extract knowledge from that data. Cloud providers currently offer products for distributed training, such as NVIDIA Deep Learning Solutions, Amazon SageMaker, Microsoft Azure, and Google Cloud AI Platform, priced to match the needs of users who require high processing performance for their artificial intelligence tasks. This study highlights the relevance of distributed computing for image processing and classification tasks using a low-scalability distributed system built from devices considered obsolete. To this end, two of the most widely used libraries for distributed training of deep learning models, PyTorch's Distributed Data Parallel and Distributed TensorFlow, were implemented and evaluated using the ResNet50 model as the basis for image classification, and their performance was compared with modern environments such as Google Colab and a recent workstation. The results demonstrate that comprehensive artificial intelligence tasks can still be carried out on low-scalability, outdated distributed systems, reducing investment time and costs. With the results and experiments presented in this study, we aim to promote technological sustainability through device recycling and thereby facilitate access to high-performance computing in key areas such as research, industry, and education.
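
To make the setup concrete, the following is a minimal sketch (not the authors' code) of distributed data-parallel training of ResNet50 with PyTorch's DistributedDataParallel, the first of the two libraries the study evaluates. The dataset path, hyperparameters, and launch command are illustrative assumptions, and the gloo backend is chosen so the sketch also runs on CPU-only nodes such as the older machines described in the abstract.

# ddp_resnet50.py -- minimal DistributedDataParallel sketch for ResNet50.
# Illustrative launch on each of two nodes (addresses and ranks are assumptions):
#   torchrun --nnodes=2 --nproc_per_node=1 --node_rank=<0|1> \
#            --master_addr=<head-node-ip> --master_port=29500 ddp_resnet50.py
import torch
import torch.distributed as dist
import torchvision
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

def main():
    # torchrun provides RANK, WORLD_SIZE, and MASTER_ADDR via the environment;
    # gloo works on CPU-only machines (use "nccl" when every node has a GPU).
    dist.init_process_group(backend="gloo")
    rank = dist.get_rank()
    device = torch.device("cpu")

    transform = torchvision.transforms.Compose([
        torchvision.transforms.Resize((224, 224)),
        torchvision.transforms.ToTensor(),
    ])
    # "data/train" is a placeholder ImageFolder-style dataset path.
    dataset = torchvision.datasets.ImageFolder("data/train", transform=transform)
    # DistributedSampler shards the dataset so each process trains on a distinct slice.
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=32, sampler=sampler, num_workers=2)

    model = torchvision.models.resnet50(num_classes=len(dataset.classes)).to(device)
    # DDP averages gradients across all processes during backward().
    model = DDP(model)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    criterion = torch.nn.CrossEntropyLoss()

    for epoch in range(5):
        sampler.set_epoch(epoch)  # reshuffle the shards each epoch
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        if rank == 0:
            print(f"epoch {epoch}: last batch loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

The same experiment can be expressed with Distributed TensorFlow's MultiWorkerMirroredStrategy; the data-parallel pattern (shard the data, replicate the model, all-reduce the gradients) is what both libraries implement.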

Details

Title
Low-Scalability Distributed Systems for Artificial Intelligence: A Comparative Study of Distributed Deep Learning Frameworks for Image Classification
Author
Rivera-Escobedo, Manuel 1; López-Martínez, Manuel de Jesús 1; Solis-Sánchez, Luis Octavio 2; Guerrero-Osuna, Héctor Alonso 3; Vázquez-Reyes, Sodel 3; Acosta-Escareño, Daniel 3; Olvera-Olvera, Carlos A. 1

1 Laboratorio de Invenciones Aplicadas a la Industria (LIAI), Unidad Académica de Ingeniería Eléctrica, Universidad Autónoma de Zacatecas, Zacatecas 98000, Mexico; [email protected]
2 Laboratorio de Sistemas Inteligentes de Visión Artificial, Posgrado en Ingeniería y Tecnología Aplicada, Universidad Autónoma de Zacatecas, Zacatecas 98000, Mexico; [email protected]
3 Unidad Académica de Ingeniería Eléctrica, Universidad Autónoma de Zacatecas, Zacatecas 98000, Mexico; [email protected] (H.A.G.-O.); [email protected] (S.V.-R.); [email protected] (D.A.-E.)
First page
6251
Publication year
2025
Publication date
2025
Publisher
MDPI AG
e-ISSN
2076-3417
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3217723248
Copyright
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.