
Abstract

Artificial intelligence has become part of our daily lives and helps us solve complicated problems. Some of these problems are large and complex, requiring large models, and as models grow in complexity they require more computation and energy to train and test. These models are executed with floating-point arithmetic, whose finite precision imposes constraints: many computations cannot be represented exactly, and the computer is forced to round or approximate. Several number formats offer different trade-offs. Single precision provides 24 binary digits of precision and double precision provides 53, while small formats such as FP8 may offer only 3 or 4 bits of precision. Choosing the right format can drastically reduce the resources needed, and it lets us increase or decrease the precision depending on the model's performance.
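To illustrate the effect these precisions have, the following is a minimal sketch (not taken from the thesis) that rounds the same value to different numbers of significand bits. The helper `round_to_precision` is a hypothetical illustration, and treating 4 bits as an FP8-like precision and 24 bits as single precision are simplifying assumptions.

```cpp
// Minimal sketch: observe the rounding error when one value is stored
// at different significand precisions.
#include <cstdio>
#include <cmath>

// Hypothetical helper: round x to p bits of significand precision by
// scaling so exactly p bits sit left of the binary point, then rounding.
double round_to_precision(double x, int p) {
    if (x == 0.0) return 0.0;
    int e;
    double m = std::frexp(x, &e);        // x = m * 2^e with 0.5 <= |m| < 1
    double scaled = std::ldexp(m, p);    // shift p bits left of the point
    return std::ldexp(std::nearbyint(scaled), e - p);
}

int main() {
    double x = 1.0 / 3.0;                      // not exactly representable in binary
    double fp8  = round_to_precision(x, 4);    // FP8-like: 4 bits of precision
    double fp32 = round_to_precision(x, 24);   // single precision: 24 bits
    std::printf("4-bit  error: %.3e\n", std::fabs(x - fp8));
    std::printf("24-bit error: %.3e\n", std::fabs(x - fp32));
    std::printf("53-bit value: %.17g\n", x);   // double precision reference
    return 0;
}
```

The lower-precision result carries an error several orders of magnitude larger than the single-precision one, which is the trade-off the format choice controls.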

As it propagates through the model, the rounding error is compounded across layers and may affect the model's final prediction. If we can analyze these rounding errors, we can increase or decrease the model's precision to better balance resources and prediction quality; in particular, if we observe almost no error, we can reduce the precision and save time and memory. In this work, we contribute software that uses the PyTorch C++ API to load models and analyze the impact of the rounding error they produce. We tested our software not only on standard feedforward models but also on deep learning models. The software is built on our own tensor implementation, which allows operations to be performed in custom floating-point formats. With this class, we can report the relative error, the absolute error, and an upper and lower bound on where the final answer may lie.
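The following is a minimal sketch of the general idea, not the thesis implementation: it runs a small, hypothetical two-layer feedforward pass twice with the PyTorch C++ API, once in double precision as a reference and once in single precision as a stand-in for a custom low-precision format, and reports the absolute and relative error of the final prediction. The actual tool additionally tracks upper and lower bounds through its custom tensor class, which this sketch does not replicate.

```cpp
// Sketch: compare a reduced-precision forward pass against a
// double-precision reference and report its final error.
#include <torch/torch.h>
#include <cstdio>

int main() {
    torch::manual_seed(0);
    // Hypothetical two-layer feedforward model with random weights.
    auto x  = torch::rand({1, 64},  torch::kFloat64);
    auto w1 = torch::rand({64, 32}, torch::kFloat64);
    auto w2 = torch::rand({32, 10}, torch::kFloat64);

    // Double-precision reference output.
    auto ref = torch::relu(x.matmul(w1)).matmul(w2);

    // Same computation in single precision (stand-in for a custom format).
    auto out = torch::relu(x.to(torch::kFloat32).matmul(w1.to(torch::kFloat32)))
                   .matmul(w2.to(torch::kFloat32))
                   .to(torch::kFloat64);

    // Absolute and relative error of the final prediction.
    auto abs_err = (ref - out).abs().max().item<double>();
    auto rel_err = ((ref - out).abs() / ref.abs().clamp_min(1e-30)).max().item<double>();
    std::printf("max absolute error: %.3e\n", abs_err);
    std::printf("max relative error: %.3e\n", rel_err);
    return 0;
}
```

Comparing these errors layer by layer is what indicates whether the precision can safely be lowered or needs to be raised.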

Details

Title: Analyzing the Impact of Approximate Arithmetic on Deep Neural Network Predictions
Number of pages: 72
Publication year: 2025
Degree date: 2025
School code: 0459
Source: MAI 87/2(E), Masters Abstracts International
ISBN: 9798290993232
Committee members: Ceberio, Martine; Frias, Marcelo; Volkova, Anastasia; Revol, Nathalie
University/institution: The University of Texas at El Paso
Department: Computer Science
University location: United States -- Texas
Degree: M.S.
Source type: Dissertation or Thesis
Language: English
Document type: Dissertation/Thesis
Dissertation/thesis number: 32170517
ProQuest document ID: 3241059194
Document URL: https://www.proquest.com/dissertations-theses/analyzing-impact-approximate-arithmetic-on-deep/docview/3241059194/se-2?accountid=208611
Copyright: Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.
Database: ProQuest One Academic