
Abstract

The carbon output of computing, from edge devices to large data centers, must be dramatically reduced. In this context, the Voltage-Frequency Island (VFI) is a well-established design paradigm for creating scalable and energy-efficient manycore chips (e.g., CPUs). The voltage/frequency (V/F) knobs of the VFIs can be dynamically tuned to reduce energy consumption while maintaining the application’s quality of service (QoS). In the first part of this dissertation, we consider the problem of dynamic power management (DPM) in manycore SoCs and propose novel machine learning (ML)-enabled DPM strategies to improve energy efficiency in von Neumann-based manycore architectures.
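As a rough illustration (not the dissertation's actual method), the sketch below shows the general shape of such an ML-enabled DPM loop in Python: a stand-in for a learned model predicts the QoS attainable at each supported V/F level, and the controller picks the lowest-power level that still meets the target. The V/F levels, the QoS metric, and the toy predictor are all hypothetical.

    # Illustrative sketch only: a hypothetical ML-style DPM loop for a VFI-based
    # manycore. The levels, target, and toy predictor are assumptions.
    VF_LEVELS = [(0.7, 1.0), (0.9, 1.5), (1.1, 2.0)]  # (volts, GHz), ordered low to high power
    QOS_TARGET = 30.0                                  # e.g., frames per second

    def predicted_qos(util, freq_ghz):
        # Stand-in for a learned regressor mapping telemetry + frequency to QoS.
        return 40.0 * freq_ghz * util

    def choose_vf(util):
        """Return the lowest-power (V, f) pair whose predicted QoS meets the target."""
        for volts, ghz in VF_LEVELS:
            if predicted_qos(util, ghz) >= QOS_TARGET:
                return volts, ghz
        return VF_LEVELS[-1]  # fall back to the fastest setting if none suffice

    if __name__ == "__main__":
        for util in (0.5, 0.8, 1.0):
            print(util, "->", choose_vf(util))

In a real controller, predicted_qos would be replaced by a model trained on per-island telemetry, and the decision would be re-evaluated every control epoch.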

Deep Neural Networks (DNNs) and Graph Neural Networks (GNNs) have enabled remarkable advances in real-world applications including natural language processing, healthcare, and molecular chemistry. As the complexity of neural network models continues to grow, their intensive compute and memory requirements pose significant performance and energy-efficiency challenges for traditional von Neumann architectures. Processing-in-Memory (PIM)-based computing platforms have emerged as a promising alternative because they perform computation within the memory itself, reducing data movement and improving energy efficiency. However, communication between PIM-based processing elements (PEs) in a manycore architecture remains a bottleneck. In addition, in-memory computation suffers from device and crossbar non-idealities arising from temperature variation, conductance drift, and related effects. In this dissertation, we address these challenges and propose a thermally efficient, dataflow-aware Network-on-Chip (NoC) design to accelerate DNN inference. We also address the reliability, energy, and performance challenges of DNN training and propose a heterogeneous architecture that combines the benefits of multiple PIM devices in a single platform to enable energy-efficient, high-performance DNN training.
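To make the non-ideality point concrete, the toy model below (again, not taken from the dissertation) contrasts an ideal crossbar matrix-vector product with one whose conductances have drifted and picked up read noise; the drift exponent, noise level, and array size are arbitrary assumptions.

    # Illustrative sketch only: ideal vs. non-ideal analog matrix-vector multiply
    # on a PIM crossbar. Drift exponent and noise level are assumed values.
    import numpy as np

    rng = np.random.default_rng(0)
    weights = rng.uniform(0.1, 1.0, size=(64, 64))   # weights mapped to conductances g_ij
    inputs  = rng.uniform(0.0, 1.0, size=64)         # activations applied as voltages v_i

    ideal = inputs @ weights                         # y_j = sum_i v_i * g_ij

    # Conductance drift (PCM-style power law): g(t) = g0 * (t / t0) ** (-nu)
    nu, t, t0 = 0.05, 1e4, 1.0                       # assumed drift exponent and times (s)
    drifted = weights * (t / t0) ** (-nu)

    # Temperature/read noise modeled as a small multiplicative perturbation (assumption)
    noisy = drifted * (1.0 + rng.normal(0.0, 0.02, size=drifted.shape))

    nonideal = inputs @ noisy
    print("mean relative error:", np.mean(np.abs(nonideal - ideal) / np.abs(ideal)))

The error grows as the conductances drift further from their programmed values.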

Later in this dissertation, we exploit the heterogeneity of the computational kernels underlying deep learning models such as DNNs, GNNs, and transformers to design high-performance, energy-efficient, and reliable heterogeneous PIM-based manycore systems for sustainable deep learning.

Overall, we utilize ML to enable the design and resource management of high-performance, energy-efficient, and reliable computing systems, ranging from von Neumann to heterogeneous PIM-based architectures.

Details

Title
Advances in Machine Learning-Enabled Resource Management in Manycore Systems: From Von Neumann to Heterogeneous Processing-in-Memory Architectures
Number of pages
189
Publication year
2025
Degree date
2025
School code
0251
Source
DAI-B 87/4(E), Dissertation Abstracts International
ISBN
9798297636583
Committee member
Doppa, Janardhan Rao; Bhat, Ganapati
University/institution
Washington State University
Department
School of Electrical Engineering and Computer Science
University location
United States -- Washington
Degree
Ph.D.
Source type
Dissertation or Thesis
Language
English
Document type
Dissertation/Thesis
Dissertation/thesis number
31995619
ProQuest document ID
3261569972
Document URL
https://www.proquest.com/dissertations-theses/advances-machine-learning-enabled-resource/docview/3261569972/se-2?accountid=208611
Copyright
Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.
Database
ProQuest One Academic