
Abstract

Although deep neural networks achieve high accuracy on visual recognition tasks, they contain millions of weights and therefore require substantial storage. In this dissertation, we focus on compressing different types of deep neural networks in different situations. First, we present a novel deep compression method, Octave Deep Compression (ODC), which compresses Octave Convolutional Networks by performing pruning and quantization in parallel on the different frequency components. Second, we propose a novel unstructured pruning pipeline, Attention-based Simultaneous sparse structure and Weight Learning (ASWL), in which an efficient algorithm computes layer-wise pruning ratios from attention values, and the weights of both the dense and the sparse networks are tracked so that the pruned structure is learned simultaneously from randomly initialized weights. Third, we focus on compressing and accelerating deep GCN models with residual connections through structured pruning, presenting AgileGCN: in each residual structure of a deep GCN, channel sampling and channel padding are applied to the input and output channels of a convolutional layer, respectively, significantly reducing its floating-point operations (FLOPs) and parameter count. Fourth, motivated by recent work showing that pruned networks can also serve as pre-trained models in transfer learning, we propose a novel framework, Transferring Lottery Ticket (TLT), which dynamically adapts both the masks and the weights of a pre-trained, pruned network during knowledge transfer to downstream tasks. Finally, we propose MAGNET, a novel modality-agnostic network for 3D medical image segmentation, designed for the common clinical situation in which multiple modalities/sequences are available during model training but fewer are available, or used, at inference time.
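The ODC idea can be pictured with a small sketch. The following is a minimal, hypothetical illustration of pruning and quantizing the high- and low-frequency weight groups of an octave convolution in parallel; the group split, sparsity levels, and bit widths are assumptions for illustration, not the dissertation's actual settings.

```python
import torch

def prune_and_quantize(w, sparsity, n_bits):
    """Magnitude-prune a weight tensor, then uniformly quantize the survivors."""
    n = w.numel()
    k = max(int(n * (1.0 - sparsity)), 1)            # weights to keep
    if k < n:
        thresh = w.abs().flatten().kthvalue(n - k).values
        w = w * (w.abs() > thresh).float()           # pruning
    scale = w.abs().max() / (2 ** (n_bits - 1) - 1)  # symmetric uniform quantization
    return torch.round(w / scale) * scale

# Octave convolutions split features into high- and low-frequency groups;
# here each group is compressed in parallel with its own rate.
w_high = torch.randn(48, 48, 3, 3)   # high-frequency branch weights
w_low  = torch.randn(16, 16, 3, 3)   # low-frequency branch weights
wq_high = prune_and_quantize(w_high, sparsity=0.5, n_bits=8)
wq_low  = prune_and_quantize(w_low,  sparsity=0.8, n_bits=4)
```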
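For ASWL, the core mechanics are an attention-to-ratio mapping and the parallel tracking of dense and sparse weights. The sketch below is one plausible reading of that pipeline; the normalization rule and the magnitude-based masking are assumptions, not ASWL's published algorithm.

```python
import torch

def layerwise_pruning_ratios(attentions, target_sparsity=0.8):
    """Turn per-layer attention scores into per-layer pruning ratios whose
    average keep rate matches the global (1 - target_sparsity)."""
    attn = torch.tensor(attentions, dtype=torch.float32)
    keep = (attn / attn.sum()) * (1.0 - target_sparsity) * len(attn)
    return (1.0 - keep.clamp(max=1.0)).tolist()

def sparse_from_dense(dense_weights, ratios):
    """Track both networks: the dense weights stay as-is, and a sparse copy
    is re-derived from them by per-layer magnitude masking."""
    sparse = []
    for w, r in zip(dense_weights, ratios):
        n = w.numel()
        k = max(int(n * (1.0 - r)), 1)
        thresh = w.abs().flatten().kthvalue(n - k).values if k < n else w.abs().min() - 1
        sparse.append(w * (w.abs() > thresh).float())
    return sparse

# Layers with higher attention are pruned less aggressively.
ratios = layerwise_pruning_ratios([0.9, 0.5, 0.1], target_sparsity=0.8)
```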
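The channel sampling and padding idea in AgileGCN can be illustrated on a single residual transform over node features. In this hedged sketch the transform is a plain linear layer and the channel subsets are fixed index lists; both are simplifying assumptions, not AgileGCN's actual layer.

```python
import torch
import torch.nn as nn

class SampledResidualLayer(nn.Module):
    """One residual block over node features x of shape (num_nodes, channels)."""
    def __init__(self, channels, keep_in, keep_out):
        super().__init__()
        self.keep_in = keep_in      # sampled input channels fed to the transform
        self.keep_out = keep_out    # output slots the thin transform fills
        # Parameters/FLOPs shrink roughly by (len(keep_in)/C) * (len(keep_out)/C).
        self.lin = nn.Linear(len(keep_in), len(keep_out))

    def forward(self, x):
        y = torch.relu(self.lin(x[:, self.keep_in]))  # channel sampling on the input
        out = torch.zeros_like(x)                     # zero padding on the output
        out[:, self.keep_out] = y
        return x + out                                # residual add is unaffected

layer = SampledResidualLayer(64, keep_in=list(range(0, 64, 2)),
                             keep_out=list(range(0, 64, 4)))
h = layer(torch.randn(100, 64))
```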
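TLT's premise, adapting both masks and weights during transfer, can be sketched as a fine-tuning loop that refreshes each layer's mask after every weight update. The top-k-by-magnitude re-masking rule here is an assumption; it stands in for whatever criterion TLT actually uses.

```python
import torch

@torch.no_grad()
def refresh_mask(weight, sparsity):
    """Recompute a binary mask keeping the largest-magnitude weights."""
    n = weight.numel()
    k = max(int(n * (1.0 - sparsity)), 1)
    if k >= n:
        return torch.ones_like(weight)
    thresh = weight.abs().flatten().kthvalue(n - k).values
    return (weight.abs() > thresh).float()

def transfer_step(model, masks, batch, loss_fn, opt, sparsity=0.9):
    """One downstream fine-tuning step: update the weights, then adapt the
    masks so the sparse structure can migrate toward the new task."""
    x, y = batch
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                masks[name] = refresh_mask(p, sparsity)
                p.mul_(masks[name])   # keep the network pruned after the update
    return loss.item()
```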
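Finally, the train-with-many, test-with-fewer setting that MAGNET targets is often handled by randomly dropping input modalities during training; the sketch below shows that generic trick only, not MAGNET's specific architecture.

```python
import torch

def random_modality_dropout(x, p_drop=0.5):
    """x: (batch, modalities, D, H, W). Zero out a random subset of modalities
    per sample so the model learns to segment from whatever is available."""
    b, m = x.shape[0], x.shape[1]
    keep = torch.rand(b, m, device=x.device) >= p_drop
    empty = ~keep.any(dim=1)          # never drop every modality of a sample
    keep[empty, torch.randint(m, (int(empty.sum()),), device=x.device)] = True
    return x * keep.view(b, m, 1, 1, 1).to(x.dtype)

volumes = torch.randn(2, 4, 16, 64, 64)  # e.g. four MRI sequences per patient
augmented = random_modality_dropout(volumes)
```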

Details

Title
On Compressing Deep Neural Networks
Number of pages
138
Publication year
2025
Degree date
2025
School code
0254
Source
DAI-B 87/4(E), Dissertation Abstracts International
ISBN
9798297602762
Committee member
Zhu, Dongxiao; Kotov, Alexander; Chinnam, Ratna
University/institution
Wayne State University
Department
Computer Science
University location
United States -- Michigan
Degree
Ph.D.
Source type
Dissertation or Thesis
Language
English
Document type
Dissertation/Thesis
Dissertation/thesis number
31938650
ProQuest document ID
3257330958
Document URL
https://www.proquest.com/dissertations-theses/on-compressing-deep-neural-networks/docview/3257330958/se-2?accountid=208611
Copyright
Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.
Database
ProQuest One Academic