
Abstract

Bi-level optimization (BO) has become a fundamental mathematical framework for addressing hierarchical machine learning problems. As deep learning models continue to grow in size, the demand for scalable bi-level optimization solutions has become increasingly critical. Traditional gradient-based bi-level optimization algorithms, owing to their inherent memory and approximation bottlenecks, are ill-suited to the demands of large-scale applications. In this paper, we introduce \(\textbf{F}\)orward \(\textbf{G}\)radient \(\textbf{U}\)nrolling with \(\textbf{F}\)orward \(\textbf{G}\)radient, abbreviated as \((\textbf{FG})^2\textbf{U}\), which achieves an unbiased stochastic approximation of the meta-gradient for bi-level optimization. \((\text{FG})^2\text{U}\) circumvents the memory and approximation issues associated with classical bi-level optimization approaches, and delivers significantly more accurate gradient estimates than existing large-scale bi-level optimization approaches. Additionally, \((\text{FG})^2\text{U}\) is inherently designed to support parallel computing, enabling it to effectively leverage large-scale distributed computing systems to achieve significant computational efficiency. In practice, \((\text{FG})^2\text{U}\) and other methods can be strategically placed at different stages of the training process to form a more cost-effective two-phase paradigm. Furthermore, \((\text{FG})^2\text{U}\) is easy to implement within popular deep learning frameworks and can be conveniently adapted to address more challenging zeroth-order bi-level optimization scenarios. We provide a thorough convergence analysis and a comprehensive practical discussion for \((\text{FG})^2\text{U}\), complemented by extensive empirical evaluations, showcasing its superior performance in diverse large-scale bi-level optimization tasks. Code is available at https://github.com/ShenQianli/FG2U.
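
To make the abstract's idea concrete, below is a minimal sketch (not the authors' implementation) of combining gradient unrolling with a forward-gradient estimator in JAX: a random direction is pushed through the unrolled inner optimization with a single forward-mode JVP, and the resulting directional derivative is projected back onto that direction to obtain an unbiased estimate of the meta-gradient. The function names (`unrolled_outer_loss`, `fg2u_meta_grad`) and the toy inner/outer objectives are hypothetical placeholders.

```python
import jax
import jax.numpy as jnp

def unrolled_outer_loss(lmbda, w0, inner_steps=10, inner_lr=0.1):
    # Toy stand-ins for the task-specific inner and outer objectives (assumption).
    def inner_loss(w, lmbda):
        return jnp.sum((w - lmbda) ** 2)

    def outer_loss(w):
        return jnp.sum(w ** 2)

    # Unrolled inner gradient descent on w, parameterized by the meta-variable lmbda.
    w = w0
    for _ in range(inner_steps):
        w = w - inner_lr * jax.grad(inner_loss)(w, lmbda)
    return outer_loss(w)

def fg2u_meta_grad(lmbda, w0, key, num_dirs=4):
    """Forward-gradient estimate of the meta-gradient.

    Averages (dF/dlmbda . v) * v over random Gaussian directions v; since
    E[v v^T] = I, the estimator is unbiased for the true meta-gradient.
    Each directional derivative is one forward-mode JVP through the
    unrolled inner loop, so no reverse-mode unrolling graph is stored.
    """
    def single_estimate(k):
        v = jax.random.normal(k, lmbda.shape)  # random tangent direction
        _, dF_dv = jax.jvp(lambda l: unrolled_outer_loss(l, w0), (lmbda,), (v,))
        return dF_dv * v

    keys = jax.random.split(key, num_dirs)
    # Directions are independent, so this vmap could equally be sharded across devices.
    return jnp.mean(jax.vmap(single_estimate)(keys), axis=0)

# Usage: estimate the meta-gradient at a toy meta-parameter.
key = jax.random.PRNGKey(0)
lmbda = jnp.ones(5)
w0 = jnp.zeros(5)
print(fg2u_meta_grad(lmbda, w0, key))
```

Because each random direction yields an independent JVP, the estimates can be evaluated in parallel, which is the property the abstract highlights for distributed settings.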

Details

Title
Memory-Efficient Gradient Unrolling for Large-Scale Bi-level Optimization
Publication title
arXiv.org; Ithaca
Publication year
2024
Publication date
Dec 24, 2024
Section
Computer Science
Publisher
Cornell University Library, arXiv.org
Source
arXiv.org
Place of publication
Ithaca
Country of publication
United States
University/institution
Cornell University Library arXiv.org
e-ISSN
2331-8422
Source type
Working Paper
Language of publication
English
Document type
Working Paper
Publication history
Online publication date
2024-12-25
Milestone dates
2024-06-20 (Submission v1); 2024-12-24 (Submission v2)
First posting date
25 Dec 2024
ProQuest document ID
3070859081
Document URL
https://www.proquest.com/working-papers/memory-efficient-gradient-unrolling-large-scale/docview/3070859081/se-2?accountid=208611
Copyright
© 2024. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Last updated
2024-12-26
Database
ProQuest One Academic