
Abstract

In this thesis, we propose an acceleration framework for a class of iterative methods using the Reduced Order Method (ROM). Assuming that the underlying iterative scheme generates a rich basis for the solution space, we construct the next iterate by minimizing the equation error over the linear manifold spanned by this basis. The resulting optimal linear combination yields a more accurate approximation of the solution and significantly enhances convergence. The optimality condition is expressed as a fixed-point equation in a Banach space, and the proposed ROM-accelerated scheme minimizes the sum of residuals by updating the solution via optimal linear combinations of prior iterates. In essence, the method can be seen as a history-based acceleration technique, akin to a delayed or memory-enhanced iterative scheme. This approach effectively remedies semi-ill-posed problems, enabling convergence where standard methods may fail, and also acts as a stabilizing and regularizing mechanism for the original iteration.
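
The following Python sketch illustrates the basic idea under simplifying assumptions: a Richardson base iteration for a linear system A x = b, with the next iterate taken as the linear combination of stored iterates that minimizes the equation error. The function name rom_accelerated_richardson, the window m, and the step size are illustrative choices, not the thesis implementation.

    import numpy as np

    def rom_accelerated_richardson(A, b, x0, m=5, n_iter=100, omega=None):
        """ROM-style acceleration of a Richardson iteration for A x = b (sketch)."""
        if omega is None:
            omega = 1.0 / np.linalg.norm(A, 2)        # step size for the base iteration
        x = x0.copy()
        history = [x.copy()]                          # basis of prior iterates
        for _ in range(n_iter):
            x = x + omega * (b - A @ x)               # underlying fixed-point step
            history.append(x.copy())
            X = np.column_stack(history[-m:])         # last m iterates as a basis
            c, *_ = np.linalg.lstsq(A @ X, b, rcond=None)  # min ||A X c - b|| over c
            x = X @ c                                 # optimal linear combination
            history[-1] = x.copy()                    # keep the accelerated iterate
        return x

    # Toy usage on a small symmetric positive definite system.
    rng = np.random.default_rng(0)
    M = rng.standard_normal((50, 50))
    A = M.T @ M + 50.0 * np.eye(50)
    b = rng.standard_normal(50)
    x = rom_accelerated_richardson(A, b, np.zeros(50))
    print(np.linalg.norm(A @ x - b))

Because the current iterate always lies in the span of the stored basis, the minimized residual is non-increasing, which is the stabilizing effect described above.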

Motivated by applications in numerical optimization, variational methods for PDEs, and large-scale AI-driven optimization, we demonstrate the effectiveness of our approach through theoretical analysis and numerical experiments.

Our analysis covers several settings, including variable-step formulations for fixed-point iterations and Newton-like gradient methods. In particular, our approach aligns closely with Anderson acceleration: for linear systems, we interpret Anderson’s method as equivalent to ROM. We focus on variable step-size gradient and quasi-Newton methods, as well as fixed-point schemes enhanced by ROM. These combinations exhibit rapid convergence, especially when the condition number of the system matrix is moderate. The ROM-accelerated variable-step iteration proves to be nearly optimal, further stabilizing and regularizing convergence. Additionally, we extend our method to a randomized, overlapping Kaczmarz-type scheme for large-scale ill-posed linear systems and analyze the convergence behavior of ROM applied to operator equations. We also investigate Anderson acceleration and its relation to ROM for nonlinear least-squares problems of the form min ‖F(x)‖², formulated in function spaces; here we interpret Anderson’s method as an approximation of a nonlinear ROM. In this setting, the solution is approximated as a linear combination of sequential iterates, and a ROM-based Gauss-Newton method is employed to minimize the equation error over the resulting solution manifold. We further explore applications in constrained convex optimization and discuss the role of ROM in improving conditioning.
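
As a minimal sketch of the ROM-based Gauss-Newton idea (not the thesis code), the residual ‖F(Xα)‖² can be minimized over the coefficients α of a stored basis X. The finite-difference Jacobian, the random basis standing in for prior iterates, and the names fd_jacobian and rom_gauss_newton are illustrative assumptions.

    import numpy as np

    def fd_jacobian(F, x, eps=1e-7):
        """Forward-difference Jacobian of F at x (for illustration only)."""
        f0 = F(x)
        J = np.empty((f0.size, x.size))
        for j in range(x.size):
            e = np.zeros_like(x)
            e[j] = eps
            J[:, j] = (F(x + e) - f0) / eps
        return J

    def rom_gauss_newton(F, X, alpha0, n_iter=10):
        """Gauss-Newton on the coefficients alpha of x = X @ alpha, minimizing ||F(X alpha)||^2."""
        alpha = alpha0.copy()
        for _ in range(n_iter):
            x = X @ alpha
            J = fd_jacobian(F, x) @ X                         # chain rule: d F(X alpha) / d alpha
            step, *_ = np.linalg.lstsq(J, -F(x), rcond=None)  # Gauss-Newton least-squares step
            alpha = alpha + step
        return X @ alpha

    # Toy usage: a mildly nonlinear residual; the 4-column basis stands in for stored iterates,
    # so the residual is only minimized over that 4-dimensional span.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((30, 10))
    b = rng.standard_normal(30)
    F = lambda x: A @ x + 0.1 * x**3 - b
    X = rng.standard_normal((10, 4))
    x = rom_gauss_newton(F, X, np.zeros(4))
    print(np.linalg.norm(F(x)))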

We conduct numerical experiments on mathematical models arising in science and engineering. We develop new scenario-based tools and refine them on two applications: a control problem and an indefinite PDE. We develop the Anderson map as a continuation method for finding fixed points of a matrix Riccati equation parameterized by a positive scalar; the Anderson-type map significantly extends the parameter range over which a fixed point of the quadratic equation exists. We also demonstrate variable-step update schemes with the Anderson map as a matrix-free method, accessing the matrix only through matrix-vector products. We pay particular attention to specialized saddle-point problems, specifically linear steady-state Stokes-type problems; relative-residual plots suggest that Anderson-type acceleration drives convergence efficiently. In parallel, we consider neural-network digit-classification optimization problems and demonstrate the applicability of the Anderson method to Deep Neural Networks (DNNs) and Convolutional Neural Networks (CNNs). In a deep neural network, a fully connected layer applies a linear transformation to the input vector through a weight matrix, shifted by a bias vector; a convolutional neural network enriches this architecture with data compression and filtered decomposition. We use backpropagation to optimize the network parameters and maximize prediction accuracy. We develop an Anderson-type acceleration method for mini-batch stochastic gradient descent and substantially improve network performance.
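
A simplified, hypothetical stand-in for these experiments (a toy least-squares model rather than a DNN/CNN; the names sgd_step and anderson_sgd, the batch size, learning rate, and window m are illustrative choices) shows how Anderson mixing can wrap a mini-batch SGD map.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((500, 20))
    b = A @ rng.standard_normal(20) + 0.01 * rng.standard_normal(500)

    def sgd_step(w, lr=0.01, batch=50):
        """One mini-batch SGD step for the least-squares loss ||A w - b||^2 (toy model)."""
        idx = rng.integers(0, A.shape[0], batch)              # random mini-batch
        grad = A[idx].T @ (A[idx] @ w - b[idx]) / batch       # stochastic gradient
        return w - lr * grad

    def anderson_sgd(w0, n_iter=300, m=5):
        """Anderson mixing applied to the SGD map g(w) = sgd_step(w)."""
        w = w0.copy()
        g_hist, f_hist = [], []                               # histories of g(w) and residuals g(w) - w
        for _ in range(n_iter):
            g = sgd_step(w)
            f = g - w
            g_hist.append(g)
            f_hist.append(f)
            if len(f_hist) > m + 1:                           # keep a window of m differences
                g_hist.pop(0)
                f_hist.pop(0)
            if len(f_hist) >= 2:
                dF = np.column_stack([f_hist[i + 1] - f_hist[i] for i in range(len(f_hist) - 1)])
                dG = np.column_stack([g_hist[i + 1] - g_hist[i] for i in range(len(g_hist) - 1)])
                gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
                w = g - dG @ gamma                            # Anderson (type II) mixing update
            else:
                w = g
        return w

    w = anderson_sgd(np.zeros(20))
    print(np.linalg.norm(A.T @ (A @ w - b)))                  # residual of the normal equations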

Details

Title
Reduced Order Method (ROM) and Anderson Acceleration for Iterative Schemes on Least Square Problems and Optimization, With Applications to Neural Networks and Partial Differential Equations (PDEs)
Number of pages
113
Publication year
2025
Degree date
2025
School code
0155
Source
DAI-B 87/4(E), Dissertation Abstracts International
ISBN
9798297633940
University/institution
North Carolina State University
University location
United States -- North Carolina
Degree
Ph.D.
Source type
Dissertation or Thesis
Language
English
Document type
Dissertation/Thesis
Dissertation/thesis number
32331826
ProQuest document ID
3261650756
Document URL
https://www.proquest.com/dissertations-theses/reduced-order-method-rom-anderson-acceleration/docview/3261650756/se-2?accountid=208611
Copyright
Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.
Database
ProQuest One Academic