1. Introduction
The split feasibility problem (SFP) can be expressed in the following manner:
(1) find x ∈ C such that Ax ∈ Q,
where C ⊆ ℝⁿ is a closed convex set, Q ⊆ ℝᵐ is a non-convex closed set, and A: ℝⁿ → ℝᵐ is a linear mapping. The SFP has been applied to address a diverse array of real-world challenges, including image denoising [1], CT image reconstruction [2], intensity-modulated radiation therapy (IMRT) [3,4,5], and Pareto front navigation in multi-criteria optimization [6]. Additionally, numerous iterative approaches have been proposed for the SFP [7,8,9,10]. The majority of existing techniques are designed for convex sets, but the convexity requirement is often hard to satisfy in practice. Therefore, the primary focus of this paper is the SFP in which the sets involved are not necessarily convex.
The alternating direction method of multipliers (ADMM) [11] is a crucial method for solving separable linearly constrained problems of the following form:
(2) min over (x, y) of f(x) + g(y) subject to Ax + By = b,
where f: ℝⁿ → (−∞, +∞] is a proper lower semi-continuous function and g: ℝᵐ → ℝ is smooth. The augmented Lagrangian function for problem (2) is
L_β(x, y, λ) = f(x) + g(y) − ⟨λ, Ax + By − b⟩ + (β/2)‖Ax + By − b‖²,
where λ is a Lagrangian multiplier and β > 0 is a penalty parameter. The classic iterative format of ADMM for solving problem (2) is as follows:
x^{k+1} = argmin over x of L_β(x, yᵏ, λᵏ),
y^{k+1} = argmin over y of L_β(x^{k+1}, y, λᵏ),
λ^{k+1} = λᵏ − β(Ax^{k+1} + By^{k+1} − b).
In recent years, research on the theory and algorithms of ADMM has become relatively comprehensive [12,13,14,15,16]. ADMM has been widely applied to convex optimization problems; however, when the objective function is non-convex, ADMM may fail to converge. To address this issue, we transform problem (1) into a separable problem with linear constraints, which makes it easier to solve. Two functions play a crucial role: the indicator function and the distance function. Mathematically, given a non-empty closed set D in the d-dimensional Euclidean space ℝᵈ, the indicator function δ_D is defined as
δ_D(y) = 0 if y ∈ D, and δ_D(y) = +∞ otherwise.
The distance function of the set C, represented by d_C: ℝᵈ → [0, +∞), is given by
d_C(x) = min over u ∈ C of ‖x − u‖.
Clearly, δ_Q is proper lower semi-continuous and (1/2)d_C² is smooth. When C is a closed convex set and Q is a non-convex closed set, the non-convex split feasibility problem can be reformulated as follows:
(3) min over (x, y) of f(x) + g(y) subject to Ax = y, with f(x) = (1/2)d_C(x)² and g(y) = δ_Q(y).
This optimization problem is the sum of two non-negative functions, and its minimum value of zero is attained exactly at the solutions of problem (1). Since C is a closed convex set, f(x) is continuously differentiable with gradient ∇f(x) = x − P_C(x), which is Lipschitz continuous. The augmented Lagrangian function for problem (3) is
L_β(x, y, λ) = f(x) + g(y) − ⟨λ, Ax − y⟩ + (β/2)‖Ax − y‖².
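To make the reformulation concrete, the following NumPy sketch (illustrative only; the paper's experiments used MATLAB, and the choice of C as the unit ball is an assumption for this example) evaluates f(x) = (1/2)d_C(x)², its gradient x − P_C(x), and the augmented Lagrangian L_β:

```python
import numpy as np

# For the reformulation (3): f(x) = 0.5*d_C(x)^2 is smooth with grad f(x) = x - P_C(x)
# (a 1-Lipschitz gradient), and g(y) is the indicator of Q.
def proj_unit_ball(z):
    """Projection onto C, taken here to be the closed unit ball (an assumption)."""
    n = np.linalg.norm(z)
    return z if n <= 1.0 else z / n

def f(x):
    """f(x) = 0.5 * dist(x, C)^2."""
    return 0.5 * np.linalg.norm(x - proj_unit_ball(x)) ** 2

def grad_f(x):
    """Gradient of 0.5*d_C^2: x - P_C(x)."""
    return x - proj_unit_ball(x)

def aug_lagrangian(x, y, lam, A, beta, in_Q):
    """L_beta(x, y, lam) with g the indicator of Q (membership test passed in)."""
    gy = 0.0 if in_Q(y) else np.inf      # indicator delta_Q(y)
    r = A @ x - y                        # constraint residual Ax - y
    return f(x) + gy - lam @ r + 0.5 * beta * (r @ r)

# Example: x = (2, 0) lies at distance 1 from the unit ball, so f(x) = 0.5.
x0 = np.array([2.0, 0.0])
```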
In order to endow the ADMM algorithm with better theoretical properties, a number of researchers have carried out further studies based on problem (3). Zhao et al. [17] considered a symmetric version of ADMM that selects two different relaxation factors γ and s and adds an intermediate update of the Lagrange multiplier during the iteration process.
In addition, combining ADMM with inertial techniques can also significantly improve its performance on non-convex optimization problems. Dang et al. [18] incorporated an inertial technique into each sub-problem of ADMM and employed a dual-relaxed term to ensure the convergence of the algorithm.
Based on the previous work, we propose a partially symmetric regularized two-step inertial alternating direction method of multipliers for solving the non-convex split feasibility problem, which has not been extensively studied in the past. The novelty of this paper can be summarized as follows: Firstly, we transform this type of non-convex split feasibility problem into a separable two-block non-convex problem with linear constraints, which is easier to solve. Secondly, we add an intermediate update term for the multipliers throughout the iteration phase and apply the two-step inertial technique to each sub-problem of the ADMM algorithm. Lastly, to guarantee the strong convergence of the proposed algorithm for non-convex split feasibility problems, we employ the Kurdyka−Łojasiewicz (KL) property.
The structure of the paper is as follows: the basic concepts, definitions, and related results are described in Section 2. The convergence of the algorithm is demonstrated in Section 3. Section 4 showcases the effectiveness of the algorithm through experiments. Section 5 presents the main conclusions.
2. Preliminaries
In this article, ℝⁿ represents the n-dimensional Euclidean space and ‖·‖ the Euclidean norm. For any x ∈ ℝⁿ, ‖x‖_G² = ⟨x, Gx⟩, where G is a symmetric positive semidefinite matrix. λ_min(G) and λ_max(G) represent the minimum and maximum eigenvalues of the symmetric matrix G, respectively. Then, λ_min(G)‖x‖² ≤ ‖x‖_G² ≤ λ_max(G)‖x‖². When the set Q ⊆ ℝⁿ is non-empty, for any point y ∈ ℝⁿ, the distance from y to Q is defined as d(y, Q) = inf over q ∈ Q of ‖y − q‖. In particular, if Q = ∅, then d(y, Q) = +∞. The domain of a function g is denoted as dom g = {y ∈ ℝⁿ : g(y) < +∞}. In the Euclidean space, let C be a non-empty closed subset. The projection onto the set C is the operator P_C defined as P_C(y) = argmin over c ∈ C of ‖y − c‖.
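The projection operator is the workhorse of classical SFP solvers such as Byrne's CQ algorithm [7], which iterates x ← P_C(x − τAᵀ(Ax − P_Q(Ax))) with step size τ ∈ (0, 2/‖A‖²). The following NumPy sketch runs it on a toy instance (both sets taken as balls here purely for illustration; the convergence theory for CQ assumes convex C and Q):

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, tau, iters=500):
    """Byrne's CQ iteration for the SFP: x <- P_C(x - tau * A^T (Ax - P_Q(Ax)))."""
    x = x0
    for _ in range(iters):
        Ax = A @ x
        x = proj_C(x - tau * A.T @ (Ax - proj_Q(Ax)))
    return x

def proj_ball(c, r):
    """Projection onto the closed ball of center c and radius r."""
    return lambda z: c + (z - c) / max(1.0, np.linalg.norm(z - c) / r)

# Toy instance: C = unit ball in R^2, Q = ball of radius 0.5 around (1, 0).
A = np.array([[2.0, 0.0], [0.0, 1.0]])
tau = 1.0 / np.linalg.norm(A, 2) ** 2        # step size in (0, 2/||A||^2)
x = cq_algorithm(A, proj_ball(np.zeros(2), 1.0),
                 proj_ball(np.array([1.0, 0.0]), 0.5), np.zeros(2), tau)
# At a solution, x lies in C and Ax lies in Q.
```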
Definition 1 ([19]). If a function g: ℝⁿ → (−∞, +∞] satisfies liminf as y → ȳ of g(y) ≥ g(ȳ) at ȳ, then g is said to be lower semi-continuous at ȳ. If g is lower semi-continuous at every point, then g is called a lower semi-continuous function. Since Q is a closed set, g(y) = δ_Q(y) is a proper lower semi-continuous function.
Definition 2 ([19]). Let the function g: ℝⁿ → (−∞, +∞] be proper lower semi-continuous.
The Fréchet subdifferential of g at y ∈ dom g is defined as
∂̂g(y) = {v ∈ ℝⁿ : liminf as z → y (z ≠ y) of [g(z) − g(y) − ⟨v, z − y⟩]/‖z − y‖ ≥ 0}.
The limiting subdifferential of g at y ∈ dom g is defined as
∂g(y) = {v ∈ ℝⁿ : there exist yᵏ → y with g(yᵏ) → g(y) and vᵏ ∈ ∂̂g(yᵏ) with vᵏ → v}.
Note: Several properties of subdifferentials (see [19]) are listed as follows:
∂̂g(y) is a closed convex set and ∂g(y) is a closed set. If v ∈ ∂̂g(y), then v ∈ ∂g(y), i.e., ∂̂g(y) ⊆ ∂g(y). If y is a minimum point of g, then 0 ∈ ∂g(y); conversely, if 0 ∈ ∂g(y), then y is a stable point of g. The set of stable points of g is denoted crit g. For any y ∈ dom g, we have ∂(g + h)(y) = ∂g(y) + ∇h(y) if g is proper lower semi-continuous and h is continuously differentiable.
(x*, y*, λ*) is a stable point of the augmented Lagrangian function L_β(x, y, λ) for problem (3), i.e., 0 ∈ ∂L_β(x*, y*, λ*), if and only if
∇f(x*) = Aᵀλ*, −λ* ∈ ∂g(y*), and Ax* = y*.
Definition 3 ([20]). (Kurdyka−Łojasiewicz property) Let g: ℝⁿ → (−∞, +∞] be a proper lower semi-continuous function. g is said to have the KL property at ȳ ∈ dom ∂g if there exist η ∈ (0, +∞], a neighborhood U of ȳ, and a continuous concave function φ: [0, η) → [0, +∞) such that:
(i) φ(0) = 0;
(ii) φ is continuously differentiable on (0, η) and continuous at 0;
(iii) φ′(t) > 0 for all t ∈ (0, η);
(iv) for all y ∈ U ∩ {y : g(ȳ) < g(y) < g(ȳ) + η}, the KL inequality holds:
φ′(g(y) − g(ȳ)) · d(0, ∂g(y)) ≥ 1.
Lemma 1 ([21]). (Uniformized KL property) Assume Ω is a compact set and g: ℝⁿ → (−∞, +∞] is a proper lower semi-continuous function. If g is constant on Ω and satisfies the KL property at every point of Ω, then there exist ε > 0, η > 0, and a continuous concave function φ as in Definition 3 such that, for every ȳ ∈ Ω and every y belonging to the intersection
{y : d(y, Ω) < ε} ∩ {y : g(ȳ) < g(y) < g(ȳ) + η},
one has φ′(g(y) − g(ȳ)) · d(0, ∂g(y)) ≥ 1.
In many practical applications, the functions involved satisfy the KL property; examples include semi-algebraic functions, real analytic functions, subanalytic functions, and strongly convex functions, as seen in reference [22].
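As a concrete illustration, for the strongly convex function g(y) = (1/2)‖y‖² the KL property holds at ȳ = 0 with the desingularizing function φ(t) = √(2t):

```latex
% KL inequality for g(y) = \tfrac{1}{2}\|y\|^2 at \bar{y} = 0, where g(\bar{y}) = 0.
% Take \varphi(t) = \sqrt{2t}, so \varphi'(t) = 1/\sqrt{2t}; here \nabla g(y) = y.
\varphi'\bigl(g(y) - g(\bar{y})\bigr)\,
  \operatorname{d}\bigl(0, \partial g(y)\bigr)
  = \frac{1}{\sqrt{2 \cdot \tfrac{1}{2}\|y\|^2}} \cdot \|y\|
  = \frac{\|y\|}{\|y\|} = 1 \ \ge\ 1 .
```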
Lemma 2 ([23]). If f: ℝⁿ → ℝ is a continuously differentiable function and ∇f is Lipschitz continuous, then there exists a Lipschitz constant l_f > 0 such that, for any x, y ∈ ℝⁿ,
f(y) ≤ f(x) + ⟨∇f(x), y − x⟩ + (l_f/2)‖y − x‖².
(Semi-algebraic sets and functions)
A subset S of ℝⁿ is called semi-algebraic if there are a finite number of real polynomial functions g_ij and h_ij such that
S = union over j of the intersection over i of {y ∈ ℝⁿ : g_ij(y) = 0, h_ij(y) < 0}.
A function f: ℝⁿ → (−∞, +∞] is called semi-algebraic if its graph
{(y, t) ∈ ℝⁿ⁺¹ : f(y) = t}
is a semi-algebraic set.
(Cauchy–Schwarz inequality) For any x, y ∈ ℝⁿ, we have |⟨x, y⟩| ≤ ‖x‖‖y‖, with equality if and only if x and y are linearly dependent.
3. Split Feasibility Problem
3.1. Assumptions
Some assumptions and conditions about problem (3) are listed below.
(1). and .
The solution set of this inequality system is represented as , where
and
Note: It can be seen that (γ, s) has a wide range of choices. Specifically, when occurs, the parameters γ and s of the proposed algorithm can take the same value in this interval.
(2).
(3). Note , where,
(4). Note and . and are fixed constants.
(5). C, Q are both semi-algebraic sets.
(6). f is l_f-Lipschitz differentiable, i.e., ‖∇f(x) − ∇f(y)‖ ≤ l_f‖x − y‖ for all x, y ∈ ℝⁿ.
(7). g is proper lower semi-continuous.
(8). The set is bounded.
3.2. Algorithm
For Algorithm 1 (PSRTADMM), the optimality conditions are as follows:
(4)
(5)
(6)
(7)
Algorithm 1: Partially Symmetric Regularized Two-Step Inertial Alternating Direction Method of Multipliers for Non-Convex Split Feasibility Problems (PSRTADMM).
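Based on the structure described for Algorithm 1 (a two-step inertial extrapolation in the subproblems, an intermediate multiplier update with relaxation factor γ, and a final multiplier update with relaxation factor s), the following NumPy sketch illustrates one plausible iteration of this type on a toy instance. All parameter values, the linearized x-step, and the choice of C (unit ball) and Q (a sphere, which is non-convex) are assumptions for illustration, not the paper's exact scheme; the paper's experiments used MATLAB.

```python
import numpy as np

beta, gamma, s = 2.0, 0.9, 0.9           # penalty and relaxation factors (assumed)
a1, a2 = 0.05, 0.02                      # two-step inertial weights (assumed)
A = np.eye(2)
cQ, rQ = np.array([1.0, 0.0]), 0.5       # Q: a sphere (non-convex closed set)

def proj_C(z):                           # C: closed unit ball (convex)
    n = np.linalg.norm(z)
    return z if n <= 1.0 else z / n

def proj_Q(z):                           # projection onto the sphere Q
    d = z - cQ
    n = np.linalg.norm(d)
    return cQ + rQ * (d / n if n > 0 else np.array([1.0, 0.0]))

x = x1 = x2 = np.zeros(2)                # x^k, x^{k-1}, x^{k-2}
y, lam = np.zeros(2), np.zeros(2)
L = 1.0 + beta * np.linalg.norm(A, 2) ** 2   # Lipschitz bound for the x-subproblem
for _ in range(3000):
    xh = x + a1 * (x - x1) + a2 * (x1 - x2)          # two-step inertial extrapolation
    grad = (xh - proj_C(xh)) - A.T @ lam + beta * A.T @ (A @ xh - y)
    x2, x1, x = x1, x, xh - grad / L                 # linearized x-step
    lam_h = lam - gamma * beta * (A @ x - y)         # intermediate multiplier update
    y = proj_Q(A @ x - lam_h / beta)                 # y-step: prox of indicator of Q
    lam = lam_h - s * beta * (A @ x - y)             # final multiplier update
# On this toy instance the iterates settle at a feasible pair with Ax ≈ y.
```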
3.3. Convergence Analysis
Next, we establish the convergence analysis of the proposed algorithm. Lemma 3 below indicates that the sequence {L_β(xᵏ, yᵏ, λᵏ)} monotonically decreases. For ease of analysis, we write z = (x, y).
Lemma 3. If the assumptions in Section 3.1 hold, then the sequence {L_β(xᵏ, yᵏ, λᵏ)} is monotonically decreasing.
Proof. Firstly, from the optimality conditions of Equations (5) and (7), we can obtain
(8)
and
(9)
On the other hand, by the definition of the augmented Lagrangian function, the optimality condition of Equation (6), the Lipschitz continuity of ∇f, and Lemma 2, we have
(10)
Given that the iterate is the optimal solution of subproblem (4), it follows that
(11)
Therefore, adding Equations (8)–(11), we have
(12)
In addition, it can be obtained from Step 2 and Step 4 of Algorithm 1 that
(13)
(14)
On the other hand, it can be concluded from Step 4 of Algorithm 1 and (6) that:
(15)
Combining this with the Lipschitz continuity of ∇f, we obtain
(16)
Furthermore, by applying the Cauchy–Schwarz inequality to (16), we have
(17)
So, combining Equations (13), (14) and (17), one has
(18)
Substituting Equation (18) into Equation (12), we have
(19)
That is
(20)
Therefore,
(21)
□
Lemma 4. If the assumptions in Section 3.1 hold, then the following statements are true:
The sequence is bounded.
is bounded from below and convergent, additionally,
The sequence and have the same limit
Proof. (1) Because of the decreasing property of the sequence, we obtain
(2) As the sequence is bounded, it has at least one accumulation point; let z* be a cluster point of the sequence. Because g is a lower semi-continuous function and f is continuously differentiable, L_β is lower semi-continuous; hence it is bounded from below. According to Lemma 3, L_β(xᵏ, yᵏ, λᵏ) monotonically decreases and therefore converges. Summing the inequality of Lemma 3 for k = 0, 1, …, n and letting n → +∞, we have:
As , it follows that .
According to Equation (23), there are , so .
(3) From (2), we have and , so and . Combining the definition of in (21) yields . Then, the lemma has been proven.
□
We now use the results of Lemma 4 to establish the global convergence of the PSRTADMM algorithm.
Theorem 1. (Global convergence) Denote the sets of the cluster points of the sequences and by and Ω, respectively. We have:
and are non-empty compact sets and .
if and only if
, is convergent and
.
Proof. (1) This follows easily from the definitions of the cluster-point sets and Ω.
(2) From Lemma 4 and the definitions of the two sequences, the conclusion follows easily.
(3) Let . Therefore, there exists , such that . According to (17) and the continuity of f, we derive
Taking the monotonicity of the sequence into account, we obtain that it is convergent. Thus,
(4) Let crit L denote the set of critical points of L. Then, there exists a subsequence of the iterates that converges to a cluster point. According to Lemma 4, from the equations in Step 2 and Step 4, letting k → +∞ and taking the limit,
Combining , it can be seen that .
So, and are the feasible points for problem (1). According to the PSRTADMM algorithm, the y subproblem has
Combining , with . Also, due to the lower semi-continuity of g(y), there is . Since g is lower semi-continuous, . It follows that .
Furthermore, combining the closedness of ∂g and the continuity of f, under the necessary optimality condition, letting k → +∞ gives
Therefore, according to Definition 3, . Therefore, Ω ⊆ crit L. □
Lemma 5. If the assumptions hold, then there exists a constant C > 0 such that
Proof. According to the result of Equation (16), there exists a constant > 0, so
(22)
By the definition of the augmented Lagrangian function L_β(·), we have
(23)
Combining the necessary optimality conditions (4)–(7) and (23), we have
So, . Therefore, ∀k ≥ 1, there exists > 0, with
(24)
Based on Equations (22) and (24), it can be concluded that there exists a constant > 0 such that, for all k ≥ 2,
□
Lemma 6. If Assumption (5) in Section 3.1 holds, then the augmented Lagrangian function satisfies the KL property.
Proof. Because C is a semi-algebraic set, the projection P_C is defined by polynomial constraints. Thus, the distance function d_C (and hence f) is a semi-algebraic function.
As Q is a semi-algebraic set, its indicator function is a semi-algebraic function.
In addition, the remaining terms of the augmented Lagrangian are polynomial functions and hence semi-algebraic.
Therefore, is a semi-algebraic function. According to Lemma 1, the augmented Lagrangian function satisfies the KL property.
The strong convergence of the PSRTADMM algorithm is established using Lemmas 3, 5 and 6, and the relevant conclusions in Theorem 1. □
(Strong convergence) Suppose that the assumptions in Section 3.1 hold and L_β satisfies the KL property at each point of Ω; then
(1). .
(2). The sequence {zᵏ} converges to a stable point of L_β(·).
Proof. (1) Theorem 1 implies that we have . We now distinguish two cases:
Case 1. There exists an integer such that . By Lemma 3, for all k beyond it, we have
Case 2. Assume that for all k ≥ 1 we have . Since , for any given ε > 0 there exists > 0 such that, for k > , < ε. For any given > 0, there exists > 0 such that, for k > , () < () + . Therefore, for given ε, > 0, when k > max{, },
Theorem 1 states that, as the sequence is bounded, Ω is a non-empty compact set and L_β(·) is constant on Ω. Therefore, applying Lemma 1, for all k > we have
Because , we have
(25)
Due to the concavity of the function φ, we have
So,
(26)
Combining Equations (25) and (26) and Lemma 5, we have
(27)
Note .
For simplicity, let Δₖ = . Combining Lemma 3 and Equation (21), we have
(28)
According to Equation (28), we have
Furthermore, there is
(29)
Summing Equation (29) over k = + 1, …, m, we obtain
Noting the value of , rearranging the terms of the above equation, and applying the Cauchy–Schwarz inequality while letting m → +∞, we have
So, , . Combined with Equation (26), we have
Additionally, we have noticed that
So,
(2) From (1), we know that {zᵏ} is a Cauchy sequence and therefore converges; then, from Theorem 1 (3), it converges to a stable point of L_β(·). □
If the assumptions in Section 3.1 and the following conditions (1)–(3) hold:
(1) g is coercive, i.e., g(y) → +∞ as ‖y‖ → +∞;
(2) the relaxation factor satisfies ;
(3) the function has a lower bound and is coercive, i.e., ;
then the sequence {(xᵏ, yᵏ, λᵏ)} generated by PSRTADMM is bounded.
Proof. Since L_β(xᵏ, yᵏ, λᵏ) is monotonically decreasing, combining this with (15) we have
Furthermore, it is readily apparent that the function is a proper lower semi-continuous coercive function on a closed set. Consequently,
Therefore, it is easy to see that the sequence is bounded, so is bounded, and combining this with (10) proves that is bounded as well. The boundedness of the whole sequence has thus been established. This completes the proof. □
4. Numerical Experiments
This section presents a numerical example to validate the efficacy of Algorithm 1 by addressing the split feasibility problem. The variables and constraints in the experiments exactly match those defined in the theoretical part, thereby ensuring that the experiments can effectively test the validity and practicality of the theory. In the experimental stage, we compare Algorithm 1 with the traditional proximal algorithm to demonstrate the superior performance of Algorithm 1 on the split feasibility problem. All codes were implemented in MATLAB R2021a on a desktop computer with 32 GB of RAM.
In the field of CT image reconstruction, the central objective of inverse problems is to restore an image from noisy CT data. This process can be regarded as the following inverse problem:
(30)
where b is the observed data, x is the true (original) underlying image, ε represents the measurement error (Gaussian noise at level σ), and H is the Radon transform in X-ray computed tomography (CT). Since Equation (30) often exhibits ill-posedness and is difficult to solve, in order to ensure the stability of the solutions we investigate the following model:
(31)
where F(x) is the regularization term depending on prior knowledge of the image, G(x) is the fidelity term, and α > 0 is a balancing regularization parameter. In this experiment, we consider the total variation regularizer F(x) = Σᵢ ‖Dᵢx‖ and the least-squares fidelity term G(x) = (1/2)‖Hx − b‖², which is widely used in CT reconstruction. The term Dᵢx indicates the discrete gradient of x at pixel i, and the sum plays the role of total variation (TV) regularization for x. The operator Dᵢ stacks the horizontal and vertical gradient operators at pixel i. Thus, it is easy to rewrite model (31) as SFP (1) by setting the constraint sets accordingly and y = Hx, with the feasibility set Q = {y : ‖y − b‖ ≤ ε}.
The test image is the 96 × 96 Shepp−Logan phantom, corrupted by Gaussian noise with standard deviation σ = 0.05 (equivalent to ε ≈ 4.8 in the feasibility set Q). The parameters that we used for the trials were as follows: , and α = 0.001. The choice of these parameters was primarily based on empirical tuning and theoretical guarantees to ensure the stability and convergence of the algorithm. The settings of and influence the step size and convergence speed of the algorithm, while and control the strength of the inertial effects, which helps accelerate convergence. As a regularization parameter, α balances the fidelity term and the regularization term in image reconstruction, significantly impacting image quality.
Figure 1 shows four images: the original sinogram image, the noised sinogram image, the image reconstructed by ADMM, and the image reconstructed by Algorithm 1. A comprehensive comparison of the reconstructed images in Figure 1 shows that Algorithm 1 yields higher-quality images than ADMM. To evaluate image quality more objectively, we used the peak signal-to-noise ratio (PSNR) as a statistical indicator. A higher PSNR value means better image quality, less noise, and a smaller difference between the processed image and the original one.
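The discrete gradient Dᵢ and the TV regularizer described above can be sketched as follows (a NumPy illustration with forward differences; the paper's experiments used MATLAB, and the exact discretization used there is not specified here):

```python
import numpy as np

def grad2d(x):
    """Forward-difference discrete gradient of a 2-D image (the D_i building block)."""
    gh = np.zeros_like(x)
    gv = np.zeros_like(x)
    gh[:, :-1] = x[:, 1:] - x[:, :-1]     # horizontal differences
    gv[:-1, :] = x[1:, :] - x[:-1, :]     # vertical differences
    return gh, gv

def tv(x):
    """Isotropic total variation: sum over pixels i of ||D_i x||."""
    gh, gv = grad2d(x)
    return np.sum(np.sqrt(gh ** 2 + gv ** 2))

# A constant image has zero TV; a single vertical edge contributes one unit per row.
step = np.zeros((3, 3))
step[:, 2] = 1.0
```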
The results show that the PSNR of the image reconstructed by Algorithm 1 is 34.6383, which has a significant advantage over the 33.3561 obtained by the alternating direction method of multipliers (ADMM).
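The PSNR metric used for this comparison can be computed as follows (a NumPy sketch; the constant-error image below is only a toy check, not the phantom data, and a peak value of 1.0 is assumed for normalized images):

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

clean = np.zeros((96, 96))
noisy = clean + 0.05          # constant error of 0.05 -> MSE = 0.0025
# psnr(clean, noisy) = 10 * log10(1 / 0.0025) ≈ 26.02 dB
```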
From the relationship between the peak signal-to-noise ratio (PSNR) values and the number of iterations of the two algorithms presented in Figure 2, it can be clearly observed that the images generated by Algorithm 1 approach the original image with gradually increasing stability, eventually converging to the ideal state. This demonstrates the success of our algorithm.
Note: the presented experiment focuses on a specific CT image reconstruction scenario. Although the results highlight the algorithm’s efficacy in this case, further studies with diverse datasets and problem configurations are necessary to comprehensively evaluate the robustness and generality of PSRTADMM. This constitutes an important direction for future research.
5. Conclusions
In this paper, we put forward a partially symmetric regularized two-step inertial alternating direction method of multipliers to deal with non-convex split feasibility problems. The proposed algorithm innovatively includes an intermediate multiplier update and two-step inertial effects in subproblems. Through theoretical analysis under appropriate assumptions, its global convergence is proven. In addition, when the augmented Lagrangian function satisfies the Kurdyka−Łojasiewicz (KL) property, the algorithm can achieve a strong convergence, which means it can converge to a more accurate solution. Finally, numerical experiments were conducted in the field of CT image reconstruction. The results show that the proposed algorithm outperforms traditional methods in terms of the reconstruction quality and convergence speed, further confirming its effectiveness.
Conceptualization, C.Y. and Y.D.; Methodology, C.Y. and Y.D.; Software, C.Y.; Validation, C.Y.; Writing—original draft, C.Y.; Visualization, C.Y.; Supervision, Y.D.; Project administration, Y.D. All authors have read and agreed to the published version of the manuscript.
The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.
The authors declare no conflict of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 1 The recovered sinogram image by two algorithms.
Figure 2 PSNR value of the reconstructed image using two algorithms.
1. Wang, Y.; Yang, J.; Yin, W.; Zhang, Y. A New Alternating Minimization Algorithm for Total Variation Image Reconstruction. SIAM J. Imaging Sci.; 2008; 1, pp. 248-272. [DOI: https://dx.doi.org/10.1137/080724265]
2. Dong, B.; Li, J.; Shen, Z. X-Ray CT Image Reconstruction via Wavelet Frame Based Regularization and Radon Domain Inpainting. J. Sci. Comput.; 2013; 54, pp. 333-349. [DOI: https://dx.doi.org/10.1007/s10915-012-9579-6]
3. Block, K.T.; Uecker, M.; Frahm, J. Undersampled radial MRI with multiple coils. Iterative image reconstruction using a total variation constraint. Magn. Reson. Med.; 2007; 57, pp. 1086-1098. [DOI: https://dx.doi.org/10.1002/mrm.21236]
4. Liu, S.; Cao, J.; Liu, H.; Zhou, X.; Zhang, K.; Li, Z. MRI reconstruction via enhanced group sparsity and nonconvex regularization. Neurocomputing; 2018; 272, pp. 108-121. [DOI: https://dx.doi.org/10.1016/j.neucom.2017.06.062]
5. Lustig, M.; Donoho, D.; Pauly, J.M. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magn. Reson. Med.; 2007; 58, pp. 1182-1195. [DOI: https://dx.doi.org/10.1002/mrm.21391]
6. Gibali, A.; Kuefer, K.H.; Suess, P. Successive Linear Programing Approach for Solving the Nonlinear Split Feasibility Problem. J. Nonlinear Convex Anal.; 2014; 15, pp. 345-353.
7. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl.; 2002; 18, pp. 441-453. [DOI: https://dx.doi.org/10.1088/0266-5611/18/2/310]
8. Censor, Y.; Motova, A.; Segal, A. Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl.; 2007; 327, pp. 1244-1256. [DOI: https://dx.doi.org/10.1016/j.jmaa.2006.05.010]
9. Dang, Y.; Gao, Y. The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. Inverse Probl.; 2011; 27, 015007. [DOI: https://dx.doi.org/10.1088/0266-5611/27/1/015007]
10. Qu, B.; Wang, C.; Xiu, N. Analysis on Newton projection method for the split feasibility problem. Comput. Optim. Appl.; 2017; 67, pp. 175-199. [DOI: https://dx.doi.org/10.1007/s10589-016-9884-3]
11. Fukushima, M. Application of the alternating direction method of multipliers to separable convex programming problems. Comput. Optim. Appl.; 1992; 1, pp. 93-111. [DOI: https://dx.doi.org/10.1007/BF00247655]
12. Deng, W.; Yin, W. On the Global and Linear Convergence of the Generalized Alternating Direction Method of Multipliers. J. Sci. Comput.; 2016; 66, pp. 889-916. [DOI: https://dx.doi.org/10.1007/s10915-015-0048-x]
13. Wang, Y.; Yin, W.; Zeng, J. Global Convergence of ADMM in Nonconvex Nonsmooth Optimization. J. Sci. Comput.; 2019; 78, pp. 29-63. [DOI: https://dx.doi.org/10.1007/s10915-018-0757-z]
14. Yang, Y.; Jia, Q.S.; Xu, Z.; Guan, X.; Spanos, C.J. Proximal ADMM for nonconvex and nonsmooth optimization. Automatica; 2022; 146, 110551. [DOI: https://dx.doi.org/10.1016/j.automatica.2022.110551]
15. Ouyang, Y.; Chen, Y.; Lan, G.; Pasiliao, E., Jr. An Accelerated Linearized Alternating Direction Method of Multipliers. SIAM J. Imaging Sci.; 2015; 8, pp. 644-681. [DOI: https://dx.doi.org/10.1137/14095697X]
16. Hong, M.; Luo, Z.-Q. On the linear convergence of the alternating direction method of multipliers. Math. Program.; 2017; 162, pp. 165-199. [DOI: https://dx.doi.org/10.1007/s10107-016-1034-2]
17. Zhao, Y.; Li, M.; Pen, X.; Tan, J. Partial symmetric regularized alternating direction method of multipliers for non-convex split feasibility problems. AIMS Math.; 2025; 10, pp. 3041-3061. [DOI: https://dx.doi.org/10.3934/math.2025142]
18. Dang, Y.; Chen, L.; Gao, Y. Multi-block relaxed-dual linear inertial ADMM algorithm for nonconvex and nonsmooth problems with nonseparable structures. Numer. Algorithms; 2025; 98, pp. 251-285. [DOI: https://dx.doi.org/10.1007/s11075-024-01793-0]
19. Rockafellar, R.T.; Wets, R.J.-B. Variational Analysis; Springer Science and Business Media: Berlin/Heidelberg, Germany, 2009.
20. Bolte, J.; Daniilidis, A.; Lewis, A. The Lojasiewicz inequality for nonsmooth subanalytic functions with applications to subgradient dynamical systems. SIAM J. Optim.; 2007; 17, pp. 1205-1223. [DOI: https://dx.doi.org/10.1137/050644641]
21. Attouch, H.; Bolte, J.; Svaiter, B.F. Convergence of descent methods for semi-algebraic and tame problems: Proximal algorithms, forward–backward splitting, and regularized Gauss–Seidel methods. Math. Program.; 2013; 137, pp. 91-129. [DOI: https://dx.doi.org/10.1007/s10107-011-0484-9]
22. Wang, F.; Cao, W.; Xu, Z. Convergence of multi-block Bregman ADMM for nonconvex composite problems. Sci. China-Inf. Sci.; 2018; 61, 122101. [DOI: https://dx.doi.org/10.1007/s11432-017-9367-6]
23. Nesterov, Y. Introductory Lectures on Convex Optimization: A Basic Course; Springer: New York, NY, USA, 2004.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
This paper presents a partially symmetric regularized two-step inertial alternating direction method of multipliers for solving non-convex split feasibility problems (SFP), which adds a two-step inertial effect to each subproblem and includes an intermediate update term for multipliers during the iteration process. Under suitable assumptions, the global convergence is demonstrated. Additionally, with the help of the Kurdyka−Łojasiewicz (KL) property, which quantifies the behavior of a function near its critical points, the strong convergence of the proposed algorithm is guaranteed. Numerical experiments are performed to demonstrate the efficacy.