Tian and Zhang Journal of Inequalities and Applications (2017) 2017:13 DOI 10.1186/s13660-016-1289-4
Regularized gradient-projection methods for finding the minimum-norm solution of the constrained convex minimization problem
Ming Tian1,2* and Hui-Fang Zhang1
*Correspondence: [email protected]
1College of Science, Civil Aviation University of China, Tianjin, 300300, China
2Tianjin Key Laboratory for Advanced Signal Processing, Civil Aviation University of China, Tianjin, 300300, China
1 Introduction
Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$. Let $C$ be a nonempty closed convex subset of $H$. Let $\mathbb{N}$ and $\mathbb{R}$ denote the sets of positive integers and real numbers, respectively. Suppose that $f$ is a contraction on $H$ with coefficient $0<\rho<1$. A nonlinear operator $T: H\to H$ is nonexpansive if $\|Tx-Ty\|\le\|x-y\|$ for all $x,y\in H$. We use $\mathrm{Fix}(T)$ to denote the fixed point set of $T$.
Firstly, consider the constrained convex minimization problem
$$\min_{x\in C} g(x), \tag{1.1}$$
where $g: C\to\mathbb{R}$ is a real-valued convex function. Assume that the constrained convex minimization problem (1.1) is solvable, and let $U$ denote its solution set. The gradient-projection algorithm (GPA) is an effective method for solving (1.1); it generates a sequence $\{x_n\}$ by the recursive formula
$$x_{n+1}=P_C(I-\lambda\nabla g)x_n,\quad n\ge 0, \tag{1.2}$$
where the parameter $\lambda$ is a positive real number. In general, if the gradient $\nabla g$ is $L$-Lipschitz continuous and $\eta$-strongly monotone, and $0<\lambda<2\eta/L^2$, the sequence $\{x_n\}$ generated by (1.2)
converges strongly to a minimizer of (1.1). However, if the gradient $\nabla g$ is only assumed to be $\frac{1}{L}$-ism (inverse strongly monotone) with $L>0$, and $0<\lambda<\frac{2}{L}$, the sequence $\{x_n\}$ generated by (1.2) converges only weakly to a minimizer of (1.1).
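To make (1.2) concrete, the following is a minimal numerical sketch, under illustrative assumptions that are not taken from this paper: $C$ is a box in $\mathbb{R}^m$, so $P_C$ is a coordinate-wise clip, and $g(x)=\frac12\|Ax-b\|^2$, whose gradient $\nabla g(x)=A^T(Ax-b)$ is Lipschitz with constant $L=\|A\|^2$ and hence $\frac{1}{L}$-ism.

```python
import numpy as np

def gpa(A, b, lo, hi, lam, n_iter=5000):
    """Gradient-projection algorithm (1.2):
    x_{n+1} = P_C(x_n - lam * grad g(x_n)) for g(x) = 0.5*||Ax - b||^2
    on the box C = {x : lo <= x <= hi}."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                 # gradient of g at x
        x = np.clip(x - lam * grad, lo, hi)      # P_C = coordinate-wise clip
    return x

# Illustrative data (not from the paper).
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
b = rng.standard_normal(6)
L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of grad g
x_min = gpa(A, b, lo=-1.0, hi=1.0, lam=1.0 / L)  # 0 < lam < 2/L
```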
Recently, many authors have combined the constrained convex minimization problem with a fixed point problem [1-3] and proposed composite iterative algorithms to find a solution of the constrained convex minimization problem [4-7].
In 2000, Moudafi [8] introduced the viscosity approximation method for nonexpansive mappings:
$$x_{n+1}=\alpha_nf(x_n)+(1-\alpha_n)Tx_n,\quad n\ge 0. \tag{1.3}$$
In 2001, Yamada [9] introduced the so-called hybrid steepest-descent algorithm:
$$x_{n+1}=Tx_n-\mu\lambda_nF(Tx_n),\quad n\ge 0, \tag{1.4}$$
where $F$ is a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator. In 2006, Marino and Xu [10] considered a general iterative algorithm:
$$x_{n+1}=\alpha_n\gamma f(x_n)+(I-\alpha_nA)Tx_n,\quad n\ge 0, \tag{1.5}$$
where $A$ is a strongly positive bounded linear operator. In 2010, Tian [11] combined the iterative algorithms (1.4) and (1.5) and proposed a new iterative algorithm:
$$x_{n+1}=\alpha_n\gamma f(x_n)+(I-\mu\alpha_nF)Tx_n,\quad n\ge 0. \tag{1.6}$$
In 2010, Tian [12] generalized (1.6) and obtained the following iterative algorithm:
$$x_{n+1}=\alpha_n\gamma Vx_n+(I-\mu\alpha_nF)Tx_n,\quad n\ge 0, \tag{1.7}$$
where $V$ is a Lipschitzian operator. Based on these iterative algorithms, some authors combined the GPA with averaged operators to solve the constrained convex minimization problem [13, 14].
In 2011, Ceng et al. [1] proposed a sequence $\{x_n\}$ generated by the following iterative algorithm:
$$x_{n+1}=P_C\big[\alpha_n\gamma h(x_n)+(I-\alpha_n\mu F)T_n(x_n)\big],\quad n\ge 0, \tag{1.8}$$
where $h: C\to H$ is an $l$-Lipschitzian mapping with a constant $l>0$, $F: C\to H$ is a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator with constants $\kappa,\eta>0$, and $T_n$ is the nonexpansive mapping determined by the averaged-operator decomposition
$$P_C(I-\lambda_n\nabla g)=\theta_nI+(1-\theta_n)T_n,\quad \theta_n=\frac{2-\lambda_nL}{4},\ n\ge 0.$$
Then the sequence $\{x_n\}$ generated by (1.8) converges strongly to a minimizer of (1.1).
On the other hand, Xu [15] proposed that regularization can be used to find the minimum-norm solution of the minimization problem.
Consider the following regularized minimization problem:
$$\min_{x\in C}\ g_\alpha(x):=g(x)+\frac{\alpha}{2}\|x\|^2,$$
where the regularization parameter $\alpha>0$, $g$ is a convex function, and the gradient $\nabla g$ is $\frac{1}{L}$-ism with $L>0$. The sequence $\{x_n\}$ generated by the formula
$$x_{n+1}=P_C(I-\lambda\nabla g_{\alpha_n})x_n=P_C\big[I-\lambda(\nabla g+\alpha_nI)\big]x_n,\quad n\ge 0, \tag{1.9}$$
where the regularization parameters satisfy $0<\alpha_n<1$ and $0<\lambda<\frac{2}{L}$, converges only weakly. But if the sequence $\{x_n\}$ is defined by
$$x_{n+1}=P_C(I-\lambda_n\nabla g_{\alpha_n})x_n=P_C\big[I-\lambda_n(\nabla g+\alpha_nI)\big]x_n,\quad n\ge 0, \tag{1.10}$$
where the initial guess $x_0\in C$ and $\{\alpha_n\}$, $\{\lambda_n\}$ satisfy the following conditions:
(i) $0<\lambda_n\le\dfrac{\alpha_n}{(L+\alpha_n)^2}$ for all $n\ge 0$;
(ii) $\alpha_n\to 0$ (and hence $\lambda_n\to 0$) as $n\to\infty$;
(iii) $\sum_{n=0}^\infty\lambda_n\alpha_n=\infty$;
(iv) $\dfrac{|\lambda_n-\lambda_{n-1}|+|\lambda_n\alpha_n-\lambda_{n-1}\alpha_{n-1}|}{(\lambda_n\alpha_n)^2}\to 0$ as $n\to\infty$;
then the sequence $\{x_n\}$ generated by (1.10) converges strongly to $x^\dagger$, the minimum-norm solution of (1.1) [15].
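Conditions (i)-(iv) admit, for example, the choices $\alpha_n=(n+1)^{-1/4}$ and $\lambda_n=\alpha_n/(L+\alpha_n)^2$. The following is a minimal sketch of scheme (1.10) under these parameter choices, again with an illustrative objective $g(x)=\frac12\|Ax-b\|^2$ and a caller-supplied projection; none of these concrete choices comes from the paper.

```python
import numpy as np

def regularized_gpa(A, b, project, n_iter=10000):
    """Sketch of scheme (1.10):
    x_{n+1} = P_C[x_n - lam_n * (grad g(x_n) + alpha_n * x_n)],
    with alpha_n -> 0 and lam_n = alpha_n/(L + alpha_n)^2 (condition (i))."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for n in range(n_iter):
        alpha = (n + 1.0) ** (-0.25)             # slowly vanishing regularization
        lam = alpha / (L + alpha) ** 2           # step size, condition (i)
        grad = A.T @ (A @ x - b) + alpha * x     # gradient of g + (alpha/2)||x||^2
        x = project(x - lam * grad)
    return x

# Example usage with C a box (illustrative):
# x = regularized_gpa(A, b, lambda v: np.clip(v, -1.0, 1.0))
```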
Secondly, Yu et al. [16] proposed a strong convergence theorem with a regularized-like method to find an element of the set of solutions of a monotone inclusion problem in a Hilbert space.
Theorem 1.1 ([16]) Let $H$ be a real Hilbert space and $C$ a nonempty closed convex subset of $H$. Let $L>0$ and let $F$ be a $\frac{1}{L}$-ism mapping of $C$ into $H$. Let $B$ be a maximal monotone mapping on $H$ and let $G$ be a maximal monotone mapping on $H$ such that the domains of $B$ and $G$ are included in $C$. Let $J_\lambda=(I+\lambda B)^{-1}$ and $T_r=(I+rG)^{-1}$ for each $\lambda>0$ and $r>0$. Suppose that $(F+B)^{-1}(0)\cap G^{-1}(0)\neq\emptyset$. Let $\{x_n\}\subset H$ be defined by
$$x_{n+1}=J_\lambda\big[I-\lambda(F+\alpha_nI)\big]T_rx_n,\quad n>0, \tag{1.11}$$
where $\lambda\in(0,\infty)$, $\alpha_n\in(0,1)$, and $r\in(0,\infty)$. Assume that
(i) $0<a\le\lambda<\dfrac{2}{L+2}$;
(ii) $\lim_{n\to\infty}\alpha_n=0$ and $\sum_{n=1}^\infty\alpha_n=\infty$.
Then the sequence $\{x_n\}$ generated by (1.11) converges strongly to $\bar{x}$, where $\bar{x}=P_{(F+B)^{-1}(0)\cap G^{-1}(0)}(0)$.
From the article of Yu et al. [16], we obtain a new condition on the parameter $\lambda$, namely $0<\lambda<\frac{2}{L+2}$, which is used widely in our article. Motivated and inspired by the work of Yu, Lin, and Chuang [16], we show that, when $0<\lambda<\frac{2}{L+2}$ and $\{\alpha_n\}$ satisfies certain conditions, the sequence $\{x_n\}$ generated by the iterative algorithm
$$x_{n+1}=P_C\big[I-\lambda(\nabla g+\alpha_nI)\big]x_n,\quad n\ge 0, \tag{1.12}$$
converges strongly to a point $q\in U$, where $q=P_U(0)$ is the minimum-norm solution of the constrained convex minimization problem.
Finally, we give a concrete example and numerical results to illustrate that our algorithm converges quickly.
2 Preliminaries
In this part, we introduce some lemmas that will be used in the remaining parts. Let $H$ be a real Hilbert space and $C$ a nonempty closed convex subset of $H$. We use $\to$ to denote strong convergence of the sequence $\{x_n\}$ and $\rightharpoonup$ to denote weak convergence.
Recall that $P_C$ is the metric projection from $H$ onto $C$: for each point $x\in H$, $P_Cx$ is the unique point of $C$ satisfying
$$\|x-P_Cx\|=\inf_{y\in C}\|x-y\|=:d(x,C).$$
$P_C$ has the following properties.

Lemma 2.1 ([17]) For given $x\in H$:
(1) $z=P_Cx$ if and only if $\langle x-z,\ z-y\rangle\ge 0$ for all $y\in C$;
(2) $z=P_Cx$ if and only if $\|x-z\|^2\le\|x-y\|^2-\|y-z\|^2$ for all $y\in C$;
(3) $\langle P_Cx-P_Cy,\ x-y\rangle\ge\|P_Cx-P_Cy\|^2$ for all $x,y\in H$.
From (3), we can derive that $P_C$ is nonexpansive and monotone.
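For concreteness, two standard metric projections have simple closed forms, and property (3) of Lemma 2.1 can be checked numerically; the box and the closed ball below are illustrative sets, not specific to this paper.

```python
import numpy as np

def proj_box(x, lo, hi):
    """Metric projection onto the box C = {y : lo <= y <= hi}."""
    return np.clip(x, lo, hi)

def proj_ball(x, radius=1.0):
    """Metric projection onto the closed ball C = {y : ||y|| <= radius}."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x

# Check Lemma 2.1(3): <P_C x - P_C y, x - y> >= ||P_C x - P_C y||^2.
rng = np.random.default_rng(1)
x, y = rng.standard_normal(5), rng.standard_normal(5)
px, py = proj_ball(x), proj_ball(y)
assert np.dot(px - py, x - y) >= np.dot(px - py, px - py) - 1e-12
```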
Lemma 2.2 (Demiclosedness principle) Let $T: C\to C$ be a nonexpansive mapping with $\mathrm{Fix}(T)\neq\emptyset$. If $\{x_n\}$ is a sequence in $C$ weakly converging to $x$ and if $\{(I-T)x_n\}$ converges strongly to $y$, then $(I-T)x=y$. In particular, if $y=0$, then $x\in\mathrm{Fix}(T)$.
Lemma 2.3 ([5]) Let $\{a_n\}$ be a sequence of nonnegative real numbers such that
$$a_{n+1}\le(1-\gamma_n)a_n+\gamma_n\delta_n,\quad n\ge 0,$$
where $\{\gamma_n\}_{n=0}^\infty$ is a sequence in $(0,1)$ and $\{\delta_n\}_{n=0}^\infty$ is a sequence in $\mathbb{R}$ such that
(i) $\sum_{n=0}^\infty\gamma_n=\infty$;
(ii) $\limsup_{n\to\infty}\delta_n\le 0$ or $\sum_{n=0}^\infty\gamma_n|\delta_n|<\infty$.
Then $\lim_{n\to\infty}a_n=0$.
3 Main results
Let $H$ be a real Hilbert space and $C$ a nonempty closed convex subset of $H$. Assume that $g: C\to\mathbb{R}$ is a real-valued convex function and the gradient $\nabla g$ is $\frac{1}{L}$-ism with $L>0$. Suppose that the minimization problem (1.1) is consistent, and let $U$ denote its solution set. Let $0<\lambda<\frac{2}{L+2}$ and $0<\alpha_n<1$. Consider the mapping $G_n$ on $C$ defined by
$$G_nx=P_C\big[I-\lambda(\nabla g+\alpha_nI)\big]x,\quad x\in C,\ n\in\mathbb{N}.$$
We have
$$\begin{aligned}
\|G_nx-G_ny\|^2&=\big\|P_C\big[I-\lambda(\nabla g+\alpha_nI)\big]x-P_C\big[I-\lambda(\nabla g+\alpha_nI)\big]y\big\|^2\\
&\le\big\|\big[I-\lambda(\nabla g+\alpha_nI)\big]x-\big[I-\lambda(\nabla g+\alpha_nI)\big]y\big\|^2\\
&=\big\|(1-\lambda\alpha_n)(x-y)-\lambda\big(\nabla g(x)-\nabla g(y)\big)\big\|^2\\
&=(1-\lambda\alpha_n)^2\|x-y\|^2-2\lambda(1-\lambda\alpha_n)\big\langle x-y,\ \nabla g(x)-\nabla g(y)\big\rangle+\lambda^2\big\|\nabla g(x)-\nabla g(y)\big\|^2\\
&\le(1-\lambda\alpha_n)^2\|x-y\|^2-\lambda\Big(\frac{2(1-\lambda\alpha_n)}{L}-\lambda\Big)\big\|\nabla g(x)-\nabla g(y)\big\|^2\\
&\le(1-\lambda\alpha_n)^2\|x-y\|^2,
\end{aligned}$$
where the second inequality uses the $\frac{1}{L}$-inverse strong monotonicity of $\nabla g$, and the last inequality holds because $0<\lambda<\frac{2}{L+2}$ and $0<\alpha_n<1$ imply $\frac{2(1-\lambda\alpha_n)}{L}-\lambda>0$. That is,
$$\|G_nx-G_ny\|\le(1-\lambda\alpha_n)\|x-y\|.$$
Since $0<\lambda\alpha_n<1$, it follows that $G_n$ is a contraction. Therefore, by the Banach contraction principle, $G_n$ has a unique fixed point $x_n$, which satisfies
$$x_n=P_C\big[I-\lambda(\nabla g+\alpha_nI)\big]x_n.$$
Next, we prove that the sequence $\{x_n\}$ converges strongly to a point $q\in U$, which also solves the variational inequality
$$\langle q,\ p-q\rangle\ge 0,\quad \forall p\in U. \tag{3.1}$$
Equivalently, $q=P_U(0)$; that is, $q$ is the minimum-norm solution of the constrained convex minimization problem.
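Because $G_n$ is a $(1-\lambda\alpha_n)$-contraction, its unique fixed point can be computed by Picard iteration. Below is a minimal sketch, again taking $g(x)=\frac12\|Ax-b\|^2$ and a caller-supplied projection as illustrative assumptions not drawn from the paper.

```python
import numpy as np

def implicit_iterate(A, b, project, lam, alpha, tol=1e-10, max_iter=100000):
    """Unique fixed point of G_n x = P_C[x - lam*(grad g(x) + alpha*x)],
    computed by Picard iteration; this converges because G_n is a
    (1 - lam*alpha)-contraction."""
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        x_new = project(x - lam * (A.T @ (A @ x - b) + alpha * x))
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new
```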
Theorem 3.1 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $g: C\to\mathbb{R}$ be a real-valued convex function and assume that the gradient $\nabla g$ is $\frac{1}{L}$-ism with $L>0$. Assume that $U\neq\emptyset$. Let $\{x_n\}$ be the sequence generated by
$$x_n=P_C\big[I-\lambda(\nabla g+\alpha_nI)\big]x_n,\quad n\in\mathbb{N},$$
where $\lambda$ and $\{\alpha_n\}$ satisfy the following conditions:
(i) $0<\lambda<\dfrac{2}{L+2}$;
(ii) $\{\alpha_n\}\subset(0,1)$, $\lim_{n\to\infty}\alpha_n=0$, $\sum_{n=0}^\infty\alpha_n=\infty$.
Then $\{x_n\}$ converges strongly to a point $q\in U$, where $q=P_U(0)$ is the minimum-norm solution of the minimization problem (1.1) and also solves the variational inequality (3.1).
Proof First, we claim that $\{x_n\}$ is bounded. Indeed, pick any $p\in U$; then we have
$$\begin{aligned}
\|x_n-p\|&=\big\|P_C\big[I-\lambda(\nabla g+\alpha_nI)\big]x_n-P_C(I-\lambda\nabla g)p\big\|\\
&\le\big\|\big[I-\lambda(\nabla g+\alpha_nI)\big]x_n-\big[I-\lambda(\nabla g+\alpha_nI)\big]p\big\|+\big\|\big[I-\lambda(\nabla g+\alpha_nI)\big]p-(I-\lambda\nabla g)p\big\|\\
&\le(1-\lambda\alpha_n)\|x_n-p\|+\lambda\alpha_n\|p\|.
\end{aligned}$$
Then we derive that
$$\|x_n-p\|\le\|p\|,$$
and hence $\{x_n\}$ is bounded.
Next, we claim that $x_n-P_C(I-\lambda\nabla g)x_n\to 0$. Indeed,
$$\begin{aligned}
\big\|x_n-P_C(I-\lambda\nabla g)x_n\big\|&=\big\|P_C\big[I-\lambda(\nabla g+\alpha_nI)\big]x_n-P_C(I-\lambda\nabla g)x_n\big\|\\
&\le\big\|\big[I-\lambda(\nabla g+\alpha_nI)\big]x_n-(I-\lambda\nabla g)x_n\big\|\\
&=\lambda\alpha_n\|x_n\|.
\end{aligned}$$
Since $\{x_n\}$ is bounded and $\alpha_n\to 0$ as $n\to\infty$, we obtain
$$\big\|x_n-P_C(I-\lambda\nabla g)x_n\big\|\to 0.$$
Since $\nabla g$ is $\frac{1}{L}$-ism and $0<\lambda<\frac{2}{L+2}<\frac{2}{L}$, the mapping $P_C(I-\lambda\nabla g)$ is a nonexpansive self-mapping on $C$. As a matter of fact, for each $x,y\in C$ we have
$$\begin{aligned}
\big\|P_C(I-\lambda\nabla g)x-P_C(I-\lambda\nabla g)y\big\|^2&\le\big\|(I-\lambda\nabla g)x-(I-\lambda\nabla g)y\big\|^2\\
&=\big\|x-y-\lambda\big(\nabla g(x)-\nabla g(y)\big)\big\|^2\\
&=\|x-y\|^2-2\lambda\big\langle x-y,\ \nabla g(x)-\nabla g(y)\big\rangle+\lambda^2\big\|\nabla g(x)-\nabla g(y)\big\|^2\\
&\le\|x-y\|^2-\lambda\Big(\frac{2}{L}-\lambda\Big)\big\|\nabla g(x)-\nabla g(y)\big\|^2\\
&\le\|x-y\|^2.
\end{aligned}$$
Since $\{x_n\}$ is bounded, any subsequence $\{x_{n_i}\}$ of $\{x_n\}$ is bounded, and hence there exists a subsequence $\{x_{n_{i_j}}\}$ of $\{x_{n_i}\}$ which converges weakly to some $z$. Without loss of generality, we may assume that $x_{n_i}\rightharpoonup z$. Since $\|x_n-P_C(I-\lambda\nabla g)x_n\|\to 0$ and $P_C(I-\lambda\nabla g)$ is nonexpansive with $\mathrm{Fix}(P_C(I-\lambda\nabla g))=U$, Lemma 2.2 gives $z\in U$.
On the other hand,
$$\begin{aligned}
\|x_n-z\|^2&=\big\langle P_C\big[I-\lambda(\nabla g+\alpha_nI)\big]x_n-P_C(I-\lambda\nabla g)z,\ x_n-z\big\rangle\\
&\le\big\langle\big[I-\lambda(\nabla g+\alpha_nI)\big]x_n-(I-\lambda\nabla g)z,\ x_n-z\big\rangle\\
&=\big\langle\big[I-\lambda(\nabla g+\alpha_nI)\big]x_n-\big[I-\lambda(\nabla g+\alpha_nI)\big]z,\ x_n-z\big\rangle+\lambda\alpha_n\langle-z,\ x_n-z\rangle\\
&\le(1-\lambda\alpha_n)\|x_n-z\|^2+\lambda\alpha_n\langle-z,\ x_n-z\rangle.
\end{aligned}$$
Thus
$$\|x_n-z\|^2\le\langle-z,\ x_n-z\rangle.$$
In particular,
$$\|x_{n_i}-z\|^2\le\langle-z,\ x_{n_i}-z\rangle.$$
Since $x_{n_i}\rightharpoonup z$, the right-hand side tends to zero, and we derive that $x_{n_i}\to z$ as $i\to\infty$.
Let $q$ be the minimum-norm element of $U$, that is, $q=P_U(0)$. Since $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that $x_{n_i}\rightharpoonup z$; by the above proof, we know that $x_{n_i}\to z$ with $z\in U$. Then we derive that
$$\begin{aligned}
\|x_n-q\|^2&=\big\langle P_C\big[I-\lambda(\nabla g+\alpha_nI)\big]x_n-P_C(I-\lambda\nabla g)q,\ x_n-q\big\rangle\\
&\le\big\langle\big[I-\lambda(\nabla g+\alpha_nI)\big]x_n-(I-\lambda\nabla g)q,\ x_n-q\big\rangle\\
&=\big\langle\big[I-\lambda(\nabla g+\alpha_nI)\big]x_n-\big[I-\lambda(\nabla g+\alpha_nI)\big]q,\ x_n-q\big\rangle+\lambda\alpha_n\langle-q,\ x_n-q\rangle\\
&\le(1-\lambda\alpha_n)\|x_n-q\|^2+\lambda\alpha_n\langle-q,\ x_n-q\rangle.
\end{aligned}$$
Thus
$$\|x_n-q\|^2\le\langle-q,\ x_n-q\rangle.$$
In particular,
$$\|x_{n_i}-q\|^2\le\langle-q,\ x_{n_i}-q\rangle.$$
Since $x_{n_i}\to z$ with $z\in U$, letting $i\to\infty$ gives
$$\|z-q\|^2\le\langle-q,\ z-q\rangle.$$
Since $q=P_U(0)$ and $z\in U$, Lemma 2.1(1) yields $\langle-q,\ z-q\rangle=\langle 0-q,\ z-q\rangle\le 0$, so we have $z=q$. From the arbitrariness of $z$, it follows that $q\in U$ is a solution of the variational inequality (3.1). By the uniqueness of the solution of (3.1), we conclude that $x_n\to q$ as $n\to\infty$, where $q=P_U(0)$.
Theorem 3.2 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, let $g: C\to\mathbb{R}$ be a real-valued convex function, and assume that the gradient $\nabla g$ is $\frac{1}{L}$-ism with $L>0$. Assume that $U\neq\emptyset$. Let $\{x_n\}$ be the sequence generated by $x_0\in C$ and
$$x_{n+1}=P_C\big[I-\lambda(\nabla g+\alpha_nI)\big]x_n,\quad n\in\mathbb{N},$$
where $\lambda$ and $\{\alpha_n\}$ satisfy the following conditions:
(i) $0<\lambda<\dfrac{2}{L+2}$;
(ii) $\{\alpha_n\}\subset(0,1)$, $\lim_{n\to\infty}\alpha_n=0$, $\sum_{n=0}^\infty\alpha_n=\infty$, $\sum_{n=0}^\infty|\alpha_{n+1}-\alpha_n|<\infty$.
Then $\{x_n\}$ converges strongly to a point $q\in U$, where $q=P_U(0)$ is the minimum-norm solution of the minimization problem (1.1) and also solves the variational inequality (3.1).
Proof First, we claim that $\{x_n\}$ is bounded. Indeed, pick any $p\in U$; then, for any $n\in\mathbb{N}$,
$$\begin{aligned}
\|x_{n+1}-p\|&\le\big\|P_C\big[I-\lambda(\nabla g+\alpha_nI)\big]x_n-P_C\big[I-\lambda(\nabla g+\alpha_nI)\big]p\big\|\\
&\quad+\big\|P_C\big[I-\lambda(\nabla g+\alpha_nI)\big]p-P_C(I-\lambda\nabla g)p\big\|\\
&\le(1-\lambda\alpha_n)\|x_n-p\|+\lambda\alpha_n\|p\|\\
&\le\max\big\{\|x_n-p\|,\ \|p\|\big\}.
\end{aligned}$$
By induction,
$$\|x_n-p\|\le\max\big\{\|x_0-p\|,\ \|p\|\big\},$$
and hence $\{x_n\}$ is bounded.
Next, we show that $\|x_{n+1}-x_n\|\to 0$. We have
$$\begin{aligned}
\|x_{n+1}-x_n\|&=\big\|P_C\big[I-\lambda(\nabla g+\alpha_nI)\big]x_n-P_C\big[I-\lambda(\nabla g+\alpha_{n-1}I)\big]x_{n-1}\big\|\\
&\le\big\|\big[I-\lambda(\nabla g+\alpha_nI)\big]x_n-\big[I-\lambda(\nabla g+\alpha_{n-1}I)\big]x_{n-1}\big\|\\
&=\big\|\big[I-\lambda(\nabla g+\alpha_nI)\big]x_n-\big[I-\lambda(\nabla g+\alpha_nI)\big]x_{n-1}-\lambda\alpha_nx_{n-1}+\lambda\alpha_{n-1}x_{n-1}\big\|\\
&\le(1-\lambda\alpha_n)\|x_n-x_{n-1}\|+\lambda|\alpha_n-\alpha_{n-1}|\,\|x_{n-1}\|\\
&\le(1-\lambda\alpha_n)\|x_n-x_{n-1}\|+\lambda|\alpha_n-\alpha_{n-1}|\,M,
\end{aligned}$$
where $M=\sup\{\|x_n\|:n\in\mathbb{N}\}$. Hence, by Lemma 2.3, we have
$$\|x_{n+1}-x_n\|\to 0.$$
Then we claim that $x_n-P_C(I-\lambda\nabla g)x_n\to 0$. Indeed,
$$\begin{aligned}
\big\|x_n-P_C(I-\lambda\nabla g)x_n\big\|&\le\|x_n-x_{n+1}\|+\big\|x_{n+1}-P_C(I-\lambda\nabla g)x_n\big\|\\
&=\|x_n-x_{n+1}\|+\big\|P_C\big[I-\lambda(\nabla g+\alpha_nI)\big]x_n-P_C(I-\lambda\nabla g)x_n\big\|\\
&\le\|x_n-x_{n+1}\|+\lambda\alpha_n\|x_n\|\\
&\le\|x_n-x_{n+1}\|+\lambda\alpha_nM.
\end{aligned}$$
Since $\alpha_n\to 0$ and $\|x_{n+1}-x_n\|\to 0$, we have
$$\big\|x_n-P_C(I-\lambda\nabla g)x_n\big\|\to 0.$$
Next, we show that
$$\limsup_{n\to\infty}\langle-q,\ x_n-q\rangle\le 0, \tag{3.2}$$
where $q$ is the minimum-norm element of $U$, that is, $q=P_U(0)$. Since $\{x_n\}$ is bounded, without loss of generality we may assume that there is a subsequence $x_{n_j}\rightharpoonup z$ with
$$\limsup_{n\to\infty}\langle-q,\ x_n-q\rangle=\lim_{j\to\infty}\langle-q,\ x_{n_j}-q\rangle.$$
By the same argument as in the proof of Theorem 3.1, we have $z\in U$, and hence, by Lemma 2.1(1),
$$\limsup_{n\to\infty}\langle-q,\ x_n-q\rangle=\lim_{j\to\infty}\langle-q,\ x_{n_j}-q\rangle=\langle-q,\ z-q\rangle\le 0.$$
Then
$$\begin{aligned}
\|x_{n+1}-q\|^2&=\big\langle P_C\big[I-\lambda(\nabla g+\alpha_nI)\big]x_n-P_C(I-\lambda\nabla g)q,\ x_{n+1}-q\big\rangle\\
&=\big\langle P_C\big[I-\lambda(\nabla g+\alpha_nI)\big]x_n-P_C\big[I-\lambda(\nabla g+\alpha_nI)\big]q,\ x_{n+1}-q\big\rangle\\
&\quad+\big\langle P_C\big[I-\lambda(\nabla g+\alpha_nI)\big]q-P_C(I-\lambda\nabla g)q,\ x_{n+1}-q\big\rangle\\
&\le(1-\lambda\alpha_n)\|x_n-q\|\,\|x_{n+1}-q\|+\lambda\alpha_n\langle-q,\ x_{n+1}-q\rangle\\
&\le\frac{1-\lambda\alpha_n}{2}\big(\|x_n-q\|^2+\|x_{n+1}-q\|^2\big)+\lambda\alpha_n\langle-q,\ x_{n+1}-q\rangle.
\end{aligned}$$
It follows that
$$\|x_{n+1}-q\|^2\le(1-\lambda\alpha_n)\|x_n-q\|^2+2\lambda\alpha_n\langle-q,\ x_{n+1}-q\rangle=(1-\lambda\alpha_n)\|x_n-q\|^2+\lambda\alpha_n\delta_n,$$
where $\delta_n=2\langle-q,\ x_{n+1}-q\rangle$.
It is easy to see that $\lim_{n\to\infty}\lambda\alpha_n=0$, $\sum_{n=0}^\infty\lambda\alpha_n=\infty$, and, by (3.2) together with $\|x_{n+1}-x_n\|\to 0$, $\limsup_{n\to\infty}\delta_n\le 0$. Hence, by Lemma 2.3, the sequence $\{x_n\}$ converges strongly to $q$, where $q=P_U(0)$. This completes the proof.
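The following is a minimal sketch of the explicit scheme of Theorem 3.2, with the admissible choices $\lambda=1.9/(L+2)$ and $\alpha_n=\frac{1}{n+2}$, under which conditions (i) and (ii) hold; the objective gradient and the projection are supplied by the caller, and the concrete parameter values are illustrative assumptions rather than prescriptions from the theorem.

```python
import numpy as np

def minimum_norm_gpa(grad_g, L, project, dim, n_iter=100000):
    """Explicit scheme of Theorem 3.2:
        x_{n+1} = P_C[x_n - lam*(grad_g(x_n) + alpha_n*x_n)],
    converging strongly to the minimum-norm minimizer q = P_U(0)."""
    lam = 1.9 / (L + 2.0)            # condition (i): 0 < lam < 2/(L+2)
    x = np.zeros(dim)
    for n in range(n_iter):
        alpha = 1.0 / (n + 2.0)      # condition (ii): alpha_n in (0,1), alpha_n -> 0,
                                     # sum alpha_n = inf, sum |alpha_{n+1}-alpha_n| < inf
        x = project(x - lam * (grad_g(x) + alpha * x))
    return x
```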
4 Application
In this part, we illustrate the practical value of our algorithm for the split feasibility problem (SFP). In 1994, Censor and Elfving [20] introduced the split feasibility problem, which is formulated as finding a point $x$ with the property
$$x\in C\quad\text{and}\quad Ax\in Q, \tag{4.1}$$
where $C$ and $Q$ are nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively, and $A: H_1\to H_2$ is a bounded linear operator.
Next, we consider the constrained convex minimization problem
$$\min_{x\in C}\ g(x)=\frac12\big\|Ax-P_QAx\big\|^2. \tag{4.2}$$
If $x$ is a solution of the SFP, then $Ax\in Q$ and $Ax-P_QAx=0$, so $x$ is a solution of the minimization problem (4.2). The gradient of $g$ is
$$\nabla g=A^*(I-P_Q)A.$$
Applying Theorem 3.2, we obtain the following theorem.

Theorem 4.1 Assume that the SFP (4.1) is consistent and $W\neq\emptyset$, where $W$ denotes the solution set of the SFP (4.1). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H_1$, and let $A: H_1\to H_2$ be a bounded linear operator. Let $\{x_n\}$ be the sequence generated by $x_0\in C$ and
$$x_{n+1}=P_C\big[I-\lambda\big(A^*(I-P_Q)A+\alpha_nI\big)\big]x_n,\quad n\in\mathbb{N},$$
where $\lambda$ and $\{\alpha_n\}$ satisfy the following conditions:
(i) $0<\lambda<\dfrac{2}{\|A\|^2+2}$;
(ii) $\{\alpha_n\}\subset(0,1)$, $\lim_{n\to\infty}\alpha_n=0$, $\sum_{n=0}^\infty\alpha_n=\infty$, $\sum_{n=0}^\infty|\alpha_{n+1}-\alpha_n|<\infty$.
Then $\{x_n\}$ converges strongly to a point $q\in W$, where $q=P_W(0)$.
Proof We only need to show that $\nabla g$ is $\frac{1}{\|A\|^2}$-ism; Theorem 4.1 then follows from Theorem 3.2. Recall that
$$\nabla g=A^*(I-P_Q)A.$$
Since $P_Q$ is firmly nonexpansive, $P_Q$ is a $\frac12$-averaged mapping, and hence $I-P_Q$ is $1$-ism. Therefore, for any $x,y\in C$, we derive
$$\begin{aligned}
\big\langle\nabla g(x)-\nabla g(y),\ x-y\big\rangle&=\big\langle A^*(I-P_Q)Ax-A^*(I-P_Q)Ay,\ x-y\big\rangle\\
&=\big\langle(I-P_Q)Ax-(I-P_Q)Ay,\ Ax-Ay\big\rangle\\
&\ge\big\|(I-P_Q)Ax-(I-P_Q)Ay\big\|^2\\
&\ge\frac{1}{\|A\|^2}\big\|A^*(I-P_Q)Ax-A^*(I-P_Q)Ay\big\|^2\\
&=\frac{1}{\|A\|^2}\big\|\nabla g(x)-\nabla g(y)\big\|^2.
\end{aligned}$$
So $\nabla g$ is $\frac{1}{\|A\|^2}$-ism. This completes the proof.
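The chain of inequalities above can be sanity-checked numerically. The sketch below takes a random $A$ and lets $Q$ be the closed unit ball (an illustrative choice, not from the paper), then verifies $\langle\nabla g(x)-\nabla g(y),\ x-y\rangle\ge\frac{1}{\|A\|^2}\|\nabla g(x)-\nabla g(y)\|^2$ on random pairs.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3))
L = np.linalg.norm(A, 2) ** 2                # ||A||^2

def proj_Q(z, radius=1.0):
    """Projection onto Q = closed ball of given radius (illustrative)."""
    n = np.linalg.norm(z)
    return z if n <= radius else (radius / n) * z

def grad_g(x):
    """grad g = A^T (I - P_Q) A x, the gradient of g in (4.2)."""
    Ax = A @ x
    return A.T @ (Ax - proj_Q(Ax))

for _ in range(100):                         # check 1/||A||^2-ism on random pairs
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    d = grad_g(x) - grad_g(y)
    assert np.dot(d, x - y) >= np.dot(d, d) / L - 1e-10
```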
5 Numerical result
In this part, we use the algorithm in Theorem 4.1 to solve a system of linear equations and report the numerical results.

Example 1 Let $H_1=H_2=\mathbb{R}^4$. Take a matrix $A\in\mathbb{R}^{4\times4}$ and a vector $b\in\mathbb{R}^4$ as in (5.1)-(5.2) (the concrete entries are not reproduced here). Then the SFP can be formulated as the problem of finding a point $x$ with the property
$$x\in C\quad\text{and}\quad Ax\in Q,$$
where $C=\mathbb{R}^4$ and $Q=\{b\}$. That is, $x$ is a solution of the system of linear equations $Ax=b$, with exact solution
$$x^*=(1,3,2,4)^T. \tag{5.3}$$
Table 1 Numerical results as regards Example 1

n        x1n      x2n      x3n      x4n      En
0        1.0000   1.0000   1.0000   1.0000   5.74E+00
100      1.2292   2.8506   1.8424   4.0887   3.28E-01
1,000    1.2208   2.9107   1.8691   4.0722   2.81E-01
5,000    1.1128   2.9543   1.9331   4.0369   1.42E-01
10,000   1.0298   2.9880   1.9824   4.0097   3.79E-02
Table 2 Numerical results as regards Example 1

n        x1n      x2n      x3n      x4n      En
0        1.0000   1.0000   1.0000   1.0000   3.74E+00
100      0.6070   2.0706   1.7816   3.9672   1.03E+00
1,000    1.0094   2.8884   1.9496   4.0123   1.23E-01
5,000    1.0353   2.9643   1.9702   4.0133   5.99E-02
10,000   1.0307   2.9769   1.9774   4.0109   4.59E-02
Take $P_C=I$, where $I$ denotes the identity matrix. Given the parameters $\alpha_n=\frac{1}{n+1}$ for $n\ge 0$ and $\lambda$ with $0<\lambda<\frac{2}{\|A\|^2+2}$, by Theorem 4.1 the sequence $\{x_n\}$ is generated by
$$x_{n+1}=x_n-\lambda\Big(A^TAx_n+\frac{1}{n+1}x_n-A^Tb\Big),\quad n\ge 0.$$
As $n\to\infty$, we have $x_n\to x^*=(1,3,2,4)^T$. From Table 1 we can easily see that, as the iteration number increases, $x_n$ approaches the exact solution $x^*$ and the errors $E_n$ gradually approach zero.
In Tian and Jiao [21], another iterative algorithm is used to calculate the same example. Comparing Table 1 with Table 2, we find that, when the parameters $\alpha_n$ are the same and $0<\lambda<\frac{2}{L+2}$, our algorithm converges quickly.
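Since the concrete entries of $A$ and $b$ are omitted above, the sketch below substitutes a hypothetical well-conditioned $4\times4$ system whose exact solution is $x^*=(1,3,2,4)^T$, merely to illustrate the iteration and the error $E_n=\|x_n-x^*\|$; the matrix, the random seed, and the step size are illustrative assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)   # hypothetical matrix
x_star = np.array([1.0, 3.0, 2.0, 4.0])             # target exact solution
b = A @ x_star

lam = 1.9 / (np.linalg.norm(A, 2) ** 2 + 2.0)       # 0 < lam < 2/(||A||^2 + 2)
x = np.ones(4)                                       # x_0 = (1,1,1,1)^T as in Table 1
for n in range(10001):
    alpha = 1.0 / (n + 2.0)                          # alpha_n in (0,1), shifted index
    x = x - lam * (A.T @ (A @ x) + alpha * x - A.T @ b)   # P_C = I
    if n in (0, 100, 1000, 5000, 10000):
        print(n, x.round(4), round(float(np.linalg.norm(x - x_star)), 6))
```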
6 Conclusion
In a real Hilbert space, there are many methods for solving the constrained convex minimization problem; however, most of them cannot find the minimum-norm solution. In this article, we use the regularized gradient-projection algorithm to find the minimum-norm solution of the constrained convex minimization problem, where $0<\lambda<\frac{2}{L+2}$. Under some suitable conditions, new strong convergence theorems are obtained. Finally, we apply this algorithm to the split feasibility problem and use a concrete example and numerical results to illustrate that our algorithm converges quickly.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
All the authors read and approved the final manuscript.
Acknowledgements
The authors thank the referees for their helpful comments, which notably improved the presentation of this paper. This work was supported by the Foundation of Tianjin Key Laboratory for Advanced Signal Processing. The first author was supported by the Foundation of Tianjin Key Laboratory for Advanced Signal Processing. Hui-Fang Zhang was supported in part by the Technology Innovation Funds of Civil Aviation University of China for Graduate in 2017.
Received: 28 November 2016 Accepted: 24 December 2016
References
1. Ceng, LC, Ansari, QH, Yao, JC: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 74, 5286-5302 (2011)
2. Ceng, LC, Ansari, QH, Yao, JC: Extragradient-projection method for solving constrained convex minimization problems. Numer. Algebra Control Optim. 1(3), 341-359 (2011)
3. Ceng, LC, Ansari, QH, Wen, CF: Multi-step implicit iterative methods with regularization for minimization problems and fixed point problems. J. Inequal. Appl. 2013, 240 (2013)
4. Deutsch, F, Yamada, I: Minimizing certain convex functions over the intersection of the fixed point sets of the nonexpansive mappings. Numer. Funct. Anal. Optim. 19, 33-56 (1998)
5. Xu, HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 240-256 (2002)
6. Xu, HK: An iterative approach to quadratic optimization. J. Optim. Theory Appl. 116, 659-678 (2003)
7. Yamada, I, Ogura, N, Yamashita, Y, Sakaniwa, K: Quadratic approximation of fixed points of nonexpansive mappings in Hilbert spaces. Numer. Funct. Anal. Optim. 19, 165-190 (1998)
8. Moudafi, A: Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 241, 46-55 (2000)
9. Yamada, I: The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In: Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications, Haifa (2001)
10. Marino, G, Xu, HK: A general iterative method for nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 318, 43-52 (2006)
11. Tian, M: A general iterative algorithm for nonexpansive mappings in Hilbert spaces. Nonlinear Anal. 73, 689-694 (2010)
12. Tian, M: A general iterative method based on the hybrid steepest descent scheme for nonexpansive mappings in Hilbert spaces. In: International Conference on Computational Intelligence and Software Engineering, CiSE 2010, art. 5677064. IEEE, Piscataway, NJ (2010)
13. Tian, M, Liu, L: General iterative methods for equilibrium and constrained convex minimization problem. Optimization 63, 1367-1385 (2014)
14. Tian, M, Liu, L: Iterative algorithms based on the viscosity approximation method for equilibrium and constrained convex minimization problem. Fixed Point Theory Appl. 2012, 201 (2012)
15. Xu, HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 150, 360-378 (2011)
16. Yu, ZT, Lin, LJ, Chuang, CS: A unified study of the split feasible problems with applications. J. Nonlinear Convex Anal. 15(3), 605-622 (2014)
17. Takahashi, W: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama (2000)
18. Hundal, H: An alternating projection that does not converge in norm. Nonlinear Anal. 57, 35-61 (2004)
19. Xu, HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 298, 279-291 (2004)
20. Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221-239 (1994)
21. Tian, M, Jiao, SW: Regularized gradient-projection methods for the constrained convex minimization problem and the zero points of maximal monotone operator. Fixed Point Theory Appl. 2015, 11 (2015)
Abstract
Let $H$ be a real Hilbert space and $C$ a nonempty closed convex subset of $H$. Assume that $g$ is a real-valued convex function and the gradient $\nabla g$ is $\frac{1}{L}$-ism with $L>0$. Let $0<\lambda<\frac{2}{L+2}$ and $\{\alpha_n\}\subset(0,1)$. We prove that the sequence $\{x_n\}$ generated by the iterative algorithm $x_{n+1}=P_C[I-\lambda(\nabla g+\alpha_nI)]x_n$, $n\in\mathbb{N}$, converges strongly to $q\in U$, where $q=P_U(0)$ is the minimum-norm solution of the constrained convex minimization problem, which also solves the variational inequality $\langle q,\ p-q\rangle\ge 0$, $\forall p\in U$. Under suitable conditions, we obtain some strong convergence theorems. As an application, we apply our algorithm to solving the split feasibility problem in Hilbert spaces.