Gaohang Yu,1 Shanzhou Niu,2 Jianhua Ma,2 and Yisheng Song3
Recommended by Guoyin Li
1, School of Mathematics and Computer Sciences, Gannan Normal University, Ganzhou 341000, China
2, School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
3, Department of Applied Mathematics, The Hong Kong Polytechnic University, Hong Kong
Received 21 February 2013; Accepted 10 April 2013
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
Consider the problem of finding solutions of the following system of nonlinear monotone equations: g(x) = 0, (1) where g : ℝ^n → ℝ^n is continuous and monotone, that is, ⟨g(x) - g(y), x - y⟩ ≥ 0 for all x, y ∈ ℝ^n.
Nonlinear monotone equations arise in many practical applications, such as ballistic trajectory computation [1] and vibration systems [2], the first-order necessary condition of unconstrained convex optimization, and the subproblems in generalized proximal algorithms with Bregman distances [3]. Moreover, some monotone variational inequalities can be converted into systems of nonlinear monotone equations by means of fixed point maps or normal maps [4] if the underlying function satisfies certain coercive conditions. Solodov and Svaiter [5] proposed a projection method for solving (1). A nice property of the projection method is that the whole sequence of iterates is globally convergent to a solution of the system without any additional regularity assumptions. Moreover, Zhang and Zhou [6] presented a spectral gradient projection (SG) method for systems of monotone equations which combines a modified spectral gradient method with the projection method; this method is globally convergent provided the underlying function is Lipschitz continuous. Xiao et al. [7] proposed a spectral gradient method to minimize a nonsmooth problem, arising from sparse solution recovery in compressed sensing, consisting of a least-squares data-fitting term and an ℓ1-norm regularization term. This problem is first formulated as a convex quadratic program (QP) and then reformulated as an equivalent nonlinear monotone equation. Furthermore, Yin et al. [8] developed a nonlinear conjugate gradient method for ℓ1-norm regularization problems in compressed sensing. Yu [9, 10] extended the spectral gradient method and conjugate gradient-type methods to solve large-scale nonlinear systems of equations, respectively. Recently, the authors in [11] proposed a multivariate spectral gradient projection method for solving nonlinear monotone equations with convex constraints; the numerical results there show that the multivariate spectral gradient (MSG) method performs very well.
Following this line, and based on the multivariate spectral gradient (MSG) method, we present an adaptive prediction-correction method for solving the nonlinear monotone equations (1) in the next section. Its global convergence is established independently of any merit function and of Lipschitz continuity. Section 3 presents numerical experiments that test its practical performance on compressed sensing and image deconvolution problems. Section 4 concludes the paper.
2. Adaptive Prediction-Correction Method
Consider the projection method [5] for solving the nonlinear monotone equations (1). Suppose that we have obtained a direction d_k. By performing some kind of line search procedure along d_k, a point z_k = x_k + α_k d_k can be computed such that ⟨g(z_k), x_k - z_k⟩ > 0. (2) By the monotonicity of g, for any x̄ such that g(x̄) = 0 we have ⟨g(z_k) - g(x̄), z_k - x̄⟩ ≥ 0 and hence ⟨g(z_k), x̄ - z_k⟩ ≤ 0. (3) Thus, the hyperplane H_k = {x ∈ ℝ^n : ⟨g(z_k), x - z_k⟩ = 0} (4) strictly separates the current iterate x_k from the solutions of the system of monotone equations. Once we have the separating hyperplane, the next iterate x_{k+1} is computed by projecting x_k onto it.
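In particular, the projection of x_k onto H_k has the standard closed form x_{k+1} = P_{H_k}(x_k) = x_k - (⟨g(z_k), x_k - z_k⟩ / ||g(z_k)||^2) g(z_k), which is exactly the correction formula used in Step 4 of Algorithm 1 below.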
Recall the multivariate spectral gradient (MSG) method [12] for the minimization problem min{f(x) : x ∈ ℝ^n}. Its iterative formula is x_{k+1} = x_k - diag{1/λ_k^1, 1/λ_k^2, ..., 1/λ_k^n} g_k, where g_k is the gradient of f at x_k and diag{λ_k^1, λ_k^2, ..., λ_k^n} is obtained by minimizing ||diag{λ^1, λ^2, ..., λ^n} s_{k-1} - y_{k-1}||^2 (5) with respect to {λ^i}_{i=1}^n, where s_{k-1} = x_k - x_{k-1} and y_{k-1} = g_k - g_{k-1}. In particular, when f(x) has a positive definite diagonal Hessian matrix, the multivariate spectral gradient method converges quadratically [12].
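Note that the objective in (5) is separable across components: minimizing the sum Σ_{i=1}^n (λ^i s_{k-1}^i - y_{k-1}^i)^2 term by term gives λ^i = y_{k-1}^i / s_{k-1}^i whenever s_{k-1}^i ≠ 0, which is exactly the quotient used in Step 2(b) of the algorithm below.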
Denote the i-th components of s_k and y_k by s_k^i and y_k^i, respectively. Combining the multivariate spectral gradient method with the projection scheme, we can present an adaptive prediction-correction method for solving the monotone equations (1) as follows.
Algorithm 1 (multivariate spectral gradient (MSG) method).
Given x_0 ∈ ℝ^n, β ∈ (0, 1), σ ∈ (0, 1), 0 < ε < 1, r ≥ 0, δ > 0. Set k = 0.
Step 1.
If || g k || = 0 , stop.
Step 2.
(a) If k = 0, set d_k = -g(x_k).
(b) Else, for i = 1, 2, ..., n: if y_{k-1}^i / s_{k-1}^i > 0, set λ_k^i = y_{k-1}^i / s_{k-1}^i; otherwise set λ_k^i = (s_{k-1}^T y_{k-1}) / (s_{k-1}^T s_{k-1}), where s_{k-1} = x_k - x_{k-1} and y_{k-1} = g(x_k) - g(x_{k-1}) + r s_{k-1}.
(c) If λ_k^i ≤ ε or λ_k^i ≥ 1/ε, reset λ_k^i = δ, for i = 1, 2, ..., n.
Set d_k = -diag{1/λ_k^1, 1/λ_k^2, ..., 1/λ_k^n} g_k.
Step 3 (prediction step).
Compute the step length α_k and set z_k = x_k + α_k d_k, where α_k = β^{m_k} with m_k being the smallest nonnegative integer m such that [figure omitted; refer to PDF]
Step 4 (correction step).
Compute x_{k+1} = x_k - (⟨g(z_k), x_k - z_k⟩ / ||g(z_k)||^2) g(z_k). (7)
Step 5.
Set k = k + 1 and go to Step 1.
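To make the algorithm concrete, the following is a minimal Python/NumPy sketch of Algorithm 1 (the paper's own experiments were run in MATLAB; this translation is ours). Since the exact line search condition (6) is not reproduced in the extracted text, the sketch assumes a Solodov-Svaiter-type test, -⟨g(z_k), d_k⟩ ≥ σ α_k ||d_k||^2; the correction step implements formula (7).

import numpy as np

def msg_projection(g, x0, beta=0.5, sigma=0.01, eps=1e-10, r=0.01,
                   delta=1.0, tol=1e-6, max_iter=10000):
    """Adaptive prediction-correction (MSG) sketch for solving g(x) = 0."""
    x = np.asarray(x0, dtype=float)
    gx = g(x)
    x_prev = g_prev = None
    for k in range(max_iter):
        if np.linalg.norm(gx) <= tol:                      # Step 1
            return x
        if k == 0:                                         # Step 2(a)
            d = -gx
        else:                                              # Step 2(b)
            s = x - x_prev
            y = gx - g_prev + r * s
            den = s @ s
            fallback = (s @ y) / den if den > 0 else delta
            lam = np.divide(y, s, out=np.zeros_like(y), where=(s != 0))
            lam = np.where(lam > 0, lam, fallback)
            lam = np.where((lam <= eps) | (lam >= 1 / eps), delta, lam)  # Step 2(c)
            d = -gx / lam
        alpha = 1.0                                        # Step 3 (prediction)
        while True:
            z = x + alpha * d
            gz = g(z)
            if np.linalg.norm(gz) <= tol:                  # z_k already solves (1)
                return z
            if -(gz @ d) >= sigma * alpha * (d @ d):       # assumed form of (6)
                break
            alpha *= beta
        x_prev, g_prev = x, gx
        x = x - ((gz @ (x - z)) / (gz @ gz)) * gz          # Step 4 (correction, (7))
        gx = g(x)
    return x

Here g is any function mapping a NumPy vector to a NumPy vector. The default parameters mirror those of Section 3, except that δ is fixed at 1.0, whereas the paper chooses it adaptively.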
By using the multivariate spectral gradient method we obtain the prediction sequence {z_k}, and then we get the correction sequence {x_k} via projection. It follows from (17) that x_{k+1} is closer to the solution x* than x_k; that is, the sequence {x_k} makes progress iterate by iterate. From Step 2(c), we have [figure omitted; refer to PDF]
In what follows, we assume that g(x_k) ≠ 0 for all k ≥ 0; otherwise a solution of problem (1) has already been found. The following lemma states that Algorithm 1 is well defined.
Lemma 2.
There exists a nonnegative integer m_k satisfying (6) for all k ≥ 0.
Proof.
Suppose that there exists a k_0 ≥ 0 such that (6) is not satisfied for any nonnegative integer m; that is, [figure omitted; refer to PDF] Letting m → ∞ and using the continuity of g yields [figure omitted; refer to PDF] From Steps 1, 2, and 5, we have [figure omitted; refer to PDF] Thus, [figure omitted; refer to PDF] The last inequality contradicts (10). Hence the statement is proved.
Lemma 3.
Let {x_k} and {z_k} be sequences generated by Algorithm 1. Suppose that g is monotone and that the solution set of (1) is nonempty. Then {x_k} and {z_k} are both bounded. Furthermore, it holds that [figure omitted; refer to PDF]
Proof.
From (6), we have [figure omitted; refer to PDF] Let x* be an arbitrary point such that g(x*) = 0. Taking account of the monotonicity of g, we have [figure omitted; refer to PDF] From (7), (14), and (16), it follows that [figure omitted; refer to PDF] Hence the sequence {||x_k - x*||} is decreasing and convergent; in particular, the sequence {x_k} is bounded. Since g is continuous, there exists a constant C > 0 such that [figure omitted; refer to PDF] By the Cauchy-Schwarz inequality, the monotonicity of g, and (15), we have [figure omitted; refer to PDF] From (18) and (19), we obtain that {z_k} is also bounded. It follows from (17) and (18) that [figure omitted; refer to PDF] which implies [figure omitted; refer to PDF] From (7), using the Cauchy-Schwarz inequality, we obtain that [figure omitted; refer to PDF] Thus lim_{k→∞} ||x_{k+1} - x_k|| = 0.
The proof is complete.
Now we can establish the global convergence of Algorithm 1.
Theorem 4.
Let {x_k} be generated by Algorithm 1. Then {x_k} converges to some x̄ such that g(x̄) = 0.
Proof.
Since z_k = x_k + α_k d_k, it follows from Lemma 3 that [figure omitted; refer to PDF] From (8) and (18), it follows that {d_k} is bounded.
Now we consider the following two possible cases:
(i) lim inf_{k→∞} ||d_k|| = 0;
(ii) lim inf_{k→∞} ||d_k|| > 0.
If (i) holds, then from (8) we have lim inf_{k→∞} ||g(x_k)|| = 0. By the continuity of g and the boundedness of {x_k}, the sequence {x_k} has some accumulation point x̄ such that g(x̄) = 0. From (17), the sequence {||x_k - x̄||} converges. Therefore, {x_k} converges to x̄.
If (ii) holds, then from (8) we have lim inf_{k→∞} ||g(x_k)|| > 0. By (23), it holds that [figure omitted; refer to PDF] By the line search rule, for all sufficiently large k the step m_k - 1 does not satisfy (6). This means [figure omitted; refer to PDF] Since the sequences {x_k} and {d_k} are bounded, we can choose convergent subsequences; letting k → ∞ in (25) along these subsequences, we obtain [figure omitted; refer to PDF] where x̃ and d̃ are the limits of the corresponding subsequences. On the other hand, by (8), it holds that [figure omitted; refer to PDF] which contradicts (26). Hence lim inf_{k→∞} ||d_k|| > 0 is impossible.
The proof is complete.
3. Numerical Experiments
In this section, we report some preliminary numerical experiments that test our algorithms in comparison with the spectral gradient projection (SG) method [6]. First, in Section 3.1 we test the algorithms on nonlinear systems of monotone equations. Second, in Section 3.2 we apply the HSG-V algorithm to the ℓ1-norm regularization problem arising from compressed sensing. All numerical experiments were performed under Windows XP and MATLAB 7.0 on a personal computer with an Intel Core 2 Duo CPU at 2.2 GHz and 2 GB of memory.
3.1. Test on Nonlinear Systems of Monotone Equations
We test the performance of our algorithms on some monotone equations (see the appendix for details). The termination condition is ||g(x_k)|| ≤ 10^{-6}. The parameters are specified as follows. For the MSG method, we set β = 0.5, σ = 0.01, ε = 10^{-10}, r = 0.01. In Step 2, the parameter δ is chosen in the following way: [figure omitted; refer to PDF]
First, we test the performance of the MSG method on Problem 1 with n = 1000 and the initial point x_0 = (1, 1, ..., 1)^T. Figure 1 displays the performance of the MSG method on Problem 1 and indicates that the prediction iterates {z_k} are better than the correction iterates {x_k} most of the time. Taking this into account, we relax the MSG method by replacing Step 4 of Algorithm 1 with the following:
: if mod(k, M) = 0
: x_{k+1} = x_k - (⟨g(z_k), x_k - z_k⟩ / ||g(z_k)||^2) g(z_k),
: else
: x_{k+1} = z_k,
: end.
Figure 1: (a) Norm of the sequence versus iteration for the MSG algorithm. (b) The MSG-V algorithm versus the MSG and SG algorithms, where the iteration count has been cut to 100 for the SG algorithm.
(a) [figure omitted; refer to PDF]
(b) [figure omitted; refer to PDF]
We refer to this modification as the "MSG-V" method. When M = 1, the above algorithm reduces to Algorithm 1. The performance of these methods on Problem 1 is shown in Figure 1, from which we can see that the MSG-V method quite frequently outperforms the SG method and also outperforms the MSG method. Furthermore, to accelerate the MSG-V method, we present a hybrid spectral gradient (HSG-V) algorithm. The main idea of the HSG-V algorithm is to run the MSG-V algorithm when y_{k-1}^i / s_{k-1}^i > 0 for i = 1, 2, ..., n, and otherwise to switch to the spectral gradient projection (SG) method, as in the sketch below.
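The following minimal Python sketch captures the modified correction step and the hybrid switch (our reading of the pseudocode above: the truncated condition "if mod" is assumed to mean mod(k, M) = 0, which is consistent with the statement that M = 1 recovers Algorithm 1):

import numpy as np

def correction_step(k, M, x, z, gz):
    # MSG-V Step 4: project onto the hyperplane H_k only every M-th
    # iteration; otherwise accept the prediction point z_k directly.
    if k % M == 0:
        return x - ((gz @ (x - z)) / (gz @ gz)) * gz
    return z

def use_msg_direction(s_prev, y_prev):
    # HSG-V switch: use the multivariate spectral direction only when every
    # componentwise quotient y^i/s^i is positive; otherwise fall back to the
    # spectral gradient projection (SG) direction of [6].
    quotients = np.divide(y_prev, s_prev,
                          out=np.full_like(y_prev, -1.0),
                          where=(s_prev != 0))
    return bool(np.all(quotients > 0))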
We then compare the performance of the MSG, MSG-V, and HSG-V methods with the spectral gradient projection (SG) method in [6] on the test problems with different initial points. We set β = 0.5, σ = 0.01, r = 0.01 in the SG method of [6], and M = 10 for the MSG-V and HSG-V methods.
Numerical results are shown in Tables 1, 2, 3, 4, 5, and 6 in the form NI/NF/Time/BK, where we report the dimension of the problem (n), the initial point (Init), the number of iterations (NI), the number of function evaluations (NF), the CPU time in seconds (Time), and the number of backtracking steps (BK). The symbol "F" denotes that the method fails on the test problem or that the number of iterations exceeds 10000.
Table 1: Numerical results for SG/MSG methods on Problem 1.
Init ( n ) | SG | MSG |
NI/NF/Time/BK | NI/NF/Time/BK | |
x 1 (100) | 7935/56267/7.375/6 | 1480/9774/1.609/1 |
x 2 (100) | 4365/25627/3.125/2 | 981/6223/0.906/1 |
x 3 (100) | 3131/18028/2.11/7 | 1139/8240/1.266/1 |
x 4 (100) | 2287/13025/1.453/4 | 294/1091/0.172/1 |
x 5 (100) | 1685/9535/1.188/3 | 212/640/0.093/1 |
x 6 (100) | 1788/10238/1.156/3 | 243/745/0.11/1 |
x 7 (100) | 1608/9236/1.047/2 | 220/664/0.109/1 |
x 8 (100) | 1629/9283/1.172/4 | 185/558/0.078/1 |
x 9 (100) | 1478/8407/0.953/4 | 8/20/0.016/0 |
x 10 (100) | 1611/9131/1.031/3 | 184/555/0.078/1 |
x 11 (100) | 1475/8404/0.938/4 | 39/99/0.016/0 |
x 12 (100) | 1226/6938/0.797/5 | 19/46/0.016/0 |
x 1 (200) | F | 2846/20922/6.328/1 |
x 2 (200) | 8506/50896/11.985/4 | 1535/11707/3.5/1 |
x 3 (200) | 6193/37063/8.687/7 | 1826/15256/4.5/0 |
x 4 (200) | 4563/27333/6.5/7 | 266/1055/0.312/1 |
x 5 (200) | 3343/19760/5.078/4 | 376/1133/0.422/1 |
x 6 (200) | 3620/21536/6.11/7 | 200/617/0.172/1 |
x 7 (200) | 3249/19340/4.531/6 | 148/444/0.125/0 |
x 8 (200) | 3253/19383/4.5/5 | 323/973/0.344/1 |
x 9 (200) | 2974/17649/4.109/4 | 8/21/0.015/0 |
x 10 (200) | 3256/19214/5.062/4 | 308/928/0.391/1 |
x 11 (200) | 2995/17784/5.266/4 | 42/110/0.047/0 |
x 12 (200) | 2483/14698/3.453/3 | 27/63/0.047/0 |
x 1 (300) | F | F |
x 2 (300) | F | 3374/29158/12.782/1 |
x 3 (300) | 9334/57185/20.985/2 | 1293/10136/4.656/1 |
x 4 (300) | 6734/41348/14.735/4 | 406/1601/0.687/1 |
x 5 (300) | 5011/30857/11.265/7 | 235/706/0.282/1 |
x 6 (300) | 5380/33167/11.875/6 | 300/919/0.546/1 |
x 7 (300) | 4812/29977/10.657/6 | 187/559/0.235/0 |
x 8 (300) | 4825/29770/10.656/4 | 158/466/0.187/0 |
x 9 (300) | 4396/27551/10.062/3 | 8/21/0.016/0 |
x 10 (300) | 4774/29731/10.969/3 | 203/610/0.266/1 |
x 11 (300) | 4411/27366/9.859/8 | 52/144/0.062/0 |
x 12 (300) | 3656/23021/8.36/6 | 32/75/0.031/0 |
x 1 (500) | F | F |
x 2 (500) | F | 4915/46911/37.906/1 |
x 3 (500) | F | 1754/15152/12.375/1 |
x 4 (500) | F | 489/1905/1.547/1 |
x 5 (500) | 8360/51414/37.485/6 | 269/803/0.797/1 |
x 6 (500) | 8996/56185/39.75/6 | 295/900/0.734/1 |
x 7 (500) | 8128/50525/35.703/2 | 256/766/0.672/1 |
x 8 (500) | 8089/50688/36.484/4 | 387/1163/1.156/0 |
x 9 (500) | 7453/46083/33.75/2 | 8/22/0.016/0 |
x 10 (500) | 8118/50185/35.985/4 | 403/1211/1.187/1 |
x 11 (500) | 7525/46275/33.14/4 | 55/149/0.11/0 |
x 12 (500) | 6235/38408/28.047/9 | 38/95/0.11/0 |
x 1 (1000) | F | F |
x 2 (1000) | F | 8998/93619/200.78/0 |
x 3 (1000) | F | 2939/26376/55.86/0 |
x 4 (1000) | F | 783/3016/6.438/0 |
x 5 (1000) | F | 364/1085/2.75/0 |
x 6 (1000) | F | 429/1309/3.266/0 |
x 7 (1000) | F | 499/1500/3.687/1 |
x 8 (1000) | F | 481/1444/3.969/0 |
x 9 (1000) | F | 8/23/0.047/0 |
x 10 (1000) | F | 346/1032/2.515/0 |
x 11 (1000) | F | 74/201/0.391/0 |
x 12 (1000) | F | 50/124/0.234/0 |
Table 2: Numerical results for MSG-V/HSG-V methods on Problem 1.
Init ( n ) | MSG-V | HSG-V |
NI/NF/Time/BK | NI/NF/Time/BK | |
x 1 (100) | 48/162/0.031/0 | 139/471/0.14/0 |
x 2 (100) | 37/120/0.016/0 | 165/572/0.079/7 |
x 3 (100) | 29/93/0.015/0 | 158/522/0.062/2 |
x 4 (100) | 22/61/0.016/0 | 134/461/0.063/0 |
x 5 (100) | 8/22/0.016/0 | 8/22/0.015/0 |
x 6 (100) | 9/26/0.015/0 | 9/26/0.015/0 |
x 7 (100) | 8/22/0.016/0 | 8/22/0.016/0 |
x 8 (100) | 8/21/0.015/0 | 8/21/0.016/0 |
x 9 (100) | 8/20/0.015/0 | 8/20/0.016/0 |
x 10 (100) | 24/75/0.031/1 | 24/75/0.031/1 |
x 11 (100) | 8/21/0.015/0 | 8/21/0.016/0 |
x 12 (100) | 6/16/0.016/0 | 6/16/0.016/0 |
x 1 (200) | 43/134/0.047/0 | 174/587/0.172/0 |
x 2 (200) | 39/142/0.047/0 | 184/640/0.172/0 |
x 3 (200) | 35/118/0.031/1 | 205/693/0.188/0 |
x 4 (200) | 19/59/0.031/0 | 148/519/0.125/0 |
x 5 (200) | 8/23/0.296/0 | 8/23/0.015/0 |
x 6 (200) | 9/27/0.016/0 | 9/27/0.016/0 |
x 7 (200) | 8/23/0.016/0 | 8/23/0.016/0 |
x 8 (200) | 8/22/0.015/0 | 8/22/0.016/0 |
x 9 (200) | 8/21/0.016/0 | 8/21/0.015/0 |
x 10 (200) | 25/79/0.046/1 | 25/79/0.031/1 |
x 11 (200) | 8/22/0.016/0 | 8/22/0.016/0 |
x 12 (200) | 6/17/0.015/0 | 6/17/0.016/0 |
x 1 (300) | 50/200/0.094/0 | 246/865/0.375/4 |
x 2 (300) | 56/196/0.093/1 | 265/977/0.375/8 |
x 3 (300) | 33/119/0.047/0 | 345/1253/0.515/7 |
x 4 (300) | 28/90/0.031/0 | 244/825/0.329/0 |
x 5 (300) | 8/23/0.016/0 | 8/23/0.015/0 |
x 6 (300) | 9/27/0.015/0 | 9/27/0.016/0 |
x 7 (300) | 8/23/0.016/0 | 8/23/0.015/0 |
x 8 (300) | 8/22/0.016/0 | 8/22/0.016/0 |
x 9 (300) | 8/21/0.016/0 | 8/21/0.016/0 |
x 10 (300) | 26/82/0.031/1 | 26/82/0.031/1 |
x 11 (300) | 8/23/0.016/0 | 8/23/0.016/0 |
x 12 (300) | 6/18/0.015/0 | 6/18/0.015/0 |
x 1 (500) | 51/218/0.188/0 | 290/1054/0.765/8 |
x 2 (500) | 41/132/0.125/0 | 330/1159/0.953/0 |
x 3 (500) | 47/164/0.14/1 | 383/1394/0.969/0 |
x 4 (500) | 28/91/0.125/0 | 246/864/0.609/0 |
x 5 (500) | 9/26/0.047/0 | 9/26/0.032/0 |
x 6 (500) | 9/28/0.031/0 | 9/28/0.015/0 |
x 7 (500) | 9/26/0.047/0 | 9/26/0.032/0 |
x 8 (500) | 8/23/0.032/0 | 8/23/0.015/0 |
x 9 (500) | 8/22/0.031/0 | 8/22/0.016/0 |
x 10 (500) | 27/86/0.062/1 | 27/86/0.062/1 |
x 11 (500) | 8/23/0.016/0 | 8/23/0.032/0 |
x 12 (500) | 6/19/0.016/0 | 6/19/0.015/0 |
x 1 (1000) | 49/205/0.547/0 | 437/1656/3.094/2 |
x 2 (1000) | 41/138/0.344/0 | 506/1824/3.406/0 |
x 3 (1000) | 49/181/0.437/1 | 607/2137/4.094/2 |
x 4 (1000) | 37/121/0.282/1 | 426/1582/3/0 |
x 5 (1000) | 9/27/0.062/0 | 9/27/0.062/0 |
x 6 (1000) | 9/29/0.063/0 | 86/308/0.579/6 |
x 7 (1000) | 9/27/0.046/0 | 9/27/0.046/0 |
x 8 (1000) | 8/24/0.063/0 | 8/24/0.047/0 |
x 9 (1000) | 8/23/0.047/0 | 12/44/0.078/2 |
x 10 (1000) | 29/93/0.172/1 | 29/93/0.188/1 |
x 11 (1000) | 8/24/0.047/0 | 8/24/0.078/0 |
x 12 (1000) | 6/20/0.031/0 | 6/20/0.078/0 |
Table 3: Numerical results for SG/MSG methods on Problem 2.
Init ( n ) | SG | MSG |
NI/NF/Time/BK | NI/NF/Time/BK | |
x 1 (100) | F | 349/712/0.094/0 |
x 2 (100) | F | 352/711/0.094/0 |
x 3 (100) | F | 351/711/0.094/0 |
x 4 (100) | F | 353/717/0.109/0 |
x 5 (100) | F | 346/696/0.094/0 |
x 6 (100) | F | 347/698/0.078/0 |
x 7 (100) | F | 345/694/0.109/0 |
x 8 (100) | F | 320/644/0.078/0 |
x 9 (100) | 359/719/0.031/0 | 359/719/0.047/0 |
x 10 (100) | 22/67/0.015/1 | 22/67/0.015/1 |
x 11 (100) | 434/869/0.047/0 | 434/869/0.063/0 |
x 12 (100) | 424/849/0.031/0 | 424/849/0.047/0 |
x 1 (200) | F | 437/888/0.218/0 |
x 2 (200) | F | 439/885/0.219/0 |
x 3 (200) | F | 438/885/0.219/0 |
x 4 (200) | F | 440/891/0.219/0 |
x 5 (200) | F | 433/870/0.343/0 |
x 6 (200) | F | 434/872/0.203/0 |
x 7 (200) | F | 432/868/0.219/0 |
x 8 (200) | F | 358/720/0.172/0 |
x 9 (200) | 372/745/0.047/0 | 371/743/0.063/0 |
x 10 (200) | 22/66/0.015/0 | 23/70/0.015/1 |
x 11 (200) | 544/1089/0.063/0 | 544/1089/0.094/0 |
x 12 (200) | 534/1069/0.047/0 | 534/1069/0.109/0 |
x 1 (300) | F | 498/1010/0.375/0 |
x 2 (300) | F | 500/1007/0.375/0 |
x 3 (300) | F | 500/1009/0.375/0 |
x 4 (300) | F | 501/1013/0.36/0 |
x 5 (300) | F | 494/992/0.375/0 |
x 6 (300) | F | 495/994/0.375/0 |
x 7 (300) | F | 494/992/0.359/0 |
x 8 (300) | F | 368/740/0.266/0 |
x 9 (300) | 373/747/0.047/0 | 373/747/0.093/0 |
x 10 (300) | 22/66/0.015/0 | 23/70/0.016/1 |
x 11 (300) | 621/1243/0.078/0 | 621/1243/0.157/0 |
x 12 (300) | 611/1223/0.078/0 | 611/1223/0.25/0 |
x 1 (500) | F | 588/1190/0.781/0 |
x 2 (500) | F | 590/1187/0.781/0 |
x 3 (500) | F | 589/1187/0.781/0 |
x 4 (500) | F | 591/1193/0.797/0 |
x 5 (500) | F | 584/1172/0.782/0 |
x 6 (500) | F | 585/1174/0.906/0 |
x 7 (500) | F | 583/1170/0.797/0 |
x 8 (500) | F | 372/748/0.468/0 |
x 9 (500) | 373/747/0.078/0 | 373/747/0.125/0 |
x 10 (500) | 22/66/0.015/0 | 23/70/0.016/1 |
x 11 (500) | 734/1469/0.141/0 | 734/1469/0.234/0 |
x 12 (500) | 724/1449/0.14/0 | 724/1449/0.25/0 |
x 1 (1000) | F | 736/1486/2.563/0 |
x 2 (1000) | F | 739/1485/2.406/0 |
x 3 (1000) | F | 738/1485/2.578/0 |
x 4 (1000) | F | 740/1491/2.407/0 |
x 5 (1000) | F | 733/1470/2.562/0 |
x 6 (1000) | F | 734/1472/2.531/0 |
x 7 (1000) | F | 732/1468/2.407/0 |
x 8 (1000) | F | 372/748/1.125/0 |
x 9 (1000) | 373/747/0.125/0 | 373/747/0.343/0 |
x 10 (1000) | 24/73/0.016/1 | 24/73/0.032/1 |
x 11 (1000) | 921/1843/0.281/0 | 921/1843/0.562/0 |
x 12 (1000) | 912/1825/0.266/0 | 912/1825/0.547/0 |
Table 4: Numerical results for MSG-V/HSG-V methods on Problem 2.
Init ( n ) | MSG-V | HSG-V |
NI/NF/Time/BK | NI/NF/Time/BK | |
x 1 (100) | 27/82/0.015/1 | 27/82/0.016/1 |
x 2 (100) | 435/872/0.047/0 | 435/872/0.094/0 |
x 3 (100) | 416/838/0.047/0 | 416/838/0.062/0 |
x 4 (100) | 434/872/0.047/0 | 434/872/0.047/0 |
x 5 (100) | 424/850/0.047/0 | 424/850/0.047/0 |
x 6 (100) | 347/697/0.047/0 | 347/697/0.047/0 |
x 7 (100) | 346/695/0.047/0 | 346/695/0.032/0 |
x 8 (100) | 318/640/0.078/0 | 318/640/0.062/0 |
x 9 (100) | 355/711/0.031/0 | 355/711/0.031/0 |
x 10 (100) | 22/67/0.016/1 | 22/67/0.016/1 |
x 11 (100) | 434/869/0.063/0 | 434/869/0.062/0 |
x 12 (100) | 424/849/0.046/0 | 424/849/0.047/0 |
x 1 (200) | 27/82/0.015/1 | 27/82/0.016/1 |
x 2 (200) | 545/1092/0.094/0 | 545/1092/0.078/0 |
x 3 (200) | 525/1056/0.094/0 | 525/1056/0.078/0 |
x 4 (200) | 544/1092/0.094/0 | 544/1092/0.078/0 |
x 5 (200) | 533/1068/0.093/0 | 533/1068/0.078/0 |
x 6 (200) | 435/873/0.063/0 | 435/873/0.078/0 |
x 7 (200) | 433/869/0.078/0 | 433/869/0.063/0 |
x 8 (200) | 355/714/0.172/0 | 356/716/0.078/0 |
x 9 (200) | 367/735/0.062/0 | 367/735/0.047/0 |
x 10 (200) | 23/70/0.015/1 | 23/70/0.016/1 |
x 11 (200) | 544/1089/0.094/0 | 544/1089/0.078/0 |
x 12 (200) | 534/1069/0.094/0 | 534/1069/0.078/0 |
x 1 (300) | 28/85/0.016/1 | 28/85/0.016/1 |
x 2 (300) | 622/1246/0.14/0 | 622/1246/0.11/0 |
x 3 (300) | 602/1210/0.156/0 | 602/1210/0.125/0 |
x 4 (300) | 621/1246/0.25/0 | 621/1246/0.109/0 |
x 5 (300) | 610/1222/0.125/0 | 610/1222/0.125/0 |
x 6 (300) | 496/995/0.11/0 | 496/995/0.094/0 |
x 7 (300) | 494/991/0.125/0 | 494/991/0.094/0 |
x 8 (300) | 365/734/0.25/0 | 365/734/0.109/0 |
x 9 (300) | 368/737/0.094/0 | 368/737/0.063/0 |
x 10 (300) | 23/70/0.031/1 | 23/70/0.015/1 |
x 11 (300) | 621/1243/0.141/0 | 621/1243/0.11/0 |
x 12 (300) | 611/1223/0.125/0 | 611/1223/0.125/0 |
x 1 (500) | 28/85/0.015/1 | 28/85/0.015/1 |
x 2 (500) | 735/1472/0.25/0 | 735/1472/0.203/0 |
x 3 (500) | 715/1436/0.219/0 | 715/1436/0.203/0 |
x 4 (500) | 734/1472/0.25/0 | 734/1472/0.188/0 |
x 5 (500) | 723/1448/0.219/0 | 723/1448/0.187/0 |
x 6 (500) | 586/1175/0.187/0 | 586/1175/0.156/0 |
x 7 (500) | 584/1171/0.204/0 | 584/1171/0.141/0 |
x 8 (500) | 369/742/0.453/0 | 369/742/0.156/0 |
x 9 (500) | 368/737/0.125/0 | 368/737/0.094/0 |
x 10 (500) | 23/70/0.015/1 | 23/70/0.015/1 |
x 11 (500) | 734/1469/0.219/0 | 734/1469/0.203/0 |
x 12 (500) | 724/1449/0.234/0 | 724/1449/0.235/0 |
x 1 (1000) | 28/85/0.016/1 | 28/85/0.047/1 |
x 2 (1000) | 923/1848/0.531/0 | 923/1848/0.485/0 |
x 3 (1000) | 902/1810/0.641/0 | 902/1810/0.437/0 |
x 4 (1000) | 922/1848/0.516/0 | 922/1848/0.453/0 |
x 5 (1000) | 911/1824/0.531/0 | 911/1824/0.453/0 |
x 6 (1000) | 737/1477/0.406/0 | 737/1477/0.344/0 |
x 7 (1000) | 733/1469/0.422/0 | 733/1469/0.344/0 |
x 8 (1000) | 369/742/1.125/0 | 369/742/0.297/0 |
x 9 (1000) | 368/737/0.219/0 | 368/737/0.172/0 |
x 10 (1000) | 24/73/0.015/1 | 24/73/0.015/1 |
x 11 (1000) | 921/1843/0.625/0 | 921/1843/0.438/0 |
x 12 (1000) | 912/1825/0.563/0 | 912/1825/0.453/0 |
Table 5: Numerical results for SG/MSG methods on Problem 3.
Init ( n ) | SG | MSG |
NI/NF/Time/BK | NI/NF/Time/BK | |
x 1 (100) | 161/753/0.094/2 | 246/1062/0.296/3 |
x 2 (100) | 103/424/0.062/3 | 185/794/0.219/2 |
x 3 (100) | 118/517/0.063/2 | 204/935/0.266/5 |
x 4 (100) | 63/224/0.031/1 | 194/894/0.25/2 |
x 5 (100) | 63/234/0.031/2 | 149/662/0.187/3 |
x 6 (100) | 81/307/0.047/2 | 191/831/0.235/1 |
x 7 (100) | 64/237/0.031/2 | 168/738/0.203/1 |
x 8 (100) | 53/194/0.032/2 | 94/378/0.109/1 |
x 9 (100) | 53/192/0.015/2 | 178/808/0.219/3 |
x 10 (100) | 76/288/0.047/2 | 229/1008/0.281/1 |
x 11 (100) | 73/273/0.031/2 | 203/924/0.25/4 |
x 12 (100) | 60/225/0.016/3 | 195/888/0.266/1 |
x 1 (200) | 216/1203/0.468/2 | 251/1129/0.515/2 |
x 2 (200) | 147/728/0.282/2 | 168/702/0.407/3 |
x 3 (200) | 157/759/0.312/2 | 180/821/0.422/1 |
x 4 (200) | 58/206/0.094/3 | 214/956/0.453/1 |
x 5 (200) | 64/238/0.094/1 | 174/793/0.359/2 |
x 6 (200) | 78/294/0.125/2 | 194/882/0.422/2 |
x 7 (200) | 44/156/0.062/2 | 170/757/0.359/1 |
x 8 (200) | 48/173/0.078/1 | 93/368/0.172/1 |
x 9 (200) | 54/192/0.078/2 | 191/890/0.391/2 |
x 10 (200) | 84/314/0.125/3 | 152/627/0.282/1 |
x 11 (200) | 61/222/0.094/2 | 193/903/0.422/2 |
x 12 (200) | 63/236/0.093/3 | 121/531/0.25/2 |
x 1 (300) | 541/5455/20.282/3 | 247/1066/4.187/5 |
x 2 (300) | 142/724/2.734/2 | 135/512/2.063/1 |
x 3 (300) | 195/1084/4.031/2 | 197/892/3.5/1 |
x 4 (300) | 55/196/0.734/2 | 193/868/3.328/2 |
x 5 (300) | 68/254/0.953/2 | 154/693/2.703/4 |
x 6 (300) | 77/295/1.11/2 | 241/1168/4.484/2 |
x 7 (300) | 76/281/1.094/2 | 186/851/3.313/5 |
x 8 (300) | 57/207/0.765/2 | 127/528/2/1 |
x 9 (300) | 59/210/0.797/1 | 190/939/3.578/1 |
x 10 (300) | 91/342/1.266/3 | 142/664/2.563/1 |
x 11 (300) | 79/288/1.109/3 | 168/767/2.906/1 |
x 12 (300) | 72/266/0.984/1 | 114/448/1.75/1 |
x 1 (500) | 488/4021/42.907/1 | 212/927/9.985/1 |
x 2 (500) | 240/1440/15.343/3 | 166/712/7.64/4 |
x 3 (500) | 254/1544/16.485/1 | 228/1050/11.313/2 |
x 4 (500) | 59/210/2.266/3 | 273/1564/16.843/3 |
x 5 (500) | 70/261/2.75/2 | 189/905/9.781/2 |
x 6 (500) | 82/313/3.328/2 | 190/866/9.266/1 |
x 7 (500) | 76/284/3.078/3 | 149/642/6.984/1 |
x 8 (500) | 59/215/2.282/2 | 97/381/4.141/1 |
x 9 (500) | 51/185/1.985/2 | 439/2832/30.391/3 |
x 10 (500) | 84/319/3.421/2 | 66/238/2.562/1 |
x 11 (500) | 77/280/2.969/2 | 74/266/2.891/1 |
x 12 (500) | 67/250/2.719/3 | 57/189/2.078/1 |
x 1 (1000) | 2780/36160/1510.7/2 | 199/853/35.969/3 |
x 2 (1000) | 331/2242/94.734/2 | 160/656/27.656/1 |
x 3 (1000) | 352/2532/106.25/1 | 197/891/37.359/2 |
x 4 (1000) | 71/260/10.891/1 | 182/794/33.985/2 |
x 5 (1000) | 71/268/11.234/2 | 190/891/37.546/3 |
x 6 (1000) | 84/319/13.422/3 | 160/698/29.188/4 |
x 7 (1000) | 73/271/11.391/2 | 157/663/27.547/1 |
x 8 (1000) | 50/182/7.578/2 | 112/446/18.641/4 |
x 9 (1000) | 49/173/7.328/2 | 232/1162/48.656/1 |
x 10 (1000) | 85/318/13.375/2 | 56/172/7.391/1 |
x 11 (1000) | 81/297/12.485/3 | 61/185/7.781/1 |
x 12 (1000) | 69/255/10.781/1 | 49/154/6.578/1 |
Table 6: Numerical results for MSG-V/HSG-V methods on Problem 3.
Init ( n ) | MSG-V | HSG-V |
NI/NF/Time/BK | NI/NF/Time/BK | |
x 1 (100) | 92/377/0.078/1 | 55/172/0.047/6 |
x 2 (100) | 93/374/0.079/1 | 59/177/0.031/0 |
x 3 (100) | 86/360/0.078/1 | 40/120/0.031/0 |
x 4 (100) | 83/363/0.062/1 | 40/111/0.032/1 |
x 5 (100) | 45/173/0.047/1 | 20/56/0.015/0 |
x 6 (100) | 57/204/0.047/2 | 42/121/0.016/2 |
x 7 (100) | 53/202/0.031/1 | 30/87/0.031/0 |
x 8 (100) | 47/181/0.047/2 | 22/61/0.016/0 |
x 9 (100) | 49/194/0.047/1 | 33/100/0.015/1 |
x 10 (100) | 48/165/0.031/1 | 30/97/0.032/3 |
x 11 (100) | 49/181/0.031/1 | 26/75/0.015/0 |
x 12 (100) | 37/125/0.032/0 | 29/93/0.016/0 |
x 1 (200) | 88/335/0.172/1 | 53/173/0.078/1 |
x 2 (200) | 83/330/0.14/1 | 50/142/0.078/0 |
x 3 (200) | 76/290/0.141/1 | 42/126/0.047/2 |
x 4 (200) | 69/266/0.125/1 | 35/101/0.063/3 |
x 5 (200) | 48/190/0.094/1 | 25/72/0.031/3 |
x 6 (200) | 69/294/0.156/2 | 34/99/0.047/0 |
x 7 (200) | 55/215/0.109/5 | 30/81/0.047/0 |
x 8 (200) | 51/217/0.11/1 | 22/61/0.031/0 |
x 9 (200) | 46/153/0.078/1 | 24/64/0.031/0 |
x 10 (200) | 42/129/0.078/1 | 33/91/0.031/0 |
x 11 (200) | 49/180/0.094/0 | 30/92/0.047/1 |
x 12 (200) | 37/125/0.062/2 | 30/85/0.047/0 |
x 1 (300) | 89/322/1.266/1 | 53/168/0.625/0 |
x 2 (300) | 91/348/1.282/1 | 55/162/0.609/1 |
x 3 (300) | 72/278/1.046/1 | 37/109/0.422/0 |
x 4 (300) | 78/339/1.297/1 | 28/81/0.297/1 |
x 5 (300) | 45/178/0.735/2 | 20/55/0.281/0 |
x 6 (300) | 63/248/0.937/3 | 38/113/0.453/2 |
x 7 (300) | 53/208/0.797/1 | 33/91/0.344/0 |
x 8 (300) | 63/295/1.109/1 | 22/61/0.234/0 |
x 9 (300) | 45/177/0.672/1 | 27/71/0.266/0 |
x 10 (300) | 45/148/0.641/1 | 40/108/0.406/0 |
x 11 (300) | 43/144/0.531/2 | 32/89/0.344/1 |
x 12 (300) | 36/129/0.5/0 | 28/83/0.312/2 |
x 1 (500) | 85/289/3.125/1 | 50/158/1.672/0 |
x 2 (500) | 78/245/2.625/0 | 56/157/1.75/2 |
x 3 (500) | 74/289/3.156/2 | 40/118/1.235/1 |
x 4 (500) | 62/223/2.375/1 | 47/131/1.437/1 |
x 5 (500) | 49/194/2.11/1 | 25/74/0.813/3 |
x 6 (500) | 55/181/1.922/2 | 40/116/1.218/0 |
x 7 (500) | 49/163/1.797/1 | 26/74/0.781/0 |
x 8 (500) | 53/219/2.328/1 | 22/61/0.657/0 |
x 9 (500) | 49/184/2/0 | 25/69/0.781/2 |
x 10 (500) | 41/158/1.656/2 | 35/99/1.062/6 |
x 11 (500) | 48/173/1.922/1 | 31/83/0.891/0 |
x 12 (500) | 35/126/1.359/2 | 27/76/0.812/0 |
x 1 (1000) | 97/376/15.688/3 | 50/165/6.922/3 |
x 2 (1000) | 78/263/11.015/1 | 52/151/6.235/2 |
x 3 (1000) | 80/303/12.766/1 | 44/128/5.343/1 |
x 4 (1000) | 73/291/12.219/0 | 36/101/4.235/0 |
x 5 (1000) | 43/165/6.968/1 | 22/66/2.75/2 |
x 6 (1000) | 59/213/9.094/4 | 44/120/5.016/0 |
x 7 (1000) | 55/201/8.438/1 | 27/78/3.281/3 |
x 8 (1000) | 57/250/10.5/1 | 22/60/2.547/0 |
x 9 (1000) | 46/166/7/0 | 29/80/3.266/4 |
x 10 (1000) | 39/126/5.234/0 | 31/87/3.64/0 |
x 11 (1000) | 47/165/6.969/1 | 35/96/4.032/3 |
x 12 (1000) | 36/114/4.797/2 | 34/91/3.812/0 |
As we can see from Tables 1-6, the HSG-V algorithm quite frequently outperforms the SG method and also outperforms the MSG and MSG-V algorithms, since it solves about 80% of the problems with the best time and about 70% with the smallest number of function evaluations. We also find that the SG algorithm seems more sensitive to the initial points.
Figure 2 shows the performance of these algorithms with respect to the number of function evaluations and the CPU time, evaluated using the performance profiles of Dolan and Moré [13]. That is, for each algorithm we plot the fraction P of problems for which the method is within a factor t of the smallest number of function evaluations or CPU time. The left side of each plot therefore gives the percentage of the test problems for which a method is the best one according to the corresponding measure. As we can see from Figure 2, the HSG-V algorithm has the best performance.
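For completeness, the following short Python sketch computes and plots such profiles from a cost table (our illustration, not the authors' code; a failure "F" is encoded as infinity so that its ratio never falls within any factor t):

import numpy as np
import matplotlib.pyplot as plt

def performance_profile(costs, labels, t_max=10.0):
    # costs: float array of shape (num_problems, num_solvers) holding the
    # number of function evaluations or the CPU time; np.inf marks "F".
    ratios = costs / costs.min(axis=1, keepdims=True)  # factor of the best
    ts = np.linspace(1.0, t_max, 500)
    for j, name in enumerate(labels):
        frac = [(ratios[:, j] <= t).mean() for t in ts]
        plt.step(ts, frac, where="post", label=name)
    plt.xlabel("factor t of the best solver")
    plt.ylabel("fraction P of problems")
    plt.legend()
    plt.show()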
Figure 2: (a) Performance profiles for the number of function evaluations. (b) Performance profiles for the CPU time.
(a) [figure omitted; refer to PDF]
(b) [figure omitted; refer to PDF]
3.2. Test on [cursive l] 1 -Norm Regularization Problem in Compressed Sensing
There has been considerable interest in solving the ℓ1-norm regularized least-squares problem min_x (1/2)||Ax - y||^2 + μ||x||_1, (29) where A ∈ ℝ^{m×n} (m ≪ n) is a linear operator, y ∈ ℝ^m is an observation, and μ is a nonnegative parameter. Problem (29) appears mainly in compressed sensing, an emerging methodology in digital signal processing that has attracted intensive research activity over the past few years. Compressed sensing is based on the fact that if the original signal is sparse or approximately sparse in some orthogonal basis, then an exact restoration can be produced by solving (29).
Recently, Figueiredo et al. [14] proposed the gradient projection method for sparse reconstruction (GPSR). The first key step of the GPSR method is to express (29) as a quadratic program. Any x ∈ ℝ^n can be split as x = u - v, u ≥ 0, v ≥ 0, where u, v ∈ ℝ^n and u_i = (x_i)_+, v_i = (-x_i)_+ for i = 1, 2, ..., n, with (·)_+ = max{0, ·}. We thus have ||x||_1 = e_n^T u + e_n^T v, where e_n = (1, 1, ..., 1)^T is the vector of n ones. Hence (29) can be rewritten as the quadratic program min_{u,v} (1/2)||y - A(u - v)||^2 + μ e_n^T u + μ e_n^T v, subject to u ≥ 0, v ≥ 0. (30) Furthermore, from [14], (30) can be written in the form min_z c^T z + (1/2) z^T B z, subject to z ≥ 0, (31) where z = (u; v), b = A^T y, c = μ e_{2n} + (-b; b), B = [A^T A, -A^T A; -A^T A, A^T A]. (32) It is obvious that B is a positive semidefinite matrix; hence, (30) is a convex QP problem. Figueiredo et al. [14] proposed a gradient projection method with BB step length for solving this problem.
Xiao et al. [7] indicated that the QP problem (30) is equivalent to the linear complementarity problem: find z ∈ ℝ^{2n} such that z ≥ 0, Bz + c ≥ 0, z^T (Bz + c) = 0. (33) It is obvious that z is a solution of (33) if and only if it is a solution of the nonlinear system of equations g(z) = min{z, Bz + c} = 0. (34) The function g is vector valued, and the "min" is interpreted as the componentwise minimum. Xiao et al. [7] proved that this g is monotone. Hence, (34) can be solved effectively by the HSG-V algorithm.
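A minimal Python sketch of this monotone map is given below (our illustration, based on the reconstruction of (32) above; B is applied in operator form through A, so the 2n × 2n matrix is never stored):

import numpy as np

def make_lcp_map(A, y, mu):
    # Builds g(z) = min(z, Bz + c) of (34) for z = (u; v) in R^{2n}.
    n = A.shape[1]
    b = A.T @ y
    c = mu * np.ones(2 * n) + np.concatenate([-b, b])
    def g(z):
        u, v = z[:n], z[n:]
        w = A.T @ (A @ (u - v))          # top block of B z
        Bz = np.concatenate([w, -w])     # bottom block is the negation
        return np.minimum(z, Bz + c)     # componentwise minimum
    return g

The resulting g can be passed directly to the msg_projection sketch given after Algorithm 1.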
First, we consider a typical CS scenario in which the goal is to reconstruct a length-n sparse signal from m observations. We measure the quality of restoration by the mean squared error (MSE) with respect to the original signal x̄, that is, MSE = (1/n)||x* - x̄||^2, (35) where x* is the restored signal. We test a small-size signal with n = 2^12 and m = 2^10, where the original signal contains 2^7 randomly placed nonzero elements. A is the Gaussian matrix generated by the command randn(m, n) in MATLAB. In this test, the measurement y is contaminated by noise, that is, y = A x̄ + ω, (36) where ω is Gaussian noise distributed as N(0, 0.0001). The parameters are taken as β = 0.1, σ = 0.01, ε = 10^{-10}, r = 1.2, M = 2, and μ is decreased progressively as in [14]. To obtain better-quality estimated signals, the process is terminated when the relative change of the objective function falls below 10^{-5}, that is, |f_k - f_{k-1}| / |f_{k-1}| < 10^{-5}, (37) where f_k denotes the objective value at x_k.
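The test setup can be reproduced in a few lines of NumPy (a sketch under the stated settings; the value of μ below is an illustrative one-off choice rather than the decreasing scheme of [14], and the solver calls refer to the sketches given earlier):

import numpy as np

rng = np.random.default_rng(0)
n, m, s = 2**12, 2**10, 2**7
x_bar = np.zeros(n)
idx = rng.choice(n, size=s, replace=False)
x_bar[idx] = rng.standard_normal(s)                 # sparse original signal
A = rng.standard_normal((m, n))                     # Gaussian matrix, randn(m, n)
y = A @ x_bar + np.sqrt(1e-4) * rng.standard_normal(m)   # noisy measurement (36)

g = make_lcp_map(A, y, mu=0.01 * np.max(np.abs(A.T @ y)))
z = msg_projection(g, np.zeros(2 * n), beta=0.1, sigma=0.01, r=1.2)
x_star = z[:n] - z[n:]                              # recover x = u - v
mse = np.sum((x_star - x_bar) ** 2) / n             # quality measure (35)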
Figures 3 and 4 report the results of HSG-V for sparse signal reconstruction from limited measurements. Comparing the first and last plots in Figure 3, we find that the original sparse signal is restored almost exactly from the limited measurements. From the right plot in Figure 4, we observe that all the blue dots are circled by red circles, which shows that the original signal has been recovered almost exactly. Altogether, this simple experiment shows that the HSG-V algorithm performs well and is an efficient method for denoising sparse signals.
Figure 3: (a) Original signal with length 4096 and 128 nonzero elements. (b) The noisy measurement with length 1024. (c) Signal recovered by HSG-V with 232 iterations, 15.16 s of CPU time, and error 3.14e-06.
(a) [figure omitted; refer to PDF]
(b) [figure omitted; refer to PDF]
(c) [figure omitted; refer to PDF]
Figure 4: (a) Original signal with 128 nonzero elements; (b) noisy measurement; (c) restored signal (red circles) versus original signal (blue dots), with MSE = 6.72e-06, 12.66 s of CPU time, and 188 iterations.
(a) [figure omitted; refer to PDF]
(b) [figure omitted; refer to PDF]
(c) [figure omitted; refer to PDF]
In the next experiment, we compare the performance of our algorithm with the SGCS algorithm for image deconvolution, in which A is a partial DWT matrix whose m rows are chosen randomly from the n × n DWT matrix. To measure the quality of restoration, we use the signal-to-noise ratio, SNR = 20 log10(||x̃|| / ||x - x̃||), where x̃ denotes the original image and x the restoration. Figure 5 shows the original test images, and Figure 6 shows the restoration results of the SGCS and HSG-V algorithms, respectively. These results show that the HSG-V algorithm restores blurred images quite well and obtains better-quality reconstructions in an efficient manner.
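In code, this quality measure is a one-liner (a sketch; following the definition above, x_ref is the original image and x_rec the restoration):

import numpy as np

def snr_db(x_ref, x_rec):
    # SNR = 20 log10(||x_ref|| / ||x_ref - x_rec||), in decibels.
    return 20 * np.log10(np.linalg.norm(x_ref) / np.linalg.norm(x_ref - x_rec))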
Figure 5: The original images: cameraman/barbara/bridge.
[figure omitted; refer to PDF]
Figure 6: The blurred images (first column), the images restored by the SGCS algorithm (second column), and by the HSG-V algorithm (third column).
(a) [figure omitted; refer to PDF]
(b) [figure omitted; refer to PDF]
(c) [figure omitted; refer to PDF]
4. Conclusion
In this paper, we develop an adaptive prediction-correction method for solving nonlinear monotone equations. Under some mild assumptions, we establish its global convergence. Based on the prediction-correction method, an efficient hybrid spectral gradient (HSG-V) algorithm is proposed, which is a composite of the MSG-V and SG algorithms. Numerical results show that the HSG-V algorithm is preferable, outperforming the MSG, MSG-V, and SG algorithms. Moreover, the HSG-V algorithm is applied to ℓ1-norm regularized problems arising from sparse signal reconstruction. Numerical experiments show that the HSG-V algorithm works well and provides an efficient approach for compressed sensing and image deconvolution.
Acknowledgments
This work was partly supported by the National Natural Science Foundation of China (no. 11001060, 61262026, 81000613, 81101046), the JGZX Programme of Jiangxi Province (20112BCB23027), and the Science and Technology Programme of Jiangxi Education Committee (LDJH12088).
[1] J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, NY, USA, 1970.
[2] E. Zeidler, Nonlinear Functional Analysis and Its Applications. II/B: Nonlinear Monotone Operators, Springer, New York, NY, USA, 1990.
[3] A. N. Iusem, M. V. Solodov, "Newton-type methods with generalized distances for constrained optimization," Optimization , vol. 41, no. 3, pp. 257-278, 1997.
[4] Y.-B. Zhao, D. Li, "Monotonicity of fixed point and normal mappings associated with variational inequality and its application," SIAM Journal on Optimization , vol. 11, no. 4, pp. 962-973, 2001.
[5] M. V. Solodov, B. F. Svaiter, "A globally convergent inexact Newton method for systems of monotone equations," Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods , vol. 22, pp. 355-369, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1999.
[6] L. Zhang, W. Zhou, "Spectral gradient projection method for solving nonlinear monotone equations," Journal of Computational and Applied Mathematics , vol. 196, no. 2, pp. 478-484, 2006.
[7] Y. Xiao, Q. Wang, Q. Hu, "Non-smooth equations based method for l 1 -norm problems with applications to compressed sensing," Nonlinear Analysis: Theory, Methods & Applications , vol. 74, no. 11, pp. 3570-3577, 2011.
[8] K. Yin, Y. Xiao, M. Zhang, "Nonlinear conjugate gradient method for l 1 -norm regularization problems in compressive sensing," Journal of Computational Information Systems , vol. 7, no. 3, pp. 880-885, 2011.
[9] G. Yu, "A derivative-free method for solving large-scale nonlinear systems of equations," Journal of Industrial and Management Optimization , vol. 6, no. 1, pp. 149-160, 2010.
[10] G. Yu, "Nonmonotone spectral gradient-type methods for large-scale unconstrained optimization and nonlinear systems of equations," Pacific Journal of Optimization , vol. 7, no. 2, pp. 387-404, 2011.
[11] G. Yu, S. Niu, J. Ma, "Multivariate spectral gradient projection method for nonlinear monotone equations with convex constraints," Journal of Industrial and Management Optimization , vol. 9, no. 1, pp. 117-129, 2013.
[12] L. Han, G. Yu, L. Guan, "Multivariate spectral gradient method for unconstrained optimization," Applied Mathematics and Computation , vol. 201, no. 1-2, pp. 621-630, 2008.
[13] E. D. Dolan, J. J. Moré, "Benchmarking optimization software with performance profiles," Mathematical Programming , vol. 91, no. 2, pp. 201-213, 2002.
[14] M. A. T. Figueiredo, R. D. Nowak, S. J. Wright, "Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems," IEEE Journal on Selected Topics in Signal Processing , vol. 1, no. 4, pp. 586-597, 2007.
Appendix
The Test Problems
In this appendix, we list the test functions and the associated initial guess as follows.
Problem 1.
g : ℝ^n → ℝ^n is given by [figure omitted; refer to PDF] where g(x) = (g_1(x), g_2(x), ..., g_n(x))^T and x = (x_1, x_2, ..., x_n)^T.
Problem 2.
g : ℝ^n → ℝ^n is given by [figure omitted; refer to PDF] where g(x) = (g_1(x), g_2(x), ..., g_n(x))^T and x = (x_1, x_2, ..., x_n)^T.
Problem 3.
g : ℝ^n → ℝ^n is given by [figure omitted; refer to PDF] where f(x) = (e^{x_1} - 1, e^{x_2} - 1, ..., e^{x_n} - 1)^T and [figure omitted; refer to PDF]
Note that Problems 1 and 3 are smooth at x = 0, while Problem 2 is nonsmooth. The initial points used in our tests are listed in Table 7.
Table 7: The initial points used in our test.
x 1 ( - 20,20 , - 20,20 , ... , - 20,20 , - 20,20 ) T
x 2 ( - 15,15 , - 15,15 , ... , - 15,15 , - 15,15 ) T
x 3 ( - 10,10 , - 10,10 , ... , - 10,10 , - 10,10 ) T
x 4 ( - 5,5 , - 5,5 , ... , - 5,5 , - 5,5 ) T
x 5 ( - 1.2,1 , - 1.2,1 , ... , - 1.2,1 , - 1.2,1 ) T
x 6 ( - 2,1 , - 2,1 , ... , - 2,1 , - 2,1 ) T
x 7 ( - 1,1 , - 1,1 , ... , - 1,1 , - 1,1 ) T
x 8 ( - 1 / 1 , 1 / 2 , ... , - 1 / ( n - 1 ) , 1 / n ) T
x 9 ( 1 / 1 , 1 / 2 , ... , 1 / ( n - 1 ) , 1 / n ) T
x 10 ( - 1 , - 1 , ... , - 1 , - 1 ) T
x 11 ( 1,1 , ... , 1,1 ) T
x 12 ( 0.1,0.1 , ... , 0.1,0.1 ) T
Copyright © 2013 Gaohang Yu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
Combining the multivariate spectral gradient method with a projection scheme, this paper presents an adaptive prediction-correction method for solving large-scale nonlinear systems of monotone equations. The proposed method possesses some favorable properties: (1) it is progressive step by step, that is, the distance between the iterates and the solution set decreases monotonically; (2) its global convergence is independent of any merit function and of Lipschitz continuity; (3) it is a derivative-free method with low storage requirements and so can be applied to large-scale nonsmooth equations. Preliminary numerical results show that the proposed method is very effective. Practical applications of the proposed method are demonstrated and tested on sparse signal reconstruction, compressed sensing, and image deconvolution problems.