Introduction
The minimax problem with nonlinear inequality constraints has the following form:

(1)  min_x max_{1≤i≤l} fi(x)   s.t.  gj(x) ≤ 0,  j = 1, ⋯, m − l,

where fi(x), gj(x): R^n → R.
The minimax problem is a specific class of nonsmooth optimization problems. General optimization methods cannot be applied directly to (1) because the objective function is nondifferentiable. Common approaches for (1) include gradient sampling methods [1, 2], cutting plane and bundle methods [3], and subgradient methods [4].
Smoothing is one of the most popular approaches for solving nonsmooth problems, and two main strategies have been proposed in the literature. The first approximates the nondifferentiable objective by a smooth exponential function with parameters (also called an entropy function); for instance, Shor [4] proposed two smoothing algorithms with an active-set strategy and a new adaptive parameter update rule. The second introduces an artificial additional variable to transform the problem into an equivalent nonlinear program with smooth constraints:

(2)  min_{(x,t)} t   s.t.  fi(x) − t ≤ 0,  i = 1, ⋯, l;   gj(x) ≤ 0,  j = 1, ⋯, m − l,

where fi(x) and gj(x) are Lipschitz continuously differentiable functions.
Many algorithms can be applied to (2), such as gradient projection [5], interior point methods [6], trust-region methods [7], sequential quadratic programming (SQP) [8], penalty function methods [9], filter methods [10], and QP-free methods [11].
The remarkable efficiency of SQP methods for nonlinear constrained optimization problems (NLP) has led to their extension to many other problems, including minimax problems [8–13]. However, the iterates of an SQP algorithm may fail to converge when the initial point lies far from the optimal point. Penalty function methods, proposed by Courant [13] in 1943, were therefore introduced to enhance convergence. In a penalty function method, the merit function is defined as the sum of the objective function and a penalty term. In [9], Ma et al. give an exact smooth penalty function method for minimax problems with mixed constraints. However, the choice of penalty parameters during the iterative process is complicated, and the effectiveness of the method is significantly affected by this choice.
In 2002, Fletcher and Leyffer [14] proposed the filter method, inspired by ideas from multi-objective programming, in which the objective function and the constraint violation function are considered separately; it effectively avoids the choice of penalty parameters. The combination of filter and SQP methods has been applied to the minimax problem with satisfactory numerical results. In [15], a trust-region SQP filter method combined with nonmonotone techniques is given for minimax problems. Luo et al. [10] constructed a new feasible subproblem based on working sets and incorporated filter techniques. Although filter methods perform well numerically, the filter set grows as the iterations proceed, so the storage requirement grows as well. Moreover, the feasibility restoration phase is difficult to avoid in filter methods, which further increases the computational effort.
To overcome the possible inconsistency of the subproblems and the high computational cost, Panier et al. [16] proposed a QP-free (SSLE) algorithm for optimization problems with inequality constraints, based on the KKT conditions and Newton's method. Each iteration requires solving two systems of linear equations with the same coefficient matrix and one least-squares subproblem. Global and superlinear convergence are established without assuming that the stationary points are isolated. In [11], Jian and Ma presented a new QP-free algorithm for minimax problems that exploits the special structure of these problems. Two further QP-free algorithms for constrained optimization problems were proposed in [17, 18].
Inspired by the above studies, this paper presents a nonmonotone QP-free algorithm without a penalty function or a filter for the minimax problem, and proves its global convergence as well as its superlinear convergence under mild conditions. Using an NCP function, the algorithm solves in each iteration two systems of linear equations with the same coefficient matrix, which can be viewed as a quasi-Newton perturbation of the KKT conditions in the primal and dual variables. An adjustable operator is introduced, which changes in each iteration according to the results of the previous iteration, thereby changing the influence of the objective function in this mechanism. A nonmonotone mechanism is used to avoid the Maratos effect, and a working set is introduced to further reduce the computational effort.
The paper is organized as follows. Section 1 reviews previous methods for solving minimax problems. Section 2 describes the proposed algorithm. Section 3 discusses its implementation. Section 4 establishes the global convergence and the superlinear convergence rate of the algorithm. Section 5 presents numerical results and a practical application. The final section concludes the paper.
Description of algorithm
Preliminaries
Define the following notation: X = (x, t) ∈ R^{n+1} and ω(X) = t.

The following functions hi(X) are defined to represent the constraints of (2):

hi(X) = t − fi(x),  i = 1, ⋯, l,     h_{l+j}(X) = −gj(x),  j = 1, ⋯, m − l,

where fi(x) and gj(x) are Lipschitz continuously differentiable functions. Let h(X) = (h1(X), ⋯, hm(X))^T.

Then (2) is equivalent to

(3)  min ω(X)   s.t.  hi(X) ≥ 0,  i = 1, ⋯, m,

where ω(X) = t.
The Lagrangian function is defined as

(4)  L(X, μ) = ω(X) − ∑_{i=1}^m μi hi(X),

where μ = (μ1, ⋯, μl, μl+1, ⋯, μm)^T is the multiplier vector.
(X*, μ*) is called a KKT point pair for (3) if the following conditions hold:

(5)  ∇ω(X*) − ∑_{i=1}^m μi* ∇hi(X*) = 0,   hi(X*) ≥ 0,   μi* ≥ 0,   μi* hi(X*) = 0,   i = 1, ⋯, m.
To construct the system of equations, we introduce the nonlinear complementarity problem (NCP) function: φ(a, b) is called an NCP function if the following relationship holds:

(6)  φ(a, b) = 0  ⟺  a ≥ 0,  b ≥ 0,  ab = 0.
The NCP function used here is Lipschitz continuous and differentiable everywhere except at the origin, and it is strongly semismooth at (0, 0). The Fischer-Burmeister NCP function is a simple NCP function with good theoretical properties and numerical performance.
The Fischer-Burmeister NCP function used in this paper has the following structure:

φ(a, b) = a + b − √(a² + b²).

So, for (a, b) ≠ (0, 0), we have

(7)  ∇φ(a, b) = ( 1 − a/√(a² + b²),  1 − b/√(a² + b²) )^T.
Then the NCP function Φi is defined by Φi(X, μ) = φ(hi(X), μi), where i = 1, ⋯, m.
According to (5), define

(8)  Φ(X, μ) = ( ∇X L(X, μ),  Φ1(X, μ), ⋯, Φm(X, μ) )^T.
Clearly, the KKT conditions (5) hold if and only if Φ(X, μ) = 0.
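For illustration, the following minimal Python sketch evaluates the Fischer-Burmeister function and assembles a residual with the structure of Φ(X, μ) in (8); the helper names and the convention that complementarity is taken componentwise between hi(X) and μi are assumptions of the sketch, not part of the derivation above.

```python
import numpy as np

def phi_fb(a, b):
    """Fischer-Burmeister NCP function phi(a, b) = a + b - sqrt(a^2 + b^2);
    it vanishes exactly when a >= 0, b >= 0 and a*b = 0."""
    return a + b - np.sqrt(a * a + b * b)

def kkt_residual(grad_L, h_vals, mu):
    """Stack the stationarity residual with the componentwise FB values,
    mirroring the structure of Phi(X, mu) in (8).

    grad_L : gradient of the Lagrangian with respect to X
    h_vals : constraint values h_i(X), i = 1, ..., m
    mu     : current multiplier estimates
    """
    grad_L, h_vals, mu = map(np.asarray, (grad_L, h_vals, mu))
    return np.concatenate([grad_L, phi_fb(h_vals, mu)])

# (X, mu) satisfies the KKT conditions (5) exactly when this residual is zero.
```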
Similar to (7), if (hi(X), μi) ≠ (0, 0), then

∇X Φi(X, μ) = ζi ∇hi(X),   ∇μ Φi(X, μ) = νi ei,

where ei = (0, ⋯, 0, 1, 0, ⋯, 0)^T is the i-th unit vector, and

ζi = 1 − hi(X)/√(hi(X)² + μi²),   νi = 1 − μi/√(hi(X)² + μi²).
If (hi(X), μi) = (0, 0), φ is not differentiable; in this case introduce the following notation: ζi = 1 − ξi and νi = 1 − ηi, where (ξi, ηi) is any pair with ξi² + ηi² ≤ 1. It holds that (ζi ∇hi(X), νi ei) is then an element of the generalized gradient of Φi at (X, μ).
Let ∇²hj(X), j ∈ W, denote the Hessian matrix of hj(X), where W is the active (working) set.
Define the coefficient matrix Vk according to (8), where Bk is an approximation of the Hessian matrix of the Lagrangian L(X, μ), and c > 0 and τ > 1 are given parameters.
In this work, to improve the convergence and flexibility of the algorithm, we replace the objective function by the expression given in (9).
We introduce an adjustable operator δ, which is not a penalty parameter but is changed in each iteration based on the result of that iteration. Fig 1 shows the initial situation with δ = 1.
[Figure omitted. See PDF.]
We give the following filter-substitute mechanism: either (10) holds, or both (11) and (12) hold.
There are three regions in the first quadrant. If the trial point Xk lies in region I at the k-th iteration, i.e., (10) is satisfied at the current trial point but (11) and (12) are not, then this point is accepted. If the trial point lies in the reject region, i.e., neither the function value nor the constraint violation yields a satisfactory decrease, the point is rejected. If the iteration point lies in region II, i.e., (10) does not hold but (11) and (12) are satisfied, then the objective function is improved but the constraint violation does not achieve sufficient descent, so we need to tighten the acceptance region (see Fig 2).
[Figure omitted. See PDF.]
So we adjust the parameter δk as in (13), where the reduction factor is a given constant.
If Xk is located in region III, the algorithm makes a good improvement in both the objective function and the constraint violation function, so we relax the acceptance criteria and expect further improvements. This means increasing the value of δk to make the rejected region narrower (see Fig 3).
[Figure omitted. See PDF.]
Accordingly, δk is adjusted as in (14).
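For illustration only, the following sketch mimics the three-region adjustment of the adjustable operator described above; the exact conditions (10)-(14) are not reproduced here, and the shrink/expansion factors and bounds are hypothetical constants rather than the values used in the paper.

```python
def update_delta(delta, in_region_ii, in_region_iii,
                 shrink=0.5, delta_min=1e-8, delta_max=1e8):
    """Hypothetical sketch of the delta_k adjustment.

    Region III (both the objective and the constraint violation improve):
        relax the acceptance criteria by enlarging delta, cf. (14).
    Region II  (objective improves, but the constraint violation lacks
        sufficient descent): tighten the acceptance region, cf. (13).
    Region I   : keep delta unchanged.
    """
    if in_region_iii:
        return min(delta / shrink, delta_max)   # enlarge: rejected region narrows
    if in_region_ii:
        return max(delta * shrink, delta_min)   # shrink: acceptance becomes stricter
    return delta
```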
Algorithm A
Based on the above analysis, we give Algorithm A for problem (1).
Step 0: (Initialization.)
Choose an initial point and the required algorithm parameters (the values used in the experiments are listed in Section 5); set k ≔ 0.
Step 1: (Working set.)
Step 1.1 Set i = 0 and ϵk,0 = ϵk−1.
Step 1.2 Compute the working set Wk,i.
Step 1.3 If the acceptance test is satisfied, set Wk = Wk,i and ϵk = ϵk,i, and go to Step 2. Otherwise, set ϵk,i+1 ≔ βϵk,i and i ≔ i + 1, and go to Step 1.2.
Step 2: (Search directions.)
Step 2.1 Compute (dk0, λk0) by solving the linear system (15) in (d, λ), and define (16).
Step 2.2 Compute (dk1, λk1) by solving the linear system (17) in (d, λ), and define (18). Both systems share the coefficient matrix Vk, so it is factorized only once (see the sketch of Steps 1 and 2 given after the algorithm).
If the first test holds, set dk = dk0 and λk = λk0; otherwise, if dk0 = 0, set dk = dk1 and λk = λk1. If neither case applies, compute ρk according to the following definition: if the stated conditions hold, take the prescribed value; otherwise set ρk = γ3. Denote by dk the resulting combined search direction.
Step 3: (Adjustable nonmonotone line search.)
Step 3.1 If (19) holds and at least one of (10) and (12) is satisfied, update δk by (14) and set the new point according to (20).
Step 3.2 Let M be a non-negative integer and, for each k ≥ 1, let q(k) be an index at which the reference (merit) value attains its maximum over the most recent min{k, M} + 1 iterations.
Define (21), where αk = τ^j and j is the smallest non-negative integer satisfying (22), or both (23) and (24).
Step 4: (Update.)
If (10)-(12) all hold, update δk by (14) and update the iterate by (25).
Otherwise, if (10) holds but (11) and (12) are not both satisfied, update Xk and μk by (25).
Otherwise, update δk by (13) and update the iterate by (26).
Compute Bk+1 by the BFGS update formula (a sketch of the nonmonotone search and of a damped BFGS update is given after the remarks below).
If (10) holds at Xk+1 but not at Xk, update the corresponding parameter; otherwise leave it unchanged.
If (10) holds, update ιk by the prescribed rule; otherwise set ιk+1 = ιk. Set k ≔ k + 1 and go back to Step 1.
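For illustration, the following Python sketch outlines the main computations of Steps 1 and 2: the shrinking-ϵ identification of the working set and the solution of the two linear systems (15) and (17) with a single factorization of the shared coefficient matrix Vk. The activity test, the assembly of Vk, and the right-hand sides are assumptions of the sketch rather than the exact formulas of the algorithm.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def working_set(h_vals, phi_norm, eps_prev, beta=0.5, c=0.2, eps_floor=1e-12):
    """Step 1 (sketch): shrink eps until an (illustrative) acceptance test tied
    to the KKT residual norm ||Phi_k|| is met; return the eps-active index set."""
    eps = eps_prev
    while True:
        W = np.flatnonzero(np.abs(h_vals) <= eps)    # eps-active indices W_{k,i}
        if eps <= c * phi_norm or eps <= eps_floor:  # illustrative test only
            return W, eps
        eps *= beta                                  # Step 1.3: eps <- beta * eps

def search_directions(V_k, rhs_15, rhs_17):
    """Step 2 (sketch): systems (15) and (17) share the coefficient matrix V_k,
    so it is factorized once and reused for both right-hand sides."""
    lu, piv = lu_factor(V_k)
    sol_15 = lu_solve((lu, piv), rhs_15)   # (d_{k0}, lambda_{k0}) of system (15)
    sol_17 = lu_solve((lu, piv), rhs_17)   # (d_{k1}, lambda_{k1}) of system (17)
    return sol_15, sol_17
```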
Remark 2.1 Conditions (10) and (12) are composed of the Lagrange multipliers and the KKT conditions; a solution of the corresponding system satisfies the first-order optimality conditions of the original problem.
Remark 2.2 For convenience, if (10) holds, the iterative step is called a Φ-step; if (11) and (12) hold, it is called a Θ-step.
Remark 2.3 Obviously, δk is bounded, and the Lagrangian function is Lipschitz continuous.
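In the same illustrative spirit, the next sketch shows the two globalization ingredients of Steps 3 and 4: a Grippo-type nonmonotone backtracking test, in which the reference value is the maximum merit value over the last M iterations, and a Powell-damped BFGS update that keeps Bk+1 positive definite. The decrease condition, the step-shrinking factor, and the damping constant are illustrative assumptions and are not claimed to reproduce (21)-(24) or the exact update of [20].

```python
import numpy as np

def nonmonotone_step(merit, X, d, recent_vals, M=5, shrink=0.5, sigma=1e-4, max_back=30):
    """Step 3 (sketch): accept the largest alpha = shrink**j that satisfies an
    Armijo-type decrease against the maximum of the last M merit values."""
    ref = max(recent_vals[-M:])            # nonmonotone reference value, cf. q(k)
    pred = float(np.dot(d, d))             # illustrative predicted-decrease measure
    alpha = 1.0
    for _ in range(max_back):
        if merit(X + alpha * d) <= ref - sigma * alpha * pred:
            return alpha
        alpha *= shrink
    return alpha

def damped_bfgs_update(B, s, y, theta_min=0.2):
    """Step 4 (sketch): Powell-damped BFGS update of the Hessian approximation.

    s : X_{k+1} - X_k
    y : difference of Lagrangian gradients between the two iterates
    Damping keeps B positive definite even when s^T y is small or negative."""
    Bs = B @ s
    sBs = float(s @ Bs)
    sy = float(s @ y)
    if sy < theta_min * sBs:                          # Powell's damping rule
        theta = (1.0 - theta_min) * sBs / (sBs - sy)
        y = theta * y + (1.0 - theta) * Bs            # damped curvature vector
        sy = float(s @ y)
    return B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / sy
```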
Implementation of algorithm
Assumption A1 Bk is positive definite, and there exist positive numbers m1 and m2 such that

(27)  m1‖d‖² ≤ d^T Bk d ≤ m2‖d‖²

for all d and all k.
Assumption A2 The level set {X ∣ ω(X) ≤ ω(X0)} is bounded, and ‖μk + λk‖ is bounded for all sufficiently large k.
Assumption A3 The functions fi(x) and gj(x) are Lipschitz continuously differentiable, with Lipschitz constant m0 > 0.
Assumption A4 The Mangasarian-Fromovitz constraint qualification (MFCQ) holds with respect to the active constraints hi(X), i ∈ W(X).
Assumption A5 The sequence {Bk} satisfies ‖(Bk − ∇²XX L(X*, μ*)) dk‖ = o(‖dk‖).
Assumption A6 The strict complementarity condition holds at (X*, μ*).
Remark 3.1 It follows from A3 that the Lagrangian function (4) is Lipschitz continuous.
The following lemmas show that the algorithm is well defined.
Lemma 3.1 Step 1 terminates finitely.
Proof. Assume the conclusion is not valid; then Step 1 runs an infinite number of times and (28) holds.
From the definition of Wk, we know that Wk,i+1 ⊆ Wk,i, so for i large enough Wk,i = Wk,i+1.
Letting i → ∞ in the resulting relations yields a contradiction with A4.
Lemma 3.2 If Φk ≠ 0, then Vk is nonsingular for all k.
Proof. Let Vk(u^T, v^T)^T = 0, where u and v denote the primal and dual components of the solution. From (17), we have (29) and (30).
Since Φk ≠ 0, the quantities appearing in the definition of Vk are, by their definitions, such that (31) holds for any j.
Substituting (31) into (29) and premultiplying (29) by u^T, we obtain an expression in which Bk is positive definite and the remaining matrix is positive semi-definite.
So we get u = 0. Substituting u = 0 into (31) then gives v = 0. Since (u, v) = 0 is the unique solution of Vk(u^T, v^T)^T = 0, Vk is nonsingular and the conclusion holds.
Lemma 3.3 If dk0 ≠ 0, then ∇ωk^T dk0 ≤ −dk0^T Bk dk0 < 0, where Bk is an approximation of the Hessian matrix of L(X, μ) and ∇ωk is the gradient of ω at Xk.
Proof. From (15), we have (32) and (33).
Then (34) holds.
Substituting (34) into (32), we obtain the required expression.
The matrix involved is positive semi-definite, so, together with the positive definiteness of Bk, the desired inequality follows.
Hence the conclusion holds.
Lemma 3.4 For any 0 < α ≤ 1, there exists t1 > 0 such that (35) holds.
Proof. If (hi(Xk), μik) ≠ (0, 0), then from (31) there exists t1 > 0 such that (35) holds for any 0 < α ≤ 1.
Therefore the conclusion holds in this case.
If (hi(Xk), μik) = (0, 0), then φi(0, 0) = 0 implies (36), where diag(ζk) and diag(νk) are the diagonal matrices with diagonal elements ζik and νik, respectively.
So we have (37).
Hence the conclusion holds.
Lemma 3.5 Under the stated condition, for any ε > 0 there exists a threshold such that (38) holds for all sufficiently small α.
Proof. In the case considered, the SSLE system gives (39).
For any i with (hi(Xk), μik) ≠ (0, 0), (40) holds.
By (36) and (40), (41) follows.
By the definitions of the quantities involved, (42) holds.
By (41) and (42), given any ε > 0, there is a threshold such that (43) holds for all sufficiently small α.
Hence the conclusion holds.
From Lemmas 3.4–3.5, we can obtain the following Lemma 3.6.
Lemma 3.6 Under the same condition, given any ε > 0, there exists a threshold such that (44) holds for all sufficiently small α.
Theorem 3.1 If Algorithm A does not terminate at Xk, then there exists αmin > 0 such that αk ≥ αmin > 0, and either (10) holds, or both (11) and (12) hold.
Proof. If ρk = γ3, then for all k and any admissible α the required estimates hold.
So (10) holds.
Otherwise, by the definition of ρk and Lemma 3.6, for all sufficiently large k we have (45) and the corresponding bound.
The proof is complete.
Convergence of algorithm
In this section, the global convergence and the superlinear convergence rate of the algorithm are discussed.
If Φk = 0, then (Xk, μk) is a KKT point. From Lemma 3.3, if dk0 ≠ 0, then dk is a descent direction of ωk and (dk, λk) is a descent direction of the corresponding merit function. If dk0 = 0, then (Xk, μk) satisfies the KKT conditions. In the following, without loss of generality, it is assumed that the algorithm does not terminate finitely.
Lemma 4.1 Assume A1-A4 hold. Then ΦI(Xk, μk) → 0 as k → ∞.
Proof. Case 1. If (10) holds for all k sufficiently large, then the sequence {‖ΦI(Xk, μk)‖} is monotonically decreasing and hence convergent.
From (10) and θ1 ∈ (0, 1), we obtain ΦI(Xk, μk) → 0.
Case 2. If (11) and (43) hold for all k sufficiently large, we prove the conclusion by contradiction: suppose there exists ϵ1 > 0 such that (46) holds.
By (12), (43) and the definition of θ, the sequence {ωk} then decreases by a fixed positive amount at each such iteration.
Since {ωk} is monotonically decreasing, it follows that ωk → −∞ as k → +∞, which contradicts the boundedness hypothesis.
Case 3. The Φ-steps and Θ-steps appear alternately.
According to Remark 2.2, suppose the iterations from kt to kt+1 and from kt+2 + 1 to kt+3 are Θ-steps, and the iterations from kt+1 + 1 to kt+2 are Φ-steps.
The quantity ιk is updated only at the transition from a Θ-step to a Φ-step, which indicates that the constraint violation is still large enough to be decreased. From Step 4, we know that if ιk is updated, then the corresponding descent inequality holds.
Thus the resulting sequence is monotonically decreasing, and the associated sequence is monotonically decreasing for all α ≥ 0 and convergent.
Considering its non-negativity, together with (19), we conclude that ΦI(Xk, μk) → 0.
Lemma 4.2 Assume A1-A4 hold. Then dk0 → 0 and dk → 0 as k → ∞.
The proof is similar to that of Lemma 5.4 in [20].
With Lemmas 4.1 and 4.2, we obtain the global convergence of the algorithm.
Theorem 4.1 Any accumulation point (X*, μ*) of the sequence {(Xk, μk)} is a KKT point pair of problem (3).
The convergence rate of the algorithm is discussed next. To make the algorithm converge superlinearly, Assumptions A5 and A6 are added.
Remark 4.1 A5 ensures that the step in (Xk, μk) is a Newton direction with a higher-order perturbation. A6 ensures that Φ is continuously differentiable at the KKT point (X*, μ*).
Lemma 4.3 If the above assumptions hold, then {Vk} and {‖(Vk)^−1‖} are uniformly bounded, and any accumulation matrix V* of {Vk} is nonsingular.
The proof is similar to that of Lemma 5.5 in [19].
Lemma 4.3 implies that Theorem 4.2 holds.
Theorem 4.2 Assume A1-A6 hold. Then the algorithm is superlinearly convergent, i.e., (Xk, μk) converges to (X*, μ*) superlinearly.
Numerical results
In this section, the numerical results are presented. The test problems are taken from [21]. The BFGS formula proposed by Broyden, Fletcher, Goldfarb and Shanno is used to update Bk+1 as in [20].
Tables 1 and 2 show the numerical results for the two test problems, where NIT is the number of iterations, NF is the number of evaluations of the objective and constraint functions, NG is the number of gradient evaluations, and TIME (in seconds) is the CPU runtime. The numerical experiments were performed in Matlab R2020 on a computer with 16.0 GB RAM and an 11th Gen Intel(R) Core(TM) i5-11320H CPU @ 3.20 GHz.
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
The parameters of the algorithm are chosen as follows: τ = 2.5, c = 0.2, γ = 0.5, γ1 = 0.1, γ2 = 0.95, γ3 = 0.5, θ = 0.9, θ1 = 0.95, θ2 = 0.1, ι0 = 1, μ = [1, 0, …, 0, 1].
Test problem 1 is taken from [21].
Test problem 2 is taken from [21].
We now discuss the application of the algorithm to portfolio investment. In the investment problem proposed by Markowitz, two objectives are considered: maximizing the return of the portfolio and reducing its risk. In the traditional model, the latter amounts to minimizing the risk (variance) over the set of feasible portfolios for a given level of expected return. By varying the expected return level, a set of nondominated portfolios is obtained, and the efficient frontier is determined by the variance and the mean of the yields in the Markowitz model. Investors can then select a suitable portfolio by analyzing the expected investment and return.
In the traditional Markowitz mean-variance model, it is assumed that the investor has some wealth to invest in a set of securities, recorded as a set P. Rk represents the return of security k, which is a random variable; its mean can be estimated from historical data. Define the expected return of a security as μk = E(Rk), k = 1, ⋯, P, and let xk be the proportion of wealth allocated to security k. The weight vector x needs to satisfy the constraint e^T x = 1, where e = (1, ⋯, 1)^T ∈ R^P.
The expected return of the portfolio is E(R) = μ^T x, where μ = (μ1, ⋯, μP)^T.
The variance of the portfolio is x^T Q x, where the covariance matrix Q = {Qi,j} is a symmetric positive semi-definite matrix with uncertain information. Assume that the short-term investment value is uncertain; clearly xk ≥ 0, k = 1, ⋯, P. We let Bk and Ak be the upper and lower bounds for xk, that is, Ak ≤ xk ≤ Bk, k = 1, ⋯, P.
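For illustration, the following sketch computes the basic quantities of this model: the expected return μ^T x, the variance x^T Q x, and a smoothed count of active positions. The smoothing |xk|/(|xk| + δ) is one common locally Lipschitz approximation of sign(|xk|), used here as an assumption; it is not claimed to be the exact function (47).

```python
import numpy as np

def portfolio_return(mu, x):
    """Expected portfolio return mu^T x."""
    return float(mu @ x)

def portfolio_variance(Q, x):
    """Portfolio variance x^T Q x for a symmetric positive semi-definite Q."""
    return float(x @ Q @ x)

def smooth_cardinality(x, delta=1e-3):
    """Smoothed number of active positions: sum_k |x_k| / (|x_k| + delta),
    an illustrative locally Lipschitz stand-in for sum_k sign(|x_k|)."""
    ax = np.abs(x)
    return float(np.sum(ax / (ax + delta)))

# Two assets with identical risk and return: equal weights x = [0.5, 0.5],
# matching the P = 2 case reported below.
mu = np.array([0.10, 0.10])
Q = np.array([[0.04, 0.00],
              [0.00, 0.04]])
x = np.array([0.5, 0.5])
ret, var, card = portfolio_return(mu, x), portfolio_variance(Q, x), smooth_cardinality(x)
```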
According to the above definitions, the cardinality/mean-variance bi-objective optimization problem [22] can be described with the portfolio variance and the portfolio cardinality (counted via sign(‖xk‖)) as the two objectives.
Note that sign(‖xk‖) is discontinuous, so we introduce the locally Lipschitz continuous approximation function ω(xk, δ) in (47), where δ > 0 and xk ∈ R.
Let y = (x, yP+1)^T ∈ R^{P+1}; we then obtain the continuous approximation problem (48), where w(y) = (ω(x1, δ), ⋯, ω(xP, δ), yP+1)^T. Problem (48) can be regarded as a minimax problem and can be solved by Algorithm A. For the case P = 2, the optimal portfolio obtained is [0.5; 0.5]. This result illustrates that when the risk and return of each stock are equal, the proportion of each stock in the optimal strategy is determined by the investor's investment preference.
Conclusion
In this paper, we establish the global convergence and superlinear convergence properties of an adaptive QP-free method for solving the minimax problem. Combining the NCP function and the multipliers, two systems of linear equations with the same coefficient matrix are solved in each iteration; this can be viewed as a quasi-Newton perturbation of the KKT conditions in the primal and dual variables. A new filter-substitute mechanism is given, which retains the advantages of the filter method, avoids the selection of penalty parameters, and eliminates the potential storage problems of a filter. The objective function is tuned by introducing a flexible operator. A nonmonotone mechanism is used to avoid the Maratos effect to some extent, and the introduction of the working set further reduces the computational effort. The effectiveness and convergence of the algorithm are demonstrated without assuming that the stationary points are isolated.
Citation: Su K, Liu S, Lu W (2023) The global convergence properties of an adaptive QP-free method without a penalty function or a filter for minimax optimization. PLoS ONE 18(7): e0274497. https://doi.org/10.1371/journal.pone.0274497
About the Authors:
Ke Su
Contributed equally to this work with: Ke Su, Shaohua Liu
Roles: Conceptualization, Writing – original draft
¶‡ KS and SL also contributed equally to this work.
Shaohua Liu
Contributed equally to this work with: Ke Su, Shaohua Liu
Roles: Conceptualization, Writing – original draft
E-mail: [email protected]
¶‡ KS and SL also contributed equally to this work.
Wei Lu
Roles: Software
1. Hare W., Nutini J. A derivative-free approximate gradient sampling algorithm for finite minimax problems[J]. Comput Optim Appl, 2013, 56: 1–38.
2. Burke J.V., Curtis F.E., Lewis A.S., Overton M.L., Simões L.E.A. Gradient Sampling Methods for Nonsmooth Optimization[M]. Numerical Nonsmooth Optimization, 2020: 201–225.
3. Fuduli A., Gaudioso M., Giallombardo G., Miglionico G. A partially inexact bundle method for convex semi-infinite minmax problems[J]. Communications in Nonlinear Science and Numerical Simulation, 2015, 21(1-3): 172–180.
4. Shor N.Z. The Subgradient Method. In: Minimization Methods for Non-Differentiable Functions[J]. Springer Series in Computational Mathematics, 1985, 3: 22–47. https://doi.org/10.1007/978-3-642-82118-9_3
5. Ma G, Zhang Y, Liu M. A generalized gradient projection method based on a new working set for minimax optimization problems with inequality constraints[J], J Inequal Appl, 2017, 51: 1–14. pmid:28298875
6. Obasanjo E., Tzallas-Regas G., Rustem B. An Interior-Point Algorithm for Nonlinear Minimax Problems[J]. J Optim Theory Appl, 2010, 144: 291–318.
7. EL-Sobky B.,Aboutahoun A.W. An active-set algorithm and a trust-region approach in constrained minimax problem[J]. Comp. Appl. Math, 2018, 37: 2605–2631.
8. Jian J, Mo X, Qiu L et al. Simple Sequential Quadratically Constrained Quadratic Programming Feasible Algorithm with Active Identification Sets for Constrained Minimax Problems[J]. J Optim Theory Appl, 2014, 160: 158–188.
9. Ma C, Lin X, Yao J, et al. New exact penalty function for solving constrained finite min-max problems[J]. Applied Mathematics and Mechanics, 2012, 33(2): 253–270.
10. Luo Z, Wang Z, Li R. Improved Filter-SQP Algorithm with Active Set for Constrained Minimax Problems[J]. Journal of Applied Mathematics. 2014, 2: 1–7.
11. Jian J, Ma G. A Globally Convergent QP-Free Algorithm for Inequality Constrained Minimax Optimization[J]. Acta Mathematica Scientia, 2020, 40(6): 1723–1738.
12. Hu Q, Hu J. A sequential quadratic programming algorithm for nonlinear minimax problems[J]. Bulletin of the Australian Mathematical Society, 2007, 76(3): 353–368.
13. Courant R. Variational methods for the solution of problems of equilibrium and vibrations[J]. Bull. Amer. Math. Soc. 1943, 49, 1–23.
14. Fletcher R., Leyffer S. Nonlinear programming without a penalty function[J]. Mathematical Programming, 2002, 91: 239–269.
15. Zhao Q, Guo N. A Nonmonotone Filter Method for Minimax Problems[J]. Applied Mathematics, 2011, 2: 1372–1377.
16. Panier E.R., Tits A.L., Herskovits J.N. A QP-free, globally convergent, locally superlinearly convergent algorithm for inequality constrained optimization[J]. SIAM J. Control Optim, 1988, 26: 788–811.
17. Pu D, Liu A, Shang Y, et al. QP-free infeasible method without a penalty function and a filter [J]. Operations Research Transactions, 2013, 17(1): 25–27.
18. Shang Y, Jin Z, Pu D. A new filter QP-free method for the nonlinear inequality constrained optimization problem[J]. J Inequal Appl, 2018, 287: 1–14. pmid:30839771
19. Chen L, Wang Y, He G. A feasible active set QP-free method for nonlinear programming[J]. SIAM J. Optim, 2006, 17: 401–429.
20. Yang Y., Li D., Qi L. A feasible sequential linear equation method for inequality constrained optimization[J]. SIAM J. Optim, 2003, 13: 1222–1244.
21. Karmitsa, N. Test problems for large-scale nonsmooth minimization. Reports of the Department of Mathematical Information Technology, Series B.
22. Brito R. P., Vicente N. L. Efficient cardinality/mean-variance portfolios[J]. System Modeling and Optimization, 2014, 443: 52–73.
© 2023 Su et al. This is an open access article distributed under the terms of the Creative Commons Attribution License: http://creativecommons.org/licenses/by/4.0/ (the “License”), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Abstract
In this paper, we propose an adaptive QP-free method without a penalty function or a filter for minimax optimization. In each iteration, two linear systems of equations, constructed from the Lagrange multipliers and an NCP reformulation of the KKT conditions, are solved. Based on the working set, the computational scale is further reduced. Instead of a filter structure, we adopt a nonmonotone equilibrium mechanism with an adaptive parameter adjusted according to the result of each iteration. The feasibility of the algorithm is established, and its convergence under mild assumptions is demonstrated. Numerical results and a practical application are reported at the end.