1. Introduction and Motivation
Nonsmooth optimization problems (NSO) arise in many fields of application, including engineering [1], economics [2], mechanics [3], and optimal control [4]. For example, multiobjective nonsmooth optimization has been applied in many areas of engineering where optimal decisions must be made in the presence of trade-offs between two or more conflicting objectives [5]. Several approaches exist for solving NSO; see [6–10]. Bundle methods are currently among the most efficient of these methods; for instance, nonsmooth bundle ideas have been applied to the safety evaluation of concrete dams in antiseismic hydraulic engineering [11–14]. These methods are based on the cutting plane method [15, 16], where convexity of the objective function is the fundamental assumption. If the objective function
However, for nonconvex cases, the corresponding model function does not stay below the objective function
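To fix ideas, recall the classical cutting-plane model that underlies these methods; the notation below (stability center $\hat{x}$, trial points $y_j$, subgradients $g_j \in \partial f(y_j)$, bundle index set $J_k$) is generic and serves only as an illustration of the standard construction, not as the paper's own formulas:
\[
\check{f}_k(x) \;=\; \max_{j \in J_k}\, \bigl\{\, f(y_j) + \langle g_j,\, x - y_j \rangle \,\bigr\},
\qquad \check{f}_k(x) \le f(x) \ \text{for all } x \ \text{whenever } f \ \text{is convex}.
\]
For nonconvex $f$, the inequality above may fail, which is precisely the difficulty addressed by the two-model construction described next.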
In this paper, we propose a new algorithm for constrained nonconvex optimization. The algorithm is based on constructing two kinds of approximations to the objective function, corresponding to the convex and concave behaviors of the objective function at the current point. If the linearization error is positive, we build a local lower approximation to the objective function; otherwise, a local upper approximation is constructed. In addition, the method employs
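In the notation of the preceding sketch, the quantity governing this choice is the linearization error associated with each bundle element (again an illustrative, generic formulation rather than the paper's own):
\[
\alpha_j \;=\; f(\hat{x}) \;-\; f(y_j) \;-\; \langle g_j,\, \hat{x} - y_j \rangle,
\]
and the affine piece $d \mapsto f(\hat{x}) + \langle g_j, d \rangle - \alpha_j$ is assigned to a lower (cutting-plane type) approximation when $\alpha_j > 0$ and to a local upper approximation otherwise.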
This paper is organized as follows. In Section 2, the model for constrained nonconvex optimization is established by using the exact penalty function. The nonconvex bundle algorithm is presented in Section 3. In Section 4, we prove that the sequence of serious steps generated by the proposed algorithm converges to an approximate stationary point of the exact penalty function when the objective function is weakly semismooth. Preliminary numerical experiments are reported in Section 5. Finally, some conclusions are given in Section 6.
2. Derivation of the Model
Consider the following constrained nonconvex optimization problem:
For convex function
If
It should be noted that
Let
Let
Let
Let
Substituting (14)–(16) into
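Since the intermediate formulas (2)–(16) are not reproduced here, the overall construction can be summarized, in generic notation and only as a hedged sketch of the classical exact penalty model used in constrained bundle methods [7, 19], by
\[
\min_{x \in \mathbb{R}^n} \; F_{\sigma}(x), \qquad
F_{\sigma}(x) \;=\; f(x) \;+\; \sigma \, \max\{\, c(x),\, 0 \,\},
\]
where $f$ is the objective function, $c$ the constraint function, and $\sigma > 0$ the penalty coefficient; under suitable constraint qualifications (cf. Lemma 3 below), the penalty is exact for $\sigma$ sufficiently large.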
3. Algorithm
In this section, we present a nonconvex bundle algorithm for the constrained nonconvex optimization problem (1). Our method is based on repeatedly solving problem (12); an illustrative sketch of a generic subproblem of this type is given after the algorithm listing below.
A few comments on Algorithm 1 are in order.
Algorithm 1: A proximal bundle algorithm for nonconvex functions.
Step 0 (Initialization):
Choose
Step 1 (Safeguard parameters setting):
If
Step 2 (Direction finding):
Solve
where
Step 3 (Penalty updating):
If
Step 4 (Stationarity test):
Set
Calculate
If
Step 5 (Trial point calculating):
Compute
Step 6 (Insertion of index):
(a) If
(b) Else, if
(c) Else find a scalar
(d) Insert the element
Step 7 (Descent test):
If
set
Step 8 (Bundle updating):
Select sets
End of Algorithm 1.
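As referred to above, the following MATLAB sketch illustrates a direction-finding subproblem of the generic proximal bundle type appearing in Step 2. It is only an illustration under assumed notation (rows of G holding bundle subgradients, alpha the vector of linearization errors, u the proximity parameter); it omits the upper model and the constraint linearizations that the actual subproblem (12) involves, and it requires the Optimization Toolbox for quadprog.

% Illustrative sketch (not the exact subproblem (12)): generic proximal bundle
% direction-finding QP
%     min over (d, v)   v + 0.5*u*||d||^2
%     s.t.              v >= G(j,:)*d - alpha(j),  j = 1,...,m,
% solved in the stacked variable z = [d; v].
function [d, v] = bundle_direction(G, alpha, u)
    [m, n] = size(G);                       % m bundle elements, dimension n
    H = blkdiag(u * eye(n), 0);             % quadratic term acts on d only
    f = [zeros(n, 1); 1];                   % linear term: minimize v
    A = [G, -ones(m, 1)];                   % G(j,:)*d - v <= alpha(j)
    b = alpha(:);
    opts = optimoptions('quadprog', 'Display', 'off');
    z = quadprog(H, f, A, b, [], [], [], [], [], opts);
    d = z(1:n);
    v = z(end);
end

A call such as [d, v] = bundle_direction(G, alpha, u) returns a tentative search direction d and the predicted model value v, which is the kind of information Steps 4–7 of Algorithm 1 work with.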
The stopping criterion in Step 1 is used to assess the stationarity of the current stability center. If it is satisfied, approximate stationarity of the exact penalty function has been achieved; Algorithm 1 stops, and an approximate solution is obtained.
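For orientation, a typical test of this kind in proximal bundle methods declares approximate stationarity when both the aggregate subgradient and the aggregate linearization error are small; in generic notation (an illustration only, not the paper's own test), it reads
\[
\|\tilde{g}_k\| \le \varepsilon_{1}
\quad \text{and} \quad
\tilde{\alpha}_k \le \varepsilon_{2},
\]
where $\tilde{g}_k$ and $\tilde{\alpha}_k$ denote the aggregate subgradient and aggregate linearization error of the current model, and $\varepsilon_1, \varepsilon_2 > 0$ are user-chosen tolerances.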
The solution
The result
The right-hand side of the above inequality is more than
Large
Finally, note that the insertion of a bundle index into
4. Convergence Results
The work presented in this section follows a line of investigation initiated in [6, 7]: in [6], a nonconvex bundle algorithm is used to solve unconstrained minimization problems, and in [7], the idea of an exact penalty function is employed in a proximal bundle method for constrained convex minimization. Here, we extend and generalize the central idea of [6] to constrained nonconvex minimization problems; some techniques have to be adapted to the new situation because of the presence of constraints and nonconvexity.
Throughout the section, we make the following assumptions:
(A1)
(A2) The set
(A3) The feasible set
(A4) The Slater constraint qualification holds, i.e., there exists
The assumption that the feasible set of problem (1) is bounded is usual and reasonable; it was also made in [7, 32–34]. In [27], the boundedness of the feasible set was assumed in order to guarantee the existence of the supremum of the range of a set-valued mapping on the feasible set. In [32], the feasible sets were assumed to be bounded, closed, and convex in order to find a saddle point of the objective function.
Lemma 1.
Let
(i) There is an index
(ii) Step 6(c) is appropriate, feasible, and not difficult to realize.
(iii) Whenever a new bundle index is inserted into
Proof.
(i) Since
(ii) According to Assumption (A1),
Since the sufficient decrease condition of Algorithm 1 is not satisfied, we have
there exists a scalar
(iii) By construction of Algorithm 1, we have
the condition
The next lemma shows the finite termination of the inner iteration.
Lemma 2.
The inner iteration terminates after a finite number of steps.
Proof.
It suffices to show that, within a finite number of steps, either the stopping condition in Step 1 or the exit condition in Step 4 is satisfied. First, we prove that Algorithm 1 cannot pass through Step 4 infinitely many times. Assume that such a case occurs; then, since the algorithm enters Step 4 at each iteration, we have
According to Step 6, we insert an index into
Next, we show that it is impossible to have
Combining (28) and (29), we obtain
The next lemma shows that the penalty coefficients are increased finitely many times under the conditions of Slater constraint qualification and the boundedness of
Lemma 3.
There exists
Proof.
Denote the Lagrangian of
The Lagrange multipliers satisfy the usual saddle-point condition:
Since
Recalling that
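For completeness, the usual saddle-point condition referred to in this proof has, in generic notation (a sketch only; the paper's own symbols are not reproduced here), the form
\[
L(\bar{x}, \mu) \;\le\; L(\bar{x}, \bar{\mu}) \;\le\; L(x, \bar{\mu})
\qquad \text{for all } x \in \mathbb{R}^{n}, \; \mu \ge 0,
\]
for a primal-dual pair $(\bar{x}, \bar{\mu})$; the boundedness of such multipliers under the Slater condition (A4) is the standard mechanism by which the penalty coefficients can be kept bounded.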
Lemma 4.
There exist an index
Proof.
According to the result of Lemma 3, there exists one
Note that Lemmas 3 and 4 ensure that the number of iterations between Step 2 and Step 3 is finite and that the penalty coefficients remain unchanged after finitely many iterations.
Now, we are ready to prove the overall finiteness of Algorithm 1.
Theorem 5.
Suppose Assumptions (A1)–(A4) hold, then for any
Proof.
Assume, for contradiction, that the approximate stationarity condition (37) is not satisfied within any finite number of iterations. In other words, the termination condition
Since
Note that
5. Numerical Experiments
To assess the practical performance of the presented method, we coded Algorithm 1 in MATLAB and ran it on a PC with a 1.80 GHz CPU.
5.1. Examples for Nonconvex Optimization Problems
In this subsection, we first introduce the nonconvex test problems. We adopt a series of polynomial functions developed in [35]; see also [23, 36]. For each
There are four classes of test functions defined by
Example 1.
Consider problem (1):
For the objective function, we define the nonconvex function
For the constraint function, we consider the pointwise maximum of a finite collection of quadratic functions:
For problem (1) with (43) and (44), we can obtain that
The initial point:
Optimal solution:
The final objective function value:
The final constraint function value:
The CPU time: 0.25 seconds.
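To indicate how a constraint of the form (44) can be evaluated numerically, the following MATLAB sketch computes the pointwise maximum of a finite collection of quadratic pieces together with one subgradient; the data structures (cell array A of symmetric matrices, matrix B of linear terms, vector c of constants) are illustrative placeholders and not the data used in Examples 1 and 2.

% Illustrative sketch: evaluate c(x) = max_i ( 0.5*x'*A{i}*x + B(:,i)'*x + c(i) )
% and return the gradient of an active piece, which belongs to the Clarke
% subdifferential of the max function. The matrices A{i} are assumed symmetric.
function [val, g] = max_of_quadratics(x, A, B, c)
    m = numel(A);
    vals = zeros(m, 1);
    for i = 1:m
        vals(i) = 0.5 * x' * A{i} * x + B(:, i)' * x + c(i);
    end
    [val, imax] = max(vals);               % value of the max and an active index
    g = A{imax} * x + B(:, imax);          % gradient of the active quadratic
end

This pair of a function value and a subgradient is exactly the information a bundle method needs from the constraint oracle at each trial point.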
Example 2.
Consider problem (1):
For the objective function, we define the nonconvex function
For the constraint function, we consider the pointwise maximum of a finite collection of quadratic functions:
For problem (1) with (46) and (47), we can obtain that
The initial point:
Optimal solution:
The final objective function value:
The final constraint function value:
The CPU time: 15.47 seconds.
The above two examples show that the proposed Algorithm 1 performs reasonably well, since the optimal solutions
6. Conclusion
For the constrained nonconvex optimization problem, we propose an implementable algorithm that combines bundle ideas, proximity control, and exact penalty functions. The results extend the ideas of cutting planes and proximity control to the constrained nonconvex case. We present techniques for choosing the penalty coefficients that ensure the growth of the penalty is limited: the penalty parameters are increased only a finite number of times, which prevents the algorithm from following the curved boundary of the constraint set too closely. For weakly semismooth functions, convergence of the presented algorithm to an approximate stationary point of the exact penalty function is proved without any additional assumptions beyond the Slater constraint qualification and the boundedness of the constraint set.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (grant numbers 61877032 and 11601061) and the Basic Scientific Research Project of the Educational Department of Liaoning Province (grant number JYTMS20231042).
[1] E. S. Mistakidis, G. E. Stavroulakis, Nonconvex Optimization in Mechanics. Smooth and Nonsmooth Algorithms, Heuristics and Engineering Applications, 1998.
[2] J. Outrata, M. Kocvara, J. Zowe, Nonsmooth Approach to Optimization Problems with Equilibrium Constraints. Theory, Applications and Numerical Results, 1998.
[3] J. J. Moreau, P. D. Panagiotopoulos, G. Strang, Topics in Nonsmooth Mechanics, 1988.
[4] F. H. Clarke, Y. S. Ledyaev, R. J. Stern, P. R. Wolenski, Nonsmooth Analysis and Control Theory, 1998.
[5] K. Miettinen, Nonlinear Multiobjective Optimization, 1999.
[6] A. Fuduli, M. Gaudioso, G. Giallombardo, "Minimizing nonconvex nonsmooth functions via cutting planes and proximity control," SIAM Journal on Optimization, vol. 14 no. 3, pp. 743-756, DOI: 10.1137/s1052623402411459, 2004.
[7] K. C. Kiwiel, "Exact penalty functions in proximal bundle methods for constrained convex nondifferentiable minimization," Mathematical Programming, vol. 52 no. 1-3, pp. 285-302, DOI: 10.1007/bf01582892, 1991.
[8] L. P. Pang, Z. Q. Xia, "A PVT-type algorithm for minimizing a nonsmooth convex function," Serdica Mathematical Journal, vol. 29 no. 1, pp. 11-32, 2003.
[9] L. P. Pang, Z. Q. Xia, L. W. Zhang, "On a second order parallel variable transformation approach," Journal of Applied Mathematics and Computing, vol. 11 no. 1-2, pp. 201-213, DOI: 10.1007/bf02935732, 2003.
[10] M. Gaudioso, E. Gorgone, "Gradient set splitting in nonconvex nonsmooth numerical optimization," Optimization Methods and Software, vol. 25 no. 1, pp. 59-74, DOI: 10.1080/10556780903236911, 2010.
[11] C. Lemarechal, "Lagrangian relaxation," Computational combinatorial optimization: optimal or provably near-optimal solutions, pp. 112-156, 2001.
[12] D. Noll, "Bundle method for non-convex minimization with inexact subgradients and function values," Computational and Analytical Mathematics, Volume 50 of Springer Proceedings in Mathematics and Statistics, pp. 555-592, 2013.
[13] W. Hare, C. Sagastizabal, M. Solodov, "A proximal bundle method for nonsmooth nonconvex functions with inexact information," Computational Optimization and Applications, vol. 63 no. 1, DOI: 10.1007/s10589-015-9762-4, 2016.
[14] Y. Yang, L. Pang, X. Ma, J. Shen, "Constrained nonconvex nonsmooth optimization via proximal bundle method," Journal of Optimization Theory and Applications, vol. 163 no. 3, pp. 900-925, DOI: 10.1007/s10957-014-0523-9, 2014.
[15] E. W. Cheney, A. A. Goldstein, "Newton's method for convex programming and Tchebycheff approximation," Numerische Mathematik, vol. 1, pp. 253-268, DOI: 10.1007/bf01386389, 1959.
[16] J. E. Kelley Jr., "The cutting-plane method for solving convex programs," Journal of the Society for Industrial and Applied Mathematics, vol. 8 no. 4, pp. 703-712, DOI: 10.1137/0108053, 1960.
[17] C. A. Sagastizabal, M. V. Solodov, "An infeasible bundle method for nonsmooth convex constrained optimization without a penalty function or a filter," SIAM Journal on Optimization, vol. 16 no. 1, pp. 146-169, DOI: 10.1137/040603875, 2005.
[18] E. Karas, A. Ribeiro, C. A. Sagastizabal, M. V. Solodov, "A bundle-filter method for nonsmooth convex constrained optimization," Mathematical Programming, vol. 116 no. 1-2, pp. 297-320, DOI: 10.1007/s10107-007-0123-7, 2009.
[19] K. C. Kiwiel, "An exact penalty function algorithm for nonsmooth convex constrained minimization problems," IMA Journal of Numerical Analysis, vol. 5 no. 1, pp. 111-119, DOI: 10.1093/imanum/5.1.111, 1985.
[20] H. Schramm, J. Zowe, "A version of the bundle idea for minimizing a nonsmooth function: conceptual idea, convergence analysis, numerical results," SIAM Journal on Optimization, vol. 2 no. 1, pp. 121-152, DOI: 10.1137/0802008, 1992.
[21] K. C. Kiwiel, "Restricted step and Levenberg-Marquardt techniques in proximal bundle methods for nonconvex nondifferentiable optimization," SIAM Journal on Optimization, vol. 6 no. 1, pp. 227-249, DOI: 10.1137/0806013, 1996.
[22] D. Noll, "Cutting plane oracles to minimize non-smooth non-convex functions," Set-Valued and Variational Analysis, vol. 18 no. 3-4, pp. 531-568, DOI: 10.1007/s11228-010-0159-3, 2010.
[23] W. Hare, C. Sagastizabal, "A redistributed proximal bundle method for nonconvex optimization," SIAM Journal on Optimization, vol. 20 no. 5, pp. 2442-2473, DOI: 10.1137/090754595, 2010.
[24] A. Fuduli, M. Gaudioso, E. A. Nurminski, "A splitting bundle approach for non-smooth non-convex minimization," Optimization, vol. 64 no. 5, pp. 1131-1151, DOI: 10.1080/02331934.2013.840625, 2015.
[25] A. Fuduli, M. Gaudioso, G. Giallombardo, "A DC piecewise affine model and a bundling technique in nonconvex nonsmooth minimization," Optimization Methods and Software, vol. 19 no. 1, pp. 89-102, DOI: 10.1080/10556780410001648112, 2004.
[26] J. V. Burke, A. S. Lewis, M. L. Overton, "A robust gradient sampling algorithm for nonsmooth, nonconvex optimization," SIAM Journal on Optimization, vol. 15 no. 3, pp. 751-779, DOI: 10.1137/030601296, 2005.
[27] K. C. Kiwiel, "Convergence of the gradient sampling algorithm for nonsmooth nonconvex optimization," SIAM Journal on Optimization, vol. 18 no. 2, pp. 379-388, DOI: 10.1137/050639673, 2007.
[28] W. Hare, C. Sagastizabal, "Computing proximal points of nonconvex functions," Mathematical Programming, vol. 116 no. 1-2, pp. 221-258, DOI: 10.1007/s10107-007-0124-6, 2009.
[29] K. C. Kiwiel, "A dual method for certain positive semidefinite quadratic programming problems," SIAM Journal on Scientific and Statistical Computing, vol. 10 no. 1, pp. 175-186, DOI: 10.1137/0910013, 1989.
[30] F. H. Clarke, Optimization and Nonsmooth Analysis, 1983.
[31] A. Fuduli, M. Gaudioso, "The proximal trajectory algorithm for convex minimization," 1998. Technical Report 7/98
[32] C. Lemarechal, A. Nemirovskii, Y. Nesterov, "New variants of bundle methods," Mathematical Programming, vol. 69 no. 1-3, pp. 111-147, DOI: 10.1007/bf01585555, 1995.
[33] R. Mifflin, "An algorithm for constrained optimization with semismooth functions," Mathematics of Operations Research, vol. 2, pp. 191-207, DOI: 10.1287/moor.2.2.191, 1977.
[34] R. Fletcher, S. Leyffer, "A bundle filter method for nonsmooth nonlinear optimization," 1999. Numerical Analysis Report NA/195
[35] C. Ferrier, "Bornes Duales de Problemes d’Optimisation Polynomiaux," 1997. Ph.D. thesis
[36] C. Ferrier, "Computation of the distance to semi-algebraic sets," ESAIM: Control, Optimisation and Calculus of Variations, vol. 5, pp. 139-156, DOI: 10.1051/cocv:2000104, 2000.
Copyright © 2024 Jie Shen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. https://creativecommons.org/licenses/by/4.0/
Abstract
An implementable algorithm for solving nonsmooth nonconvex constrained optimization problems is proposed by combining bundle ideas, proximity control, and the exact penalty function. We construct two kinds of approximations to the nonconvex objective function; these approximations correspond to the convex and concave behaviors of the objective function at the current point, which precisely captures the characteristics of the objective function. The penalty coefficients are increased only a finite number of times under the Slater constraint qualification and the boundedness of the constraint set, which limits unnecessary penalty growth. The given algorithm converges to an approximate stationary point of the exact penalty function for constrained nonconvex optimization with a weakly semismooth objective function. We also provide results of preliminary numerical testing to show the validity and efficiency of the proposed method.