Academic Editor: Wei Bian
Department of Applied Mathematics, Yanshan University, Qinhuangdao 066001, China
Received 5 December 2013; Accepted 18 January 2014; Published 27 February 2014
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
It is well known that nonlinear optimization problems arise in a broad variety of scientific and engineering applications, including optimal control, structural design, image and signal processing, and robot control. Many nonlinear programming problems have a time-varying nature and therefore have to be solved in real time. One promising approach to solving nonlinear programming problems in real time is to employ recurrent neural networks based on circuit implementation.
In the past two decades, neural networks for optimization have been studied extensively and many good results have been obtained in the literature; see [1-19] and the references therein. In particular, Liang and Wang developed a recurrent neural network for solving nonlinear optimization problems with a continuously differentiable objective function and bound constraints in [4]. Xia et al. proposed a projection neural network for solving nondifferentiable nonlinear programming problems in [20]. In [9, 19], Xue and Bian developed subgradient-based neural networks for solving nonsmooth convex or nonconvex optimization problems.
It should be noticed that many nonlinear programming problems can be formulated as nonconvex optimization problems and that, among nonconvex programs, pseudoconvex programs form a special class that is more prevalent than other nonconvex programs. Pseudoconvex optimization problems have many applications in practice, such as fractional programming, computer vision, and production planning. Very recently, Liu et al. presented a one-layer recurrent neural network for solving pseudoconvex optimization subject to linear equality constraints in [1]; Hu and Wang proposed a recurrent neural network for solving pseudoconvex variational inequalities in [10]; and Qin et al. proposed a new one-layer recurrent neural network for nonsmooth pseudoconvex optimization in [21].
Motivated by the works above, our objective in this paper is to develop a one-layer recurrent neural network for solving pseudoconvex optimization problems subject to a box constraint set. The proposed network model is an improvement of the neural network model presented in [10]. To the best of our knowledge, there are few works treating the pseudoconvex optimization problem with a box set constraint.
For convenience, some notations are introduced as follows. R denotes the set of real numbers, Rn denotes the n-dimensional Euclidean space, and Rm×n denotes the set of all m×n real matrices. For any matrix A, A > 0 (A < 0) means that A is positive definite (negative definite). A-1 denotes the inverse of A, and AT denotes the transpose of A. λmax(A) and λmin(A) denote the maximum and minimum eigenvalues of A, respectively. Given vectors x = (x1,...,xn)T, y = (y1,...,yn)T ∈ Rn, ||x|| = (∑i=1n xi2)1/2 and xT y = ∑i=1n xi yi. ||A|| denotes the 2-norm of A; that is, ||A|| = (λ(AT A))1/2, where λ(AT A) denotes the spectral radius of AT A. ẋ(t) denotes the derivative of x(t).
Given a set C ⊂ Rn, K[C] denotes the closure of the convex hull of C.
Let V : Rn → R be a locally Lipschitz continuous function. Clarke's generalized gradient of V at x is defined by
∂V(x) = K{limk→∞ ∇V(xk) : xk → x, xk ∉ ΩV, xk ∉ N}, (1)
where ΩV ⊂ Rn is the set of Lebesgue measure zero on which ∇V does not exist and N ⊂ Rn is an arbitrary set with measure zero. For example, for V(x) = |x| on R, ∂V(0) = [-1, 1]. A set-valued map G(·) is said to have a closed (convex, compact) image if, for each x ∈ E, G(x) is closed (convex, compact).
The remainder of this paper is organized as follows. In Section 2, the related preliminaries are given, and the problem formulation and the neural network model are described. In Section 3, the stability in the sense of Lyapunov and the finite-time convergence of the proposed neural network are proved. In Section 4, illustrative examples are given to show the effectiveness and the performance of the proposed neural network. Some conclusions are drawn in Section 5.
2. Model Description and Preliminaries
In this section, a one-layer recurrent neural network model is developed to solve pseudoconvex optimization with box constraints. Some definitions and properties concerning the set-valued map and nonsmooth analysis are also introduced.
Definition 1 (set-valued map).
Suppose that to each point x of a set E ⊆ Rn there corresponds a nonempty set F(x) ⊂ Rn. Then x → F(x) is said to be a set-valued map from E to Rn.
Definition 2 (locally Lipschitz function).
A function φ : Rn → R is called Lipschitz near x0 if and only if there exist ε, δ > 0 such that, for any x1, x2 ∈ B(x0, δ), |φ(x1) - φ(x2)| ≤ ε||x1 - x2||, where B(x0, δ) = {x : ||x - x0|| < δ}. The function φ : Rn → R is said to be locally Lipschitz in Rn if it is Lipschitz near any point x ∈ Rn.
Definition 3 (regularity).
A function φ : Rn → R, which is locally Lipschitz near x ∈ Rn, is said to be regular at x if, for every direction v ∈ Rn, the one-sided directional derivative φ′(x; v) = limξ→0+ (φ(x + ξv) - φ(x))/ξ exists and φ0(x; v) = φ′(x; v), where φ0(x; v) denotes Clarke's generalized directional derivative. The function φ is said to be regular in Rn if it is regular at any x ∈ Rn.
Definition 4.
A regular function f : Rn → R is said to be pseudoconvex on a set Ω if, for all x, y ∈ Ω with x ≠ y and all γ(x) ∈ ∂f(x),
γ(x)T (y - x) ≥ 0 implies f(y) ≥ f(x). (2)
Definition 5.
A function F : Rn → Rn is said to be pseudomonotone on a set Ω if, for all x, x′ ∈ Ω with x ≠ x′,
⟨F(x), x′ - x⟩ ≥ 0 implies ⟨F(x′), x′ - x⟩ ≥ 0. (3)
Consider the following optimization problem with a box set constraint:
minimize f(x), subject to d ≤ Bx ≤ h, (4)
where x, d, h ∈ Rn and B ∈ Rn×n is nonsingular.
Substituting z = Bx, the problem (4) can be transformed into the following problem:
minimize f(B-1 z), subject to d ≤ z ≤ h. (5)
Let
D(z) = ∑i=1n d(zi), (6)
where d(zi) is defined as
d(zi) = zi - hi if zi > hi; d(zi) = 0 if di ≤ zi ≤ hi; d(zi) = di - zi if zi < di. (7)
Obviously, d(zi) ≥ 0 and D(z) ≥ 0.
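For intuition, the penalty term D(z) in (6)-(7) simply measures the total violation of the box constraints and can be evaluated componentwise. The following short Python sketch is our own illustration, not part of the paper; the function name and the test data are made up.

```python
import numpy as np

def box_penalty(z, d, h):
    """D(z) = sum_i d(z_i): total violation of the box constraints d <= z <= h."""
    z, d, h = (np.asarray(v, dtype=float) for v in (z, d, h))
    below = np.maximum(d - z, 0.0)   # d_i - z_i when z_i < d_i, else 0
    above = np.maximum(z - h, 0.0)   # z_i - h_i when z_i > h_i, else 0
    return float(np.sum(below + above))

# D(z) = 0 inside the box and grows linearly with the violation outside it.
print(box_penalty([0.5, 2.0], d=[0.0, 0.0], h=[1.0, 1.0]))  # 1.0 (only the second coordinate violates, by 1)
```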
Throughout this paper, the following assumptions on the optimization problem (4) are made.
(A1): The objective function f(x) of the problem (4) is pseudoconvex, regular, and locally Lipschitz continuous.
(A2): ∂f(x) is bounded; that is,
sup{||γ(x)|| : γ(x) ∈ ∂f(x), x ∈ Rn} ≤ lf, (8)
where lf > 0 is a constant.
In the following, we develop a one-layer recurrent neural network for solving the problem (4). The dynamic equation of the proposed neural network model is described by the differential inclusion system
ż(t) ∈ -∂f(B-1 z) - μ K[g[d,h]](z), (9)
where μ is a nonnegative constant, ∂f(B-1 z) = B-1 ∂f(x), and g[d,h](z) = (g[d1,h1](z1), ..., g[dn,hn](zn))T is a discontinuous function with its components defined as
g[di,hi](zi) = 1 if zi > hi; g[di,hi](zi) = 0 if di ≤ zi ≤ hi; g[di,hi](zi) = -1 if zi < di. (10)
The architecture of the proposed neural network model (9) is depicted in Figure 1.
Figure 1: Architecture of the neural network model (9).
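To make the dynamics (9)-(10) concrete, the differential inclusion can be simulated, for instance, with a simple forward-Euler scheme. The sketch below is our own illustration under simplifying assumptions: the objective is taken to be differentiable, so ∂f(x) reduces to {∇f(x)}, and the problem data (grad_f, B, d, h, μ, step size) are hypothetical placeholders rather than the paper's examples.

```python
import numpy as np

def g_box(z, d, h):
    """Discontinuous activation g_[d,h] of (10): +1 above the box, -1 below, 0 inside."""
    return np.where(z > h, 1.0, np.where(z < d, -1.0, 0.0))

def simulate(grad_f, B, d, h, z0, mu, dt=1e-3, steps=20000):
    """Forward-Euler sketch of dz/dt = -B^{-1} grad f(x) - mu * g_[d,h](z), with x = B^{-1} z."""
    B_inv = np.linalg.inv(B)
    z = np.asarray(z0, dtype=float)
    for _ in range(steps):
        x = B_inv @ z
        z = z + dt * (-B_inv @ grad_f(x) - mu * g_box(z, d, h))
    return B_inv @ z                      # recover the original variable x = B^{-1} z

# Hypothetical smooth objective: f(x) = ||x||^2 is convex, hence pseudoconvex.
grad_f = lambda x: 2.0 * x
B, d, h = np.eye(3), -np.ones(3), np.ones(3)
print(simulate(grad_f, B, d, h, z0=np.array([2.0, -3.0, 0.5]), mu=5.0))  # converges to approximately 0
```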
Definition 6.
z̄ ∈ Rn is said to be an equilibrium point of the differential inclusion system (9) if
0 ∈ ∂f(B-1 z̄) + μ K[g[d,h]](z̄); (11)
that is, there exist
γ̄ ∈ ∂f(x̄), ξ̄ ∈ K[g[d,h]](z̄) (12)
such that
B-1 γ̄ + μ ξ̄ = 0, (13)
where x̄ = B-1 z̄.
Definition 7.
A function z(·) : [0,T] → Rn is said to be a solution of the system (9) with initial condition z(0) = z0 if z(·) is absolutely continuous on [0,T] and, for almost all t ∈ [0,T],
ż(t) ∈ -∂f(B-1 z(t)) - μ K[g[d,h]](z(t)). (14)
Equivalently, there exist measurable functions γ(x(t)) ∈ ∂f(x(t)) and ξ(z(t)) ∈ K[g[d,h]](z(t)) such that
ż(t) = -B-1 γ(x(t)) - μ ξ(z(t)) for a.e. t ∈ [0,T]. (15)
Definition 8.
Suppose that B ⊂ Rn is a nonempty closed convex set. The normal cone to the set B at z ∈ B is defined as NB(z) = {v ∈ Rn : vT (z - y) ≥ 0, for all y ∈ B}.
Lemma 9 (see [22]).
If φi : Rn → R, i = 1, 2, ..., m, is regular at x, then ∂(∑i=1m φi(x)) = ∑i=1m ∂φi(x).
Lemma 10 (see [22]).
If V : Rn → R is a regular function at x and x(·) : R → Rn is differentiable at t and Lipschitz near t, then dV(x(t))/dt = ⟨ξ, ẋ(t)⟩ for all ξ ∈ ∂V(x(t)).
Lemma 11 (see [22]).
If B1, B2 ⊂ Rn are closed convex sets satisfying 0 ∈ int(B1 - B2), then for any z ∈ B1 ∩ B2, NB1∩B2(z) = NB1(z) + NB2(z).
Lemma 12 (see [22]).
If f is locally Lipschitz near z and attains a minimum over Ω at z , then 0∈∂f(z)+NΩ (z) .
Set Ω = {z ∈ Rn : d ≤ z ≤ h}. Let s ∈ int(Ω); then there exists a constant r > 0 such that Ω ⊆ B(s,r), where int(·) denotes the interior of a set and B(s,r) = {z ∈ Rn : ||z - s|| ≤ r}. It is easy to verify the following lemma.
Lemma 13.
For any z ∈ Rn \ Ω and ξ ∈ K[g[d,h]](z), (z - s)T ξ > ω, where ω = min1≤i≤n {hi - si, si - di} and si is the i-th element of s ∈ int(Ω).
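As a small numerical illustration (ours, not from the paper), the constant ω in Lemma 13 simply measures how far the interior point s sits from the nearest face of the box:

```python
import numpy as np

def omega(s, d, h):
    """omega = min_i min(h_i - s_i, s_i - d_i) for an interior point s of the box [d, h]."""
    s, d, h = (np.asarray(v, dtype=float) for v in (s, d, h))
    return float(np.min(np.minimum(h - s, s - d)))

print(omega(s=[0.0, 0.5], d=[-1.0, -1.0], h=[1.0, 1.0]))  # 0.5: s is 0.5 away from the face z_2 = 1
```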
3. Main Results
In this section, the main results concerned with the convergence and optimality conditions of the proposed neural network are addressed.
Theorem 14.
Suppose that the assumptions (A1 ) and (A2 ) hold. Let z0 ∈B(s,r) . If μ>(rlf /ω)||B-1 || , then the solution z(t) of the network system (9) with initial condition z(0)=z0 satisfies z(t)∈B(s,r) .
Proof.
Set
ρ(t) = (1/2)||z(t) - s||2. (16)
By Lemma 10 and (15), evaluating the derivative of ρ(t) along the trajectory of the system (9) gives [figure omitted; refer to PDF] If z(t) ∈ Ω, it follows directly that z(t) ∈ B(s,r). If z(t) ∈ B(s,r) \ Ω, according to Lemma 13, one gets that (z(t) - s)T ξ(z(t)) > ω. Thus, we have [figure omitted; refer to PDF] If μ > (r lf/ω)||B-1||, then dρ(t)/dt < 0; this means that z(t) ∈ B(s,r). Otherwise, the state z(t) would leave B(s,r) at some time t1, and at t = t1 we would have ||z(t1) - s|| = r. This implies that (dρ(t)/dt)|t=t1 ≥ 0, which is a contradiction.
As a result, if μ>(rlf /ω)||B-1 || , for any z0 ∈B(s,r) , the state z(t) of the network system (9) with initial condition z(0)=z0 satisfies z(t)∈B(s,r) . This completes the proof.
Theorem 15.
Suppose that assumptions (A1) and (A2) hold. If μ > lf||B-1||, then the solution of the neural network system (9) with initial condition z(0) = z0 ∈ B(s,r) converges to the feasible region Ω in finite time T = t* = D(z(0))/(n(μ - lf||B-1||)) and stays in Ω thereafter.
Proof.
According to the definition of D(z), D(z) is a convex function on Rn. By Lemma 10, it follows that [figure omitted; refer to PDF] Noting that ∂D(z) = K[g[d,h]](z), we obtain by (19) that, for all z ∈ B(s,r) \ Ω, there exist γ(x) ∈ ∂f(x) and ξ(t) ∈ K[g[d,h]](z) such that [figure omitted; refer to PDF] Since z ∈ Rn \ Ω and ξ(t) ∈ K[g[d,h]](z), at least one component of ξ(t) is -1 or 1, so ||ξ(t)|| ≥ 1. Noting that ||ξ(t)|| ≤ n, we have [figure omitted; refer to PDF] Let α = n(μ - lf||B-1||). If μ > lf||B-1||, then α > 0 and [figure omitted; refer to PDF] Integrating (22) from 0 to t, we obtain [figure omitted; refer to PDF] Let t* = (1/α)D(z(0)). By (23), D(z(t*)) = 0; that is, di ≤ zi(t*) ≤ hi for each i. This shows that the state trajectory of neural network (9) with initial condition z(0) = z0 reaches Ω in finite time T = t* = D(z(0))/(n(μ - lf||B-1||)).
Next, we prove that the trajectory stays in Ω after reaching it; that is, z(t) ∈ Ω for all t ≥ t*. If this is not true, then there exists t1 > t* such that the trajectory leaves Ω at t1, and there exists t2 > t1 such that z(t) ∈ Rn \ Ω for t ∈ (t1, t2).
By integrating (22) from t1 to t2, it follows that [figure omitted; refer to PDF] Since D(z(t1)) = 0, this gives D(z(t2)) < 0. However, by the definition of D(z(t)), D(z(t)) ≥ 0 for any t ∈ [0,∞), which contradicts the result above. The proof is completed.
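As a quick numerical reading of Theorem 15 (our own illustration with made-up numbers; only the formula for T comes from the theorem), the reaching time depends only on the initial penalty D(z(0)), the dimension n, the gain μ, and the bound lf||B-1||:

```python
# Hypothetical values; the formula T = D(z(0)) / (n * (mu - l_f * ||B^{-1}||)) is the bound from Theorem 15.
D_z0, n, mu, l_f, norm_B_inv = 4.0, 3, 11.0, 3.0, 1.5
T = D_z0 / (n * (mu - l_f * norm_B_inv))
print(T)  # about 0.205: the state reaches the feasible region no later than this time
```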
Theorem 16.
Suppose that assumptions (A1) and (A2) hold. If μ > max{lf||B-1||, (r lf/ω)||B-1||}, then the equilibrium point of the neural network system (9) is an optimal solution of the problem (4), and vice versa.
Proof.
Denote by z* an equilibrium point of the neural network system (9); then there exist γ* ∈ ∂f(x*) and ξ* ∈ K[g[d,h]](z*) such that
B-1 γ* + μ ξ* = 0, (25)
where x* = B-1 z*. By Theorem 15, z* ∈ Ω; hence, ξ* = 0. By (25), γ* = 0. We can obtain the following projection formulation: [figure omitted; refer to PDF] where φ[d,h](y) = (φ[d1,h1](y1), ..., φ[dn,hn](yn))T with φ[di,hi](yi), i = 1, ..., n, defined as
φ[di,hi](yi) = hi if yi > hi; φ[di,hi](yi) = yi if di ≤ yi ≤ hi; φ[di,hi](yi) = di if yi < di. (27)
By the well-known projection theorem [17], (26) is equivalent to the following variational inequality: [figure omitted; refer to PDF] Since f(x) is pseudoconvex, f(B-1 z) is pseudoconvex on Ω. By (28), we obtain that f(B-1 z) ≥ f(B-1 z*) for all z ∈ Ω. This shows that z* is a minimum point of f(B-1 z) over Ω.
Next, we prove the converse. Denote by z* an optimal solution of the problem; then z* ∈ [d,h]. Since z* is a minimum point of f(B-1 z) over the feasible region Ω, according to Lemma 12, it follows that [figure omitted; refer to PDF] From (29), it follows that there exists η* ∈ NΩ(z*) with B-1 γ* = -η* for some γ* ∈ ∂f(x*), and ||η*|| ≤ lf||B-1||. Noting that NΩ(z*) = {vξ* : v ≥ 0, ξ* ∈ K[g[d,h]](z*), and at least one ξi* is 1 or -1}, there exist β ≥ 0 and ξ* ∈ K[g[d,h]](z*) such that βξ* ∈ NΩ(z*) and η* = βξ*.
In the following, we prove that η* ∈ μK[g[d,h]](z*). We claim that β ≤ μ. If not, then β > μ. Since (z* - s)T ξ* = ∑i=1n (zi* - si)ξi* ≥ ω, we have [figure omitted; refer to PDF] Thus ||η*|| > μω/||z* - s|| ≥ μω/r. By the condition of Theorem 16, μ > max{lf||B-1||, (r lf/ω)||B-1||}. Hence, ||η*|| > lf||B-1||, which contradicts ||η*|| ≤ lf||B-1||. This implies that η* = βξ* ∈ μK[g[d,h]](z*); that is, 0 ∈ ∂f(B-1 z*) + μK[g[d,h]](z*), which means that z* is an equilibrium point of the neural network system (9). This completes the proof.
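Numerically, the componentwise projection φ[d,h] appearing in (26)-(27) is just a clip operation onto the box; the following minimal sketch (ours, not part of the paper) shows the correspondence:

```python
import numpy as np

def proj_box(y, d, h):
    """phi_[d,h](y): componentwise projection of y onto the box [d_i, h_i], as in (27)."""
    return np.clip(y, d, h)

print(proj_box(np.array([-2.0, 0.3, 5.0]), d=np.zeros(3), h=np.ones(3)))  # [0.  0.3 1. ]
```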
Theorem 17.
Suppose that assumptions (A1) and (A2) hold. If μ > max{lf||B-1||, (r lf/ω)||B-1||}, then the equilibrium point of the neural network system (9) is stable in the sense of Lyapunov.
Proof.
Denote by z* an equilibrium point of the neural network system (9); that is, [figure omitted; refer to PDF] By Theorem 16, z* is an optimal solution of the problem (4); that is, f(B-1 z) ≥ f(B-1 z*) for all z ∈ Ω. By Theorem 15, the trajectory z(t) with initial condition z(0) = z0 ∈ B(s,r) converges to the feasible region Ω in finite time T = t* = D(z(0))/(n(μ - lf||B-1||)) and remains in Ω thereafter; that is, z(t) ∈ Ω for all t ≥ t*. Let
V1(z) = f(B-1 z) + μD(z). (32)
Since z* is a minimum point of f(B-1 z) on Ω, we get that V1(z) ≥ V1(z*) for all z ∈ Ω.
Consider the following Lyapunov function:
V(z) = V1(z) - V1(z*) + (1/2)||z - z*||2. (33)
Obviously, from (33), V(z) ≥ (1/2)||z - z*||2 and
∂V(z) = ∂V1(z) + (z - z*). (34)
Evaluating the derivative of V along the trajectory of the system (9) gives [figure omitted; refer to PDF] Since ż(t) ∈ -∂V1(z(t)), by (34) we can set ξ(t) = -ż(t) + z - z*. Hence, [figure omitted; refer to PDF] For any θ ∈ ∂V1(z), there exist γ ∈ ∂f(x) and ξ ∈ K[g[d,h]](z) such that θ = B-1 γ + μξ. Since f(x) is pseudoconvex on Ω, ∂f(x) is pseudomonotone on Ω. From the proof of Theorem 16, for any z ∈ Ω, (z - z*)T B-1 γ ≥ 0. By the definition of g[d,h](z), (z - z*)T ξ ≥ 0. Hence, (z - z*)T θ ≥ 0. This implies that
dV(z(t))/dt ≤ 0 for a.e. t ≥ t*. (37)
Inequality (37) shows that the neural network system (9) is stable in the sense of Lyapunov. The proof is complete.
4. Numerical Examples
In this section, two examples are given to illustrate the effectiveness of the proposed approach for solving pseudoconvex optimization problems.
Example 1.
Consider the quadratic fractional optimization problem
minimize f(x) = (xT Qx + aT x + a0)/(cT x + c0), subject to d ≤ Bx ≤ h, (38)
where Q is an n×n matrix, a, c ∈ Rn, and a0, c0 ∈ R. Here, we choose n = 3, with the data Q, a, a0, c, c0, B, d, h given by
[figure omitted; refer to PDF] (39)
It is easily verified that Q is symmetric and positive definite, and consequently f is pseudoconvex on Ω = {x : d ≤ Bx ≤ h}. The proposed neural network (9) is therefore capable of solving this problem. The neural network (9) associated with (38) can be described as [figure omitted; refer to PDF] where [figure omitted; refer to PDF]
Let x̂ = (-1,-1,-1)T and s = Bx̂ ∈ int(Ω); then we have ω = 3. Moreover, the restricted region [d,h] ⊂ B(s,r) with r = 23. An upper bound on ∂f(x) is estimated as lf = 3. The design parameter μ is then estimated as μ > 10.32; we let μ = 11 in the simulation.
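The design rule μ > max{lf||B-1||, (r lf/ω)||B-1||} from Theorem 16 can be checked with a one-line computation. In the sketch below (ours, illustrative only), r, lf, and ω follow the text of Example 1, while the value of ||B-1|| is an assumed placeholder, since the matrix B appears only in the omitted display (39).

```python
# r, l_f, omega follow the text of Example 1; norm_B_inv is an assumed placeholder value.
r, l_f, omega = 23.0, 3.0, 3.0
norm_B_inv = 0.45                      # assumption: ||B^{-1}|| is not listed in the extracted text
mu_lower = max(l_f * norm_B_inv, (r * l_f / omega) * norm_B_inv)
print(mu_lower)                        # 10.35 here; any mu strictly above this bound is admissible
```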
We simulated the dynamical behavior of the neural network with μ = 11 using mathematical software. Figures 2, 3, and 4 display the state trajectories of this neural network for different initial values, showing that the state variables converge to the feasible region in finite time. This is in accordance with the conclusion of Theorem 15. Meanwhile, it can be seen that the trajectory is stable in the sense of Lyapunov.
Figure 2: Time-domain behavior of the state variables x1 , x2 , and x3 with initial point x0 =(0,0,0.3) .
Figure 3: Time-domain behavior of the state variables x1 , x2 , and x3 with initial point x0 =(-0.2,0,0.4) .
Figure 4: Time-domain behavior of the state variables x1 , x2 , and x3 with initial point x0 =(0.1,0,0.3) .
Example 2.
Consider the following pseudoconvex optimization problem: [figure omitted; refer to PDF] where [figure omitted; refer to PDF]
In this problem, the objective function f(x) is pseudoconvex; thus the proposed neural network is suitable for solving it. The neural network (9) associated with (42) can be described as [figure omitted; refer to PDF] where [figure omitted; refer to PDF]
Let x̂ = (0,1,0)T and s = Bx̂ ∈ int(Ω); then we have ω = 2. Moreover, the restricted region [d,h] ⊂ B(s,r) with r = 13. An upper bound on ∂f(x) is estimated as lf = 46. The design parameter μ is then estimated as μ > 29.4; we let μ = 30 in the simulation.
Figures 5 and 6 display the state trajectories of this neural network for different initial values. It can be seen that these trajectories converge to the feasible region in finite time as well, in accordance with the conclusion of Theorem 15. It can also be verified that the trajectory is stable in the sense of Lyapunov.
Figure 5: Time-domain behavior of the state variables x1 , x2 , and x3 with initial point x0 =(5.4,6.5,-1.5) .
Figure 6: Time-domain behavior of the state variables x1 , x2 , and x3 with initial point x0 =(-16.3,19.6,-2.3) .
5. Conclusion
In this paper, a one-layer recurrent neural network has been presented for solving pseudoconvex optimization problems with box constraints. The neural network model has been described by a differential inclusion system. The constructed recurrent neural network has been proved to be stable in the sense of Lyapunov, and conditions ensuring finite-time convergence of the state to the feasible region have been obtained. The proposed neural network can be applied to a wide variety of optimization problems arising in engineering applications.
Acknowledgments
This work was supported by the Natural Science Foundation of Hebei Province of China (A2011203103) and the Hebei Province Education Foundation of China (2009157).
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
[1] Q. Liu, Z. Guo, J. Wang, "A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization," Neural Networks , vol. 26, pp. 99-109, 2012.
[2] W. Lu, T. Chen, "Dynamical behaviors of delayed neural network systems with discontinuous activation functions," Neural Computation , vol. 18, no. 3, pp. 683-708, 2006.
[3] Y. Xia, J. Wang, "A recurrent neural network for solving nonlinear convex programs subject to linear constraints," IEEE Transactions on Neural Networks , vol. 16, no. 2, pp. 379-386, 2005.
[4] X.-B. Liang, J. Wang, "A recurrent neural network for nonlinear optimization with a continuously differentiable objective function and bound constraints," IEEE Transactions on Neural Networks , vol. 11, no. 6, pp. 1251-1262, 2000.
[5] X. Xue, W. Bian, "A project neural network for solving degenerate convex quadratic program," Neurocomputing , vol. 70, no. 13-15, pp. 2449-2459, 2007.
[6] Y. Xia, H. Leung, J. Wang, "A projection neural network and its application to constrained optimization problems," IEEE Transactions on Circuits and Systems I , vol. 49, no. 4, pp. 447-458, 2002.
[7] S. Qin, X. Xue, "Global exponential stability and global convergence in finite time of neural networks with discontinuous activations," Neural Processing Letters , vol. 29, no. 3, pp. 189-204, 2009.
[8] S. Effati, A. Ghomashi, A. R. Nazemi, "Application of projection neural network in solving convex programming problems," Applied Mathematics and Computation , vol. 188, no. 2, pp. 1103-1114, 2007.
[9] X. Xue, W. Bian, "Subgradient-based neural networks for nonsmooth convex optimization problems," IEEE Transactions on Circuits and Systems I , vol. 55, no. 8, pp. 2378-2391, 2008.
[10] X. Hu, J. Wang, "A recurrent neural network for solving a class of general variational inequalities," IEEE Transactions on Systems, Man, and Cybernetics B , vol. 37, no. 3, pp. 528-539, 2007.
[11] X. Hu, J. Wang, "Solving pseudomonotone variational inequalities and pseudoconvex optimization problems using the projection neural network," IEEE Transactions on Neural Networks , vol. 17, no. 6, pp. 1487-1499, 2006.
[12] J. Wang, "Analysis and design of a recurrent neural network for linear programming," IEEE Transactions on Circuits and Systems I , vol. 40, no. 9, pp. 613-618, 1993.
[13] J. Wang, "A deterministic annealing neural network for convex programming," Neural Networks , vol. 7, no. 4, pp. 629-641, 1994.
[14] J. Wang, "Primal and dual assignment networks," IEEE Transactions on Neural Networks , vol. 8, no. 3, pp. 784-790, 1997.
[15] J. Wang, "Primal and dual neural networks for shortest-path routing," IEEE Transactions on Systems, Man, and Cybernetics A , vol. 28, no. 6, pp. 864-869, 1998.
[16] W. Bian, X. Xue, "Neural network for solving constrained convex optimization problems with global attractivity," IEEE Transactions on Circuits and Systems I , vol. 60, no. 3, pp. 710-723, 2013.
[17] W. Bian, X. Xue, "A dynamical approach to constrained nonsmooth convex minimization problem coupling with penalty function method in Hilbert space," Numerical Functional Analysis and Optimization , vol. 31, no. 11, pp. 1221-1253, 2010.
[18] Y. Xia, J. Wang, "A one-layer recurrent neural network for support vector machine learning," IEEE Transactions on Systems, Man, and Cybernetics B , vol. 34, no. 2, pp. 1261-1269, 2004.
[19] X. Xue, W. Bian, "Subgradient-based neural networks for nonsmooth convex optimization problems," IEEE Transactions on Circuits and Systems I , vol. 55, no. 8, pp. 2378-2391, 2008.
[20] Y. Xia, H. Leung, J. Wang, "A projection neural network and its application to constrained optimization problems," IEEE Transactions on Circuits and Systems I , vol. 49, no. 4, pp. 447-458, 2002.
[21] S. Qin, W. Bian, X. Xue, "A new one-layer recurrent neural network for non-smooth pseudoconvex optimization," Neurocomputing , vol. 120, pp. 655-662, 2013.
[22] F. H. Clarke Optimization and Nonsmooth Analysis , Wiley, New York, NY, USA, 1983.
Copyright © 2014 Huaiqin Wu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
A one-layer recurrent neural network is developed to solve pseudoconvex optimization with box constraints. Compared with the existing neural networks for solving pseudoconvex optimization, the proposed neural network has a wider domain for implementation. Based on Lyapunov stability theory, the proposed neural network is proved to be stable in the sense of Lyapunov. By applying Clarke's nonsmooth analysis technique, the finite-time convergence of the state to the feasible region defined by the constraint conditions is also addressed. Illustrative examples further show the correctness of the theoretical results.