1. Introduction
Multiobjective optimization is the problem of optimizing two or more real-valued objective functions at the same time. In general there is no single point that minimizes all objective functions at once, so the usual notion of optimality is replaced by Pareto optimality (efficiency). A point is called Pareto optimal, or efficient, if there is no alternative point whose objective function values are all less than or equal to those of the given point, with a strict decrease in at least one objective. Multiobjective optimization techniques support decision making in many applications, such as engineering [1,2], economic theory [3], management science [4], machine learning [5,6], and space exploration [7]. One of the basic approaches is the weighting method [8], where a single-objective optimization problem is created by weighting the several objective functions. Another approach is the ϵ-constraint method [9], where only a chosen objective function is minimized and the other objectives are kept as constraints. Some multiobjective algorithms use a lexicographic method, where the objective functions are optimized in their order of priority [10,11]: first the most preferred function is optimized, then that objective is transformed into a constraint and the second-priority objective is optimized, and the process is repeated until the last objective function has been optimized. The user must choose the sequence of objectives, and two lexicographic optimizations with distinct sequences of objective functions need not produce the same solution. The disadvantages of these approaches are that the weights, constraints, and relative importance of the functions, respectively, are not known in advance and must be specified from the beginning. Other techniques [12,13,14] that need no prior information have been developed for solving unconstrained multiobjective optimization problems (UMOP), with at most a linear convergence rate. Methods such as heuristic or evolutionary approaches [15] provide an approximate Pareto front but do not guarantee convergence.
Newton’s method [16] for single-objective optimization has been extended to solve (UMOP) as an a priori parameter-free optimization method [17]. In this setting the objective functions are twice continuously differentiable, no additional parameters or ordering of the functions are needed, and each objective function is replaced with a quadratic model. The rate of convergence is superlinear, and it is quadratic if the second-order derivatives are Lipschitz continuous. Newton’s method has also been studied in Banach and Hilbert spaces for finding efficient solutions of (UMOP) [18]. A new type of quasi-Newton algorithm has been developed to solve nonsmooth multiobjective optimization problems in which the directional derivative of every objective function exists [19].
A necessary condition for finding a vector critical point of (UMOP) is introduced in the steepest descent algorithm [12], where neither weighting factors nor ordering information for the different objective functions are assumed to be known. The relationship between critical points and efficient points is discussed in [17]: if the domain of (UMOP) is a convex set and the objective functions are component-wise convex, then every critical point is a weakly efficient point, and if the objective functions are component-wise strictly convex, then every critical point is an efficient point. New classes of vector invex and pseudoinvex functions for (UMOP) have also been characterized in terms of critical points and (weak) efficient points [20] by using Fritz John (FJ) and Karush–Kuhn–Tucker (KKT) optimality conditions. Our focus is on Newton’s direction for a standard scalar optimization problem that is implicitly induced by weighting the several objective functions. The weighting values are a priori unknown non-negative KKT multipliers, that is, they do not have to be fixed in advance. Every new point generated by the Newton algorithm [17] induces such weights in the form of KKT multipliers.
Quantum calculus, or q-calculus, is also called calculus without limits. The q-analogues of mathematical objects are recovered as q → 1. The history of quantum calculus can be traced back to Euler (1707–1783), who first introduced the quantum parameter q in Newton’s infinite series. In recent years, many researchers have shown considerable interest in examining and exploring quantum calculus, and it has emerged as an interdisciplinary subject. Quantum analysis is useful in numerous fields, such as signal processing [21], operator theory [22], fractional integrals and derivatives [23], integral inequalities [24], variational calculus [25], transform calculus [26], and sampling theory [27]. Quantum calculus is seen as a bridge between mathematics and physics. For some recent developments in quantum calculus, interested researchers may refer to [28,29,30,31].
The q-calculus was first applied in the area of optimization in [32], where the q-gradient is used in the steepest descent method. Later, the global optimum was sought using the q-steepest descent and q-conjugate gradient methods, in which a descent scheme based on q-calculus is combined with a stochastic approach without analyzing the order of convergence [33]. The q-calculus has been applied in Newton’s method for unconstrained single-objective optimization [34], and this idea was extended to solve (UMOP) within the context of q-calculus [35].
In this paper, we use the q-calculus in a quasi-Newton method for solving (UMOP). We approximate the second-order q-derivative matrices instead of evaluating them, and we prove, using q-calculus, that the convergence rate is superlinear.
The rest of this paper is organized as follows. Section 2 recalls the problem, notation, and preliminaries. Section 3 derives a q-quasi-Newton direction search obtained by solving the (KKT) conditions. Section 4 presents the algorithm and its convergence analysis. Numerical results are given in Section 5, and the conclusion is in the last section.
2. Preliminaries
Denote by R the set of real numbers, by N the set of positive integers, and by R_+ (respectively, R_−) the set of strictly positive (respectively, strictly negative) real numbers. A function that is continuous on any interval not containing zero is called continuously q-differentiable. For a function f : R → R, the q-derivative of f [36], denoted by D_{q,x} f, is given as
$$D_{q,x}f(x)=\begin{cases}\dfrac{f(x)-f(qx)}{(1-q)x}, & x\neq 0,\ q\neq 1,\\[2mm] f'(x), & x=0.\end{cases}$$
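For illustration (a standard q-calculus identity stated here for the reader's convenience; it is not part of the original development), applying this definition to a monomial gives
$$D_{q,x}\,x^n=\frac{x^n-(qx)^n}{(1-q)x}=\frac{1-q^n}{1-q}\,x^{n-1}=\left(1+q+\dots+q^{n-1}\right)x^{n-1},$$
which recovers the classical derivative n x^{n−1} as q → 1.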
Suppose f : R^n → R is a function whose partial derivatives exist. For x ∈ R^n, consider the operator ε_{q,i} acting on f as
$$(\varepsilon_{q,i}f)(x)=f(x_1,x_2,\dots,qx_i,x_{i+1},\dots,x_n).$$
The q-partial derivative of f at x with respect to x_i, denoted by D_{q,x_i} f, is [23]:
$$D_{q,x_i}f(x)=\begin{cases}\dfrac{f(x)-(\varepsilon_{q,i}f)(x)}{(1-q)x_i}, & x_i\neq 0,\ q\neq 1,\\[2mm] \dfrac{\partial f}{\partial x_i}(x), & x_i=0.\end{cases}$$
We are interested in solving the following (UMOP):
$$\text{minimize } F(x)\quad \text{subject to } x\in X,$$
where X ⊆ R^n is the feasible region and F : X → R^m. The function F = (f_1, f_2, …, f_m) is a vector-valued function whose components are real-valued functions f_j : X → R, j = 1, …, m. In general, n and m are independent. For x, y ∈ R^n, we use the vector inequalities:
$$x=y \iff x_i=y_i\ \ \forall i=1,\dots,n,\qquad x\geqq y \iff x_i\ge y_i\ \ \forall i=1,\dots,n,$$
$$x\ge y \iff x\geqq y \text{ and } x\neq y,\qquad x>y \iff x_i>y_i\ \ \forall i=1,\dots,n.$$
A point x* ∈ X is called a Pareto optimal point if there is no point x ∈ X for which F(x) ≤ F(x*) and F(x) ≠ F(x*). A point x* ∈ X is called a weakly Pareto optimal point if there is no x ∈ X for which F(x) < F(x*). Similarly, a point x* ∈ X is a local Pareto optimal point if there exists a neighborhood Y ⊆ X of x* such that x* is Pareto optimal for F restricted to Y, and x* is a local weakly Pareto optimal point if there exists a neighborhood Y ⊆ X of x* such that x* is weakly Pareto optimal for F restricted to Y. The matrix J_F(x) ∈ R^{m×n} is the Jacobian matrix of F at x, i.e., the j-th row of J_F(x) is the q-gradient ∇_q f_j(x), j = 1, …, m. Let W_j(x) be the Hessian matrix of f_j at x for all j = 1, …, m. Note that every Pareto optimal point is a weakly Pareto optimal point [37]. The directional derivative of f_j at x in the descent direction d_q is given as:
$$f_j'(x,d_q)=\lim_{\alpha\to 0}\frac{f_j(x+\alpha d_q)-f_j(x)}{\alpha}.$$
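To make the componentwise orderings and the Pareto definitions above concrete, the following is a minimal Python sketch (not from the paper; the function names and the brute-force filter are illustrative) that tests dominance between objective vectors and extracts the non-dominated points of a finite candidate set.

```python
import numpy as np

def dominates(Fx, Fy):
    """True if F(x) dominates F(y): F(x) <= F(y) componentwise and F(x) != F(y)."""
    Fx, Fy = np.asarray(Fx, float), np.asarray(Fy, float)
    return bool(np.all(Fx <= Fy) and np.any(Fx < Fy))

def nondominated(points):
    """Return the points of a finite set whose objective vectors are not dominated
    by any other point of the set (a discrete analogue of Pareto optimality)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if not np.array_equal(q, p))]

# Example: (1, 2) and (2, 1) are non-dominated, while (2, 3) is dominated by (1, 2).
print(nondominated([np.array([1.0, 2.0]), np.array([2.0, 1.0]), np.array([2.0, 3.0])]))
```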
The necessary condition for a critical point of a multiobjective optimization problem is given in [17]. For any x ∈ R^n, ‖x‖ denotes the Euclidean norm in R^n. Let K(x^0, r) = {x : ‖x − x^0‖ ≤ r} denote the closed ball with center x^0 ∈ R^n and radius r ∈ R_+. The norm of a matrix A ∈ R^{n×n} is ‖A‖ = max_{x ∈ R^n, x ≠ 0} ‖Ax‖/‖x‖. The following proposition shows that when f(x) is a linear function, the q-gradient coincides with the classical gradient.
Proposition 1
([33]). If f(x) = a + p^T x, where a ∈ R and p ∈ R^n, then for any x ∈ R^n and q ∈ (0, 1), we have ∇_q f(x) = ∇f(x) = p.
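The following is a minimal numerical sketch (not the authors' code) of the q-gradient computed componentwise from the q-partial derivative definition above; the function name q_gradient and the central-difference fallback at a zero component are illustrative choices. It also checks Proposition 1 on a linear function.

```python
import numpy as np

def q_gradient(f, x, q=0.9):
    """q-gradient of f at x, computed componentwise from the q-partial derivative
    definition; at a zero component the ordinary partial derivative is approximated
    by a small central difference (illustrative choice)."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        if x[i] != 0.0:
            xq = x.copy()
            xq[i] = q * x[i]
            g[i] = (f(x) - f(xq)) / ((1.0 - q) * x[i])
        else:
            h = 1e-8
            xp, xm = x.copy(), x.copy()
            xp[i] += h
            xm[i] -= h
            g[i] = (f(xp) - f(xm)) / (2.0 * h)
    return g

# Proposition 1: for a linear function f(x) = a + p^T x, the q-gradient equals p.
p = np.array([2.0, -1.0, 0.5])
f = lambda x: 3.0 + p @ x
print(q_gradient(f, np.array([1.0, -2.0, 4.0]), q=0.7))   # -> [ 2.  -1.   0.5]
```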
All quasi-Newton methods approximate the Hessian of the function f by a matrix W^k ∈ R^{n×n} and update it at every iteration from the previous approximation [38]. Line search methods are important methods for (UMOP), in which a search direction is first computed and then a step length is chosen along this direction. The entire process is iterative.
3. The q-Quasi-Newton Direction for Multiobjective Optimization
The most well-known quasi-Newton method for a single objective function is the BFGS (Broyden–Fletcher–Goldfarb–Shanno) method. It is a line search method along a descent direction d_q^k, defined within the context of the q-derivative as:
$$d_q^k=-(W^k)^{-1}\,\nabla_q f(x^k),$$
where f is a continuously q-differentiable function and W^k ∈ R^{n×n} is a positive definite matrix that is updated at every iteration. The new point is:
$$x^{k+1}=x^k+\alpha_k\, d_q^k.$$
In the steepest descent method and in Newton’s method, W^k is taken to be the identity matrix and the exact Hessian of f, respectively. The quasi-Newton BFGS scheme generates the next W^{k+1} as
$$W^{k+1}=W^k-\frac{W^k s^k (s^k)^T W^k}{(s^k)^T W^k s^k}+\frac{y^k (y^k)^T}{(s^k)^T y^k},$$
where s^k = x^{k+1} − x^k = α_k d_q^k and y^k = ∇_q f(x^{k+1}) − ∇_q f(x^k). Newton’s method requires second-order differentiability of the function; here, when computing W^k, we use the q-derivative, which plays the role of a Hessian matrix of f(x). The update W^{k+1} may fail to be positive definite, in which case it can be modified into a positive definite matrix through a symmetric indefinite factorization [39]. The q-quasi-Newton direction d_q(x) is an optimal solution of the following modified problem [40]:
$$\min_{d_q\in\mathbb{R}^n}\ \max_{j=1,\dots,m}\ \nabla_q f_j(x)^T d_q+\tfrac12\, d_q^T W_j(x)\, d_q,$$
where W_j(x) is computed as in (8). The optimal value and the solution of (9) are:
$$\psi(x)=\min_{d_q\in\mathbb{R}^n}\ \max_{j=1,\dots,m}\ \nabla_q f_j(x)^T d_q+\tfrac12\, d_q^T W_j(x)\, d_q,$$
and
$$d_q(x)=\operatorname*{arg\,min}_{d_q\in\mathbb{R}^n}\ \max_{j=1,\dots,m}\ \nabla_q f_j(x)^T d_q+\tfrac12\, d_q^T W_j(x)\, d_q.$$
The problem (9) can be reformulated as the following convex quadratic optimization problem (CQOP):
$$\begin{aligned}&\text{minimize } h(t,d_q)=t,\\ &\text{subject to } \nabla_q f_j(x)^T d_q+\tfrac12\, d_q^T W_j(x)\, d_q-t\le 0,\quad j=1,\dots,m,\end{aligned}$$
where (t, d_q) ∈ R × R^n.
The Lagrangian function of (CQOP) is:
$$L((t,d_q),\lambda)=t+\sum_{j=1}^{m}\lambda_j\left(\nabla_q f_j(x)^T d_q+\tfrac12\, d_q^T W_j(x)\, d_q-t\right).$$
For λ = (λ_1, λ_2, …, λ_m)^T, we obtain the following (KKT) conditions [40]:
$$\sum_{j=1}^{m}\lambda_j\left(\nabla_q f_j(x)+W_j(x)\, d_q\right)=0,$$
$$\lambda_j\ge 0,\quad j=1,\dots,m,$$
$$\sum_{j=1}^{m}\lambda_j=1,$$
$$\nabla_q f_j(x)^T d_q+\tfrac12\, d_q^T W_j(x)\, d_q\le t,\quad j=1,\dots,m,$$
$$\lambda_j\left(\nabla_q f_j(x)^T d_q+\tfrac12\, d_q^T W_j(x)\, d_q-t\right)=0,\quad j=1,\dots,m.$$
The solution (d_q(x), ψ(x)) is unique; set λ_j = λ_j(x) for all j = 1, …, m, with d_q = d_q(x) and t = ψ(x) satisfying (14)–(18). From (14), we obtain
$$d_q(x)=-\left(\sum_{j=1}^{m}\lambda_j(x)\,W_j(x)\right)^{-1}\sum_{j=1}^{m}\lambda_j(x)\,\nabla_q f_j(x).$$
This is the so-called q-quasi-Newton direction for solving (UMOP); a small computational sketch of the direction-finding subproblem is given below. We now present the basic result relating the stationarity condition at a given point x to its q-quasi-Newton direction d_q(x) and the function ψ.
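The following is a minimal, self-contained Python sketch (not the authors' implementation) of how the direction subproblem (CQOP) can be solved numerically; it uses SciPy's SLSQP solver and assumes that the q-gradients ∇_q f_j(x) and the approximations W_j(x) have already been computed. The function name qn_direction and the toy data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def qn_direction(grads, Ws):
    """Solve the direction subproblem (CQOP):
        minimize t  subject to  g_j^T d + 0.5 d^T W_j d - t <= 0,  j = 1, ..., m,
    over (t, d) in R x R^n.  Returns (d_q(x), psi(x))."""
    n = grads[0].size
    # SLSQP inequality constraints must satisfy fun(z) >= 0, with z = (t, d).
    cons = [{"type": "ineq",
             "fun": lambda z, g=g, W=W: z[0] - (g @ z[1:] + 0.5 * z[1:] @ W @ z[1:])}
            for g, W in zip(grads, Ws)]
    res = minimize(lambda z: z[0], np.zeros(n + 1), constraints=cons, method="SLSQP")
    return res.x[1:], res.x[0]

# Toy data: q-gradients and Hessian approximations of two objectives at some point x.
g1, g2 = np.array([1.0, 2.0]), np.array([-1.0, 0.5])
W1, W2 = np.eye(2), np.eye(2)
d, psi = qn_direction([g1, g2], [W1, W2])
print(d, psi)   # psi <= 0, and d is the q-quasi-Newton direction for these data
```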
Proposition 2.
Let ψ : X → R and d_q : X → R^n be given by (10) and (11), respectively, and let W_j(x) ≥ 0 for all x ∈ X. Then,
1. ψ(x) ≤ 0 for all x ∈ X.
2. The conditions below are equivalent:
(a) the point x is non-stationary;
(b) d_q(x) ≠ 0;
(c) ψ(x) < 0;
(d) d_q(x) is a descent direction.
3. The function ψ is continuous.
Proof.
Taking d_q = 0 in (10), we have
$$\psi(x)\le \max_{j=1,\dots,m}\left\{\nabla_q f_j(x)^T 0+\tfrac12\, 0^T W_j(x)\, 0\right\}=0,$$
thus ψ(x) ≤ 0. If ψ(x) < 0, it means that J_F(x) d_q(x) ∈ R_-^m (as the following inequality shows), and thus the given point x ∈ R^n is non-stationary. Since W_j(x) is positive definite, from (10) and (11) we have
$$\nabla_q f_j(x)^T d_q(x)<\nabla_q f_j(x)^T d_q(x)+\tfrac12\, d_q(x)^T W_j(x)\, d_q(x)\le \psi(x)\le 0.$$
Since ψ(x) is the optimal value of (CQOP) and it is negative, the solution of (CQOP) can never be d_q(x) = 0. It is sufficient to show the continuity [41] of ψ on a compact set Y ⊂ X. Since ψ(x) ≤ 0, we have
$$\nabla_q f_j(x)^T d_q(x)\le -\tfrac12\, d_q(x)^T W_j(x)\, d_q(x),$$
for all j = 1, …, m, and the matrices W_j(x), j = 1, …, m, are positive definite for all x ∈ Y. Thus, the eigenvalues of the Hessian matrices W_j(x), j = 1, …, m, are uniformly bounded away from zero on Y, so there exist R, S ∈ R_+ such that
$$R=\max_{x\in Y,\ j=1,\dots,m}\ \|\nabla_q f_j(x)\|,$$
and
$$S=\min_{x\in Y,\ \|e\|=1,\ j=1,\dots,m}\ e^T W_j(x)\, e.$$
From (20) and the Cauchy–Schwarz inequality, we get
$$\tfrac12\, S\,\|d_q(x)\|^2\le \|\nabla_q f_j(x)\|\,\|d_q(x)\|\le R\,\|d_q(x)\|,$$
that is,
$$\|d_q(x)\|\le \frac{2R}{S},$$
for all x ∈ Y; that is, the q-quasi-Newton direction is uniformly bounded on Y. We now consider the family of functions {ψ_{x,j}}, x ∈ Y, j = 1, …, m, where
$$\psi_{x,j}:Y\to\mathbb{R},$$
and
$$z\mapsto \nabla_q f_j(z)^T d_q(x)+\tfrac12\, d_q(x)^T W_j(z)\, d_q(x).$$
We shall prove that this family of functions is uniformly equicontinuous. For a small value ε_z ∈ R_+ there exists δ_z ∈ R_+ such that, for y ∈ K(z, δ_z), we have
$$\|W_j(y)-\nabla_q^2 f_j(z)\|<\frac{\varepsilon_z}{2},$$
and
$$\|\nabla_q^2 f_j(y)-\nabla_q^2 f_j(z)\|<\frac{\varepsilon_z}{2},$$
for all j = 1, …, m; the second inequality holds because of the q-continuity of the Hessian matrices. Since Y is a compact space, there exists a finite sub-cover of Y by such balls. We can write
$$\psi_{x,j}(z)=\nabla_q f_j(z)^T d_q(x)+\tfrac12\, d_q(x)^T W_j(z)\, d_q(x),$$
that is
$$\psi_{x,j}(z)=\nabla_q f_j(z)^T d_q(x)+\tfrac12\, d_q(x)^T \nabla_q^2 f_j(z)\, d_q(x)+\tfrac12\, d_q(x)^T\left(W_j(z)-\nabla_q^2 f_j(z)\right) d_q(x).$$
To show the q-continuity of the last term, let y_1, y_2 ∈ Y be such that ‖y_1 − y_2‖ < δ for a small δ ∈ R_+; then
$$\left|\tfrac12\, d_q(x)^T\left(W_j(y_1)-\nabla_q^2 f_j(y_1)\right) d_q(x)-\tfrac12\, d_q(x)^T\left(W_j(y_2)-\nabla_q^2 f_j(y_2)\right) d_q(x)\right|$$
$$\le \tfrac12\,\|d_q(x)\|^2\left(\|W_j(y_1)-\nabla_q^2 f_j(z_1)\|+\|\nabla_q^2 f_j(z_1)-\nabla_q^2 f_j(y_1)\|+\|W_j(y_2)-\nabla_q^2 f_j(z_2)\|+\|\nabla_q^2 f_j(z_2)-\nabla_q^2 f_j(y_2)\|\right)\le \tfrac12\,\|d_q(x)\|^2\left(\varepsilon_{z_1}+\varepsilon_{z_2}\right),$$
where z_1 and z_2 are centers of balls of the finite sub-cover containing y_1 and y_2, respectively. Hence, ψ_{x,j} is uniformly continuous [40] for all x ∈ Y and all j = 1, …, m; that is, there exists δ ∈ R_+ such that, for all y, z ∈ Y, ‖y − z‖ < δ implies |ψ_{x,j}(y) − ψ_{x,j}(z)| < ε for all x ∈ Y. Now let y, z ∈ Y with ‖y − z‖ < δ. Then
$$\psi(z)\le \max_{j=1,\dots,m}\left\{\nabla_q f_j(z)^T d_q(y)+\tfrac12\, d_q(y)^T W_j(z)\, d_q(y)\right\}=\phi_y(z)\le \phi_y(y)+|\phi_y(z)-\phi_y(y)|<\psi(y)+\varepsilon.$$
Thus, ψ(z) − ψ(y) < ε. Interchanging y and z gives |ψ(z) − ψ(y)| < ε, which proves the continuity of ψ. □
The following modified lemma is due to [17,42].
Lemma 1.
Let F : R^n → R^m be continuously q-differentiable. Let x ∈ X be a non-critical point, so that J_F(x) d_q < 0 for some d_q ∈ R^n, and let σ ∈ (0, 1] and ε > 0. Then,
$$x+\alpha d_q(x)\in X\quad\text{and}\quad F(x+\alpha d_q(x))<F(x)+\alpha\gamma\,\psi(x),$$
for any α ∈ (0, σ] and γ ∈ (0, ε].
Proof.
Since x is not a critical point, ψ(x) < 0. Let r > 0 be such that K(x, r) ⊂ X, and let α ∈ (0, σ]. Therefore,
$$f_j(x+\alpha d_q(x))-f_j(x)=\alpha\,\nabla_q f_j(x)^T d_q(x)+o_j(\alpha d_q(x),x),\quad j=1,\dots,m.$$
Since ∇_q f_j(x)^T d_q(x) < ψ(x) for α ∈ (0, σ], then
$$f_j(x+\alpha d_q(x))-f_j(x)\le \alpha\gamma\,\psi(x)+\alpha(1-\gamma)\,\psi(x)+o_j(\alpha d_q(x),x),\quad j=1,\dots,m.$$
The sum of the last two terms on the right-hand side of the above inequality is non-positive for α ∈ (0, σ], because ψ(x) ≤ ψ(x*)/2 < 0, which gives F(x + α d_q(x)) < F(x) + αγψ(x). □
4. Algorithm and Convergence Analysis
We first present the following Algorithm 1 [43] to compute the q-gradient of a function using q-calculus. Higher-order q-derivatives of f can be found in [44].
Algorithm 1 q-Gradient Algorithm
1: Input q ∈ (0, 1), f(x), x ∈ R, z.
2: if x = 0 then
3:   Set g ← lim_{z→0} (f(z) − f(qz))/(z − qz).
4: else
5:   Set g ← (f(x) − f(qx))/(x − qx).
6: Output ∇_q f(x) ← g.
Example 1.
Given f : R² → R defined by f(x_1, x_2) = x_2² + 3x_1³, we have
$$\nabla_q f(x)=\begin{pmatrix}3x_1^2(1+q+q^2)\\[1mm] x_2(1+q)\end{pmatrix}.$$
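As a quick numerical check of Example 1 (an illustrative sketch, not from the paper; the symbolic limit branch of Algorithm 1 at a zero component is omitted because the test point has nonzero components):

```python
import numpy as np

def q_gradient(f, x, q):
    """Componentwise q-difference of Algorithm 1; the limit branch at x_i = 0
    is omitted since the test point below has nonzero components."""
    g = np.zeros_like(x)
    for i in range(x.size):
        xq = x.copy()
        xq[i] = q * x[i]
        g[i] = (f(x) - f(xq)) / ((1.0 - q) * x[i])
    return g

f = lambda x: x[1] ** 2 + 3.0 * x[0] ** 3
x, q = np.array([2.0, -1.5]), 0.8
numeric = q_gradient(f, x, q)
analytic = np.array([3 * x[0] ** 2 * (1 + q + q ** 2), x[1] * (1 + q)])
print(np.allclose(numeric, analytic))   # True
```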
We are now ready to state the unconstrained q-quasi-Newton Algorithm 2 for solving (UMOP). At each step, we solve (CQOP) to find the q-quasi-Newton direction and then obtain the step length using the Armijo line search. In every iteration, the new point and the Hessian approximations are generated from the previous ones; an illustrative Python sketch of the whole loop is given after the listing.
Algorithm 2 q-Quasi-Newton’s Algorithm for Unconstrained Multiobjective Optimization (q-QNUM)
1: Choose q ∈ (0, 1), x^0 ∈ X, a symmetric positive definite matrix W^0 ∈ R^{n×n}, c ∈ (0, 1), and a small tolerance ϵ > 0.
2: for k = 0, 1, 2, … do
3:   Solve (CQOP).
4:   Compute d_q^k and ψ^k.
5:   if ψ^k > −ϵ then
6:     Stop.
7:   else
8:     Choose α_k as an α ∈ (0, 1] such that x^k + α d_q^k ∈ X and F(x^k + α d_q^k) ≤ F(x^k) + c α ψ^k.
9:     Update x^{k+1} ← x^k + α_k d_q^k.
10:    Update W_j^{k+1}, j = 1, …, m, using (8).
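The following is a compact, self-contained Python sketch of Algorithm 2, written for illustration only and under simplifying assumptions: the constraint x ∈ X is ignored, the positive definiteness repair of [39] is replaced by simply skipping an update that would break it, and the subproblem (CQOP) is solved with SciPy's SLSQP. All names (q_gradient, solve_cqop, bfgs_update, q_qnum) are illustrative and are not from the paper's MATLAB implementation.

```python
import numpy as np
from scipy.optimize import minimize

def q_gradient(f, x, q):
    """q-gradient via Algorithm 1 (ordinary forward difference at a zero component)."""
    g = np.zeros_like(x)
    for i in range(x.size):
        if x[i] != 0.0:
            xq = x.copy(); xq[i] = q * x[i]
            g[i] = (f(x) - f(xq)) / ((1.0 - q) * x[i])
        else:
            h = 1e-8
            xp = x.copy(); xp[i] += h
            g[i] = (f(xp) - f(x)) / h
    return g

def solve_cqop(grads, Ws):
    """Direction subproblem (CQOP) over z = (t, d); returns (d_q, psi)."""
    n = grads[0].size
    cons = [{"type": "ineq",
             "fun": lambda z, g=g, W=W: z[0] - (g @ z[1:] + 0.5 * z[1:] @ W @ z[1:])}
            for g, W in zip(grads, Ws)]
    res = minimize(lambda z: z[0], np.zeros(n + 1), constraints=cons, method="SLSQP")
    return res.x[1:], res.x[0]

def bfgs_update(W, s, y):
    """BFGS update (8); skipped when it would destroy positive definiteness."""
    Ws = W @ s
    if s @ y <= 1e-12 or s @ Ws <= 1e-12:
        return W
    return W - np.outer(Ws, Ws) / (s @ Ws) + np.outer(y, y) / (s @ y)

def q_qnum(F, x0, q=0.9, c=1e-4, eps=1e-6, max_iter=200):
    """Sketch of Algorithm 2 (q-QNUM); F is a list of scalar objective functions."""
    x = np.asarray(x0, dtype=float)
    Ws = [np.eye(x.size) for _ in F]
    for _ in range(max_iter):
        grads = [q_gradient(f, x, q) for f in F]
        d, psi = solve_cqop(grads, Ws)
        if psi > -eps:                                  # stopping criterion (step 5)
            break
        Fx = np.array([f(x) for f in F])
        alpha = 1.0                                     # Armijo backtracking (step 8)
        while alpha > 1e-10:
            if np.all(np.array([f(x + alpha * d) for f in F]) <= Fx + c * alpha * psi):
                break
            alpha *= 0.5
        x_new = x + alpha * d
        new_grads = [q_gradient(f, x_new, q) for f in F]
        Ws = [bfgs_update(W, x_new - x, gn - g)         # step 10, update (8)
              for W, g, gn in zip(Ws, grads, new_grads)]
        x = x_new
    return x

# Toy usage on a simple bi-objective problem.
F = [lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2,
     lambda x: x[0] ** 2 + (x[1] - 2.0) ** 2]
print(q_qnum(F, np.array([3.0, -3.0])))
```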
We now show that every sequence produced by the proposed method converges to a weakly efficient point, no matter how poorly the initial point is chosen. We assume that the method does not stop and produces an infinite sequence of iterates. We now present the modified sufficient conditions for superlinear convergence [17,40] within the context of q-calculus.
Theorem 1.
Let {x^k} be a sequence generated by (q-QNUM), and let Y ⊂ X be a convex set. Let γ ∈ (0, 1) and r, a, b, δ, ε > 0, and assume that
(a) aI ≤ W_j(x) ≤ bI for all x ∈ Y, j = 1, …, m;
(b) ‖∇_q² f_j(y) − ∇_q² f_j(x)‖ < ε/2 for all x, y ∈ Y with ‖y − x‖ < δ;
(c) ‖(W_j^k − ∇_q² f_j(x^k))(y − x^k)‖ < (ε/2)‖y − x^k‖ for all k ≥ k_0, y ∈ Y, j = 1, …, m;
(d) ε/a ≤ 1 − c;
(e) K(x^0, r) ⊆ Y;
(f) ‖d_q(x^0)‖ < min{δ, r(1 − ε/a)}.
Then, for all k ≥ k_0, we have that
1. ‖x^k − x^{k_0}‖ ≤ ‖d_q(x^0)‖ (1 − (ε/a)^{k−k_0})/(1 − ε/a);
2. α_k = 1;
3. ‖d_q(x^k)‖ ≤ ‖d_q(x^{k_0})‖ (ε/a)^{k−k_0};
4. ‖d_q(x^{k+1})‖ ≤ ‖d_q(x^k)‖ (ε/a).
Moreover, the sequence {x^k} converges to a local Pareto optimal point x* ∈ R^n, and the convergence rate is superlinear.
Proof.
From parts 1 and 3 of this theorem and the triangle inequality,
$$\|x^k+d_q(x^k)-x^0\|\le \frac{1-(\varepsilon/a)^{k+1}}{1-\varepsilon/a}\,\|d_q(x^{k_0})\|.$$
From (d) and (f), it follows that x^k, x^k + d_q(x^k) ∈ K(x^{k_0}, r) and ‖x^k + d_q(x^k) − x^k‖ < δ. We also have
$$f_j(x^k+d_q(x^k))\le f_j(x^k)+\nabla_q f_j(x^k)^T d_q(x^k)+\tfrac12\, d_q(x^k)^T \nabla_q^2 f_j(x^k)\, d_q(x^k)+\tfrac{\varepsilon}{2}\,\|d_q(x^k)\|^2,$$
that is,
$$f_j(x^k+d_q(x^k))\le f_j(x^k)+\psi(x^k)+\tfrac{\varepsilon}{2}\,\|d_q(x^k)\|^2=f_j(x^k)+\gamma\psi(x^k)+(1-\gamma)\psi(x^k)+\tfrac{\varepsilon}{2}\,\|d_q(x^k)\|^2.$$
Since ψ(x^k) ≤ 0 and
$$(1-\gamma)\psi(x^k)+\tfrac{\varepsilon}{2}\,\|d_q(x^k)\|^2\le \big(\varepsilon-a(1-\gamma)\big)\frac{\|d_q(x^k)\|^2}{2}\le 0,$$
we get
$$f_j(x^k+d_q(x^k))\le f_j(x^k)+\gamma\psi(x^k),$$
for all j = 1, …, m. The Armijo condition therefore holds for α_k = 1, and part 1 of this theorem holds. We now have x^k, x^{k+1} ∈ K(x^{k_0}, r) and ‖x^{k+1} − x^k‖ < δ; thus, x^{k+1} = x^k + d_q(x^k). We now define v(x^{k+1}) = Σ_{j=1}^m λ_j^k ∇_q f_j(x^{k+1}). Therefore,
$$|\psi(x^{k+1})|\le \frac{1}{2a}\,\|v(x^{k+1})\|^2.$$
We now estimate ‖v(x^{k+1})‖. For x ∈ X, we define
$$G^k(x):=\sum_{j=1}^{m}\lambda_j^k\, f_j(x),$$
and
$$H^k=\sum_{j=1}^{m}\lambda_j^k\, W_j(x^k),$$
where λ_j^k ≥ 0, j = 1, …, m, are the KKT multipliers. We obtain the following:
$$\nabla_q G^k(x)=\sum_{j=1}^{m}\lambda_j^k\,\nabla_q f_j(x),$$
and
$$\nabla_q^2 G^k(x)=\sum_{j=1}^{m}\lambda_j^k\,\nabla_q^2 f_j(x).$$
Then, v(x^{k+1}) = ∇_q G^k(x^{k+1}). We get
$$d_q(x^k)=-(H^k)^{-1}\,\nabla_q G^k(x^k).$$
From assumptions (b) and (c) of this theorem,
$$\|\nabla_q^2 G^k(y)-\nabla_q^2 G^k(x^k)\|<\frac{\varepsilon}{2},$$
$$\|(H^k-\nabla_q^2 G^k(x^k))(y-x^k)\|<\frac{\varepsilon}{2}\,\|y-x^k\|$$
hold for all x, y ∈ Y with ‖y − x‖ < δ and k ≥ k_0. We have
$$\|\nabla_q G^k(x^k+d_q(x^k))-(\nabla_q G^k(x^k)+H^k d_q(x^k))\|<\varepsilon\,\|d_q(x^k)\|.$$
Since ∇_q G^k(x^k) + H^k d_q(x^k) = 0, then
$$\|v(x^{k+1})\|=\|\nabla_q G^k(x^{k+1})\|<\varepsilon\,\|d_q(x^k)\|,$$
and
$$|\psi^{k+1}|\le \frac{1}{2a}\,\|v(x^{k+1})\|^2<\frac{\varepsilon^2}{2a}\,\|d_q(x^k)\|^2.$$
We have
$$\frac{a}{2}\,\|d_q(x^{k+1})\|^2<\frac{\varepsilon^2}{2a}\,\|d_q(x^k)\|^2.$$
Thus,
$$\|d_q(x^{k+1})\|<\frac{\varepsilon}{a}\,\|d_q(x^k)\|.$$
Thus, part 4 is proved. We finally prove the superlinear convergence of {x^k}. First, we define
$$r_k=\|d_q(x^{k_0})\|\,\frac{(\varepsilon/a)^{k-k_0}}{1-\varepsilon/a},$$
and
$$\delta_k=\|d_q(x^{k_0})\|\,(\varepsilon/a)^{k-k_0}.$$
From the triangle inequality, assumptions (e), (f), and part 1, we have K(x^k, r_k) ⊂ K(x^{k_0}, r) ⊂ Y. Choose any τ ∈ R_+ and define
$$\bar{\varepsilon}=\min\left\{\frac{a\tau}{1+2\tau},\ \varepsilon\right\}.$$
For k ≥ k_0, the inequalities
$$\|\nabla_q^2 f_j(y)-\nabla_q^2 f_j(x)\|<\frac{\bar{\varepsilon}}{2}$$
for all x, y ∈ K(x^k, r_k) with ‖y − x‖ < δ_k, and
$$\|(W_j(x^l)-\nabla_q^2 f_j(x^l))(y-x^l)\|<\frac{\bar{\varepsilon}}{2}\,\|y-x^l\|$$
for all y ∈ K(x^k, r_k) and l ≥ k, hold for j = 1, …, m. Assumptions (a)–(f) are therefore satisfied with ε̄, r_k, δ_k, and x^k in place of ε, r, δ, and x^0, respectively. We have
$$\|x^l-x^k\|\le \|d_q(x^k)\|\,\frac{1-(\bar{\varepsilon}/a)^{l-k}}{1-\bar{\varepsilon}/a}.$$
Letting l → ∞, we get ‖x* − x^k‖ ≤ ‖d_q(x^k)‖/(1 − ε̄/a). Using the last inequality and part 4, we have
$$\|x^*-x^{k+1}\|\le \frac{\|d_q(x^{k+1})\|}{1-\bar{\varepsilon}/a}\le \|d_q(x^k)\|\,\frac{\bar{\varepsilon}/a}{1-\bar{\varepsilon}/a}.$$
From the above and the triangle inequality, we have
$$\|x^*-x^k\|\ge \|x^{k+1}-x^k\|-\|x^*-x^{k+1}\|,$$
that is,
$$\|x^*-x^k\|\ge \|d_q(x^k)\|-\|d_q(x^k)\|\,\frac{\bar{\varepsilon}/a}{1-\bar{\varepsilon}/a}=\|d_q(x^k)\|\,\frac{1-2\bar{\varepsilon}/a}{1-\bar{\varepsilon}/a}.$$
Since 1 − 2ε̄/a > 0 and 1 − ε̄/a > 0, combining the last two inequalities and using the definition of ε̄, we get
$$\|x^*-x^{k+1}\|\le \tau\,\|x^*-x^k\|,$$
where τ ∈ R_+ is chosen arbitrarily. Thus, the sequence {x^k} converges superlinearly to x*. □
5. Numerical Results
The proposed algorithm (q-QNUM), i.e., Algorithm 2 presented in Section 4, is implemented in MATLAB (2017a) and tested on some test problems known from the literature. All tests were run under the same conditions. Box constraints of the form lb ≤ x ≤ ub are used for each test problem; they are handled within the direction search problem (CQOP) so that the newly generated point always lies in the same box, that is, lb ≤ x + d_q ≤ ub holds. We use the stopping criterion at x^k: ψ(x^k) > −ϵ, where ϵ ∈ R_+. All test problems given in Table 1 are solved 100 times, with starting points chosen randomly from a uniform distribution between lb and ub. The first column of the table is the name of the test problem (we use the abbreviation of the authors' names and the number of the problem in the corresponding paper), the second column indicates the source paper, and the third column gives the lower and upper bounds. We compare the results of (q-QNUM) with (QNMO) of [40] in terms of the number of iterations (iter), the number of objective function evaluations (obj), and the number of gradient evaluations (grad), respectively. From Table 1, we can conclude that our algorithm shows better performance.
Example 2.
Find the approximate Pareto front using (q-QNUM) and (QNMO) for the given (UMOP) [45]:
$$\text{Minimize } f_1(x_1,x_2)=(x_1-1)^2+(x_1-x_2)^2,\qquad \text{Minimize } f_2(x_1,x_2)=(x_2-3)^2+(x_1-x_2)^2,$$
where −3 ≤ x_1, x_2 ≤ 10.
The Pareto points generated by (q-QNUM) with Algorithm 1 and by (QNMO) are shown in Figure 1. One can observe that iter = 200 iterations in (q-QNUM) and iter = 525 in (QNMO) are needed to generate the approximate Pareto front of the above (UMOP).
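As an illustration only (this is not the MATLAB code used for the reported results), the bi-objective problem of Example 2 can be run through the Python q_qnum sketch given after Algorithm 2, assuming that function is defined in the same session; the approximate Pareto front is traced by starting from random points in the box, as described above.

```python
import numpy as np
import matplotlib.pyplot as plt

# Example 2 objectives.
f1 = lambda x: (x[0] - 1.0) ** 2 + (x[0] - x[1]) ** 2
f2 = lambda x: (x[1] - 3.0) ** 2 + (x[0] - x[1]) ** 2

rng = np.random.default_rng(0)
starts = rng.uniform(-3.0, 10.0, size=(100, 2))        # random starts in the box
sols = [q_qnum([f1, f2], x0) for x0 in starts]          # q_qnum: sketch after Algorithm 2
front = np.array([[f1(x), f2(x)] for x in sols])

plt.scatter(front[:, 0], front[:, 1], s=10)
plt.xlabel("f1"); plt.ylabel("f2"); plt.title("Approximate Pareto front (cf. Figure 1)")
plt.show()
```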
6. Conclusions
The q-quasi-Newton method converges superlinearly to a solution of (UMOP) when all objective functions are strongly convex within the context of the q-derivative. In a neighborhood of this solution, the algorithm accepts the full Armijo step length. Numerically, approximating the second-order q-derivative matrices is faster than their exact evaluation, and the proposed algorithm shows better performance than the existing quasi-Newton method (QNMO).
Table 1. Comparison of (q-QNUM) and (QNMO): number of iterations (iter), objective function evaluations (obj), and gradient evaluations (grad).

| Problem | Source | [lb, ub] | (q-QNUM) iter | (q-QNUM) obj | (q-QNUM) grad | (QNMO) iter | (QNMO) obj | (QNMO) grad |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BK1 | [46] | [−5, 10] | 200 | 200 | 200 | 200 | 200 | 200 |
| MOP5 | [46] | [−30, 30] | 141 | 965 | 612 | 333 | 518 | 479 |
| MOP6 | [46] | [0, 1] | 250 | 2177 | 1712 | 181 | 2008 | 2001 |
| MOP7 | [46] | [−400, 400] | 200 | 200 | 200 | 751 | 1061 | 1060 |
| DG01 | [47] | [−10, 13] | 175 | 724 | 724 | 164 | 890 | 890 |
| IKK1 | [47] | [−50, 50] | 170 | 170 | 170 | 253 | 254 | 253 |
| SP1 | [45] | [−3, 2] | 200 | 200 | 200 | 525 | 706 | 706 |
| SSFYY1 | [45] | [−2, 2] | 200 | 200 | 200 | 200 | 300 | 300 |
| SSFYY2 | [45] | [−100, 100] | 263 | 277 | 277 | 263 | 413 | 413 |
| SK1 | [48] | [−10, 10] | 139 | 1152 | 1152 | 87 | 732 | 791 |
| SK2 | [48] | [−3, 11] | 154 | 1741 | 1320 | 804 | 1989 | 1829 |
| VU1 | [49] | [−3, 3] | 316 | 1108 | 1108 | 11,361 | 19,521 | 11,777 |
| VU2 | [49] | [−3, 7] | 99 | 1882 | 1882 | 100 | 1900 | 1900 |
| VFM1 | [50] | [−2, 2] | 195 | 195 | 195 | 195 | 290 | 290 |
| VFM2 | [50] | [−4, 4] | 200 | 200 | 200 | 524 | 693 | 678 |
| VFM3 | [50] | [−3, 3] | 161 | 1130 | 601 | 690 | 1002 | 981 |
Author Contributions
K.K.L. gave reasonable suggestions for this manuscript; S.K.M. gave the research direction of this paper; B.R. revised and completed this manuscript. All authors have read and agreed to the published version of the manuscript.
Funding
This research was supported by the Science and Engineering Research Board (Grant No. DST-SERB- MTR-2018/000121) and the University Grants Commission (IN) (Grant No. UGC-2015-UTT-59235).
Acknowledgments
The authors are grateful to the anonymous reviewers and the editor for the valuable comments and suggestions to improve the presentation of this paper.
Conflicts of Interest
The authors declare no conflict of interest.
© 2020. This work is licensed under http://creativecommons.org/licenses/by/3.0/ (the “License”).
Abstract
A parameter-free optimization technique is applied in a quasi-Newton method for solving unconstrained multiobjective optimization problems. The Hessian matrices are constructed using the q-derivative and are positive definite at every iteration. The step length is computed by an Armijo-like rule, which, due to the q-derivative, helps the iterates escape from a local minimum toward a global minimum at every iteration. Further, the rate of convergence is proved to be superlinear in a local neighborhood of a minimum point based on the q-derivative. Finally, numerical experiments show better performance.