
Abstract

This paper introduces a novel Picard-type iterative algorithm for solving general variational inequalities in real Hilbert spaces. The proposed algorithm enhances both the theoretical framework and practical applicability of iterative algorithms by relaxing restrictive conditions on parametric sequences, thereby expanding their scope of use. We establish convergence results, including a convergence equivalence with a previous algorithm, highlighting the theoretical relationship while demonstrating the increased flexibility and efficiency of the new approach. The paper also addresses gaps in the existing literature by offering new theoretical insights into the transformations associated with variational inequalities and the continuity of their solutions, thus paving the way for future research. The theoretical advancements are complemented by practical applications, such as the adaptation of the algorithm to convex optimization problems and its use in real-world contexts like machine learning. Numerical experiments confirm the proposed algorithm’s versatility and efficiency, showing superior performance and faster convergence compared to an existing method.


1. Introduction

In this paper, we adopt the standard notation for a real Hilbert space $H$. The inner product on $H$ is denoted by $\langle\cdot,\cdot\rangle$, and the associated norm by $\|\cdot\|$. Let $\mathcal{H}$ denote a nonempty, closed, and convex subset of $H$, and let $T, g : H \to H$ be two nonlinear operators. The operator $T$ is called

(i). $\lambda$-Lipschitzian if there exists a constant $\lambda>0$, such that

(1) $(\forall x,y\in H)\quad \|Tx-Ty\| \le \lambda\|x-y\|$;

(ii). Nonexpansive if

(2) $(\forall x,y\in H)\quad \|Tx-Ty\| \le \|x-y\|$;

(iii). $\alpha$-inverse strongly monotonic if there exists a constant $\alpha>0$, such that

(3) $(\forall x,y\in H)\quad \langle Tx-Ty,\,x-y\rangle \ge \alpha\|Tx-Ty\|^{2}$;

(iv). $r$-strongly monotonic if there exists a constant $r>0$, such that

$(\forall x,y\in H)\quad \langle Tx-Ty,\,x-y\rangle \ge r\|x-y\|^{2}$;

(v). Relaxed $(\gamma,r)$ cocoercive if there exist constants $\gamma>0$ and $r>0$, such that

(4) $(\forall x,y\in H)\quad \langle Tx-Ty,\,x-y\rangle \ge -\gamma\|Tx-Ty\|^{2} + r\|x-y\|^{2}$.

It is evident that the classes of $\alpha$-inverse strongly monotonic and $r$-strongly monotonic mappings are subsets of the class of relaxed $(\gamma,r)$ cocoercive mappings; however, the reverse implication does not hold.

Example 1.

Let $H=\mathbb{R}$ and $\mathcal{H}=[1/\sqrt{3},\infty)$. Clearly, $H=\mathbb{R}$ is a Hilbert space with norm $\|x\|=|x|$ induced by the inner product $\langle x,y\rangle=x\cdot y$. Define the operator $T : [1/\sqrt{3},\infty)\to\mathbb{R}$ by $Tx=-x^{3}+10$.

We demonstrate that $T$ is relaxed $(\gamma,r)$ cocoercive with $\gamma=2$ and $r=1$. Specifically, we aim to verify that for all $x,y\in[1/\sqrt{3},\infty)$,

$\langle Tx-Ty,\,x-y\rangle + 2\|Tx-Ty\|^{2} - \|x-y\|^{2} \ge 0.$

First, note that $\langle Tx-Ty,\,x-y\rangle = \langle -x^{3}+y^{3},\,x-y\rangle = -(x-y)^{2}\bigl(x^{2}+xy+y^{2}\bigr)$, and

$\gamma\|Tx-Ty\|^{2} - r\|x-y\|^{2} = 2(x-y)^{2}\bigl(x^{2}+xy+y^{2}\bigr)^{2} - (x-y)^{2}.$

Combining the terms, for all $x,y\in[1/\sqrt{3},\infty)$, we see that

$\langle Tx-Ty,\,x-y\rangle + 2\|Tx-Ty\|^{2} - \|x-y\|^{2} = -(x-y)^{2}\bigl(x^{2}+xy+y^{2}\bigr) + 2(x-y)^{2}\bigl(x^{2}+xy+y^{2}\bigr)^{2} - (x-y)^{2} = (x-y)^{2}\Bigl[2\bigl(x^{2}+xy+y^{2}\bigr)^{2} - \bigl(x^{2}+xy+y^{2}\bigr) - 1\Bigr] \ge 0.$

Indeed, putting $X=x^{2}+xy+y^{2}$, we conclude that

$2\bigl(x^{2}+xy+y^{2}\bigr)^{2} - \bigl(x^{2}+xy+y^{2}\bigr) - 1 = 2X^{2}-X-1 = (2X+1)(X-1) \ge 0,$

because $X\ge 1$ for all $x,y\in[1/\sqrt{3},\infty)$. Thus, $T$ is relaxed $(2,1)$ cocoercive.

Since $\langle Tx-Ty,\,x-y\rangle = -(x-y)^{2}\bigl(x^{2}+xy+y^{2}\bigr) \le 0$ for all $x,y\in[1/\sqrt{3},\infty)$, there is no positive constant $\alpha$ such that (3) holds. Thus, the operator $T$ is not $\alpha$-inverse strongly monotonic. Likewise, it is not $r$-strongly monotonic.
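To make the verification above concrete, the following minimal Python check (an illustration, not part of the original computations) samples points from a finite portion of $[1/\sqrt{3},\infty)$ and tests both the relaxed $(2,1)$ cocoercivity inequality and the failure of monotonicity for $Tx=-x^{3}+10$.

```python
import numpy as np

def T(x):
    return -x**3 + 10.0

rng = np.random.default_rng(0)
# Sample points from [1/sqrt(3), +inf), truncated to a finite range for sampling.
xs = 1/np.sqrt(3) + 10.0 * rng.random(1000)
ys = 1/np.sqrt(3) + 10.0 * rng.random(1000)

gamma, r = 2.0, 1.0
# Relaxed (gamma, r) cocoercivity:  <Tx-Ty, x-y> >= -gamma*|Tx-Ty|^2 + r*|x-y|^2
lhs = (T(xs) - T(ys)) * (xs - ys)
rhs = -gamma * (T(xs) - T(ys))**2 + r * (xs - ys)**2
print(np.all(lhs >= rhs - 1e-12))   # True: the cocoercivity inequality holds
# Monotonicity fails: the inner product is <= 0, so T cannot be strongly monotone
# or inverse strongly monotone.
print(np.all(lhs <= 1e-12))         # True
```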

The theory of variational inequalities, initially introduced by Stampacchia [1] in the context of obstacle problems in potential theory, provides a powerful framework for addressing a broad spectrum of problems in both pure and applied sciences. Stampacchia’s pioneering work revealed that the minimization of differentiable convex functions associated with such problems can be characterized by inequalities, thus establishing the foundation for variational inequality theory. The classical variational inequality problem (VIP) is commonly stated as follows:

Find $u\in\mathcal{H}$ such that

(5) $(\forall v\in\mathcal{H})\quad \langle Tu,\,v-u\rangle \ge 0,$

where $T : \mathcal{H}\to H$ is a given operator. The VI (5) and its solution set are denoted by $\mathrm{VI}(\mathcal{H},T)$ and $\Omega(\mathcal{H},T)=\{u\in\mathcal{H} : \langle Tu,\,v-u\rangle \ge 0,\ \forall v\in\mathcal{H}\}$, respectively.

Lions and Stampacchia [2] further expanded this theory, demonstrating its deep connections to other classical mathematical results, including the Riesz–Fréchet representation theorem and the Lax–Milgram lemma. Over time, the scope of variational inequality theory has been extended and generalized, becoming an indispensable tool for the analysis of optimization problems, equilibrium systems, and dynamic processes in a variety of fields. The historical development of variational principles, with contributions from figures such as Euler, Lagrange, Newton, and the Bernoulli brothers, highlights their profound impact on the mathematical sciences. These principles serve as the foundation for solving maximum and minimum problems across diverse disciplines such as mechanics, game theory, economics, general relativity, transportation, and machine learning. Both classical and contemporary studies emphasize the importance of variational methods in solving differential equations, modeling physical phenomena, and formulating unified theories in elementary particle physics. The remarkable versatility of variational inequalities stems from their ability to provide a generalized framework for tackling a wide range of problems, thereby advancing both theoretical insights and computational techniques.

Consequently, the theory of variational inequalities has garnered significant attention over the past three decades, with substantial efforts directed towards its development in various directions [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17]. Building on this rich foundation, Noor [18] introduced a significant extension of variational inequalities known as the general nonlinear variational inequality (GNVI), formulated as follows:

Find $u\in H$ such that

(6) $(\forall v\in H \text{ with } g(v),g(u)\in\mathcal{H})\quad \langle Tu,\,g(v)-g(u)\rangle \ge 0.$

The GNVI (6) and its solution set are denoted by $\mathrm{GNVI}(\mathcal{H},T,g)$ and $\Omega(\mathcal{H},T,g)=\{u\in H : \langle Tu,\,g(v)-g(u)\rangle \ge 0,\ \forall v\in H,\ g(v),g(u)\in\mathcal{H}\}$, respectively. It has been shown in Ref. [18] that problem (6) reduces to $\mathrm{VI}(\mathcal{H},T)$ when $g\equiv I$ (the identity operator). Furthermore, the GNVI problem can be reformulated as a general nonlinear complementarity problem:

Find $u\in H$ such that

(7) $\langle Tu,\,g(u)\rangle = 0, \qquad g(u)\in\mathcal{H}, \qquad Tu\in\mathcal{H}^{*},$

where $\mathcal{H}^{*}=\{u\in H : \langle u,v\rangle \ge 0 \text{ for each } v\in\mathcal{H}\}$ is the dual cone of a convex cone $\mathcal{H}$ in $H$. If $g(u)=u-m(u)$, where $m$ is a point-to-point mapping, problem (7) corresponds to the implicit (quasi-)complementarity problem.

A wide range of problems arising in various branches of pure and applied sciences have been studied within the unified framework of the GNVI problem (6) (see Refs. [1,19,20,21]). As an illustration of its application in differential equation theory, Noor [22] successfully formulated and studied the following third-order implicit obstacle boundary value problem:

Find $u(x)$ such that, on $\Lambda=[0,1]$,

$-u''' \ge f(x), \qquad u \ge \psi(x), \qquad \bigl[-u'''-f(x)\bigr]\bigl[u-\psi(x)\bigr]=0, \qquad u(0)=0,\ \ u'(0)=0,\ \ u(1)=0,$

where $\psi(x)$ is an obstacle function and $f(x)$ is a continuous function.

The projection operator technique enables the establishment of an equivalence between the variational inequality $\mathrm{VI}(\mathcal{H},T)$ and fixed-point problems, as follows:

Lemma 1.

Let $P_{\mathcal{H}} : H\to\mathcal{H}$ be the projection onto $\mathcal{H}$ (which is also nonexpansive). For a given $z\in H$, the condition

$(\forall v\in\mathcal{H})\quad \langle u-z,\,v-u\rangle \ge 0,$

is equivalent to $u=P_{\mathcal{H}}[z]$. This implies that

$u\in\mathrm{VI}(\mathcal{H},T) \iff u=P_{\mathcal{H}}\bigl[u-\sigma Tu\bigr],$

where $\sigma>0$ is a constant.
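A one-dimensional illustration of Lemma 1, with hypothetical data chosen only for this sketch: take $\mathcal{H}=[0,\infty)$ and $Tu=u-2$, whose variational inequality solution is $u^{*}=2$. The projection map $u\mapsto P_{\mathcal{H}}[u-\sigma Tu]$ then has $u^{*}$ as its fixed point, and iterating it recovers $u^{*}$.

```python
import numpy as np

# One-dimensional illustration of Lemma 1 (hypothetical data):
# H = [0, +inf), T(u) = u - 2.  The solution of VI(H, T) is u* = 2.
proj = lambda z: np.maximum(z, 0.0)   # metric projection onto [0, +inf)
T = lambda u: u - 2.0
sigma = 0.5

u = 10.0
for _ in range(50):
    u = proj(u - sigma * T(u))        # u_{n+1} = P_H[u_n - sigma*T(u_n)]
print(u)                              # -> 2.0, and indeed u* = P_H[u* - sigma*T(u*)]
```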

Applying this lemma to the GNVI problem (6), Noor [18] derived the following equivalence result, which establishes a connection between the GNVI and fixed-point problems:

Lemma 2.

Let $P_{\mathcal{H}} : H\to\mathcal{H}$ be the projection onto $\mathcal{H}$ (which is also nonexpansive). A function $u\in H$ satisfies the GNVI problem (6) if and only if it satisfies the relation

(8) $g(u)=P_{\mathcal{H}}\bigl[g(u)-\sigma Tu\bigr],$

where $\sigma>0$ is a constant.

This equivalence has played a crucial role in the development of efficient methods for solving GNVI problems and related optimization problems. Noor [22] showed that the relation (8) can be rewritten as

$u = u-g(u)+P_{\mathcal{H}}\bigl[g(u)-\sigma Tu\bigr],$

which implies that

(9) $u = Su = u-g(u)+P_{\mathcal{H}}\bigl[g(u)-\sigma Tu\bigr] = S\bigl\{u-g(u)+P_{\mathcal{H}}\bigl[g(u)-\sigma Tu\bigr]\bigr\},$

where $S : H\to H$ is a nonexpansive operator and $F(S)$ denotes the set of fixed points of $S$.

Numerous iterative methods have been proposed for solving variational inequalities and variational inclusions [22,23,24,25,26,27,28,29,30,31]. Among these, Noor [22] introduced an iterative algorithm based on the fixed-point formulation (9) to find a common solution to both the general nonlinear variational inequality GNVI and the fixed-point problem. The algorithm is described as follows:

(10) $x_0^{(1)}\in H,\qquad z_n^{(1)}=(1-c_n)x_n^{(1)}+c_n\,S\bigl\{x_n^{(1)}-g(x_n^{(1)})+P_{\mathcal{H}}\bigl[g(x_n^{(1)})-\sigma Tx_n^{(1)}\bigr]\bigr\},\qquad y_n^{(1)}=(1-b_n)x_n^{(1)}+b_n\,S\bigl\{z_n^{(1)}-g(z_n^{(1)})+P_{\mathcal{H}}\bigl[g(z_n^{(1)})-\sigma Tz_n^{(1)}\bigr]\bigr\},\qquad x_{n+1}^{(1)}=(1-a_n)x_n^{(1)}+a_n\,S\bigl\{y_n^{(1)}-g(y_n^{(1)})+P_{\mathcal{H}}\bigl[g(y_n^{(1)})-\sigma Ty_n^{(1)}\bigr]\bigr\},$

where $\{a_n\}_{n=0}^{\infty}$, $\{b_n\}_{n=0}^{\infty}$, and $\{c_n\}_{n=0}^{\infty}\subset[0,1]$.

The convergence of this algorithm was established in [22] under the following conditions:

Theorem 1.

Let $T : H\to H$ be a relaxed $(\gamma,r)$ cocoercive and $\lambda$-Lipschitzian mapping, $g : H\to H$ be a relaxed $(\gamma_1,r_1)$ cocoercive and $\lambda_1$-Lipschitzian mapping, and $S : H\to H$ be a nonexpansive mapping such that $F(S)\cap\Omega(\mathcal{H},T,g)\neq\emptyset$. Define $\{x_n^{(1)}\}_{n=0}^{\infty}$ as the sequence generated by the algorithm in (10), with real sequences $\{a_n\}_{n=0}^{\infty}$, $\{b_n\}_{n=0}^{\infty}$, and $\{c_n\}_{n=0}^{\infty}\subset[0,1]$, where $\sum_{n=0}^{\infty}a_n=\infty$. Suppose the following conditions are satisfied

$\left|\sigma-\frac{r-\gamma\lambda^{2}}{\lambda^{2}}\right| < \frac{\sqrt{(r-\gamma\lambda^{2})^{2}-\lambda^{2}L(2-L)}}{\lambda^{2}}, \qquad r > \gamma\lambda^{2}+\lambda\sqrt{L(2-L)}, \qquad L<1,$

where

$L = 2\sqrt{1+2\gamma_{1}\lambda_{1}^{2}-2r_{1}+\lambda_{1}^{2}}.$

Then, $\{x_n^{(1)}\}_{n=0}^{\infty}$ converges strongly to a solution $s\in F(S)\cap\Omega(\mathcal{H},T,g)$.

Noor’s algorithm in (10) and its variants have been widely studied and applied to variational inclusions, variational inequalities, and related optimization problems. These algorithms are recognized for their efficiency and flexibility, contributing significantly to the field of variational inequalities. However, there remains considerable potential for developing more robust and broadly applicable iterative algorithms for solving GNVI problems. Motivated by the limitations of existing methods, we propose a novel Picard-type iterative algorithm designed to address general variational inequalities and nonexpansive mappings:

(11) $x_0\in H,\qquad z_n=(1-c_n)x_n+c_n\,S\bigl\{x_n-g(x_n)+P_{\mathcal{H}}\bigl[g(x_n)-\sigma Tx_n\bigr]\bigr\},\qquad y_n=(1-b_n)\,S\bigl\{x_n-g(x_n)+P_{\mathcal{H}}\bigl[g(x_n)-\sigma Tx_n\bigr]\bigr\}+b_n\,S\bigl\{z_n-g(z_n)+P_{\mathcal{H}}\bigl[g(z_n)-\sigma Tz_n\bigr]\bigr\},\qquad x_{n+1}=S\bigl\{y_n-g(y_n)+P_{\mathcal{H}}\bigl[g(y_n)-\sigma Ty_n\bigr]\bigr\},$

where $\{b_n\}_{n=0}^{\infty}$ and $\{c_n\}_{n=0}^{\infty}\subset[0,1]$. Algorithm (11) cannot be directly derived from (10) because the update rule for $y_n$ differs fundamentally. In the proposed algorithm (11), $y_n$ is updated as

$y_n=(1-b_n)\,S\bigl\{x_n-g(x_n)+P_{\mathcal{H}}[g(x_n)-\sigma Tx_n]\bigr\}+b_n\,S\bigl\{z_n-g(z_n)+P_{\mathcal{H}}[g(z_n)-\sigma Tz_n]\bigr\},$

whereas in algorithm (10), the update for $y_n^{(1)}$ is a direct convex combination with the previous iterate:

$y_n^{(1)}=(1-b_n)x_n^{(1)}+b_n\,S\bigl\{z_n^{(1)}-g(z_n^{(1)})+P_{\mathcal{H}}[g(z_n^{(1)})-\sigma Tz_n^{(1)}]\bigr\}.$

This structural difference in the $y_n$ update step leads to different iterative behaviors, making it impossible to derive (11) directly from (10).
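As a minimal sketch of how iteration (11) can be organized in code (the paper's own experiments use Mathematica and Matlab; the Python below is only illustrative, with $T$, $g$, $S$, the projection $P_{\mathcal{H}}$, the step $\sigma$, and the parameter sequences supplied by the user):

```python
import numpy as np

def solve_gnvi(x0, T, g, S, P, sigma, b, c, tol=1e-10, max_iter=10_000):
    """Run iteration (11); T, g, S are operators, P the projection onto the
    constraint set, sigma > 0 the step, and b(n), c(n) in [0,1] the parameters."""
    phi = lambda u: S(u - g(u) + P(g(u) - sigma * T(u)))   # u -> S{u - g(u) + P_H[g(u) - sigma*T(u)]}
    x = np.asarray(x0, dtype=float)
    for n in range(max_iter):
        z = (1 - c(n)) * x + c(n) * phi(x)
        y = (1 - b(n)) * phi(x) + b(n) * phi(z)
        x_new = phi(y)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, max_iter

# Example usage with hypothetical 1-D data: H = [0, inf), T(u) = u - 2, g = S = I.
sol, its = solve_gnvi(10.0, T=lambda u: u - 2.0, g=lambda u: u, S=lambda u: u,
                      P=lambda u: np.maximum(u, 0.0), sigma=0.5,
                      b=lambda n: 1/(n + 1), c=lambda n: 1/(n + 1))
print(sol)   # -> 2.0
```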

Building on these methodological advances, recent research has significantly deepened our understanding of variational inequalities by offering innovative frameworks and solution techniques that address real-world challenges.

The literature has seen significant advancements in variational inequality theory through seminal contributions that extend its applicability to diverse practical problems. Nagurney [32] laid the groundwork by establishing a comprehensive framework for modeling complex network interactions, which has served as a cornerstone for subsequent research in optimization and equilibrium analysis. Also, it is interesting to see a collection of papers presented in the book [33], mainly from the 3rd International Conference on Dynamics of Disasters (Kalamata, Greece, 5–9 July 2017), offering valuable strategies for optimizing resource allocation under emergency conditions. More recently, Fargetta, Maugeri, and Scrimali [34] expanded the scope of variational inequality methods by formulating a stochastic Nash equilibrium framework to analyze competitive dynamics in medical supply chains, thereby addressing challenges in healthcare logistics.

These developments underscore the dynamic evolution of variational inequality research and its capacity to address complex, real-world problems. In this context, the new Picard-type iterative algorithm proposed in our study builds upon these advances by relaxing constraints on parameter sequences, ultimately providing a more flexible and efficient approach for solving general variational inequalities.

In Section 2, we establish a strong convergence result (Theorem 2) for the proposed algorithm. Unlike Noor’s algorithm, which requires specific conditions on the parametric sequences for convergence, our algorithm eliminates this requirement while maintaining strong convergence properties. Specifically, Theorem 2 refines the convergence criteria in Theorem 1, leading to broader applicability and enhanced theoretical robustness. Furthermore, Theorem 3 demonstrates the equivalence in convergence between the algorithms in (10) and (11), highlighting their inter-relationship and the efficiency of our approach. The introduction of the Collage–Anticollage Theorem 4 within the context of variational inequalities marks a significant innovation, offering a novel perspective on transformations related to the GNVI problem discussed in (6). To the best of our knowledge, this theorem is presented for the first time in this setting. Additionally, Theorems 5 and 6 explore the continuity of solutions to variational inequalities, a topic rarely addressed in the existing literature. These contributions extend the theoretical framework established by Noor [22], offering new insights into general nonlinear variational inequalities. Beyond theoretical advancements, we validate the practical utility of the proposed algorithm by applying it to convex optimization problems and real-world scenarios. Section 3 provides a modification of the algorithm for solving convex minimization problems, supported by numerical examples. In Section 4, we demonstrate the algorithm’s applicability in real-world contexts, including machine learning tasks such as classification and regression. Comparative analysis shows that our algorithm consistently converges to optimal solutions in fewer iterations than the algorithm in (10), highlighting its superior computational efficiency and practical advantages.

The development of the main results in this paper relies on the following lemmas:

Lemma 3

([35]). Let $\{\varphi_n^{(i)}\}_{n=0}^{\infty}$ for $i=1,2$ be non-negative sequences of real numbers satisfying

$(\forall n\in\mathbb{N})\quad \varphi_{n+1}^{(1)} \le \mu\,\varphi_{n}^{(1)} + \varphi_{n}^{(2)},$

where $\mu\in[0,1)$ and $\lim_{n\to\infty}\varphi_{n}^{(2)}=0$. Then, $\lim_{n\to\infty}\varphi_{n}^{(1)}=0$.

Lemma 4

([36]). Let $\{\varphi_n^{(i)}\}_{n=0}^{\infty}$ for $i=1,2,3$ be non-negative real sequences satisfying the following inequality

$(\forall n\in\mathbb{N})\quad \varphi_{n+1}^{(1)} \le \bigl(1-\varphi_{n}^{(3)}\bigr)\varphi_{n}^{(1)} + \varphi_{n}^{(2)},$

where $\varphi_{n}^{(3)}\in[0,1]$ for all $n\ge 0$, $\sum_{n=1}^{\infty}\varphi_{n}^{(3)}=\infty$, and $\varphi_{n}^{(2)}=o\bigl(\varphi_{n}^{(3)}\bigr)$. Then, $\lim_{n\to\infty}\varphi_{n}^{(1)}=0$.

2. Main Results

Theorem 2.

Let $T : H\to H$ be a relaxed $(\gamma,r)$ cocoercive and $\lambda$-Lipschitz operator, $g : H\to H$ be a relaxed $(\gamma_1,r_1)$ cocoercive and $\lambda_1$-Lipschitz operator, and $S : H\to H$ be a nonexpansive mapping such that $F(S)\cap\Omega(\mathcal{H},T,g)\neq\emptyset$. Let $\{x_n\}_{n=0}^{\infty}$ be the iterative sequence defined by the algorithm in (11) with real sequences $\{b_n\}_{n=0}^{\infty}$, $\{c_n\}_{n=0}^{\infty}\subset[0,1]$. Assume the following conditions hold

(12) $\displaystyle \left|\sigma-\frac{r-\gamma\lambda^{2}}{\lambda^{2}}\right| < \frac{\sqrt{(r-\gamma\lambda^{2})^{2}-4\lambda^{2}L(1-L)}}{\lambda^{2}}, \qquad \frac{2}{\lambda}\sqrt{L(1-L)} < \left|\gamma-\frac{r}{\lambda^{2}}\right| < \frac{1}{\lambda}, \qquad L<\frac{1}{2},$

where

(13) $L = \sqrt{1+2\gamma_{1}\lambda_{1}^{2}-2r_{1}+\lambda_{1}^{2}}.$

Then, the sequence $\{x_n\}_{n=0}^{\infty}$ converges strongly to $s\in F(S)\cap\Omega(\mathcal{H},T,g)$ with the following estimate for each $n\in\mathbb{N}$,

$\|x_{n+1}-s\| \le (2L+\delta)^{2(n+1)}\prod_{k=0}^{n}\Bigl(1-b_kc_k\bigl[1-(2L+\delta)\bigr]\Bigr)\,\|x_0-s\|,$

where

(14) $\delta = \sqrt{1+2\sigma\gamma\lambda^{2}-2\sigma r+\sigma^{2}\lambda^{2}}.$

Proof. 

Let $s\in H$ be a solution, that is, $s\in F(S)\cap\Omega(\mathcal{H},T,g)$. Then,

(15) $s = S\bigl\{s-g(s)+P_{\mathcal{H}}[g(s)-\sigma Ts]\bigr\} = (1-b_n)\,S\bigl\{s-g(s)+P_{\mathcal{H}}[g(s)-\sigma Ts]\bigr\} + b_n\,S\bigl\{s-g(s)+P_{\mathcal{H}}[g(s)-\sigma Ts]\bigr\} = (1-c_n)s + c_n\,S\bigl\{s-g(s)+P_{\mathcal{H}}[g(s)-\sigma Ts]\bigr\}.$

Using (11), (15), and the fact that $P_{\mathcal{H}}$ and $S$ are nonexpansive operators, we obtain

(16) $\|x_{n+1}-s\| = \bigl\|S\bigl\{y_n-g(y_n)+P_{\mathcal{H}}[g(y_n)-\sigma Ty_n]\bigr\} - S\bigl\{s-g(s)+P_{\mathcal{H}}[g(s)-\sigma Ts]\bigr\}\bigr\| \le \|y_n-s-(g(y_n)-g(s))\| + \|g(y_n)-g(s)-\sigma(Ty_n-Ts)\| \le 2\|y_n-s-(g(y_n)-g(s))\| + \|y_n-s-\sigma(Ty_n-Ts)\|.$

Since $T$ is a relaxed $(\gamma,r)$ cocoercive and $\lambda$-Lipschitzian operator,

$\|y_n-s-\sigma(Ty_n-Ts)\|^{2} = \|y_n-s\|^{2} - 2\sigma\langle y_n-s,\,Ty_n-Ts\rangle + \sigma^{2}\|Ty_n-Ts\|^{2} \le \|y_n-s\|^{2} + 2\sigma\gamma\|Ty_n-Ts\|^{2} - 2\sigma r\|y_n-s\|^{2} + \sigma^{2}\|Ty_n-Ts\|^{2} \le \delta^{2}\|y_n-s\|^{2},$

or equivalently

(17) $\|y_n-s-\sigma(Ty_n-Ts)\| \le \delta\|y_n-s\|,$

where $\delta$ is defined by (14).

Since $g$ is a relaxed $(\gamma_1,r_1)$ cocoercive and $\lambda_1$-Lipschitzian operator,

(18) $\|y_n-s-(g(y_n)-g(s))\| \le L\|y_n-s\|,$

where $L$ is defined by (13).

Combining (16), (17), and (18), we have

(19) $\|x_{n+1}-s\| \le (2L+\delta)\|y_n-s\|,$

and from (12) and (14), we know that $2L+\delta<1$ (see Appendix A).

It follows from (11), (15), and the nonexpansivity of the operators $S$ and $P_{\mathcal{H}}$ that

(20) $\|y_n-s\| \le (1-b_n)\bigl\|x_n-g(x_n)+P_{\mathcal{H}}[g(x_n)-\sigma Tx_n]-\bigl(s-g(s)+P_{\mathcal{H}}[g(s)-\sigma Ts]\bigr)\bigr\| + b_n\bigl\|z_n-g(z_n)+P_{\mathcal{H}}[g(z_n)-\sigma Tz_n]-\bigl(s-g(s)+P_{\mathcal{H}}[g(s)-\sigma Ts]\bigr)\bigr\| \le (1-b_n)\|x_n-s-(g(x_n)-g(s))\| + (1-b_n)\|g(x_n)-g(s)-\sigma(Tx_n-Ts)\| + b_n\|z_n-s-(g(z_n)-g(s))\| + b_n\|g(z_n)-g(s)-\sigma(Tz_n-Ts)\| \le 2(1-b_n)\|x_n-s-(g(x_n)-g(s))\| + (1-b_n)\|x_n-s-\sigma(Tx_n-Ts)\| + 2b_n\|z_n-s-(g(z_n)-g(s))\| + b_n\|z_n-s-\sigma(Tz_n-Ts)\|.$

Using the same arguments as above gives us the following estimates

(21) $\|x_n-s-\sigma(Tx_n-Ts)\| \le \delta\|x_n-s\|, \quad \|x_n-s-(g(x_n)-g(s))\| \le L\|x_n-s\|, \quad \|z_n-s-\sigma(Tz_n-Ts)\| \le \delta\|z_n-s\|, \quad \|z_n-s-(g(z_n)-g(s))\| \le L\|z_n-s\|, \quad \|z_n-s\| \le d_n\|x_n-s\|,$

where $d_n = 1-c_n\bigl[1-(2L+\delta)\bigr]$.

Combining (19)–(21), we obtain

(22) $\|x_{n+1}-s\| \le (2L+\delta)^{2}\bigl(1-b_nc_n\bigl[1-(2L+\delta)\bigr]\bigr)\|x_n-s\|.$

As $b_n, c_n\in[0,1]$ for all $n\in\mathbb{N}$ and $2L+\delta<1$, we have $1-b_nc_n\bigl[1-(2L+\delta)\bigr]\le 1$ for all $n\in\mathbb{N}$. Using this fact in (22), we obtain

$\|x_{n+1}-s\| \le (2L+\delta)^{2}\|x_n-s\|,$

which implies

$\|x_{n+1}-s\| \le (2L+\delta)^{2(n+1)}\|x_0-s\|.$

Taking the limit as $n\to\infty$, we conclude that $\lim_{n\to\infty}\|x_n-s\|=0$. □

Theorem 3.

Let $H$, $\mathcal{H}$, $T$, $g$, $S$, $L$, and $\delta$ be defined as in Theorem 2, and let the iterative sequences $\{x_n^{(1)}\}_{n=0}^{\infty}$ and $\{x_n\}_{n=0}^{\infty}$ be generated by (10) and (11), respectively. Assume the conditions in (12) hold, and $\{a_n\}_{n=0}^{\infty}$, $\{b_n\}_{n=0}^{\infty}$, and $\{c_n\}_{n=0}^{\infty}\subset[0,1]$. Then, the following assertions are true:

(i) If $\{x_n^{(1)}\}_{n=0}^{\infty}$ converges strongly to $s\in F(S)\cap\Omega(\mathcal{H},T,g)$, then $\{\|x_n^{(1)}-x_n\|\}_{n=0}^{\infty}$ also converges to 0. Moreover, the following estimate holds for all $n\in\mathbb{N}$:

$\|x_{n+1}^{(1)}-x_{n+1}\| \le (2L+\delta)^{2}\|x_n^{(1)}-x_n\| + \bigl[1+(1+2L+\delta)^{2}(1-a_nb_n)\bigr]\|x_n^{(1)}-s\|.$

Furthermore, the sequence $\{x_n\}_{n=0}^{\infty}$ converges strongly to $s\in F(S)\cap\Omega(\mathcal{H},T,g)$.

(ii) If the sequence $\Bigl\{\frac{1-a_nb_n}{a_n(1-(2L+\delta))}\Bigr\}_{n=0}^{\infty}$ is bounded and $\sum_{n=0}^{\infty}a_n=\infty$, then the sequence $\{\|x_n-x_n^{(1)}\|\}_{n=0}^{\infty}$ converges to 0. Additionally, the following estimate holds for all $n\in\mathbb{N}$:

$\|x_{n+1}-x_{n+1}^{(1)}\| \le \bigl[1-a_n\bigl(1-(2L+\delta)\bigr)\bigr]\|x_n-x_n^{(1)}\| + (1+2L+\delta)^{2}(1-a_nb_n)\|x_n-s\|.$

Moreover, the sequence $\{x_n^{(1)}\}_{n=0}^{\infty}$ converges strongly to $s\in F(S)\cap\Omega(\mathcal{H},T,g)$.

Proof. 

(i) Suppose that $\{x_n^{(1)}\}_{n=0}^{\infty}$ converges strongly to $s\in F(S)\cap\Omega(\mathcal{H},T,g)$. We aim to show that $\{\|x_n^{(1)}-x_n\|\}_{n=0}^{\infty}$ converges to 0. Using (1), (2), (4), (10), (11), and (15), we deduce the following inequalities

$\|x_{n+1}^{(1)}-x_{n+1}\| \le (1-a_n)\|x_n^{(1)}-s\| + (1-a_n)\bigl\|S\bigl\{s-g(s)+P_{\mathcal{H}}[g(s)-\sigma Ts]\bigr\}-S\bigl\{y_n-g(y_n)+P_{\mathcal{H}}[g(y_n)-\sigma Ty_n]\bigr\}\bigr\| + a_n\bigl\|S\bigl\{y_n^{(1)}-g(y_n^{(1)})+P_{\mathcal{H}}[g(y_n^{(1)})-\sigma Ty_n^{(1)}]\bigr\}-S\bigl\{y_n-g(y_n)+P_{\mathcal{H}}[g(y_n)-\sigma Ty_n]\bigr\}\bigr\| \le (1-a_n)\|x_n^{(1)}-s\| + 2(1-a_n)\|y_n-s-(g(y_n)-g(s))\| + (1-a_n)\|y_n-s-\sigma(Ty_n-Ts)\| + 2a_n\|y_n^{(1)}-y_n-(g(y_n^{(1)})-g(y_n))\| + a_n\|y_n^{(1)}-y_n-\sigma(Ty_n^{(1)}-Ty_n)\| \le (1-a_n)\|x_n^{(1)}-s\| + (1-a_n)(2L+\delta)\|y_n^{(1)}-s\| + (2L+\delta)\|y_n^{(1)}-y_n\|,$

$\|y_n^{(1)}-y_n\| \le (1-b_n)\|x_n^{(1)}-s\| + 2(1-b_n)\|x_n-s-(g(x_n)-g(s))\| + (1-b_n)\|x_n-s-\sigma(Tx_n-Ts)\| + 2b_n\|z_n^{(1)}-z_n-(g(z_n^{(1)})-g(z_n))\| + b_n\|z_n^{(1)}-z_n-\sigma(Tz_n^{(1)}-Tz_n)\| \le (1-b_n)(1+2L+\delta)\|x_n^{(1)}-s\| + (1-b_n)(2L+\delta)\|x_n^{(1)}-x_n\| + b_n(2L+\delta)\|z_n^{(1)}-z_n\|,$

as well as

$\|y_n^{(1)}-s\| \le (1-b_n)\|x_n^{(1)}-s\| + b_n(2L+\delta)\|z_n^{(1)}-s\|, \quad \|z_n^{(1)}-s\| \le \bigl[1-c_n\bigl(1-(2L+\delta)\bigr)\bigr]\|x_n^{(1)}-s\|, \quad \|z_n^{(1)}-z_n\| \le \bigl[1-c_n\bigl(1-(2L+\delta)\bigr)\bigr]\|x_n^{(1)}-x_n\|.$

Combining these inequalities, we get

(23) $\|x_{n+1}^{(1)}-x_{n+1}\| \le (2L+\delta)^{2}\bigl[1-b_nc_n\bigl(1-(2L+\delta)\bigr)\bigr]\|x_n^{(1)}-x_n\| + (1-a_n)\|x_n^{(1)}-s\| + (2L+\delta)(1-b_n)\bigl[(1-a_n)+1+2L+\delta\bigr]\|x_n^{(1)}-s\| + (1-a_n)b_n(2L+\delta)^{2}\bigl[1-c_n\bigl(1-(2L+\delta)\bigr)\bigr]\|x_n^{(1)}-s\|.$

Since $\{a_n\}_{n=0}^{\infty}$, $\{b_n\}_{n=0}^{\infty}$, $\{c_n\}_{n=0}^{\infty}\subset[0,1]$ and $2L+\delta<1$, for all $n\in\mathbb{N}$, we have

(24) $(2L+\delta)^{2}<1,\quad 1-b_nc_n\bigl(1-(2L+\delta)\bigr)\le 1,\quad 1-c_n\bigl(1-(2L+\delta)\bigr)\le 1,\quad (1-a_n)b_n\le 1-a_nb_n,\quad 1-a_n\le 1-a_nb_n,\quad 1-b_n\le 1-a_nb_n,\quad 1-a_n\le 1.$

By applying the inequalities in (24) to (23), we derive the following result

(25) $\|x_{n+1}^{(1)}-x_{n+1}\| \le (2L+\delta)^{2}\|x_n^{(1)}-x_n\| + \bigl[1+(1+2L+\delta)^{2}(1-a_nb_n)\bigr]\|x_n^{(1)}-s\|.$

Define $\varphi_n^{(1)}:=\|x_n^{(1)}-x_n\|$, $\varphi_n^{(2)}:=\bigl[1+(1+2L+\delta)^{2}(1-a_nb_n)\bigr]\|x_n^{(1)}-s\|$, and $\mu:=(2L+\delta)^{2}\in[0,1)$, for all $n\in\mathbb{N}$. Given the assumption $\lim_{n\to\infty}\|x_n^{(1)}-s\|=0$, it follows that $\lim_{n\to\infty}\varphi_n^{(2)}=0$. It is straightforward to verify that (25) satisfies the conditions of Lemma 3. By applying the conclusion of Lemma 3, we obtain $\lim_{n\to\infty}\|x_n^{(1)}-x_n\|=0$. Furthermore, we note the following inequality for all $n\in\mathbb{N}$,

$\|x_n-s\| \le \|x_n^{(1)}-x_n\| + \|x_n^{(1)}-s\|.$

Taking the limit as $n\to\infty$, we conclude that $\lim_{n\to\infty}\|x_n-s\|=0$, since

$\lim_{n\to\infty}\|x_n^{(1)}-s\| = \lim_{n\to\infty}\|x_n^{(1)}-x_n\| = 0.$

(ii) Let us assume that the sequence $\Bigl\{\frac{1-a_nb_n}{a_n(1-(2L+\delta))}\Bigr\}_{n=0}^{\infty}$ is bounded and $\sum_{n=0}^{\infty}a_n=\infty$. By Theorem 2, it follows that $\lim_{n\to\infty}x_n=s$. We now demonstrate that the sequence $\{x_n^{(1)}\}_{n=0}^{\infty}$ converges strongly to $s$. Utilizing (1), (2), (4), (10), (11), and (15), we derive the following inequalities:

(26) $\|x_{n+1}-x_{n+1}^{(1)}\| \le (1-a_n)\|x_n-x_n^{(1)}\| + (1-a_n)\|x_n-s\| + (1-a_n)(2L+\delta)\|y_n-s\| + a_n(2L+\delta)\|y_n-y_n^{(1)}\|,$

(27) $\|y_n-y_n^{(1)}\| \le (1-b_n)\|x_n-x_n^{(1)}\| + (1-b_n)(1+2L+\delta)\|x_n-s\| + b_n(2L+\delta)\|z_n-z_n^{(1)}\|,$

(28) $\|z_n-z_n^{(1)}\| \le \bigl[1-c_n\bigl(1-(2L+\delta)\bigr)\bigr]\|x_n-x_n^{(1)}\|.$

From the proof of Theorem 2, we know that

(29) $\|y_n-s\| \le (2L+\delta)\bigl[1-b_nc_n\bigl(1-(2L+\delta)\bigr)\bigr]\|x_n-s\|.$

Combining (26)–(29), we obtain

(30) $\|x_{n+1}-x_{n+1}^{(1)}\| \le \Bigl\{(1-a_n) + a_n(2L+\delta)(1-b_n) + a_nb_n(2L+\delta)^{2}\bigl[1-c_n\bigl(1-(2L+\delta)\bigr)\bigr]\Bigr\}\|x_n-x_n^{(1)}\| + \Bigl\{(1-a_n) + (1-a_n)(2L+\delta)^{2}\bigl[1-b_nc_n\bigl(1-(2L+\delta)\bigr)\bigr] + a_n(2L+\delta)(1-b_n)(1+2L+\delta)\Bigr\}\|x_n-s\|.$

Since $a_n, b_n, c_n\in[0,1]$ and $2L+\delta\in(0,1)$, for all $n\in\mathbb{N}$, we have

(31) $(2L+\delta)^{2}<2L+\delta,\quad 1-b_nc_n\bigl(1-(2L+\delta)\bigr)\le 1,\quad a_nb_n\le a_n,\quad 1-c_n\bigl(1-(2L+\delta)\bigr)\le 1.$

Applying the inequalities in (31) to (30) gives

(32) $\|x_{n+1}-x_{n+1}^{(1)}\| \le \bigl[1-a_n\bigl(1-(2L+\delta)\bigr)\bigr]\|x_n-x_n^{(1)}\| + (1+2L+\delta)^{2}(1-a_nb_n)\|x_n-s\|.$

Now, we define the sequences $\varphi_n^{(1)}:=\|x_n-x_n^{(1)}\|$,

$\varphi_n^{(2)}:=(1+2L+\delta)^{2}(1-a_nb_n)\|x_n-s\|,\qquad \varphi_n^{(3)}:=a_n\bigl(1-(2L+\delta)\bigr)\in(0,1),\qquad q_n:=\frac{(1+2L+\delta)^{2}(1-a_nb_n)}{a_n\bigl(1-(2L+\delta)\bigr)},$

for all $n\in\mathbb{N}$. Note that $\sum_{n=0}^{\infty}a_n=\infty$ implies $\sum_{n=0}^{\infty}\varphi_n^{(3)}=\infty$.

Since the sequence $\{q_n\}_{n=0}^{\infty}$ is bounded, there exists $K>0$ such that $|q_n|<K$ for all $n\in\mathbb{N}$. For any $\varepsilon>0$, since $\theta_n:=\|x_n-s\|$ converges to 0 and $\varepsilon/K>0$, there exists $n_0\in\mathbb{N}$ such that $\theta_n<\varepsilon/K$ for all $n\ge n_0$. Consequently, $q_n\theta_n<\varepsilon$ for all $n\ge n_0$, which implies $\lim_{n\to\infty}\varphi_n^{(2)}/\varphi_n^{(3)}=0$, i.e., $\varphi_n^{(2)}=o\bigl(\varphi_n^{(3)}\bigr)$. Thus, inequality (32) satisfies the requirements of Lemma 4, and by its conclusion, we deduce that $\lim_{n\to\infty}\|x_n-x_n^{(1)}\|=0$. Since $\lim_{n\to\infty}\|x_n-s\|=0$ and

$\|x_n^{(1)}-s\| \le \|x_n-x_n^{(1)}\| + \|x_n-s\|,$

it follows that $\lim_{n\to\infty}\|x_n^{(1)}-s\|=0$. □

After establishing the strong convergence properties of our proposed algorithm in Theorems 2 and 3, we now present additional results that further illustrate the robustness and practical applicability of our approach. In the following theorems, we first quantify the error estimate between an arbitrary point and the solution via the operator Φ, and then we explore the relationship between Φ and its approximation Φ˜.

Specifically, Theorem 4 provides rigorous bounds linking the residual $\|\Phi(x)-x\|$ to the distance between any point $x\in H$ and a solution $x^{*}$, thereby offering insights into the stability of the method. Building on this result, Theorem 5 establishes an upper bound on the distance between the fixed point of the exact operator $\Phi$ and that of its approximation $\widetilde{\Phi}$. Finally, Theorem 6 delivers a direct error bound in terms of a prescribed tolerance $\varepsilon$, which is particularly useful for practical implementations.

Theorem 4.

Let $H$, $\mathcal{H}$, $T$, $g$, $S$, $L$, and $\delta$ be as defined in Theorem 2, and suppose the conditions in (12) are satisfied. Then, for any solution $x^{*}\in F(S)\cap\Omega(\mathcal{H},T,g)$ and for any $x\in H$, the following inequalities hold:

(33) $\displaystyle \frac{1}{1+(2L+\delta)}\,\|\Phi(x)-x\| \le \|x-x^{*}\| \le \frac{1}{1-(2L+\delta)}\,\|\Phi(x)-x\|,$

where the operator $\Phi : H\to H$ is defined as $\Phi(x)=S\bigl\{x-g(x)+P_{\mathcal{H}}\bigl[g(x)-\sigma Tx\bigr]\bigr\}$.

Proof. 

From Equation (9), we know that $\Phi(x^{*})=x^{*}$. If $x=x^{*}$, inequality (33) is trivially satisfied. On the other hand, if $x\neq x^{*}$, we have

(34) $\|x-x^{*}\| = \|x-\Phi(x^{*})\| \le \|\Phi(x)-\Phi(x^{*})\| + \|\Phi(x)-x\| \le \|x-x^{*}-(g(x)-g(x^{*}))\| + \bigl\|P_{\mathcal{H}}[g(x)-\sigma Tx]-P_{\mathcal{H}}[g(x^{*})-\sigma Tx^{*}]\bigr\| + \|\Phi(x)-x\| \le \|x-x^{*}-(g(x)-g(x^{*}))\| + \|g(x)-g(x^{*})-\sigma(Tx-Tx^{*})\| + \|\Phi(x)-x\| \le 2\|x-x^{*}-(g(x)-g(x^{*}))\| + \|x-x^{*}-\sigma(Tx-Tx^{*})\| + \|\Phi(x)-x\|,$

as well as

(35) $\|x-x^{*}-(g(x)-g(x^{*}))\| \le \sqrt{1+2\gamma_{1}\lambda_{1}^{2}-2r_{1}+\lambda_{1}^{2}}\;\|x-x^{*}\|,$

(36) $\|x-x^{*}-\sigma(Tx-Tx^{*})\| \le \sqrt{1+2\sigma\gamma\lambda^{2}-2\sigma r+\sigma^{2}\lambda^{2}}\;\|x-x^{*}\|.$

Inserting (35) and (36) into (34), we obtain

$\|x-x^{*}\| \le \Bigl(2\sqrt{1+2\gamma_{1}\lambda_{1}^{2}-2r_{1}+\lambda_{1}^{2}}+\sqrt{1+2\sigma\gamma\lambda^{2}-2\sigma r+\sigma^{2}\lambda^{2}}\Bigr)\|x-x^{*}\| + \|\Phi(x)-x\|,$

or equivalently

$\|x-x^{*}\| \le \frac{\|\Phi(x)-x\|}{1-\Bigl(2\sqrt{1+2\gamma_{1}\lambda_{1}^{2}-2r_{1}+\lambda_{1}^{2}}+\sqrt{1+2\sigma\gamma\lambda^{2}-2\sigma r+\sigma^{2}\lambda^{2}}\Bigr)}.$

On the other hand, we have

$\|\Phi(x)-x\| = \|\Phi(x)-x^{*}+x^{*}-x\| \le \|\Phi(x)-\Phi(x^{*})\| + \|x^{*}-x\| = \bigl\|S\bigl\{x-g(x)+P_{\mathcal{H}}[g(x)-\sigma Tx]\bigr\}-S\bigl\{x^{*}-g(x^{*})+P_{\mathcal{H}}[g(x^{*})-\sigma Tx^{*}]\bigr\}\bigr\| + \|x^{*}-x\|.$

By employing similar arguments as in (34)–(36), we deduce

$\|\Phi(x)-x\| \le \Bigl(2\sqrt{1+2\gamma_{1}\lambda_{1}^{2}-2r_{1}+\lambda_{1}^{2}}+\sqrt{1+2\sigma\gamma\lambda^{2}-2\sigma r+\sigma^{2}\lambda^{2}}+1\Bigr)\|x-x^{*}\|,$

or equivalently

$\|\Phi(x)-x\| \le \bigl(1+2L+\delta\bigr)\|x-x^{*}\|.$

Combining the bounds derived from (34)–(36), we finally arrive at

$\frac{1}{1+(2L+\delta)}\,\|\Phi(x)-x\| \le \|x-x^{*}\| \le \frac{1}{1-(2L+\delta)}\,\|\Phi(x)-x\|,$

which completes the proof. □
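The two-sided bound (33) is an a posteriori estimate: the computable residual $\|\Phi(x)-x\|$ brackets the unknown distance to the solution. A toy numerical check (with a hypothetical one-dimensional contraction standing in for $\Phi$ and $q$ playing the role of $2L+\delta$):

```python
# Hypothetical 1-D illustration of the bounds in (33): with q < 1 a Lipschitz
# constant of Phi, ||Phi(x)-x||/(1+q) <= ||x - x*|| <= ||Phi(x)-x||/(1-q).
q = 0.6                              # plays the role of 2L + delta
Phi = lambda x: 0.6 * x              # toy operator with fixed point x* = 0
x_star, x = 0.0, 1.3                 # trial point

residual = abs(Phi(x) - x)
lower, upper = residual / (1 + q), residual / (1 - q)
print(lower <= abs(x - x_star) <= upper)   # True
```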

Transitioning from error estimates for the exact operator, Theorem 5 shifts the focus to the interplay between the original operator Φ and its approximation Φ˜. This theorem establishes an upper bound for the distance between their respective fixed points, thus providing a measure of how closely the approximation tracks the behavior of the exact operator. The theorem is stated as follows:

Theorem 5.

Let $T$, $g$, $S$, $\Phi$, $L$, and $\delta$ be as defined in Theorem 4. Assume that $\widetilde{\Phi} : H\to H$ is a map with a fixed point $\tilde{x}\in H$. Further, suppose the conditions in (12) are satisfied. Then, for a solution $x^{*}\in F(S)\cap\Omega(\mathcal{H},T,g)$, the following holds

(37) $\|x^{*}-\tilde{x}\| \le \dfrac{1}{1-(2L+\delta)}\,\sup_{x\in H}\|\Phi(x)-\widetilde{\Phi}(x)\|.$

Proof. 

By (9), we know that $\Phi(x^{*})=x^{*}$. If $x^{*}=\tilde{x}$, then inequality (37) is directly satisfied. If $x^{*}\neq\tilde{x}$, then using the same arguments as in the proof of Theorem 4, we obtain

(38) $\|x^{*}-\tilde{x}\| = \|\Phi(x^{*})-\widetilde{\Phi}(\tilde{x})\| \le \|\Phi(x^{*})-\Phi(\tilde{x})\| + \|\Phi(\tilde{x})-\widetilde{\Phi}(\tilde{x})\| \le \bigl\|S\bigl\{x^{*}-g(x^{*})+P_{\mathcal{H}}[g(x^{*})-\sigma Tx^{*}]\bigr\}-S\bigl\{\tilde{x}-g(\tilde{x})+P_{\mathcal{H}}[g(\tilde{x})-\sigma T\tilde{x}]\bigr\}\bigr\| + \sup_{x\in H}\|\Phi(x)-\widetilde{\Phi}(x)\| \le 2\|x^{*}-\tilde{x}-(g(x^{*})-g(\tilde{x}))\| + \|x^{*}-\tilde{x}-\sigma(Tx^{*}-T\tilde{x})\| + \sup_{x\in H}\|\Phi(x)-\widetilde{\Phi}(x)\|,$

as well as

(39) $\|x^{*}-\tilde{x}-(g(x^{*})-g(\tilde{x}))\| \le \sqrt{1+2\gamma_{1}\lambda_{1}^{2}-2r_{1}+\lambda_{1}^{2}}\;\|x^{*}-\tilde{x}\|,$

(40) $\|x^{*}-\tilde{x}-\sigma(Tx^{*}-T\tilde{x})\| \le \sqrt{1+2\sigma\gamma\lambda^{2}-2\sigma r+\sigma^{2}\lambda^{2}}\;\|x^{*}-\tilde{x}\|.$

Combining (38)–(40), we derive

$\|x^{*}-\tilde{x}\| \le \Bigl(2\sqrt{1+2\gamma_{1}\lambda_{1}^{2}-2r_{1}+\lambda_{1}^{2}}+\sqrt{1+2\sigma\gamma\lambda^{2}-2\sigma r+\sigma^{2}\lambda^{2}}\Bigr)\|x^{*}-\tilde{x}\| + \sup_{x\in H}\|\Phi(x)-\widetilde{\Phi}(x)\|.$

Simplifying further, this yields

$\|x^{*}-\tilde{x}\| \le \frac{1}{1-(2L+\delta)}\,\sup_{x\in H}\|\Phi(x)-\widetilde{\Phi}(x)\|,$

which completes the proof. □

Finally, Theorem 6 extends this analysis by providing a direct error bound in terms of a prescribed tolerance $\varepsilon$. This result is particularly valuable for practical implementations, as it offers a clear metric for how well the approximating operator tracks the fixed point of $\Phi$. The theorem is formulated as follows:

Theorem 6.

Let $T$, $g$, $S$, $\Phi$, $\widetilde{\Phi}$, $L$, and $\delta$ be as defined in Theorem 5. Let $\widetilde{\Phi} : H\to H$ be a map with a fixed point $\tilde{x}\in H$. Suppose the conditions stated in (12) hold. Additionally, assume that

(41) $\sup_{x\in H}\|\Phi(x)-\widetilde{\Phi}(x)\| \le \varepsilon,$

for some fixed $\varepsilon>0$. Then, for a fixed point $\tilde{x}\in H$ such that $\widetilde{\Phi}(\tilde{x})=\tilde{x}$, the following inequality holds

$\|\tilde{x}-\Phi(\tilde{x})\| \le \dfrac{1+2L+\delta}{1-(2L+\delta)}\,\varepsilon.$

Proof. 

Let $\Phi(x^{*})=x^{*}$. From (38)–(40), we have

$\|\Phi(x^{*})-\Phi(\tilde{x})\| \le (2L+\delta)\|x^{*}-\tilde{x}\|.$

Then, using this inequality, as well as (37) and (41), we obtain

$\|\tilde{x}-\Phi(\tilde{x})\| \le \|\tilde{x}-x^{*}\| + \|x^{*}-\Phi(\tilde{x})\| = \|\tilde{x}-x^{*}\| + \|\Phi(x^{*})-\Phi(\tilde{x})\| \le \frac{1+2L+\delta}{1-(2L+\delta)}\,\varepsilon,$

which had to be proven. □

3. An Application to the Convex Minimization Problem

Let $H$ be a Hilbert space, $\mathcal{H}$ a closed and convex subset of $H$, and $f : \mathcal{H}\to\mathbb{R}$ a convex function. The problem of finding the minimizers of $f$ is referred to as the convex minimization problem, which is formulated as follows

(42) $\min_{x\in\mathcal{H}} f(x).$

The minimization problem (42) can equivalently be expressed as a fixed-point problem:

A point $x^{*}$ is a solution to the minimization problem if and only if $P_{\mathcal{H}}(I-\sigma\nabla f)x^{*}=x^{*}$, where $P_{\mathcal{H}}$ is the metric projection onto $\mathcal{H}$, $\nabla f$ denotes the gradient of the Fréchet differentiable function $f$, and $\sigma>0$ is a constant.

Moreover, the minimization problem (42) can also be reformulated as a variational inequality problem:

A point $x^{*}$ is a solution to the minimization problem if and only if $x^{*}$ satisfies the variational inequality $\langle\nabla f(x^{*}),\,v-x^{*}\rangle\ge 0$ for all $v\in\mathcal{H}$.

Now, let $S : \mathcal{H}\to\mathcal{H}$ be a nonexpansive operator, and let $F(S)$ represent the set of fixed points of $S$. If $x^{*}\in F(S)\cap\Omega(\mathcal{H},\nabla f,I)$, then for any $\sigma>0$ the following holds:

$P_{\mathcal{H}}\bigl[x^{*}-\sigma\nabla f(x^{*})\bigr] = x^{*} = Sx^{*} = S\bigl\{P_{\mathcal{H}}\bigl[x^{*}-\sigma\nabla f(x^{*})\bigr]\bigr\},$

since $x^{*}$ is both a solution to the problem (42) and a fixed point of $S$.

Based on these observations, if we set $g=I$ (the identity operator) and $T=\nabla f$ in the iterative algorithm (11), we derive the following algorithm, which converges to a point that is both a solution to the minimization problem (42) and a fixed point of $S$:

(43) $x_0\in H,\qquad x_{n+1}=S\bigl[P_{\mathcal{H}}\bigl(y_n-\sigma\nabla f(y_n)\bigr)\bigr],\qquad y_n=(1-b_n)\,S\bigl[P_{\mathcal{H}}\bigl(x_n-\sigma\nabla f(x_n)\bigr)\bigr]+b_n\,S\bigl[P_{\mathcal{H}}\bigl(z_n-\sigma\nabla f(z_n)\bigr)\bigr],\qquad z_n=(1-c_n)x_n+c_n\,S\bigl[P_{\mathcal{H}}\bigl(x_n-\sigma\nabla f(x_n)\bigr)\bigr],$

where $\{b_n\}_{n=0}^{\infty}$, $\{c_n\}_{n=0}^{\infty}\subset[0,1]$.

Theorem 7.

Let $S$, $L$, and $\delta$ be defined as in Theorem 2 and $F(S)\neq\emptyset$. Let $f : \mathcal{H}\to\mathbb{R}$ be a convex mapping such that its gradient $\nabla f$ is a relaxed $(\gamma,r)$ cocoercive and $\lambda$-Lipschitz mapping from $\mathcal{H}$ to $H$. Assume that $F(S)\cap\Omega(\mathcal{H},\nabla f,I)\neq\emptyset$. Define the iterative sequence $\{x_n\}_{n=0}^{\infty}$ by the algorithm in (43) with real sequences $\{b_n\}_{n=0}^{\infty}$, $\{c_n\}_{n=0}^{\infty}\subset[0,1]$. In addition to the condition (12) in Theorem 2, assume the following condition is satisfied

(44) $r_{1}-\gamma_{1}\le 1.$

Then, the sequence $\{x_n\}_{n=0}^{\infty}$ converges strongly to $s\in F(S)$, and the following estimate holds

$(\forall n\in\mathbb{N})\quad \|x_{n+1}-s\| \le \delta^{2(n+1)}\prod_{k=0}^{n}\bigl(1-b_kc_k(1-\delta)\bigr)\,\|x_0-s\|.$

Proof. 

Set $g=I$ and $T=\nabla f$ in Theorem 2. The mapping $g$ is 1-Lipschitzian and relaxed $(\gamma_1,r_1)$ cocoercive for every $\gamma_1,r_1>0$ satisfying condition (44); in particular, choosing $r_1-\gamma_1=1$ gives $L=0$ in (13), so the estimate of Theorem 2 reduces to the one stated above. Consequently, by Theorem 2, the sequence $\{x_n\}_{n=0}^{\infty}$ converges strongly to a point $s\in F(S)\cap\Omega(\mathcal{H},\nabla f,I)$. □

Example 2.

Let

$H = \Bigl\{x=\{x_k\}_{k=0}^{\infty} : \Bigl(\textstyle\sum_{k=0}^{\infty}x_k^{2}\Bigr)^{1/2}<\infty \ \text{and} \ x_k\in\mathbb{R} \ \text{for all} \ k\in\mathbb{N}_0\Bigr\}$

denote a real Hilbert space equipped with the norm $\|x\|=\langle x,x\rangle^{1/2}$, where $\langle x,y\rangle=\sum_{k=0}^{\infty}x_ky_k$ for $x,y\in H$. Additionally, the set $\mathcal{H}=\bigl\{x=\{x_k\}_{k=0}^{\infty} : \|x\|\le 1\bigr\}$ is a closed and convex subset of $H$.

Now, we consider the function $f : \mathcal{H}\to\mathbb{R}$ defined by $f(x)=\|x^{2}\|^{2}+\|x\|^{2}$, where $x^{2}=\{x_k^{2}\}_{k=0}^{\infty}$. The solution to the minimization problem (42) for $f$ is the zero vector $\mathbf{0}=\{0\}_{k=0}^{\infty}$.

From [37] (Theorem 2.4.1, p. 167), the Fréchet derivative of $f$ at a point $x$ is $\nabla f(x)=4x^{3}+2x=\{4x_k^{3}\}_{k=0}^{\infty}+\{2x_k\}_{k=0}^{\infty}$, which is unique. For $x_k,y_k\in[-1,1]$, we have

$\bigl(4x_k^{3}-4y_k^{3}+2x_k-2y_k\bigr)(x_k-y_k) \ge -\tfrac{1}{784}\bigl(4x_k^{3}-4y_k^{3}+2x_k-2y_k\bigr)^{2}+2(x_k-y_k)^{2},$

from which we deduce

$\langle\nabla f(x)-\nabla f(y),\,x-y\rangle = \sum_{k=0}^{\infty}\bigl(4x_k^{3}-4y_k^{3}+2x_k-2y_k\bigr)(x_k-y_k) \ge -\tfrac{1}{784}\|\nabla f(x)-\nabla f(y)\|^{2}+2\|x-y\|^{2}.$

This means that $\nabla f$ is a relaxed $(1/784,\,2)$ cocoercive operator. Additionally, since

$\|\nabla f(x)-\nabla f(y)\|^{2} = \sum_{k=0}^{\infty}\bigl(4x_k^{3}-4y_k^{3}+2x_k-2y_k\bigr)^{2} \le 14^{2}\,\|x-y\|^{2},$

$\nabla f$ is a 14-Lipschitz function.

Let $S : \mathcal{H}\to\mathcal{H}$ be defined by $Sx=\sin x=\{\sin x_k\}_{k=0}^{\infty}$. The operator $S$ is nonexpansive since

$\|Sx-Sy\|^{2} = \sum_{k=0}^{\infty}\Bigl(2\sin\frac{x_k-y_k}{2}\,\cos\frac{x_k+y_k}{2}\Bigr)^{2} \le \sum_{k=0}^{\infty}(x_k-y_k)^{2} = \|x-y\|^{2}.$

Moreover, $F(S)=\{\mathbf{0}\}$. Based on assumptions (12) and (44), we set $\sigma=1/392$, $\gamma_1=1/2$, and $r_1=1.4999995$. Consequently, we calculate $\delta=0.99616612$ and $L=0.001$, which yields $2L+\delta=0.99816612<1$. It is evident that these parameter choices satisfy conditions (12) and (44).
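The constants quoted above can be reproduced directly from formulas (13) and (14); the short check below (illustrative only) uses $\gamma=1/784$, $r=2$, $\lambda=14$ for $\nabla f$ and $\gamma_1=1/2$, $r_1=1.4999995$, $\lambda_1=1$ for $g=I$:

```python
import math

# Check of the constants reported in Example 2 (values taken from the text).
gamma, r, lam = 1/784, 2.0, 14.0          # cocoercivity and Lipschitz data of grad f
gamma1, r1, lam1 = 1/2, 1.4999995, 1.0    # data of g = I
sigma = 1/392

L = math.sqrt(1 + 2*gamma1*lam1**2 - 2*r1 + lam1**2)                        # formula (13)
delta = math.sqrt(1 + 2*sigma*gamma*lam**2 - 2*sigma*r + sigma**2*lam**2)   # formula (14)
print(L, delta, 2*L + delta)   # ~0.001, ~0.99616612, ~0.99816612 < 1
```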

Next, let $a_n=b_n=c_n=1/(n+1)$ for all $n$. To ensure clarity, we denote a sequence of elements of the Hilbert space $H$ by $\{x_n\}_{n=0}^{\infty}$, where $x_n=(x_{n,0},x_{n,1},x_{n,2},\ldots)\in H$. Under these notations, the iterative algorithms obtained from (43) and from (10) are reformulated as follows:

(45) $x_0=\{x_{0,k}\}_{k=0}^{\infty}\in H,\quad x_{n+1,k}=\sin\Bigl(P_{\mathcal{H}}\Bigl[\tfrac{390\,y_{n,k}-4(y_{n,k})^{3}}{392}\Bigr]\Bigr),\quad y_{n,k}=\Bigl(1-\tfrac{1}{n+1}\Bigr)\sin\Bigl(P_{\mathcal{H}}\Bigl[\tfrac{390\,x_{n,k}-4(x_{n,k})^{3}}{392}\Bigr]\Bigr)+\tfrac{1}{n+1}\sin\Bigl(P_{\mathcal{H}}\Bigl[\tfrac{390\,z_{n,k}-4(z_{n,k})^{3}}{392}\Bigr]\Bigr),\quad z_{n,k}=\Bigl(1-\tfrac{1}{n+1}\Bigr)x_{n,k}+\tfrac{1}{n+1}\sin\Bigl(P_{\mathcal{H}}\Bigl[\tfrac{390\,x_{n,k}-4(x_{n,k})^{3}}{392}\Bigr]\Bigr)$

and

(46) $x_0=\{x_{0,k}\}_{k=0}^{\infty}\in H,\quad x_{n+1,k}=\Bigl(1-\tfrac{1}{n+1}\Bigr)x_{n,k}+\tfrac{1}{n+1}\sin\Bigl(P_{\mathcal{H}}\Bigl[\tfrac{390\,y_{n,k}-4(y_{n,k})^{3}}{392}\Bigr]\Bigr),\quad y_{n,k}=\Bigl(1-\tfrac{1}{n+1}\Bigr)x_{n,k}+\tfrac{1}{n+1}\sin\Bigl(P_{\mathcal{H}}\Bigl[\tfrac{390\,z_{n,k}-4(z_{n,k})^{3}}{392}\Bigr]\Bigr),\quad z_{n,k}=\Bigl(1-\tfrac{1}{n+1}\Bigr)x_{n,k}+\tfrac{1}{n+1}\sin\Bigl(P_{\mathcal{H}}\Bigl[\tfrac{390\,x_{n,k}-4(x_{n,k})^{3}}{392}\Bigr]\Bigr),$

where $P_{\mathcal{H}} : H\to\mathcal{H}$ is defined by $P_{\mathcal{H}}x=x$ when $x\in\mathcal{H}$, and $P_{\mathcal{H}}x=x/\|x\|$ when $x\notin\mathcal{H}$.

Let the initial point for both iterative processes be the sequence $x_0=\{1/10^{k+1}\}_{k=0}^{\infty}\in H$. From Table 1 and Table 2, as well as from Figure 1, it is evident that both algorithms (45) and (46) converge strongly to the point $\mathbf{0}=(0,0,0,\ldots)$. Furthermore, the algorithm in (45) exhibits faster convergence compared to the algorithm in (46).

As a prototype, consider the mappings $\Phi,\widetilde{\Phi} : \mathcal{H}\to H$ defined as

$\Phi(x)=\Bigl\{\sin\Bigl(P_{\mathcal{H}}\Bigl[\tfrac{390\,x_{k}-4(x_{k})^{3}}{392}\Bigr]\Bigr)\Bigr\}_{k=0}^{\infty}$

and $\widetilde{\Phi}(x)=\{x_k^{3}\}_{k=0}^{\infty}=(x_0^{3},x_1^{3},x_2^{3},x_3^{3},\ldots)$. With these definitions, the results of Theorems 4, 5, and 6 can be straightforwardly verified. All computations in this example were performed using Wolfram Mathematica 14.2.

4. Numerical Experiments

In this section, we adapt and apply the iterative algorithm (11) within the context of machine learning to demonstrate the practical significance of the theoretical results derived in this study. By doing so, we highlight the real-world applicability of the proposed methods beyond their theoretical foundations. Furthermore, we compare the performance of algorithm (11) with algorithm (10), providing additional support for the validity of the theorems presented in previous sections.

Our focus is on the framework of loss minimization in machine learning, employing two novel projected gradient algorithms to solve related optimization problems. Specifically, we consider a regression/classification setup characterized by a dataset consisting of m samples and d attributes, represented as XRm×d, with corresponding outcomes (labels) YRm. The optimization problem is formulated as follows:

$\min F(w)=\min_{w\in\mathbb{R}^{d}}\tfrac{1}{2}\|Xw-Y\|_{2}^{2}.$

Using the projection operator $P_{\mathcal{H}}$ onto the positive quadrant, $S=I$, $w\in\mathbb{R}^{d}$, $g=I$, and $T=\nabla F$, we define two iterative algorithms:

(47) $w_0\in\mathcal{H},\qquad w_{n+1}=P_{\mathcal{H}}\bigl[u_n-\sigma\nabla F(u_n)\bigr],\qquad u_n=(1-b_n)P_{\mathcal{H}}\bigl[w_n-\sigma\nabla F(w_n)\bigr]+b_nP_{\mathcal{H}}\bigl[v_n-\sigma\nabla F(v_n)\bigr],\qquad v_n=(1-c_n)w_n+c_nP_{\mathcal{H}}\bigl[w_n-\sigma\nabla F(w_n)\bigr]$

and

(48) $w_0\in\mathcal{H},\qquad w_{n+1}=(1-a_n)w_n+a_nP_{\mathcal{H}}\bigl[u_n-\sigma\nabla F(u_n)\bigr],\qquad u_n=(1-b_n)w_n+b_nP_{\mathcal{H}}\bigl[v_n-\sigma\nabla F(v_n)\bigr],\qquad v_n=(1-c_n)w_n+c_nP_{\mathcal{H}}\bigl[w_n-\sigma\nabla F(w_n)\bigr],$

where $\{a_n\}_{n=0}^{\infty}$, $\{b_n\}_{n=0}^{\infty}$, and $\{c_n\}_{n=0}^{\infty}\subset[0,1]$. To compute the optimal value of the step size $\sigma$, a backtracking algorithm is employed. All numerical implementations and simulations were carried out using Matlab, Ver. R2023b.
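For illustration, a minimal Python sketch of iteration (47) for the least-squares loss above, with projection onto the nonnegative orthant; the Armijo-type backtracking rule shown is an assumption made only for this sketch, since the exact backtracking procedure used in the Matlab experiments is not specified here:

```python
import numpy as np

def grad_F(w, X, Y):
    """Gradient of F(w) = 0.5 * ||Xw - Y||_2^2."""
    return X.T @ (X @ w - Y)

def backtracking_sigma(w, X, Y, sigma0=1.0, beta=0.5, c=1e-4, max_halvings=50):
    """Armijo-type backtracking for the step size (an assumed rule, for illustration)."""
    F = lambda w: 0.5 * np.linalg.norm(X @ w - Y) ** 2
    g = grad_F(w, X, Y)
    sigma = sigma0
    for _ in range(max_halvings):
        if F(w - sigma * g) <= F(w) - c * sigma * np.dot(g, g):
            break
        sigma *= beta
    return sigma

def algorithm_47(X, Y, b, c, n_iter=200):
    """Iteration (47): S = I, g = I, T = grad F, projection onto the nonnegative orthant."""
    proj = lambda w: np.maximum(w, 0.0)
    step = lambda w, sigma: proj(w - sigma * grad_F(w, X, Y))
    w = np.zeros(X.shape[1])
    for n in range(n_iter):
        sigma = backtracking_sigma(w, X, Y)
        v = (1 - c(n)) * w + c(n) * step(w, sigma)
        u = (1 - b(n)) * step(w, sigma) + b(n) * step(v, sigma)
        w = step(u, sigma)
    return w

# Synthetic data (hypothetical, only to exercise the code).
rng = np.random.default_rng(0)
X, w_true = rng.standard_normal((50, 5)), np.abs(rng.standard_normal(5))
Y = X @ w_true
w_hat = algorithm_47(X, Y, b=lambda n: 1/(n + 1), c=lambda n: 1/(n + 1))
print(np.round(w_hat, 3))
```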

The real-world datasets used in this study are:

+. Aligned Dataset (in Swarm Behavior): Swarm behavior refers to the collective dynamics observed in groups of entities such as birds, insects (e.g., ants), fish, or animals moving cohesively in large masses. These entities exhibit synchronized motion at the same speed and direction while avoiding mutual interference. The Aligned dataset comprises pre-classified data relevant to swarm behavior, including 24,017 instances with 2400 attributes (see https://archive.ics.uci.edu/ml/index.php (accessed on 14 February 2025)).

+. COVID-19 Dataset: COVID-19, an ongoing viral epidemic, primarily causes mild to moderate respiratory infections but can lead to severe complications, particularly in elderly individuals and those with underlying conditions such as cardiovascular disease, diabetes, chronic respiratory illnesses, and cancer. The dataset is a digitized collection of patient records detailing symptoms, medical history, and risk classifications. It is designed to facilitate predictive modeling for patient risk assessment, resource allocation, and medical device planning. This dataset includes 1,048,576 instances with 21 attributes (see https://datos.gob.mx/busca/dataset/informacion-referente-a-casos-covid-19-en-mexico (accessed on 14 February 2025)).

+. Predict Diabetes Dataset: Provided by the National Institute of Diabetes and Digestive and Kidney Diseases (USA), this dataset contains diagnostic metrics for determining the presence of diabetes. It consists of 768 instances with 9 attributes, enabling the development of predictive models for diabetes diagnosis (see https://www.kaggle.com/datasets (accessed on 14 February 2025)).

+. Sobar Dataset: The Sobar dataset focuses on factors related to cervical cancer prevention and management. It includes both personal and social determinants, such as perception, motivation, empowerment, social support, norms, attitudes, and behaviors. The dataset comprises 72 instances with 20 attributes (see https://archive.ics.uci.edu/ml/index.php (accessed on 14 February 2025)).

The methodology for dataset analysis and model evaluation was carried out as follows:

All datasets were split into training (60%) and testing (40%) subsets. During the analysis, we set the tolerance value (i.e., the difference between two successive function values) to 105 and capped the maximum number of iterations at 105. To evaluate the performance of algorithms on these datasets, we recorded the following metrics:

Function values $F(w_n)$;

The norm of the difference between the optimal function value and the function values at each iteration, i.e., $\|F(w_n)-F(w^{*})\|$;

Computation times (in seconds);

Prediction and test accuracies, measured using root mean square error (rMSE); a minimal sketch of this metric is given after this list.
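For reference, the accuracy metric can be computed as follows (an illustrative sketch; the training error rMse and the test error rMse2 differ only in the data split on which they are evaluated):

```python
import numpy as np

def rmse(X, Y, w):
    """Root mean square error of the predictions Xw against the labels Y."""
    return np.sqrt(np.mean((X @ w - Y) ** 2))
```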

The results and observations are as follows:

Function Values: In Figure 2, the function values $F(w_n)$ for the evaluated algorithms are presented.

Convergence Analysis: Figure 3 demonstrates the convergence performance of the algorithms in terms of $\|F(w_n)-F(w^{*})\|$.

Prediction Accuracy: Figure 4 showcases the prediction accuracy (rMSE) achieved by the algorithms during the testing phase.

The results, as illustrated in Figure 2, Figure 3 and Figure 4 and summarized in Table 3, clearly indicate that algorithm (47) outperforms algorithm (48) in terms of efficiency and accuracy.

Table 3 clearly demonstrates that algorithm (47) yields significantly better results than algorithm (48) across various datasets. In terms of the number of iterations, (47) converges in far fewer steps (for example, 135 versus 2559 for the Aligned dataset and 116 versus 10,480 for the Diabetes dataset) and achieves the same or lower minimum F values, resulting in superior outcomes. Moreover, (47) shows slightly lower training errors (rMse) and, in most cases—with the exception of COVID-19, where (48) achieves a marginally better test error—comparable or improved test errors (rMse2). Most importantly, the training time for (47) is significantly shorter across all datasets (for instance, 6.28 s versus 118.14 s for the Aligned dataset and 0.022 s versus 2.83 s for the Diabetes dataset), which confers an advantage in computational efficiency. Overall, these results demonstrate that algorithm (47) not only converges faster but also delivers better performance in terms of both accuracy and computational cost compared to algorithm (48).

5. Conclusions

This study presents the development of a novel Picard-S hybrid iterative algorithm designed to address general variational inequalities and nonexpansive mappings within real Hilbert spaces. By relaxing the stringent constraints traditionally imposed on parametric sequences, the proposed algorithm achieves enhanced flexibility and broader applicability while retaining its strong convergence properties. This advancement not only bridges gaps in the existing theoretical framework but also establishes a robust equivalence between the new method and a previously established algorithm, demonstrating its consistency and efficacy. One of the key contributions of this work is the integration of the Collage–Anticollage Theorem, which provides an innovative perspective on transformations associated with general nonlinear variational inequalities (GNVI). This theorem, explored for the first time in this context, enriches the theoretical toolkit for analyzing and solving variational inequalities. The study also delves into the continuity properties of solutions to variational inequalities, addressing a rarely discussed yet crucial aspect of these problems, thereby offering a more holistic approach to their resolution. Numerical experiments conducted as part of this research validate the proposed algorithm’s superior performance. In comparison to an existing algorithm, the new algorithm consistently converges to optimal solutions with fewer iterations, underscoring its computational efficiency and practical advantages. Applications in areas such as convex optimization and machine learning further highlight its versatility. For example, the algorithm has shown promise in solving real-world problems related to classification, regression, and large-scale optimization tasks, solidifying its relevance in both theoretical and applied domains.

Author Contributions

Conceptualization, M.E., F.G. and G.V.M.; data curation, E.H. and M.E.; methodology, M.E., F.G. and G.V.M.; formal analysis, E.H., M.E., F.G. and G.V.M.; investigation, E.H., M.E., F.G. and G.V.M.; resources, E.H., M.E., F.G. and G.V.M.; writing—original draft preparation, M.E. and F.G.; writing—review and editing, E.H., M.E., F.G. and G.V.M.; visualization, E.H., M.E., F.G. and G.V.M.; supervision, F.G., M.E. and G.V.M.; project administration, G.V.M.; funding acquisition, E.H., M.E. and F.G. All authors have read and agreed to the published version of the manuscript.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Footnotes

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Figures and Tables

Figure 1 Graph in log–log scale showing the convergence behavior of algorithms (45) (blue line) and (46) (red line) to $\{0\}_{n=0}^{\infty}$.


Figure 2 Comparison of the efficiency of algorithms (47) (blue line) and (48) (red line) based on the reduction in function values $F(w_n)$ at each step.


Figure 3 Comparison of the efficiency of algorithms (47) (blue line) and (48) (red line) based on $\|F(w_n)-F(w^{*})\|$ at each step.


Figure 4 Comparison of the efficiency of algorithms (47) (blue line) and (48) (red line) based on rMSE in each step.


Table 1 Convergence behavior of algorithm (45).

$n$ | $x_n=\{x_{n,k}\}_{k=0}^{\infty}$ | $\|x_n\|$
0 | $\{10^{-1},\,10^{-2},\,10^{-3},\,10^{-4},\ldots\}$ | $1.0050378\times10^{-1}$
1 | $\{9.7967792\times10^{-2},\,9.8472060\times10^{-3},\,9.8477132\times10^{-4},\,9.8477183\times10^{-5},\ldots\}$ | $9.8466417\times10^{-2}$
10 | $\{8.6656450\times10^{-2},\,8.9532943\times10^{-3},\,8.9563123\times10^{-4},\,8.9563425\times10^{-5},\ldots\}$ | $8.7122397\times10^{-2}$
100 | $\{3.1258552\times10^{-2},\,3.5598150\times10^{-3},\,3.5651069\times10^{-4},\,3.5651600\times10^{-5},\ldots\}$ | $3.1462641\times10^{-2}$
500 | $\{5.1356941\times10^{-4},\,5.9449351\times10^{-5},\,5.9550580\times10^{-6},\,5.9551595\times10^{-7},\ldots\}$ | $5.1703345\times10^{-4}$
1000 | $\{3.0841463\times10^{-6},\,3.5701370\times10^{-7},\,3.5762164\times10^{-8},\,3.5762773\times10^{-9},\ldots\}$ | $3.1049491\times10^{-6}$
2000 | $\{1.1122794\times10^{-10},\,1.2875491\times10^{-11},\,1.2897416\times10^{-12},\,1.2897635\times10^{-13},\ldots\}$ | $1.1197818\times10^{-10}$
$\infty$ | $\{0\}_{k=0}^{\infty}$ | 0

Table 2 Convergence behavior of algorithm (46).

$n$ | $x_n=\{x_{n,k}\}_{k=0}^{\infty}$ | $\|x_n\|$
0 | $\{10^{-1},\,10^{-2},\,10^{-3},\,10^{-4},\ldots\}$ | $1.0050378\times10^{-1}$
1 | $\{9.79677792\times10^{-2},\,9.8472060\times10^{-3},\,9.8477132\times10^{-4},\,9.8477183\times10^{-5},\ldots\}$ | $9.8466417\times10^{-2}$
10 | $\{9.6217387\times10^{-2},\,9.7133063\times10^{-3},\,9.7142348\times10^{-4},\,9.7142441\times10^{-5},\ldots\}$ | $9.6711360\times10^{-2}$
100 | $\{9.4718081\times10^{-2},\,9.5972797\times10^{-3},\,9.5985596\times10^{-4},\,9.5985724\times10^{-5},\ldots\}$ | $9.5207948\times10^{-2}$
500 | $\{9.3707517\times10^{-2},\,9.5183569\times10^{-3},\,9.5198684\times10^{-4},\,9.5198835\times10^{-5},\ldots\}$ | $9.4194550\times10^{-2}$
1000 | $\{9.3277892\times10^{-2},\,9.4846274\times10^{-3},\,9.4862361\times10^{-4},\,9.4862522\times10^{-5},\ldots\}$ | $9.3763705\times10^{-2}$
2000 | $\{9.2851285\times10^{-2},\,9.4510301\times10^{-3},\,9.4527346\times10^{-4},\,9.4527516\times10^{-5},\ldots\}$ | $9.3335876\times10^{-2}$
$\infty$ | $\{0\}_{k=0}^{\infty}$ | 0

Table 3 Comparison of the efficiency of algorithms (47) and (48).

Aligned Diabetes
Algorithm (47) Algorithm (48) Algorithm (47) Algorithm (48)
# of iterations 135 2559 116 10,480
Min F value 633.5152581 633.5407101 46.31812832 47.514786
rMse (Train.) 0.355915454 0.355922962 0.345908859 0.3504142
rMse2 (Test) 0.255097007 0.254559006 0.272919936 0.2745554
Train. time (s) 6.282856 118.1402858 0.0217341 2.8294408
COVID-19 Sobar
Algorithm (47) Algorithm (48) Algorithm (47) Algorithm (48)
# of iterations 64,173 100,000 542 2817
Min F value 449.786 490.524 4.2837252 4.668452
rMse (Train.) 0.2994 0.3132 0.3428808 0.358791
rMse2 (Test) 0.16943 0.15862 0.288059 0.295836
Train. time (s) 187.705 420.37 0.4419386 1.013948

Appendix A

Let $L$ and $\delta$ be given by (13) and (14), respectively (see Theorem 2), and let $q=\dfrac{r}{\lambda^{2}}-\gamma$. Then, the conditions (12) and the definition (14) give

(A1) $\displaystyle |\sigma-q| < \sqrt{q^{2}-\frac{4}{\lambda^{2}}L(1-L)},$

(A2) $\displaystyle \frac{2}{\lambda}\sqrt{L(1-L)} < |q| < \frac{1}{\lambda}, \qquad L<\frac{1}{2},$

and

$\delta^{2} = 1-2\sigma\bigl(r-\gamma\lambda^{2}\bigr)+\sigma^{2}\lambda^{2} = 1-2\sigma\lambda^{2}q+\sigma^{2}\lambda^{2} = 1+\lambda^{2}\bigl[(\sigma-q)^{2}-q^{2}\bigr],$

respectively. The conditions in (A2) ensure that the right-hand side of (A1) and $\delta$ are well defined.

Using (A1), the last expression for $\delta^{2}$ gives $\delta^{2} < 1-4L(1-L) = (1-2L)^{2}$, i.e., $2L+\delta<1$ under the condition $L<1/2$.

References

1. Stampacchia, G. Formes bilinearies coercivities sur les ensembles convexes. C. R. Acad. Sci. Paris; 1964; 258, pp. 4413-4416.

2. Lions, J.; Stampacchia, G. Variational inequalities. Commun. Pure Appl. Math.; 1967; 20, pp. 493-519. [DOI: https://dx.doi.org/10.1002/cpa.3160200302]

3. Kinderlehrer, D.; Stampacchia, G. An Introduction to Variational Inequalities and Their Applications; Academic Press: New York, NY, USA, 1980.

4. Glowinski, R.; Lions, J.L.; Trémolières, R. Numerical Analysis of Variational Inequalities; North-Holland: Amsterdam, The Netherlands, 1981.

5. Giannessi, F.; Maugeri, A. (Eds.) Variational Inequalities and Network Equilibrium Problems; Springer: New York, NY, USA, 1995.

6. Atalan, Y.; Hacıoğlu, E.; Ertürk, M.; Gürsoy, F.; Milovanović, G.V. Novel algorithms based on forward-backward splitting technique: Effective methods for regression and classification. J. Glob. Optim.; 2024; 90, pp. 869-890. [DOI: https://dx.doi.org/10.1007/s10898-024-01425-w]

7. Gürsoy, F.; Hacıoğlu, E.; Karakaya, V.; Milovanović, G.V.; Uddin, I. Variational inequality problem involving multivalued nonexpansive mapping in CAT(0) Spaces. Results Math.; 2022; 77, 131. [DOI: https://dx.doi.org/10.1007/s00025-022-01663-y]

8. Keten Çopur, A.; Hacıoğlu, E.; Gürsoy, F.; Ertürk, M. An efficient inertial type iterative algorithm to approximate the solutions of quasi variational inequalities in real Hilbert spaces. J. Sci. Comput.; 2021; 89, 50. [DOI: https://dx.doi.org/10.1007/s10915-021-01657-y]

9. Gürsoy, F.; Ertürk, M.; Abbas, M. A Picard-type iterative algorithm for general variational inequalities and nonexpansive mappings. Numer. Algorithms; 2020; 83, pp. 867-883. [DOI: https://dx.doi.org/10.1007/s11075-019-00706-w]

10. Atalan, Y. On a new fixed point iterative algorithm for general variational inequalities. J. Nonlinear Convex Anal.; 2019; 20, pp. 2371-2386.

11. Maldar, S. Iterative algorithms of generalized nonexpansive mappings and monotone operators with application to convex minimization problem. Symmetry; 2022; 14, pp. 1841-1868. [DOI: https://dx.doi.org/10.1007/s12190-021-01593-y]

12. Maldar, S. New parallel fixed point algorithms and their application to a system of variational inequalities. J. Appl. Math. Comput.; 2022; 68, 1025. [DOI: https://dx.doi.org/10.3390/sym14051025]

13. Konnov, I.V. Combined relaxation methods for variational inequalities. Lecture Notes in Mathematical Economics; Springer: Berlin/Heidelberg, Germany, 2000.

14. Facchinei, F.; Pang, J.-S. Finite Dimensional Variational Inequalities and Complementarity Problems; Springer: New York, NY, USA, Berlin/Heidelberg, Germany, 2003; Volumes I and II.

15. Giannessi, F.; Maugeri, A. (Eds.) Variational Analysis and Applications; Springer: New York, NY, USA, 2005.

16. Ansari, Q.H. (Ed.) Topics in Nonlinear Analysis and Optimization; World Education: Delhi, India, 2012.

17. Ansari, Q.H.; Lalitha, C.S.; Mehta, M. Generalized Convexity. Nonsmooth Variational Inequalities and Nonsmooth Optimization; CRC Press: Boca Raton, FL, USA, London, UK, New York, NY, USA, 2014.

18. Noor, M.A. General variational inequalities. Appl. Math. Lett.; 1988; 1, pp. 119-122. [DOI: https://dx.doi.org/10.1016/0893-9659(88)90054-7]

19. Noor, M.A. Variational inequalities in physical oceanography. Ocean Waves Engineering, Advances in Fluid Mechanics; Rahman, M. WIT Press: Southampton, UK, 1994; Volume 2.

20. Bnouhachem, A.; Liu, Z.B. Alternating direction method for maximum entropy subject to simple constraint sets. J. Math. Anal. Appl.; 2004; 121, pp. 259-277. [DOI: https://dx.doi.org/10.1023/B:JOTA.0000037405.55660.a4]

21. Kocvara, M.; Outrata, J.V. On implicit complementarity problems with application in mechanics. Proceedings of the the IFIP Conference on Numerical Analysis and Optimization; Rabat, Marocco, 15–17 December 1993.

22. Noor, M.A. General variational inequalities and nonexpansive mappings. J. Math. Anal. Appl.; 2007; 331, pp. 810-822. [DOI: https://dx.doi.org/10.1016/j.jmaa.2006.09.039]

23. Ahmad, R.; Ansari, Q.H.; Irfan, S.S. Generalized variational inclusions and generalized resolvent equations in Banach spaces. Comput. Math. Appl.; 2005; 29, pp. 1825-1835. [DOI: https://dx.doi.org/10.1016/j.camwa.2004.10.044]

24. Ahmad, R.; Ansari, Q.H. Generalized variational inclusions and H-resolvent equations with H-accretive operators. Taiwan. J. Math.; 2007; 111, pp. 703-716. [DOI: https://dx.doi.org/10.11650/twjm/1500404753]

25. Ahmad, R.; Ansari, Q.H. An iterative algorithm for generalized nonlinear variational inclusions. Appl. Math. Lett.; 2000; 13, pp. 23-26. [DOI: https://dx.doi.org/10.1016/S0893-9659(00)00028-8]

26. Fang, Y.P.; Huang, N.J. H-Monotone operator and resolvent operator technique for variational inclusions. Appl. Math. Comput.; 2003; 145, pp. 795-803. [DOI: https://dx.doi.org/10.1016/S0096-3003(03)00275-3]

27. Huang, N.J.; Fang, Y.P. A new class of general variational inclusions involving maximal η-monotone mappings. Publ. Math. Debrecen; 2003; 62, pp. 83-98. [DOI: https://dx.doi.org/10.5486/PMD.2003.2629]

28. Huang, Z.; Noor, M.A. Equivalency of convergence between one-step iteration algorithm and two-step iteration algorithm of variational inclusions for H-monotone mappings. Computers Math. Appl.; 2007; 53, pp. 1567-1571. [DOI: https://dx.doi.org/10.1016/j.camwa.2006.08.044]

29. Noor, M.A.; Huang, Z. Some resolvent iterative methods for variational inclusions and nonexpansive mappings. Appl. Math. Comput.; 2007; 194, pp. 267-275. [DOI: https://dx.doi.org/10.1016/j.amc.2007.04.037]

30. Zeng, L.C.; Guu, S.M.; Yao, J.C. Characterization of H-monotone operators with applications to variational inclusions. Comput. Math. Appl.; 2005; 50, pp. 329-337. [DOI: https://dx.doi.org/10.1016/j.camwa.2005.06.001]

31. Gürsoy, F.; Sahu, D.R.; Ansari, Q.H. S-iteration process for variational inclusions and its rate of convergence. J. Nonlinear Convex Anal.; 2016; 17, pp. 1753-1767.

32. Nagurney, A. Network Economics: A Variational Inequality Approach; Springer: Berlin/Heidelberg, Germany, 1999.

33. Kotsireas, I.S.; Nagurney, A.; Pardalos, P.M. Dynamics of Disasters–Algorithmic Approaches and Applications; Springer Optimization and Its Applications 140 Springer: Berlin/Heidelberg, Germany, 2018.

34. Fargetta, G.; Maugeri, A.; Scrimali, L. A stochastic Nash equilibrium problem for medical supply competition. J. Optim. Theory Appl.; 2022; 193, pp. 354-380. [DOI: https://dx.doi.org/10.1007/s10957-022-02025-y]

35. Qihou, L. A convergence theorem of the sequence of Ishikawa iterates for quasi-contractive mappings. J. Math. Anal. Appl.; 1990; 146, pp. 301-305. [DOI: https://dx.doi.org/10.1016/0022-247X(90)90303-W]

36. Weng, X. Fixed point iteration for local strictly pseudocontractive mapping. Proc. Amer. Math. Soc.; 1991; 113, pp. 727-731. [DOI: https://dx.doi.org/10.1090/S0002-9939-1991-1086345-8]

37. Milovanović, G.V. Numerical Analysis and Approximation Theory—Introduction to Numerical Processes and Solving of Equations; Zavod za udžbenike: Beograd, Serbia, 2014; (In Serbian)

© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).