
Abstract

Herein, we present two hybrid inertial self-adaptive iterative methods for determining the combined solution of the split variational inclusions and fixed-point problems. Our methods include viscosity approximation, fixed-point iteration, and inertial extrapolation in the initial step of each iteration. We employ two self-adaptive step sizes to compute the iterative sequence, which do not require the pre-calculated norm of a bounded linear operator. We prove strong convergence theorems to approximate the common solution of the split variational inclusions and fixed-point problems. Further, we implement our methods and results to examine split variational inequality and split common fixed-point problems. Finally, we illustrate our methods and compare them with some known methods existing in the literature.


1. Introduction

Fixed-point theory provides a coherent and logical framework for indispensable nonlinear interdisciplinary problems, including differential equations, control theory, game theory, variational inequalities, equilibrium problems, optimization problems, split feasibility problems, etc. Over the last few years, fixed-point theory has become an active research area, which has led to the design and development of efficient, effective, flexible, and easily implementable methods for approximating the solutions of nonlinear and inverse problems. The fixed-point problem (FPP) for a self-mapping Z : U → U is defined by

(1) find v ∈ U so that Z(v) = v.

Numerous methods have been used to address fixed-point problems. Among them, the majority of methods used to approximate the fixed points are motivated by Mann’s iterative method [1]. In order to obtain a fast convergence rate, Moudafi [2] introduced the viscosity approximation technique by blending Z with a contraction mapping.

The first split problem, namely, the split feasibility problem, was initially presented by Censor and Elfving [3]. The most recent inverse problem is the split inverse problem studied by Censor et al. [4]. Because of their relevance to mathematical models of real-life problems appearing in cancer therapy [3,5], image restoration [6], computerized tomography, and data compression [7,8], several inverse problems and methods for solving them have been developed and studied in the last few years. Moudafi [9] explored the split monotone variational inclusion problems (SplitMVIP) in the framework of Hilbert spaces. Byrne et al. [10] introduced the split common null-point problem (SplitCNPP). A special case of (SplitCNPP) is the split variational inclusion problem (SplitVIP), which is defined by

find v ∈ U so that 0 ∈ M(v), and u = Bv ∈ U solves 0 ∈ N(u),

where M : U → 2^U and N : U → 2^U are set-valued monotone operators, B : U → U is a bounded linear operator, and U is a Hilbert space. Using the fact that a zero of the monotone operator M is a fixed point of the resolvent of M, that is, 0 ∈ M(v) ⇔ v = J_μ^M(v), Byrne et al. [10] suggested the following method for (SplitVIP):

(2) v_{n+1} = J_μ^M[v_n − η B*(I − J_μ^N)Bv_n],

where B* is the adjoint of B, μ > 0, η ∈ (0, 2/Q), and Q = ‖B*B‖. Based on this iterative method (2), numerous iterative methods have been developed and studied to solve (SplitVIP). Kazmi and Rizvi [11] extended method (2) to investigate the common solution of (SplitVIP) and (FPP) as follows:

(3) u_n = J_μ^M[v_n + η B*(J_μ^N − I)Bv_n],  v_{n+1} = ζ_n F(v_n) + (1 − ζ_n)S u_n,

where F is a contraction; η ∈ (0, 1/Q); and {ζ_n} ⊂ (0,1) is a real sequence such that lim_{n→∞} ζ_n = 0, Σ_{n=1}^∞ ζ_n = ∞, and Σ_{n=1}^∞ |ζ_n − ζ_{n−1}| < ∞. Akram et al. [12] modified method (3) in the following manner to study the same problem:

(4) u_n = v_n − η[(I − J_{μ1}^M)v_n + B*(I − J_{μ2}^N)Bv_n],  v_{n+1} = ζ_n F(v_n) + (1 − ζ_n)S u_n,

where μ1 > 0, μ2 > 0, and η = 1/(1 + ‖B‖²). Some other iterative methods for solving (SplitVIP) and (FPP) can also be seen in [13,14,15,16] and the references therein.
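To make the role of the resolvents and the restriction η ∈ (0, 2/Q) in iteration (2) concrete, here is a minimal runnable sketch for a scalar toy problem; the linear choices M(v) = c_M·v, N(u) = c_N·u, and B(v) = b·v are our own illustrative assumptions, not taken from [10].

```python
def method2(v, mu=0.5, eta=0.5, b=1.0, c_M=1.0, c_N=1.0, iters=100):
    """Sketch of iteration (2): v_{n+1} = J_mu^M[v_n - eta*B*(I - J_mu^N)(B v_n)],
    for the scalar toy operators M(v) = c_M*v, N(u) = c_N*u, B(v) = b*v.
    Here Q = ||B*B|| = b**2, so eta must lie in (0, 2/Q)."""
    JM = lambda t: t / (1.0 + mu * c_M)   # resolvent (I + mu*M)^(-1)
    JN = lambda t: t / (1.0 + mu * c_N)   # resolvent (I + mu*N)^(-1)
    for _ in range(iters):
        w = b * v                          # B v_n
        v = JM(v - eta * b * (w - JN(w)))  # B* = b for a scalar map
    return v

# the unique common null point of M and N is 0, and the iterates approach it
print(abs(method2(5.0)) < 1e-10)  # True
```

In this toy case each pass multiplies the iterate by a fixed factor below one, which is exactly the contraction behavior the convergence theory formalizes.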

The common disadvantage of these methods is the calculation of the step size, which depends on the operator norm ‖B*B‖, and computing ‖B*B‖ is a challenging task. To address this challenge, researchers developed iterative methods that eliminate the estimation of ‖B*B‖. Lopez et al. [17] investigated split feasibility problems without knowing the norm of the matrix. Dilshad et al. [18] studied the split common null-point problem without a pre-existing estimation of the operator norm as follows:

(5) u_n = v_n − J_{μ1}^M v_n + B*(I − J_{μ2}^N)Bv_n,  v_{n+1} = ζ_n u + (1 − ζ_n)(v_n − η_n u_n),

for some fixed u ∈ U and

(6) η_n = (‖v_n − J_{μ1}^M v_n‖² + ‖(I − J_{μ2}^N)Bv_n‖²) / ‖v_n − J_{μ1}^M v_n + B*(I − J_{μ2}^N)Bv_n‖².
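As a runnable sketch (with NumPy, and with the residual vectors below chosen arbitrarily for illustration), the step size (6) can be computed directly from the current residuals, with no knowledge of ‖B‖:

```python
import numpy as np

def adaptive_step(res_M, res_N, combined):
    """Step size (6): eta_n = (||v_n - J^M v_n||^2 + ||(I - J^N)Bv_n||^2)
    / ||v_n - J^M v_n + B*(I - J^N)Bv_n||^2, where
    res_M = v_n - J^M v_n, res_N = (I - J^N)(B v_n),
    combined = res_M + B* res_N. The operator norm of B never appears."""
    denom = float(np.linalg.norm(combined)) ** 2
    if denom == 0.0:   # both residuals vanish: a solution has been reached
        return 0.0
    num = float(np.linalg.norm(res_M)) ** 2 + float(np.linalg.norm(res_N)) ** 2
    return num / denom

# illustration with B = I, so combined = res_M + res_N
r1 = np.array([1.0, 0.0])
r2 = np.array([0.0, 2.0])
eta = adaptive_step(r1, r2, r1 + r2)  # (1 + 4) / ||(1, 2)||^2 = 1.0
```

The point of the design is that every quantity in (6) is already available from the current iterate, so no spectral computation on B is ever required.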

In this direction, several research papers have caught the attention of researchers: see [19,20,21,22] and references therein.

To accelerate the convergence of iterative algorithms, Alvarez and Attouch [23] introduced a new algorithm, named the inertial proximal point algorithm, for estimating the solution of variational inclusions. The sequence derived from the inertial proximal point method converges rapidly because of its extrapolation design. As a result, numerous researchers have applied the inertial term, since it plays a crucial role in accelerating convergence; see [24,25,26,27,28] and the references therein.

In continuation to the above study, our aim is to present two hybrid inertial self-adaptive iterative methods to estimate the common solution of (SplitVIP) and (FPP), which can be summarized as follows:

Our motive is to introduce fast and traditionally different viscosity methods to estimate the common solution of (SplitVIP) and (FPP). Unlike method (3) and method (4) [or method (5)], our hybrid algorithms compute the viscosity approximation and fixed-point iteration [or Halpern-type iteration] in the initial step of each iteration.

To accelerate the convergence, we also add the inertial term in the initial step of the iteration. Therefore, in the first step, we compute the inertial extrapolation, fixed-point iteration, and viscosity approximation all at the same time.

In method (3) and method (4), the pre-calculated norm of B is essential, which is a tedious task. However, we are using two self-adaptive step-sizes, which do not require the pre-calculated norms of a bounded linear operator B.

Our methods are efficient, accelerated versions of method (2) of [10], method (3) of [11], method (4) of [12], and method (5) of [18], as demonstrated by numerical examples.

2. Preliminaries

All through the text, U denotes a real Hilbert space and D a closed and convex subset of U. We denote the strong and weak convergence of a sequence {v_n} to v by v_n → v and v_n ⇀ v, respectively.

For all η1, η2, η3 ∈ U and t1, t2, t3 ∈ [0,1] such that t1 + t2 + t3 = 1, the following equality and inequality hold:

(7) ‖t1η1 + t2η2 + t3η3‖² = t1‖η1‖² + t2‖η2‖² + t3‖η3‖² − t1t2‖η1 − η2‖² − t2t3‖η2 − η3‖² − t3t1‖η3 − η1‖²

and

(8) ‖η1 ± η2‖² = ‖η1‖² ± 2⟨η1, η2⟩ + ‖η2‖² ≤ ‖η1‖² ± 2⟨η2, η1 ± η2⟩.

Definition 1.

A mapping Z : U → U is said to be

(1) averaged if there exist a nonexpansive mapping f : U → U and α ∈ (0,1) such that Z = (1 − α)I + αf;

(2) Lipschitz continuous if there exists θ > 0 such that ‖Z(ϱ1) − Z(ϱ2)‖ ≤ θ‖ϱ1 − ϱ2‖, ∀ ϱ1, ϱ2 ∈ U;

(3) a contraction if ‖Z(ϱ1) − Z(ϱ2)‖ ≤ θ‖ϱ1 − ϱ2‖, ∀ ϱ1, ϱ2 ∈ U, for some θ ∈ (0,1);

(4) nonexpansive if ‖Z(ϱ1) − Z(ϱ2)‖ ≤ ‖ϱ1 − ϱ2‖, ∀ ϱ1, ϱ2 ∈ U;

(5) firmly nonexpansive if ‖Z(ϱ1) − Z(ϱ2)‖² ≤ ⟨ϱ1 − ϱ2, Z(ϱ1) − Z(ϱ2)⟩, ∀ ϱ1, ϱ2 ∈ U;

(6) κ-inverse strongly monotone (κ-ism) if there exists κ > 0 such that ⟨Z(ϱ1) − Z(ϱ2), ϱ1 − ϱ2⟩ ≥ κ‖Z(ϱ1) − Z(ϱ2)‖², ∀ ϱ1, ϱ2 ∈ U;

(7) monotone if ⟨Z(ϱ1) − Z(ϱ2), ϱ1 − ϱ2⟩ ≥ 0, ∀ ϱ1, ϱ2 ∈ U.

Definition 2

([29]). Let N : U → 2^U be a set-valued mapping. Then,

(1) N is called monotone if ⟨ϱ1 − ϱ2, η1 − η2⟩ ≥ 0, ∀ ϱ1, ϱ2 ∈ U, η1 ∈ N(ϱ1), η2 ∈ N(ϱ2);

(2) the graph of N is Graph(N) = {(ϱ1, η1) ∈ U × U : η1 ∈ N(ϱ1)};

(3) N is said to be maximal monotone if N is monotone and (I + μN)(U) = U for μ > 0, where I is the identity mapping on U;

(4) the resolvent of N is defined by J_μ^N = [I + μN]^{−1}, where I is the identity mapping and μ > 0.
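For intuition, the resolvent has a closed form for simple monotone operators. The sketch below (our own illustrative choice, with NumPy) takes M(v) = c·v componentwise with c ≥ 0, for which J_μ^M(v) = v/(1 + μc) is the unique solution of the defining relation u + μM(u) = v:

```python
import numpy as np

def resolvent(v, mu, c):
    """Resolvent J_mu^M = (I + mu*M)^(-1) of the monotone operator
    M(v) = c * v (componentwise, c >= 0): returns the unique u with
    u + mu*c*u = v."""
    return v / (1.0 + mu * np.asarray(c))

v = np.array([1.0, -2.0, 4.0])
c = np.array([1.0, 2.0, 4.0])
u = resolvent(v, 0.25, c)
# sanity check of the defining relation u + mu*M(u) = v
print(np.allclose(u + 0.25 * c * u, v))  # True
```

The single-valuedness and nonexpansiveness asserted in Remark 2 are visible here: the map is a componentwise shrinkage with factors in (0, 1].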

Remark 1.

(1) It can easily be seen that a κ-inverse strongly monotone mapping is also monotone and (1/κ)-Lipschitz continuous.

(2) Every averaged mapping is nonexpansive, but the converse need not be true in general.

(3) The operator Z is firmly nonexpansive if and only if I − Z is firmly nonexpansive.

(4) The composition of two averaged operators is also averaged.

Remark 2.

(1) The resolvent J_μ^N of a maximal monotone mapping N is single-valued, nonexpansive, and firmly nonexpansive for any μ > 0.

(2) The resolvent J_μ^N is firmly nonexpansive if and only if

‖J_μ^N ϱ1 − J_μ^N ϱ2‖² ≤ ‖ϱ1 − ϱ2‖² − ‖(I − J_μ^N)ϱ1 − (I − J_μ^N)ϱ2‖², for all ϱ1, ϱ2 ∈ U.

(3) The operator I − J_μ^N is nonexpansive, and so it is demiclosed at zero.

(4) If N : U → 2^U is monotone, then J_μ^N and I − J_μ^N are firmly nonexpansive for μ > 0, where J_μ^N is the resolvent of N.

Lemma 1

([30]). Let D be a closed and convex subset of a Hilbert space U, and let Z : D → D be a nonexpansive mapping such that

(1) Fix(Z) ≠ ∅;

(2) the sequence {v_n} ⇀ v and lim_{n→∞} ‖Z(v_n) − v_n‖ = 0.

Then, Z(v) = v.

Lemma 2

([31]). If {v_n} is a sequence of non-negative real numbers satisfying

v_{n+1} ≤ (1 − ψ_n)v_n + ψ_nφ_n, n ≥ 0,

where {ψ_n} is a sequence in (0,1) and {φ_n} is a sequence of real numbers such that

(1) lim_{n→∞} ψ_n = 0 and Σ_{n=1}^∞ ψ_n = ∞;

(2) limsup_{n→∞} φ_n ≤ 0;

then lim_{n→∞} v_n = 0.

Lemma 3

([32]). Suppose D is a closed and convex subset of U. If the sequence {v_n} ⊂ U satisfies the following:

(1) lim_{n→∞} ‖v_n − v‖ exists for all v ∈ D;

(2) any weak cluster point of {v_n} belongs to D;

then there exists v ∈ D such that v_n ⇀ v.

Lemma 4

([33]). Let {v_n} be a real sequence that does not decrease at infinity, in the sense that there exists a subsequence {v_{n_k}} of {v_n} such that v_{n_k} < v_{n_k+1} for all k ≥ 0. Also consider the sequence of integers {ε(n)}_{n ≥ n_0} defined by

ε(n) = max{k ≤ n : v_k ≤ v_{k+1}}.

Then, {ε(n)}_{n ≥ n_0} is a nondecreasing sequence verifying lim_{n→∞} ε(n) = ∞ and, for all n ≥ n_0,

max{v_{ε(n)}, v_n} ≤ v_{ε(n)+1}.

3. Main Results

The solution sets of (SplitVIP) and (FPP) are denoted by Δ1 and Δ2, respectively. To establish the convergence of the suggested methods, we make the following assumptions:

(X1) F : U → U is a θ-contraction;

(X2) M, N : U → 2^U are monotone operators and Z : U → U is a nonexpansive mapping;

(X3) {ζ_n} is a sequence in (0,1) such that lim_{n→∞} ζ_n = 0 and Σ_{n=1}^∞ ζ_n = ∞;

(X4) {ϕ_n} is a positive and bounded sequence such that lim_{n→∞} ϕ_n/ζ_n = 0;

(X5) the common solution set of (SplitVIP) and (FPP) is denoted by Δ1 ∩ Δ2, and Δ1 ∩ Δ2 ≠ ∅.

Now, we are in the position to design our hybrid Algorithm 1. The hybrid Algorithm 1 is constructed in such a way that the initial step iterates the inertial extrapolation term τn(vnvn1) combined with the viscosity approximation. We implement our hybrid Algorithm 1 to estimate the common solution of (SplitVIP) and (FPP).

Algorithm 1. Hybrid Algorithm 1
Choose λ > 0, μ > 0, τ ∈ [0,1), and 0 < ς_n < 2. Select initial points v0 and v1 and fix n = 0.
Iterative Step: For n ≥ 1, iterate v_n, v_{n−1} and select 0 < τ_n ≤ τ̄_n, where

(9) τ̄_n = min{ϕ_n/‖v_n − v_{n−1}‖, τ} if v_n ≠ v_{n−1}; τ̄_n = τ otherwise.

Compute

(10) s_n = ζ_n F(v_n) + (1 − ζ_n)Z(v_n) + τ_n(v_n − v_{n−1}),

(11) u_n = s_n − λ_n(I − J_λ^M)(s_n),

(12) v_{n+1} = u_n − μ_n B*(I − J_μ^N)(Bu_n),

where λ_n and μ_n are defined by

(13) λ_n = ς_n‖(I − J_λ^M)s_n‖ / ‖(I − J_λ^M)s_n + B*(I − J_μ^N)Bs_n‖ if (I − J_λ^M)s_n + B*(I − J_μ^N)Bs_n ≠ 0; λ_n = 0 otherwise,

and

(14) μ_n = ς_n‖(I − J_μ^N)Bu_n‖ / ‖(I − J_λ^M)u_n + B*(I − J_μ^N)Bu_n‖ if (I − J_λ^M)u_n + B*(I − J_μ^N)Bu_n ≠ 0; μ_n = 0 otherwise.

If v_{n+1} = u_n = v_n = s_n, then stop; otherwise, set n = n + 1 and go back to the Iterative Step.
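The steps (9)–(14) above can be sketched in code. The block below is a runnable illustration in U = R⁴, not the paper's implementation: the componentwise linear operators M and N, the choice B = I, the mapping Z zeroing two coordinates, F = (1/2)I, and the parameter sequences are all illustrative assumptions of ours, and τ_n is taken equal to τ̄_n for reproducibility rather than sampled randomly.

```python
import numpy as np

c_M = np.array([1.0, 2.0, 3.0, 4.0])    # M(q) = c_M * q (componentwise)
c_N = np.array([1.0, 3.0, 1.0, 3.0])    # N(q) = c_N * q
lam = mu = 0.25
J_M = lambda q: q / (1.0 + lam * c_M)   # resolvent of M
J_N = lambda q: q / (1.0 + mu * c_N)    # resolvent of N
B = Bstar = lambda q: q                  # B = I, hence B* = I
Z = lambda q: np.array([q[0], 0.0, q[2], 0.0])  # nonexpansive
F = lambda q: 0.5 * q                    # a (1/2)-contraction

def hybrid_algorithm1(v0, v1, tau=0.75, iters=300):
    v_prev, v = np.asarray(v0, float), np.asarray(v1, float)
    for n in range(1, iters + 1):
        zeta = 1.0 / (n + 100)
        phi = 1.0 / (n**2 + 100)
        varsig = 2.0 - 1.0 / (n + 1)                 # 0 < varsig < 2
        diff = np.linalg.norm(v - v_prev)
        tau_n = min(phi / diff, tau) if diff > 0 else tau            # (9)
        s = zeta * F(v) + (1 - zeta) * Z(v) + tau_n * (v - v_prev)   # (10)
        rM = s - J_M(s)                                              # (I - J^M)s_n
        d1 = np.linalg.norm(rM + Bstar(B(s) - J_N(B(s))))
        lam_n = varsig * np.linalg.norm(rM) / d1 if d1 > 0 else 0.0  # (13)
        u = s - lam_n * rM                                           # (11)
        rN = B(u) - J_N(B(u))                                        # (I - J^N)Bu_n
        d2 = np.linalg.norm((u - J_M(u)) + Bstar(rN))
        mu_n = varsig * np.linalg.norm(rN) / d2 if d2 > 0 else 0.0   # (14)
        v_prev, v = v, u - mu_n * Bstar(rN)                          # (12)
    return v

# the unique common solution of (SplitVIP) and (FPP) here is 0
v_final = hybrid_algorithm1([1.0, 2.0, 3.0, 4.0], [0.5, 1.0, 1.0, 2.0])
print(np.linalg.norm(v_final) < 1e-6)  # True
```

Note how both step sizes λ_n and μ_n are ratios of residual norms, so nothing about ‖B‖ is ever precomputed; the inertial weight τ_n is capped by ϕ_n/‖v_n − v_{n−1}‖ exactly as in (9).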
Remark 3.

Let u_n = s_n in Algorithm 1. If (I − J_λ^M)s_n + B*(I − J_μ^N)Bs_n ≠ 0, we obtain from (11) that λ_n(I − J_λ^M)(s_n) = 0, that is, ς_n‖(I − J_λ^M)s_n‖²/‖(I − J_λ^M)s_n + B*(I − J_μ^N)Bs_n‖ = 0, which implies that ς_n‖(I − J_λ^M)s_n‖ = 0 and hence 0 ∈ M(s_n). If v_{n+1} = u_n and (I − J_λ^M)u_n + B*(I − J_μ^N)Bu_n ≠ 0, we obtain from (12) that μ_n B*(I − J_μ^N)(Bu_n) = 0. Since B is a bounded linear operator, we obtain (I − J_μ^N)(Bu_n) = 0, that is, 0 ∈ N(Bu_n). If (I − J_λ^M)s_n + B*(I − J_μ^N)Bs_n = 0, then there is nothing to show.

Remark 4.

From (9) and Assumption (X4), we have lim_{n→∞} τ_n‖v_n − v_{n−1}‖/ζ_n ≤ lim_{n→∞} ϕ_n/ζ_n = 0. Therefore, there exists a constant L1 such that τ_n‖v_n − v_{n−1}‖/ζ_n ≤ L1, that is, τ_n‖v_n − v_{n−1}‖ ≤ L1ζ_n.

Next, we utilize our hybrid Algorithm 1 to establish a strong convergence theorem, which approximates the common solution of (SplitVIP) and (FPP). The implemented method computes its two step sizes from the current iterates, which frees us from calculating the norm of the bounded linear operator B.

Theorem 1.

If assumptions (X1)–(X5) hold, then the sequence {v_n} induced by Algorithm 1 converges strongly to v*, where v* = P_{Δ1∩Δ2}F(v*).

Proof. 

Let l ∈ Δ1 ∩ Δ2. By using (8), (11), and Remark 2 (4), we have

(15) ‖u_n − l‖² = ‖s_n − λ_n(I − J_λ^M)(s_n) − l‖²
= ‖s_n − l‖² + λ_n²‖(I − J_λ^M)(s_n)‖² − 2λ_n⟨(I − J_λ^M)(s_n), s_n − l⟩
= ‖s_n − l‖² + λ_n²‖(I − J_λ^M)(s_n)‖² − 2λ_n⟨(I − J_λ^M)(s_n) − (I − J_λ^M)(l), s_n − l⟩
≤ ‖s_n − l‖² + λ_n²‖(I − J_λ^M)(s_n)‖² − 2λ_n‖(I − J_λ^M)(s_n) − (I − J_λ^M)(l)‖²
= ‖s_n − l‖² + (λ_n² − 2λ_n)‖(I − J_λ^M)(s_n)‖².

Now, using (13), we estimate that

(16) (λ_n² − 2λ_n)‖(I − J_λ^M)(s_n)‖²
= ‖(I − J_λ^M)(s_n)‖²[ ς_n²‖(I − J_λ^M)(s_n)‖²/‖(I − J_λ^M)(s_n) + B*(I − J_μ^N)(Bs_n)‖² − 2ς_n‖(I − J_λ^M)(s_n)‖/‖(I − J_λ^M)(s_n) + B*(I − J_μ^N)(Bs_n)‖ ]
= ς_n²‖(I − J_λ^M)(s_n)‖⁴/‖(I − J_λ^M)(s_n) + B*(I − J_μ^N)(Bs_n)‖² − 2ς_n‖(I − J_λ^M)(s_n)‖³/‖(I − J_λ^M)(s_n) + B*(I − J_μ^N)(Bs_n)‖
≤ (ς_n² − 2ς_n)‖(I − J_λ^M)(s_n)‖³/‖(I − J_λ^M)(s_n) + B*(I − J_μ^N)(Bs_n)‖.

From (15) and (16), we obtain

(17) ‖u_n − l‖² ≤ ‖s_n − l‖² + (ς_n² − 2ς_n)‖(I − J_λ^M)(s_n)‖³/‖(I − J_λ^M)(s_n) + B*(I − J_μ^N)(Bs_n)‖.

Since ς_n ∈ (0,2), we obtain

(18) ‖u_n − l‖ ≤ ‖s_n − l‖.

Applying the same steps as in the calculation of (16) and (17), we can easily obtain the following:

(19) ‖v_{n+1} − l‖² ≤ ‖u_n − l‖² + (μ_n² − 2μ_n)‖(I − J_μ^N)(Bu_n)‖².

By using (14), we can obtain

(20) (μ_n² − 2μ_n)‖(I − J_μ^N)(Bu_n)‖² ≤ (ς_n² − 2ς_n)‖(I − J_μ^N)(Bu_n)‖³/‖(I − J_λ^M)(u_n) + B*(I − J_μ^N)(Bu_n)‖.

It follows from (19) and (20) that

(21) ‖v_{n+1} − l‖² ≤ ‖u_n − l‖² + (ς_n² − 2ς_n)‖(I − J_μ^N)(Bu_n)‖³/‖(I − J_λ^M)(u_n) + B*(I − J_μ^N)(Bu_n)‖,

or

(22) ‖v_{n+1} − l‖ ≤ ‖u_n − l‖.

Combining (17) and (21), we obtain

(23) ‖v_{n+1} − l‖² ≤ ‖s_n − l‖² + ς_n(ς_n − 2)‖(I − J_λ^M)(s_n)‖³/‖(I − J_λ^M)(s_n) + B*(I − J_μ^N)(Bs_n)‖ + ς_n(ς_n − 2)‖(I − J_μ^N)(Bu_n)‖³/‖(I − J_λ^M)(u_n) + B*(I − J_μ^N)(Bu_n)‖.

Since ς_n ∈ (0,2), we conclude that

(24) ‖v_{n+1} − l‖ ≤ ‖s_n − l‖.

Since F is θ-contraction, using (10) and Remark 4, we have

‖s_n − l‖ = ‖ζ_nF(v_n) + (1 − ζ_n)Z(v_n) + τ_n(v_n − v_{n−1}) − l‖
≤ ζ_n‖F(v_n) − l‖ + (1 − ζ_n)‖Z(v_n) − l‖ + τ_n‖v_n − v_{n−1}‖
≤ ζ_n‖F(v_n) − F(l)‖ + ζ_n‖F(l) − l‖ + (1 − ζ_n)‖Z(v_n) − l‖ + τ_n‖v_n − v_{n−1}‖
≤ ζ_nθ‖v_n − l‖ + ζ_n‖F(l) − l‖ + (1 − ζ_n)‖v_n − l‖ + ζ_nL1
= [1 − ζ_n(1 − θ)]‖v_n − l‖ + ζ_n(1 − θ)(‖F(l) − l‖ + L1)/(1 − θ)
≤ max{‖v_n − l‖, (‖F(l) − l‖ + L1)/(1 − θ)}.

Taking advantage of (24) and by mathematical induction, we achieve that the sequence {sn} is bounded, and so are {vn} and {un}. Let rn=ζnF(vn)+(1ζn)Z(vn), which is also bounded. By using (8), we obtain

(25) ‖r_n − l‖² = ‖ζ_nF(v_n) + (1 − ζ_n)Z(v_n) − l‖²
= ζ_n²‖F(v_n) − l‖² + (1 − ζ_n)²‖Z(v_n) − l‖² + 2ζ_n(1 − ζ_n)⟨F(v_n) − l, Z(v_n) − l⟩
≤ ζ_n²‖F(v_n) − l‖² + (1 − ζ_n)²‖v_n − l‖² + 2ζ_n⟨F(v_n) − F(l), Z(v_n) − l⟩ + 2ζ_n⟨F(l) − l, Z(v_n) − l⟩ − 2ζ_n²⟨F(v_n) − l, Z(v_n) − l⟩
≤ ζ_n²‖F(v_n) − l‖² + (1 − ζ_n)²‖v_n − l‖² + 2ζ_nθ‖v_n − l‖² + 2ζ_n⟨F(l) − l, Z(v_n) − l⟩ + 2ζ_n²‖F(v_n) − l‖‖v_n − l‖
≤ [1 − 2ζ_n(1 − θ)]‖v_n − l‖² + ζ_n[ζ_n(‖F(v_n) − l‖ + ‖v_n − l‖)² + 2⟨F(l) − l, Z(v_n) − l⟩].

We also estimate

(26) ⟨r_n − l, τ_n(v_n − v_{n−1})⟩ = ζ_nτ_n⟨F(v_n) − l, v_n − v_{n−1}⟩ + (1 − ζ_n)τ_n⟨Z(v_n) − l, v_n − v_{n−1}⟩
≤ ζ_nτ_n‖F(v_n) − l‖‖v_n − v_{n−1}‖ + (1 − ζ_n)τ_n‖Z(v_n) − l‖‖v_n − v_{n−1}‖
≤ τ_n‖v_n − v_{n−1}‖[ζ_n‖F(v_n) − l‖ + (1 − ζ_n)‖v_n − l‖]
≤ ϕ_n[‖F(v_n) − l‖ + ‖v_n − l‖],

since τ_n‖v_n − v_{n−1}‖ ≤ ϕ_n. By using the above estimates (25) and (26), we obtain

(27) ‖s_n − l‖² = ‖r_n + τ_n(v_n − v_{n−1}) − l‖²
= ‖r_n − l‖² + 2⟨r_n − l, τ_n(v_n − v_{n−1})⟩ + τ_n²‖v_n − v_{n−1}‖²
≤ [1 − 2ζ_n(1 − θ)]‖v_n − l‖² + 2ϕ_n[‖F(v_n) − l‖ + ‖v_n − l‖] + ζ_n[ζ_n(‖F(v_n) − l‖ + ‖v_n − l‖)² + 2⟨F(l) − l, Z(v_n) − l⟩] + ϕ_n²
≤ (1 − e_n)‖v_n − l‖² + e_nd_n,

where e_n = ζ_n(1 − θ) and

d_n = [2(ϕ_n/ζ_n)(‖F(v_n) − l‖ + ‖v_n − l‖) + ζ_n(‖F(v_n) − l‖ + ‖v_n − l‖)² + 2⟨F(l) − l, Z(v_n) − l⟩ + ϕ_n²/ζ_n]/(1 − θ).

Combining (23) and (27), we obtain

(28) ‖v_{n+1} − l‖² ≤ (1 − e_n)‖v_n − l‖² + e_nd_n + ς_n(ς_n − 2)‖(I − J_λ^M)(s_n)‖³/‖(I − J_λ^M)(s_n) + B*(I − J_μ^N)(Bs_n)‖ + ς_n(ς_n − 2)‖(I − J_μ^N)(Bu_n)‖³/‖(I − J_λ^M)(u_n) + B*(I − J_μ^N)(Bu_n)‖.

The remaining proof can be split in two possible cases:

Case I: If {‖v_n − l‖} is not monotonically increasing, then there exists a number N1 such that ‖v_{n+1} − l‖ ≤ ‖v_n − l‖ for all n ≥ N1. Hence, the boundedness of {‖v_n − l‖} implies that {‖v_n − l‖} is convergent. Therefore, using (28), we have

ς_n(2 − ς_n)[ ‖(I − J_λ^M)(s_n)‖³/‖(I − J_λ^M)(s_n) + B*(I − J_μ^N)(Bs_n)‖ + ‖(I − J_μ^N)(Bu_n)‖³/‖(I − J_λ^M)(u_n) + B*(I − J_μ^N)(Bu_n)‖ ] ≤ ‖v_n − l‖² − ‖v_{n+1} − l‖² − e_n‖v_n − l‖² + e_nd_n.

Since ς_n ∈ (0,2) and e_n → 0 as n → ∞, by taking the limit n → ∞, we obtain

lim_{n→∞} ‖(I − J_λ^M)(s_n)‖³/‖(I − J_λ^M)(s_n) + B*(I − J_μ^N)(Bs_n)‖ = 0

and

lim_{n→∞} ‖(I − J_μ^N)(Bu_n)‖³/‖(I − J_λ^M)(u_n) + B*(I − J_μ^N)(Bu_n)‖ = 0,

which implies that

(29) lim_{n→∞} ‖(I − J_λ^M)(s_n)‖ = lim_{n→∞} ‖(I − J_μ^N)(Bu_n)‖ = 0.

From (11) and (13), we infer that

(30) lim_{n→∞} ‖u_n − s_n‖ = 0.

Using (12) and (14), we obtain

(31) lim_{n→∞} ‖v_{n+1} − u_n‖ = 0.

Taking together (30) and (31), we have

(32) ‖v_{n+1} − s_n‖ ≤ ‖v_{n+1} − u_n‖ + ‖u_n − s_n‖ → 0 as n → ∞.

It is not difficult to obtain that

(33) ‖v_{n+1} − v_n‖ → 0 as n → ∞.

By using s_n − v_n = r_n + τ_n(v_n − v_{n−1}) − v_n, together with (32), (33), and Remark 4, we immediately see that

(34) ‖r_n − v_n‖ ≤ τ_n‖v_n − v_{n−1}‖ + ‖v_{n+1} − v_n‖ + ‖v_{n+1} − s_n‖ → 0 as n → ∞.

Hence, we can obtain

(35) ‖s_n − v_n‖ ≤ τ_n‖v_n − v_{n−1}‖ + ‖r_n − v_n‖ → 0 as n → ∞.

From (10), we can easily write that

(36) s_n − Z(v_n) = ζ_n(F(v_n) − l) + ζ_n(Z(l) − Z(v_n)) + τ_n(v_n − v_{n−1}).

Using the boundedness of {vn}, Condition (X3), the nonexpansive property of Z, and Remark 4, we achieve

(37) ‖s_n − Z(v_n)‖ → 0 as n → ∞.

Similarly, we can show that

(38) ‖u_n − Z(v_n)‖ → 0, ‖v_n − Z(v_n)‖ → 0 as n → ∞.

Since {v_n} is bounded, there exists a subsequence {v_{n_k}} converging weakly to l̄; the corresponding subsequences {s_{n_k}} and {u_{n_k}} of {s_n} and {u_n}, respectively, also converge weakly to l̄. It follows from (29) and (38) that

(39) lim_{k→∞} ‖(I − J_λ^M)(s_{n_k})‖ = 0, lim_{k→∞} ‖(I − J_μ^N)(Bu_{n_k})‖ = 0, and lim_{k→∞} ‖v_{n_k} − Z(v_{n_k})‖ = 0.

Keeping in mind (30), (31), and (39), we infer that l̄ ∈ Δ1 ∩ Δ2.

Finally, we prove that the sequence {vn} strongly converges. From (28), we have

(40) ‖v_{n+1} − l‖² ≤ (1 − e_n)‖v_n − l‖² + e_nd_n.

Furthermore,

limsup_{n→∞} d_n
= limsup_{n→∞} [2(ϕ_n/ζ_n)(‖F(v_n) − l‖ + ‖v_n − l‖) + ζ_n(‖F(v_n) − l‖ + ‖v_n − l‖)² + 2⟨F(l) − l, Z(v_n) − l⟩ + ϕ_n²/ζ_n]/(1 − θ)
= limsup_{k→∞} [2(ϕ_{n_k}/ζ_{n_k})(‖F(v_{n_k}) − l‖ + ‖v_{n_k} − l‖) + ζ_{n_k}(‖F(v_{n_k}) − l‖ + ‖v_{n_k} − l‖)² + 2⟨F(l) − l, Z(v_{n_k}) − l⟩ + ϕ_{n_k}²/ζ_{n_k}]/(1 − θ)
= limsup_{k→∞} (2/(1 − θ))[⟨F(l) − l, Z(v_{n_k}) − v_{n_k}⟩ + ⟨F(l) − l, v_{n_k} − l⟩]
= (2/(1 − θ))[0 + ⟨F(l) − l, l̄ − l⟩]
≤ 0,

since l = P_{Δ1∩Δ2}F(l) and l̄ ∈ Δ1 ∩ Δ2.

Now we are in a position to apply Lemma 2 to (40) and conclude that {v_n} converges strongly to l. Hence, the result is proved.

Case II: If {‖v_n − l‖} is monotonically increasing, then the sequence ε : N → N defined for all n ≥ n_0 by ε(n) = max{k ∈ N : k ≤ n, ‖v_k − l‖ ≤ ‖v_{k+1} − l‖} is increasing, ε(n) → ∞ as n → ∞, and

(41) 0 ≤ ‖v_{ε(n)} − l‖ ≤ ‖v_{ε(n)+1} − l‖, ∀ n ≥ n_0.

By using (28), we have

ς_{ε(n)}(2 − ς_{ε(n)})[ ‖(I − J_λ^M)(s_{ε(n)})‖³/‖(I − J_λ^M)(s_{ε(n)}) + B*(I − J_μ^N)(Bs_{ε(n)})‖ + ‖(I − J_μ^N)(Bu_{ε(n)})‖³/‖(I − J_λ^M)(u_{ε(n)}) + B*(I − J_μ^N)(Bu_{ε(n)})‖ ]
≤ ‖v_{ε(n)} − l‖² − ‖v_{ε(n)+1} − l‖² − e_{ε(n)}‖v_{ε(n)} − l‖² + e_{ε(n)}d_{ε(n)}
≤ e_{ε(n)}d_{ε(n)}.

By passing the limit n, we obtain

‖(I − J_λ^M)(s_{ε(n)})‖ → 0, ‖(I − J_μ^N)(Bu_{ε(n)})‖ → 0.

Using the same techniques as in the proof of Case I, we obtain ‖v_{ε(n)+1} − s_{ε(n)}‖ → 0, ‖v_{ε(n)+1} − u_{ε(n)}‖ → 0, ‖v_{ε(n)+1} − v_{ε(n)}‖ → 0, and ‖Z(v_{ε(n)}) − v_{ε(n)}‖ → 0 as n → ∞. From (40) and (41), we obtain

0 ≤ ‖v_{ε(n)} − l‖² ≤ d_{ε(n)}.

Thus, limsup_{n→∞} ‖v_{ε(n)} − l‖ ≤ 0. By passing to the limit n → ∞ and using Lemma 4,

0 ≤ ‖v_n − l‖ ≤ max{‖v_n − l‖, ‖v_{ε(n)} − l‖} ≤ ‖v_{ε(n)+1} − l‖.

It follows that vnl0, that is, vnl as n. This completes the proof. □

Further, we construct the hybrid Algorithm 2, which is a slightly modified version of hybrid Algorithm 1. In hybrid Algorithm 2, the initial step iterates the viscosity approximation, which is the convex combination of F(vn) and Z(vn)+τn(vnvn1), where the inertial extrapolation term τn(vnvn1) is added to accelerate the convergence.

Algorithm 2. Hybrid Algorithm 2
Choose λ > 0, μ > 0, τ ∈ [0,1), and 0 < ς_n < 2. Select initial points v0 and v1 and fix n = 0.
Iterative Step: For n ≥ 1, iterate v_n, v_{n−1} and select 0 < τ_n ≤ τ̄_n, where

(42) τ̄_n = min{ϕ_n/‖v_n − v_{n−1}‖, τ} if v_n ≠ v_{n−1}; τ̄_n = τ otherwise.

Compute

(43) s_n = ζ_n F(v_n) + (1 − ζ_n)[Z(v_n) + τ_n(v_n − v_{n−1})],

(44) u_n = s_n − λ_n(I − J_λ^M)(s_n),

(45) v_{n+1} = u_n − μ_n B*(I − J_μ^N)(Bu_n),

where λ_n and μ_n are defined by

(46) λ_n = ς_n‖(I − J_λ^M)s_n‖ / ‖(I − J_λ^M)s_n + B*(I − J_μ^N)Bs_n‖ if (I − J_λ^M)s_n + B*(I − J_μ^N)Bs_n ≠ 0; λ_n = 0 otherwise,

and

(47) μ_n = ς_n‖(I − J_μ^N)Bu_n‖ / ‖(I − J_λ^M)u_n + B*(I − J_μ^N)Bu_n‖ if (I − J_λ^M)u_n + B*(I − J_μ^N)Bu_n ≠ 0; μ_n = 0 otherwise.

If v_{n+1} = u_n = v_n = s_n, then stop; otherwise, set n = n + 1 and go back to the Iterative Step.

The following is the convergence analysis of hybrid Algorithm 2, which is similar to that of the proof of Theorem 1.

Theorem 2.

If assumptions (X1)–(X5) hold, then the sequence {v_n} generated by Algorithm 2 converges strongly to v*, where v* = P_{Δ1∩Δ2}F(v*).

Proof. 

Take l ∈ Δ1 ∩ Δ2; then, from (43) and using Remark 4, we see that

‖s_n − l‖ = ‖ζ_nF(v_n) + (1 − ζ_n)[Z(v_n) + τ_n(v_n − v_{n−1})] − l‖
≤ ζ_n‖F(v_n) − l‖ + (1 − ζ_n)[‖Z(v_n) − l‖ + τ_n‖v_n − v_{n−1}‖]
≤ ζ_n‖F(v_n) − F(l)‖ + ζ_n‖F(l) − l‖ + (1 − ζ_n)‖Z(v_n) − l‖ + τ_n‖v_n − v_{n−1}‖
≤ ζ_nθ‖v_n − l‖ + ζ_n‖F(l) − l‖ + (1 − ζ_n)‖v_n − l‖ + ζ_nL1
= [1 − ζ_n(1 − θ)]‖v_n − l‖ + ζ_n(1 − θ)(‖F(l) − l‖ + L1)/(1 − θ)
≤ max{‖v_n − l‖, (‖F(l) − l‖ + L1)/(1 − θ)}.

Keeping in mind (24) and using mathematical induction, we obtain that the sequence {s_n} is bounded, and so are {v_n} and {u_n}. Denote y_n = Z(v_n) + τ_n(v_n − v_{n−1}); then, by using (8) and denoting t_n = τ_n‖v_n − v_{n−1}‖ ≤ ϕ_n, we establish that

(48) ‖y_n − l‖² = ‖Z(v_n) + τ_n(v_n − v_{n−1}) − l‖²
≤ ‖Z(v_n) − l‖² + 2τ_n⟨v_n − v_{n−1}, y_n − l⟩
≤ ‖v_n − l‖² + 2t_n‖y_n − l‖,

and

(49) ⟨F(v_n) − l, y_n − l⟩ = ⟨F(v_n) − F(l), y_n − l⟩ + ⟨F(l) − l, y_n − l⟩
≤ ‖F(v_n) − F(l)‖‖y_n − l‖ + ⟨F(l) − l, y_n − l⟩
≤ θ‖v_n − l‖‖y_n − l‖ + ⟨F(l) − l, y_n − l⟩
≤ (1/2)[θ²‖v_n − l‖² + ‖y_n − l‖²] + ⟨F(l) − l, y_n − l⟩.

Using (48) and (49), we obtain

(50) ‖s_n − l‖² = ‖ζ_nF(v_n) + (1 − ζ_n)y_n − l‖²
= ζ_n²‖F(v_n) − l‖² + (1 − ζ_n)²‖y_n − l‖² + 2ζ_n(1 − ζ_n)⟨F(v_n) − l, y_n − l⟩
≤ ζ_n²‖F(v_n) − l‖² + ζ_n(1 − ζ_n)[θ²‖v_n − l‖² + ‖y_n − l‖² + 2⟨F(l) − l, y_n − l⟩] + (1 − ζ_n)²‖y_n − l‖²
≤ ζ_n²‖F(v_n) − l‖² + ζ_nθ²‖v_n − l‖² + [(1 − ζ_n)² + ζ_n(1 − ζ_n)]‖y_n − l‖² + 2ζ_n(1 − ζ_n)⟨F(l) − l, y_n − l⟩
≤ ζ_n²‖F(v_n) − l‖² + ζ_nθ²‖v_n − l‖² + (1 − ζ_n)[‖v_n − l‖² + 2t_n‖y_n − l‖] + 2ζ_n(1 − ζ_n)⟨F(l) − l, y_n − l⟩
≤ [1 − ζ_n(1 − θ²)]‖v_n − l‖² + ζ_n{2(1 − ζ_n)⟨F(l) − l, y_n − l⟩ + ζ_n‖F(v_n) − l‖² + 2(t_n/ζ_n)‖y_n − l‖}.

Combining (23) and (50), we obtain

‖v_{n+1} − l‖² ≤ (1 − c_n)‖v_n − l‖² + c_ng_n + ς_n(ς_n − 2)‖(I − J_λ^M)(s_n)‖³/‖(I − J_λ^M)(s_n) + B*(I − J_μ^N)(Bs_n)‖ + ς_n(ς_n − 2)‖(I − J_μ^N)(Bu_n)‖³/‖(I − J_λ^M)(u_n) + B*(I − J_μ^N)(Bu_n)‖,

where c_n = ζ_n(1 − θ²) and g_n = [2(1 − ζ_n)⟨F(l) − l, y_n − l⟩ + ζ_n‖F(v_n) − l‖² + 2(ϕ_n/ζ_n)‖y_n − l‖]/(1 − θ²).

Considering Case I of Theorem 1, we can easily obtain

(51) lim_{n→∞} ‖(I − J_λ^M)(s_n)‖ = lim_{n→∞} ‖(I − J_μ^N)(Bu_n)‖ = 0,

and

(52) lim_{n→∞} ‖u_n − s_n‖ = 0, lim_{n→∞} ‖v_{n+1} − u_n‖ = 0,

and, hence,

(53) lim_{n→∞} ‖v_{n+1} − s_n‖ = 0, lim_{n→∞} ‖v_{n+1} − v_n‖ = 0.

Next, we show that ‖Z(v_n) − v_n‖ → 0 as n → ∞. Since y_n = Z(v_n) + τ_n(v_n − v_{n−1}) and τ_n‖v_n − v_{n−1}‖ ≤ ϕ_n → 0, we have

lim_{n→∞} ‖y_n − Z(v_n)‖ = 0,

and

s_n − Z(v_n) = ζ_nF(v_n) + (1 − ζ_n)y_n − Z(v_n) = ζ_n(F(v_n) − l) + ζ_n(l − y_n) + (y_n − Z(v_n)).

The assumption on ζn and the boundedness of vn imply that

‖s_n − Z(v_n)‖ ≤ ζ_n‖F(v_n) − l‖ + ζ_n‖l − y_n‖ + ‖y_n − Z(v_n)‖ → 0 as n → ∞.

Also,

(54) ‖Z(v_n) − u_n‖ ≤ ‖Z(v_n) − s_n‖ + ‖u_n − s_n‖ → 0 as n → ∞.

Together with (52)–(54), by taking n → ∞ we obtain

(55) ‖Z(v_n) − v_n‖ ≤ ‖Z(v_n) − u_n‖ + ‖u_n − s_n‖ + ‖v_{n+1} − s_n‖ + ‖v_{n+1} − v_n‖ → 0.

The boundedness of {v_n}, {u_n}, and {s_n} implies the existence of subsequences {v_{n_k}}, {u_{n_k}}, and {s_{n_k}} which converge weakly to some point v̂; from (51)–(53) and (55), we conclude that v̂ ∈ Δ1 ∩ Δ2. The remaining proof can be obtained easily by using steps similar to those in the proof of Theorem 1. □

Let q ∈ U be arbitrary. Then, by replacing F(v) with q in hybrid Algorithm 1 and hybrid Algorithm 2, we define the following Halpern-type iterative methods, which can be seen as particular cases of our hybrid methods:

Corollary 1.

If assumptions (X2)–(X5) hold, then the sequence {v_n} induced by Algorithm 3 converges strongly to q* = P_{Δ1∩Δ2}(q).

Algorithm 3. A Particular Case of Hybrid Algorithm 1
Choose λ > 0, μ > 0, τ ∈ [0,1), and 0 < ς_n < 2. Select initial points v0 and v1, any q ∈ U, and fix n = 0.
Iterative Step: For n ≥ 1, iterate v_n, v_{n−1}, and select 0 < τ_n ≤ τ̄_n, where

τ̄_n = min{ϕ_n/‖v_n − v_{n−1}‖, τ} if v_n ≠ v_{n−1}; τ̄_n = τ otherwise.

Compute

s_n = ζ_n q + (1 − ζ_n)Z(v_n) + τ_n(v_n − v_{n−1}),
u_n = s_n − λ_n(I − J_λ^M)(s_n),
v_{n+1} = u_n − μ_n B*(I − J_μ^N)(Bu_n),

where λ_n and μ_n are defined by

λ_n = ς_n‖(I − J_λ^M)s_n‖ / ‖(I − J_λ^M)s_n + B*(I − J_μ^N)Bs_n‖ if (I − J_λ^M)s_n + B*(I − J_μ^N)Bs_n ≠ 0; λ_n = 0 otherwise,

and

μ_n = ς_n‖(I − J_μ^N)Bu_n‖ / ‖(I − J_λ^M)u_n + B*(I − J_μ^N)Bu_n‖ if (I − J_λ^M)u_n + B*(I − J_μ^N)Bu_n ≠ 0; μ_n = 0 otherwise.

If v_{n+1} = u_n = v_n = s_n, then stop; otherwise, set n = n + 1 and go back to the Iterative Step.

Proof. 

Replacing F(v) with q in Algorithm 1 as well as in the proof of Theorem 1, we obtain the required result. □

Corollary 2.

If assumptions (X2)–(X5) hold, then the sequence {v_n} induced by Algorithm 4 converges strongly to q* = P_{Δ1∩Δ2}(q).

Algorithm 4. A Particular Case of Hybrid Algorithm 2
Let λ > 0, μ > 0, τ ∈ [0,1), and 0 < ς_n < 2 be given. Select initial points v0 and v1, any q ∈ U, and fix n = 0.
Iterative Step: For n ≥ 1, iterate v_n, v_{n−1}, and select 0 < τ_n ≤ τ̄_n, where

τ̄_n = min{ϕ_n/‖v_n − v_{n−1}‖, τ} if v_n ≠ v_{n−1}; τ̄_n = τ otherwise.

Compute

s_n = ζ_n q + (1 − ζ_n)[Z(v_n) + τ_n(v_n − v_{n−1})],
u_n = s_n − λ_n(I − J_λ^M)(s_n),
v_{n+1} = u_n − μ_n B*(I − J_μ^N)(Bu_n),

where λ_n and μ_n are defined by

λ_n = ς_n‖(I − J_λ^M)s_n‖ / ‖(I − J_λ^M)s_n + B*(I − J_μ^N)Bs_n‖ if (I − J_λ^M)s_n + B*(I − J_μ^N)Bs_n ≠ 0; λ_n = 0 otherwise,

and

μ_n = ς_n‖(I − J_μ^N)Bu_n‖ / ‖(I − J_λ^M)u_n + B*(I − J_μ^N)Bu_n‖ if (I − J_λ^M)u_n + B*(I − J_μ^N)Bu_n ≠ 0; μ_n = 0 otherwise.

If v_{n+1} = u_n = v_n = s_n, then stop; otherwise, set n = n + 1 and go back to the Iterative Step.

Proof. 

By replacing F(v) with q in Algorithm 2 as well as in the proof of Theorem 2, we obtain the desired result. □

4. Some Advantages

Some applications of the suggested methods for solving split variational inequality and split common fixed-point problems are discussed below.

4.1. Split Variational Inequality Problem

Let D be a nonempty, closed, and convex subset of U, let P_D be the metric projection onto D, let G1 : U → U and G2 : U → U be monotone operators, and let B : U → U be a bounded linear operator. Then the split variational inequality problem (SplitVItP) is defined by

find v ∈ D so that ⟨G1(v), w − v⟩ ≥ 0, ∀ w ∈ D, and such that u = Bv ∈ D solves ⟨G2(u), w − u⟩ ≥ 0, ∀ w ∈ D.

Then, by replacing J_λ^M = J_μ^N = P_D in Algorithms 1–4, we can obtain the hybrid algorithms and their convergence results for (SplitVItP) and (FPP).
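Concretely, when D is a simple set the projection P_D is available in closed form, so the resolvent evaluations in the algorithms become cheap. The box example below is our own illustration:

```python
import numpy as np

def proj_box(v, lo, hi):
    """Metric projection P_D onto the box D = [lo, hi]^n; it is firmly
    nonexpansive and can play the role of both resolvents J_lam^M and
    J_mu^N in the split variational inequality specialization."""
    return np.clip(v, lo, hi)

v = np.array([2.5, -3.0, 0.2])
p = proj_box(v, -1.0, 1.0)
print(p.tolist())  # [1.0, -1.0, 0.2]
```

With this substitution, the step-size formulas (13) and (14) read off the projection residuals v − P_D(v) instead of resolvent residuals, and no other change to the algorithms is needed.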

4.2. Split Common Fixed-Point Problem

Let T1:UU and T2:UU be self-nonexpansive mappings and B:UU be a bounded linear operator; then, the split common fixed-point problem (SplitCFPP) is defined as follows:

find v ∈ U so that v ∈ Fix(T1), and u = Bv ∈ U solves u ∈ Fix(T2).

Then, by replacing J_λ^M = T1, J_μ^N = T2, and Z = I (the identity mapping) in Algorithms 1–4, we can obtain the hybrid algorithms and their convergence results for (SplitCFPP).

Next, we present numerical examples in finite and infinite dimensional Hilbert spaces, showing the efficiency of our hybrid methods and their comparison with the work studied in [10,11,12,18].

5. Numerical Examples

Example 1

(Finite dimensional). Let U = R⁴, equipped with the inner product ⟨t, q⟩ = t1q1 + t2q2 + t3q3 + t4q4 for t = (t1,t2,t3,t4) and q = (q1,q2,q3,q4), and the norm ‖q‖² = |q1|² + |q2|² + |q3|² + |q4|². The operators M, N, and B are defined by

M(q1,q2,q3,q4) = (q1, 2q2, 3q3, 4q4), N(q1,q2,q3,q4) = (q1, 3q2, q3, 3q4), B(q1,q2,q3,q4) = (q1, q2, q3, q4),

such that M is (1/4)-inverse strongly monotone and N is (1/3)-inverse strongly monotone (hence both are monotone); B is a bounded linear operator. The nonexpansive mapping Z is defined by Z(q1,q2,q3,q4) = (q1, 0, q3, 0), and F(q1,q2,q3,q4) = (q1/2, q2/2, q3/2, q4/2) is a θ-contraction with θ = 1/2.
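The stated inverse-strong-monotonicity constant can be checked numerically. The sketch below (our own sanity check, using the operator definitions as we read them from the garbled source) samples random pairs and verifies ⟨M(x) − M(y), x − y⟩ ≥ κ‖M(x) − M(y)‖² with κ = 1/4:

```python
import numpy as np

M = lambda q: np.array([1.0, 2.0, 3.0, 4.0]) * q  # M(q) = (q1, 2q2, 3q3, 4q4)
kappa = 0.25   # claimed ism constant: 1 / (largest coefficient of M)

rng = np.random.default_rng(0)
ok = True
for _ in range(1000):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    lhs = float(np.dot(M(x) - M(y), x - y))
    rhs = kappa * float(np.linalg.norm(M(x) - M(y)) ** 2)
    ok = ok and (lhs >= rhs - 1e-12)
print(ok)  # True
```

The check succeeds because each coordinate contributes c_i(1 − c_i/4)·(x_i − y_i)² ≥ 0 to the difference, with equality only in the fourth coordinate, which is why κ = 1/4 is the tight constant.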

To run our algorithms, we select ϕ_n = 1/(n² + 100), ς_n = 2 − 1/(n + 1), and τ = 0.75, and τ_n is selected randomly from (0, τ̄_n), where

τ̄_n = min{1/((n² + 100)‖v_n − v_{n−1}‖), 0.75} if v_n ≠ v_{n−1}; τ̄_n = 0.75 otherwise.

We compare our algorithms with method (2), method (3), method (4), and method (5) by using the following common parameters: λ = μ = μ1 = μ2 = 0.25 and ζ_n = 1/(n + 100) for all the methods; η = 0.5 for method (2), method (3), and method (4); u = (0,0,0,1) and η_n given by (6) are used in method (5). The stopping condition is ‖v_{n+1} − v_n‖ < 10⁻²⁰, and we consider two cases of initial values:

Case (a): v0=(130,10,54,95),v1=(14,30,105,100)

Case (b): v0=(0,100,100,0),v1=(1/2,0,1/100,5)

It can be seen that our algorithms are efficient and effective and can be implemented easily without calculating ‖B‖. The convergence of {v_n} to {0} = Δ1 ∩ Δ2 is shown in Figure 1 and Figure 2 using different initial values. It is found that our algorithms approach the solution in fewer steps than method (2) of [10], method (3) of [11], method (4) of [12], and method (5) of [18].

Example 2. (Infinite dimensional) Let U = l² := {t = (t1, t2, t3, …, t_n, …), t_n ∈ R : Σ_{n=1}^∞ |t_n|² < ∞}, the space of all square-summable sequences, with inner product ⟨t, q⟩ = Σ_{n=1}^∞ t_nq_n and norm ‖t‖ = (Σ_{n=1}^∞ |t_n|²)^{1/2}. The mappings M, N, Z, F, and B are defined by

M(t) := t/5, N(t) := t/3, Z(t) := t, F(t) := t/2, B(t) := t, ∀ t ∈ l².

Clearly, M and N are monotone, Z is nonexpansive, F is a contraction, and B is a bounded linear operator. We choose ς_n = 2 − 2/(n + 10), ϕ_n = 1/(n² + 1), and τ = 0.85, and τ_n is selected randomly from (0, τ̄_n), where

τ̄_n = min{1/((n² + 100)‖v_n − v_{n−1}‖), 0.85} if v_n ≠ v_{n−1}; τ̄_n = 0.85 otherwise.

The common parameters for our algorithms and method (2), method (3), method (4), and method (5) are as follows: λ = μ = μ1 = μ2 = 1/3 and ζ_n = 1/(n + 1) for all the methods; η = 0.5 for method (2), method (3), and method (4); u = (0,1,0,0,0,…) and η_n given by (6) are used in method (5). We plot the convergence of the sequences induced by Algorithms 1–4. The stopping condition is ‖v_{n+1} − v_n‖ < 10⁻²⁵ for the following two initial values:

Case (a′): v0 = {1/3ⁿ}_{n=1}^∞, v1 = {1/n}_{n=1}^∞;

Case (b′): v0 = {1/n²}_{n=1}^∞, v1 = {(−1)ⁿ/n²}_{n=1}^∞.

Our algorithms are effective and efficient in the sense that they are implemented easily without calculating ‖B‖. It can be seen in Figure 3 and Figure 4 that the sequences obtained from our methods estimate the solution in fewer steps compared to method (2) of [10], method (3) of [11], method (4) of [12], and method (5) of [18].

Remark 5.

We do not present any result on the rate of convergence of the proposed methods. In the future, it will be interesting to study and compare the convergence rate of our proposed methods and other techniques.

6. Conclusions

We present two hybrid inertial self-adaptive iterative methods for estimating the common solution of (FPP) and (SplitVIP). Two strong convergence theorems are established, and some special cases of the proposed methods are noted. We also implement our hybrid methods to explore the solution of split variational inequality problems and split common fixed-point problems. Our algorithms are simple and different in the sense that they compute the viscosity approximation, fixed-point iteration, and inertial extrapolation in the initial step of each iteration. Our methods are also efficient: they involve two self-adaptive step sizes and do not require a pre-estimated norm of the bounded linear operator in the iteration process. The effectiveness and efficiency of the proposed methods are illustrated by the numerical Examples 1 and 2, where the presented methods are observed to be effective and easily implemented. The iterative sequences obtained by our methods estimate the common solution of (SplitVIP) and (FPP) in fewer steps than method (2) of [10], method (3) of [11], method (4) of [12], and method (5) of [18].

Author Contributions

Conceptualization, D.F.; methodology, M.D.; validation, M.A.; formal analysis, A.F.Y.A.; investigation, A.F.Y.A.; writing, original draft preparation, M.D. and M.A.; review and editing, M.A. and M.D.; funding acquisition, D.F. and M.D. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

All authors would like to offer thanks to the journal editor and reviewers for their fruitful suggestions and comments, which enhanced the overall quality of the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

Footnotes

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Figures

Figure 1 The comparison of our proposed methods with the other methods studied in [10,11,12,18] for Case (a) of Example 1.


Figure 2 The comparison of our proposed methods with the other methods studied in [10,11,12,18] for Case (b) of Example 1.


Figure 3 The comparison of our proposed methods with the other methods studied in [10,11,12,18] for Case (a′) of Example 2.


Figure 4 The comparison of our proposed methods with the other methods studied in [10,11,12,18] for Case (b′) of Example 2.


References

1. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc.; 1953; 4, pp. 506-510. [DOI: https://dx.doi.org/10.1090/S0002-9939-1953-0054846-3]

2. Moudafi, A. Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl.; 2000; 241, pp. 46-55. [DOI: https://dx.doi.org/10.1006/jmaa.1999.6615]

3. Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl.; 2005; 21, pp. 2071-2084. [DOI: https://dx.doi.org/10.1088/0266-5611/21/6/017]

4. Censor, Y.; Gibali, A.; Reich, S. Algorithms for the split variational inequality problem. Numer. Algorithms; 2012; 59, pp. 301-323. [DOI: https://dx.doi.org/10.1007/s11075-011-9490-5]

5. Censor, Y.; Bortfeld, T.; Martin, B.; Trofimov, A. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol.; 2006; 51, pp. 2353-2365. [DOI: https://dx.doi.org/10.1088/0031-9155/51/10/001]

6. Cao, Y.; Wang, Y.; Rehman, H.; Shehu, Y.; Yao, J.C. Convergence analysis of a new forward-reflected-backward algorithm for four operators without cocoercivity. J. Optim. Theory Appl.; 2024; 203, pp. 256-284. [DOI: https://dx.doi.org/10.1007/s10957-024-02501-7]

7. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl.; 2002; 18, pp. 441-453. [DOI: https://dx.doi.org/10.1088/0266-5611/18/2/310]

8. Combettes, P.L. The convex feasibility problem in image recovery. Adv. Imaging Electron Phys.; 1996; 95, pp. 155-270.

9. Moudafi, A. Split monotone variational inclusions. J. Optim. Theory Appl.; 2011; 150, pp. 275-283. [DOI: https://dx.doi.org/10.1007/s10957-011-9814-6]

10. Byrne, C.; Censor, Y.; Gibali, A.; Reich, S. Weak and strong convergence of algorithms for split common null point problem. J. Nonlinear Convex Anal.; 2012; 13, pp. 759-775.

11. Kazmi, K.R.; Rizvi, S.H. An iterative method for split variational inclusion problem and fixed point problem for a nonexpansive mapping. Optim. Lett.; 2014; 8, pp. 1113-1124. [DOI: https://dx.doi.org/10.1007/s11590-013-0629-2]

12. Akram, M.; Dilshad, M.; Rajpoot, B.F.; Ahmad, R.; Yao, J.-C. Modified iterative schemes for a fixed point problem and a split variational inclusion problem. Mathematics; 2022; 10, 2098. [DOI: https://dx.doi.org/10.3390/math10122098]

13. Abass, H.A.; Ugwunnadi, G.C.; Narain, O.K. A Modified inertial Halpern method for solving split monotone variational inclusion problems in Banach spaces. Rend. Del Circ. Mat. Palermo Ser. 2; 2023; 72, pp. 2287-2310. [DOI: https://dx.doi.org/10.1007/s12215-022-00795-y]

14. Dilshad, M.; Aljohani, A.F.; Akram, M. Iterative scheme for split variational inclusion and a fixed-point problem of a finite collection of nonexpansive mappings. J. Funct. Spaces; 2020; 2020, 3567648. [DOI: https://dx.doi.org/10.1155/2020/3567648]

15. Deepho, J.; Thounthong, P.; Kumam, P.; Phiangsungnoen, S. A new general iterative scheme for split variational inclusion and fixed point problems of k-strict pseudo-contraction mappings with convergence analysis. J. Comput. Appl. Math.; 2017; 318, pp. 293-306. [DOI: https://dx.doi.org/10.1016/j.cam.2016.09.009]

16. Sitthithakerngkiet, K.; Deepho, J.; Kumam, P. A hybrid viscosity algorithm via modify the hybrid steepest descent method for solving the split variational inclusion in image reconstruction and fixed point problems. Appl. Math. Comput.; 2015; 250, pp. 986-1001. [DOI: https://dx.doi.org/10.1016/j.amc.2014.10.130]

17. López, G.; Martín-Márquez, V.; Wang, F.; Xu, H.K. Solving the split feasibility problem without knowledge of matrix norm. Inverse Probl.; 2012; 28, 085004. [DOI: https://dx.doi.org/10.1088/0266-5611/28/8/085004]

18. Dilshad, M.; Akram, M.; Ahmad, I. Algorithms for split common null point problem without pre-existing estimation of operator norm. J. Math. Inequal.; 2020; 14, pp. 1151-1163. [DOI: https://dx.doi.org/10.7153/jmi-2020-14-75]

19. Ezeora, J.N.; Enyi, C.D.; Nwawuru, F.O.; Richard, C.O. An algorithm for split equilibrium and fixed-point problems using inertial extragradient techniques. Comp. Appl. Math.; 2023; 42, 103. [DOI: https://dx.doi.org/10.1007/s40314-023-02244-7]

20. Tang, Y. New algorithms for split common null point problems. Optimization; 2020; 70, pp. 1141-1160. [DOI: https://dx.doi.org/10.1080/02331934.2020.1782908]

21. Tang, Y.; Gibali, A. New self-adaptive step size algorithms for solving split variational inclusion problems and its applications. Numer. Algorithms; 2019; 83, pp. 305-331. [DOI: https://dx.doi.org/10.1007/s11075-019-00683-0]

22. Tang, Y.; Zhang, Y.; Gibali, A. New self-adaptive inertial-like proximal point methods for the split common null point problem. Symmetry; 2021; 13, 2316. [DOI: https://dx.doi.org/10.3390/sym13122316]

23. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal.; 2001; 9, pp. 3-11. [DOI: https://dx.doi.org/10.1023/A:1011253113155]

24. Alamer, A.; Dilshad, M. Halpern-type inertial iteration methods with self-adaptive step size for split common null point problem. Mathematics; 2024; 12, 747. [DOI: https://dx.doi.org/10.3390/math12050747]

25. Filali, D.; Dilshad, M.; Alyasi, L.S.M.; Akram, M. Inertial Iterative Algorithms for Split Variational Inclusion and Fixed Point Problems. Axioms; 2023; 12, 848. [DOI: https://dx.doi.org/10.3390/axioms12090848]

26. Nwawuru, F.O.; Narain, O.K.; Dilshad, M.; Ezeora, J.N. Splitting method involving two-step inertial for solving inclusion and fixed point problems with applications. Fixed Point Theory Algorithms Sci. Eng.; 2025; 2025, 8. [DOI: https://dx.doi.org/10.1186/s13663-025-00781-w]

27. Reich, S.; Taiwo, A. Fast hybrid iterative schemes for solving variational inclusion problems. Math. Methods Appl. Sci.; 2023; 46, pp. 17177-17198. [DOI: https://dx.doi.org/10.1002/mma.9494]

28. Ugwunnadi, G.C.; Abass, H.A.; Aphane, M.; Oyewole, O.K. Inertial Halpern-type method for solving split feasibility and fixed point problems via dynamical stepsize in real Banach spaces. Ann. Univ. Ferrara; 2024; 70, pp. 307-330. [DOI: https://dx.doi.org/10.1007/s11565-023-00473-6]

29. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Space; Springer: Berlin/Heidelberg, Germany, 2011.

30. Goebel, K.; Kirk, W.A. Topics in Metric Fixed Point Theory; Cambridge University Press: Cambridge, UK, 1990.

31. Xu, H.K. Another control condition in an iterative method for nonexpansive mappings. Bull. Aust. Math. Soc.; 2002; 65, pp. 109-113. [DOI: https://dx.doi.org/10.1017/S0004972700020116]

32. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc.; 1967; 73, pp. 591-597. [DOI: https://dx.doi.org/10.1090/S0002-9904-1967-11761-0]

33. Maingé, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal.; 2008; 16, pp. 899-912. [DOI: https://dx.doi.org/10.1007/s11228-008-0102-z]

© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).