Herein, we present two hybrid inertial self-adaptive iterative methods for determining the combined solution of the split variational inclusions and fixed-point problems. Our methods include viscosity approximation, fixed-point iteration, and inertial extrapolation in the initial step of each iteration. We employ two self-adaptive step sizes to compute the iterative sequence, which do not require the pre-calculated norm of a bounded linear operator. We prove strong convergence theorems to approximate the common solution of the split variational inclusions and fixed-point problems. Further, we implement our methods and results to examine split variational inequality and split common fixed-point problems. Finally, we illustrate our methods and compare them with some known methods existing in the literature.
1. Introduction
Fixed-point theory provides a coherent and logical framework for a wide range of nonlinear interdisciplinary problems, including differential equations, control theory, game theory, variational inequalities, equilibrium problems, optimization problems, split feasibility problems, etc. Over the last few years, fixed-point theory has become an active research area, leading to the design and development of efficient, flexible, and easily implementable methods for approximating the solutions of nonlinear and inverse problems. The fixed-point problem () of a self-mapping is defined by
(1)
Numerous methods have been used to address fixed-point problems. Among them, the majority of methods used to approximate fixed points are motivated by Mann’s iterative method [1]. In order to obtain a fast convergence rate, Moudafi [2] introduced the viscosity approximation technique by blending Z with a contraction mapping. The first split problem, namely, the split feasibility problem, was initially presented by Censor and Elfving [3]. The most recent inverse problem is the split inverse problem studied by Censor et al. [4]. Because of their relevance to mathematical models of real-life problems appearing in cancer therapy [3,5], image restoration [6], computerized tomography, and data compression [7,8], several inverse problems and methods for solving them have been developed and studied in the last few years. Moudafi [9] explored the split monotone variational inclusion problems (SplitMVIP) in the framework of Hilbert spaces. Byrne et al. [10] introduced the split common null-point problem (SplitCNPP). A special case of (SplitCNPP) is the split variational inclusion problem (SplitVIP), which is defined by
where , are monotone operators, is a bounded linear operator, and is a Hilbert space. By using the fact that the zero of the monotone operator M is the fixed point of resolvent of M, that is, , Byrne et al. [10] suggested the following method for (SplitVIP):(2)
where is the adjoint of B, , , and . Based on this iterative method (2), numerous iterative methods have been developed and studied to solve (SplitVIP). Kazmi and Rizvi [11] extended method (2) to investigate the common solution of (SplitVIP) and () as follows:(3)
where F is a contraction; , is a real sequence such that , , and . Akram et al. [12] modified the method (3) in the following manner to study the same problem:(4)
where and . Some other iterative methods for solving (SplitVIP) and () can also be seen in [13,14,15,16] and references therein.

A common disadvantage of these methods is that the step size depends on the calculation of , which is a challenging task. To address this challenge, researchers developed iterative methods that eliminate the estimation of . Lopez et al. [17] investigated split feasibility problems without knowing the norm of the matrix. Dilshad et al. [18] studied the split common null-point problem without a pre-existing estimation of the operator norm as follows:
(5)
for some fixed and(6)
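To make the structure of such schemes concrete, the following minimal numerical sketch iterates a resolvent-based method of the form (2), but with a self-adaptive step size in the spirit of (5) and (6), so that the norm of B is never computed. Every concrete choice below (the operators M(x) = x and N(y) = y, the matrix B, and the step-size formula) is an illustrative assumption, not taken from the cited works.

```python
import numpy as np

# Illustrative monotone operators M(x) = x and N(y) = y, whose resolvents
# (I + lam*M)^{-1} reduce to simple scalings.
lam = 1.0
J_M = lambda x: x / (1.0 + lam)
J_N = lambda y: y / (1.0 + lam)
B = np.array([[1.0, 2.0], [3.0, 4.0]])   # bounded linear operator

x = np.array([5.0, -3.0])
for _ in range(300):
    r = B @ x - J_N(B @ x)               # residual (I - J_N)Bx
    g = B.T @ r                          # gradient-like direction B^T r
    if np.dot(g, g) == 0:
        break                            # Bx already solves the second inclusion
    tau = np.dot(r, r) / (2 * np.dot(g, g))  # self-adaptive step size: no ||B||
    x = J_M(x - tau * g)

print(np.linalg.norm(x))  # tends to the common solution x* = 0
```

For these linear choices the unique common solution is the origin, and the iterate norm decreases geometrically even though the step size is computed purely from the current residual.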
Several research papers have followed this direction; see [19,20,21,22] and references therein.

To obtain fast convergence of iterative algorithms, Alvarez and Attouch [23] introduced a new algorithm, named the inertial proximal point algorithm, for estimating the solution of variational inclusions. It is observed that the sequence derived from the inertial proximal point method converges rapidly because of its design. As a result, numerous researchers have applied the inertial term, since it plays a crucial role in accelerating convergence; see [24,25,26,27,28] and references therein.
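The accelerating effect of the inertial term can be seen already in a one-dimensional proximal point iteration. The operator M(x) = x, the inertial parameter θ = 0.3, and the iteration count below are illustrative assumptions, not parameters from [23].

```python
# Proximal point iteration for M(x) = x, whose resolvent with lam = 1 is
# J(x) = x/2, run with and without the inertial extrapolation
# w_n = x_n + theta*(x_n - x_{n-1}).
def proximal_point(theta, steps=25):
    x_prev = x = 10.0
    for _ in range(steps):
        w = x + theta * (x - x_prev)   # inertial step (theta = 0: plain method)
        x_prev, x = x, w / 2.0         # apply the resolvent J(x) = x/2
    return abs(x)                      # distance to the unique zero x* = 0

plain = proximal_point(theta=0.0)      # plain proximal point method
inertial = proximal_point(theta=0.3)   # inertial proximal point method
print(inertial < plain)                # True: inertia reaches the zero faster
```

After the same number of steps, the inertial iterate is several orders of magnitude closer to the zero of M, which is the behavior the inertial term is designed to produce.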
Continuing the above line of study, our aim is to present two hybrid inertial self-adaptive iterative methods to estimate the common solution of (SplitVIP) and (), which can be summarized as follows. Our motive is to introduce fast viscosity methods that differ from the traditional ones for estimating the common solution of (SplitVIP) and (). Unlike method (3) and method (4) [or method (5)], our hybrid algorithms compute the viscosity approximation and fixed-point iteration [or Halpern-type iteration] in the initial step of each iteration. To accelerate the convergence, we also add the inertial term in the initial step of the iteration. Therefore, in the first step, we compute the inertial extrapolation, fixed-point iteration, and viscosity approximation all at the same time. In method (3) and method (4), the pre-calculated norm of B is essential, which is a tedious task. In contrast, we use two self-adaptive step sizes, which do not require the pre-calculated norm of the bounded linear operator B. Our methods are efficient, accelerated versions of method (2) of [10], method (3) of [11], method (4) of [12], and method (5) of [18], as demonstrated by numerical examples.
2. Preliminaries
Throughout the text, we denote a real Hilbert space by and by a closed and convex subset of . We denote the strong and weak convergence of the sequence to v by and , respectively.
For all in , such that , the following equality and inequality hold:
(7)
and(8)
A mapping is said to be: averaged if there exists a nonexpansive mapping and such that ; Lipschitz continuous if there exists such that ; a contraction if , for some ; nonexpansive if ; firmly nonexpansive if ; κ-inverse strongly monotone (κ-ism) if there exists such that ; monotone if
([29]). Let be a set-valued mapping. Then, N is called monotone if ; ; N is said to be maximal monotone if N is monotone and , for , where I is an identity mapping on ; The resolvent of N is defined by , where I is an identity mapping and .
Remark 1. It can be easily seen that a κ-inverse strongly monotone mapping is also monotone and -Lipschitz continuous. Every averaged mapping is nonexpansive, but the converse need not be true in general. The operator Z is firmly nonexpansive if and only if is firmly nonexpansive. The composition of two averaged operators is also averaged.
Remark 2. The resolvent of the maximal monotone mapping M is single-valued, nonexpansive, as well as firmly nonexpansive for any . The resolvent is firmly nonexpansive if and only if
The operator is nonexpansive and so it is demiclosed at zero.
If is monotone, then and are firmly nonexpansive for , and is the resolvent of N.
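As a concrete instance of these remarks, the resolvent of the maximal monotone operator given by the subdifferential of the absolute value is the soft-thresholding operator, and the sketch below numerically checks its firm nonexpansiveness, i.e., |Jx − Jy|² ≤ ⟨Jx − Jy, x − y⟩, on random samples. The parameter λ = 0.7 and the sampling range are arbitrary choices made only for this illustration.

```python
import numpy as np

# The resolvent of N = subdifferential of |.| with parameter lam is the
# soft-thresholding operator; we verify firm nonexpansiveness,
#     |Jx - Jy|^2 <= <Jx - Jy, x - y>,
# on 1000 random pairs.
lam = 0.7
J = lambda x: np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(0)
max_violation = 0.0
for _ in range(1000):
    x, y = rng.uniform(-5, 5, size=2)
    d = J(x) - J(y)
    max_violation = max(max_violation, d * d - d * (x - y))

print(max_violation <= 1e-12)  # True: firm nonexpansiveness holds
```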
([30]). Let be a closed and convex subset of a Hilbert space , and let be a nonexpansive mapping such that . If the sequence and , then
([31]). If is a sequence of non-negative real numbers satisfying
where is a sequence in and is a sequence of real numbers such that

Then,
([32]). Suppose is a closed and convex subset of If the sequence satisfies the following: exists for all , Any weak cluster point of belongs to ; then, there exists such that .
([33]). Let be a real sequence that does not decrease at infinity in the sense that there exists a subsequence of such that . Also consider the sequence of integers defined by
Then, is a nondecreasing sequence verifying and .

3. Main Results
The solution sets of (SplitVIP) and () are denoted by and , respectively. To establish the convergence of the suggested methods, we make the following assumptions: is a -contraction; and are monotone operators, and is a nonexpansive mapping; is a sequence in such that , and ; is a positive and bounded sequence such that ; the common solution set of (SplitVIP) and () is expressed by and .
Now, we are in the position to design our hybrid Algorithm 1. The hybrid Algorithm 1 is constructed in such a way that the initial step iterates the inertial extrapolation term combined with the viscosity approximation. We implement our hybrid Algorithm 1 to estimate the common solution of (SplitVIP) and ().
| Algorithm 1. Hybrid Algorithm 1 |
| Choose , , and . Select initial points and and fix . |
| Iterative Step: For iterate , and select , where (9) |
| Compute (10) (11) (12) where and are defined by(13) and(14) |
| If , then stop; otherwise, fix and go back to the computation. |
Let in Algorithm 1; then, if , we obtain from (11) that , that is, , which implies that , which concludes that . If and , we obtain from (12) that Since B is a bounded linear operator, we obtain , that is, . If , then there is nothing to show.
From (9) and Assumption (), we have . Therefore, there exists a constant such that or .
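Since the explicit updates (9)–(14) are not reproduced above, the following scalar sketch only illustrates the overall structure of one iteration of Algorithm 1: inertial extrapolation, a convex combination of a viscosity step and a fixed-point step, and a resolvent step with a self-adaptive step size. Every concrete choice here (the mappings F, Z, M, N, the scalar operator B, and all parameter values) is an illustrative assumption, not the paper's actual update rule.

```python
lam, theta, alpha = 1.0, 0.3, 0.8        # illustrative parameter choices
F = lambda x: 0.5 * x                    # a contraction
Z = lambda x: -x                         # nonexpansive with Fix(Z) = {0}
J_M = lambda x: x / (1.0 + lam)          # resolvent of M(x) = x
J_N = lambda y: y / (1.0 + lam)          # resolvent of N(y) = y
B = 2.0                                  # bounded linear operator (scalar)

x_prev, x = 4.0, 4.0
for n in range(1, 200):
    w = x + theta * (x - x_prev)                         # inertial extrapolation
    a_n = 1.0 / (n + 1)                                  # vanishing viscosity weight
    u = a_n * F(w) + (1.0 - a_n) * (alpha * w + (1.0 - alpha) * Z(w))
    r = J_N(B * u) - B * u                               # residual at Bu
    tau = r * r / (B * r * B * r) if r != 0.0 else 0.0   # self-adaptive step size
    x_prev, x = x, J_M(u + tau * B * r)

print(abs(x))  # tends to 0, the common solution of the two subproblems
```

Note that the step size tau is computed entirely from the current residual, so the norm of B never enters the iteration, which is the feature the self-adaptive rules (13) and (14) are designed to provide.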
Next, we utilize our hybrid Algorithm 1 to establish a strong convergence theorem, which approximates the common solution of (SplitVIP) and (). In the proof of the strong convergence theorem, the implemented method computes two self-adaptive step sizes, which frees us from calculating the norm of the bounded linear operator B.
Theorem 1. If assumptions hold, then the sequence induced by Algorithm 1 converges strongly to v, where .
Let . By using (8), (11) and Remark 2 (4), we have
(15)
Now, using (13), we estimate that
(16)
From (15) and (16), we obtain(17)
Since , we obtain(18)
Applying the same steps as in the calculation of (16) and (17), we can easily obtain the following:(19)
By using (14), we can obtain(20)
It follows from (19) and (20) that(21)
or(22)
Combining (17) and (21), we obtain(23)
Since , we conclude that(24)
Since F is a -contraction, using (10) and Remark 4, we have

Taking advantage of (24) and mathematical induction, we obtain that the sequence is bounded, and so are and . Let , which is also bounded. By using (8), we obtain(25)
We also estimate(26)
since . By using the above estimated values in (25) and (26), we obtain(27)
where and

Combining (23) and (27), we obtain(28)
The remaining proof can be split into two possible cases.

Case I: If is not monotonically increasing, then there exists a number such that for all . Hence, the boundedness of implies that is convergent. Therefore, using (28), we have
(29)
From (11) and (13), we infer that(30)
Using (12) and (14), we obtain(31)
Taking together (30) and (31), we have(32)
It is not difficult to obtain that(33)
By using , and using (32), (33), and Remark 4, we immediately see that(34)
Hence, we can obtain(35)
From (10), we can easily write that(36)
Using the boundedness of , Condition , the nonexpansive property of Z, and Remark 4, we achieve(37)
Similarly, we can show that(38)
Since is bounded, there exists a subsequence converging weakly to ; the subsequences and of and , respectively, also converge weakly to . It follows from (29) and (38) that(39)
Keeping in mind (30), (31), and (39), we infer that .

Finally, we prove that the sequence strongly converges. From (28), we have
(40)
Furthermore,

Now we are in a position to apply Lemma 2 in (40) and conclude that converges strongly to . Hence, the result is proved.

Case II: If is monotonically increasing, then the sequence for all defined by is increasing such that as and
(41)
By using (28), we have

By passing the limit , we obtain

Using the same techniques as in the proof of Case I, we obtain , , , and as . From (40) and (41), we obtain

Thus, . By passing the limit and using Lemma 4,

It follows that , that is, as . This completes the proof. □

Further, we construct the hybrid Algorithm 2, which is a slightly modified version of hybrid Algorithm 1. In hybrid Algorithm 2, the initial step iterates the viscosity approximation, which is the convex combination of and , where the inertial extrapolation term is added to accelerate the convergence.
| Algorithm 2. Hybrid Algorithm 2 |
| Choose , , and . Select initial points and and fix . |
| Iterative Step: For , iterate , and select , where (42) |
| Compute (43) (44) (45) where and are defined by(46) and(47) |
| If , then stop; otherwise, fix and go back to the computation. |
The following is the convergence analysis of hybrid Algorithm 2, which is similar to that of the proof of Theorem 1.
Theorem 2. If assumptions hold, then the sequence generated by Algorithm 2 converges strongly to v, where .
Take ; then, from (43) and using Remark 4, we see that
Keeping in mind (24) and using mathematical induction, we obtain that the sequence is bounded, and so are and . Denote ; then, by using (8) and denoting , we establish that(48)
and(49)
Using (48) and (49), we obtain(50)
Combining (23) and (50), we obtain where and .

Considering Case I of Theorem 1, we can easily obtain
(51)
and(52)
and, hence,(53)
Next, we show that as . As , then and

The assumption on and the boundedness of imply that

Also,(54)
Together with (52)–(54), we obtain by taking(55)
The boundedness of , , and implies the existence of subsequences , , and , which converge to some point ; and from (51)–(53) and (55), we conclude that . The remaining proof can be obtained easily by using steps similar to those in the proof of Theorem 1. □

Let be arbitrary. Then, by replacing with q in hybrid Algorithm 1 and hybrid Algorithm 2, we define the following Halpern-type iterative methods, which can be seen as particular cases of our hybrid methods:
If assumptions – hold, then the sequence induced by Algorithm 3 converges strongly to .
| Algorithm 3. A Particular Case of Hybrid Algorithm 1 |
| Choose , , and . Select initial points and , any , and fix . |
| Iterative Step: For , iterate , , and select , where |
| Compute where and are defined byand |
| If , then stop; otherwise, fix and go back to the computation. |
Replacing in Algorithm 1 as well as in the proof of Theorem 1, we obtain the required result. □
If assumptions – hold, then the sequence induced by Algorithm 4 converges strongly to .
| Algorithm 4. A Particular Case of Hybrid Algorithm 2 |
| Let , , and be given. Select initial points and , any , and fix . |
| Iterative Step: For , iterate , , and select , where |
| Computewhere and are defined by and |
| If , then stop; otherwise, fix and go back to the computation. |
By replacing with q in Algorithm 2 as well as in the proof of Theorem 2, we obtain the desired result. □
4. Some Advantages
Some applications of the suggested methods for solving split variational inequality and split common fixed-point problems are discussed below.
4.1. Split Variational Inequality Problem
Let be a nonempty, closed, and convex subset of , let be the projection onto , let and be monotone operators, and let be a bounded linear operator. Then, the split variational inequality problem (SplitVItP) is defined by
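Because the resolvent of the normal-cone operator of a closed convex set is the metric projection onto that set, replacing resolvents by projections turns the iteration into a CQ-type projection scheme. The sketch below uses illustrative box constraints C = [0, 2]² and Q = [0, 1]² and a diagonal operator B, none of which come from the paper.

```python
import numpy as np

# CQ-type scheme: x_{n+1} = P_C( x_n + gamma * B^T (P_Q - I) B x_n ),
# seeking x in C with Bx in Q.  C = [0,2]^2 and Q = [0,1]^2 are illustrative.
P_C = lambda x: np.clip(x, 0.0, 2.0)
P_Q = lambda y: np.clip(y, 0.0, 1.0)

B = np.array([[1.0, 0.0], [0.0, 2.0]])
gamma = 1.0 / np.linalg.norm(B, 2)**2    # gamma in (0, 2/||B||^2)

x = np.array([5.0, 5.0])
for _ in range(500):
    x = P_C(x + gamma * B.T @ (P_Q(B @ x) - B @ x))

print(x)  # a point of C whose image Bx lies in Q
```

For this data the iterates settle at a point on the boundary of the feasible region, i.e., an x in C with Bx in Q.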
Then, by replacing in Algorithms 1–4, we can obtain the hybrid algorithms and their convergence results for (SplitVItP) and (FPP).

4.2. Split Common Fixed-Point Problem
Let and be self-nonexpansive mappings and be a bounded linear operator; then, the split common fixed-point problem (SplitCFPP) is defined as follows:
Then, by replacing , , and , identity mapping in Algorithms 1–4, we can obtain the hybrid algorithms and their convergence results for (SplitCFPP).

Next, we present numerical examples in finite and infinite dimensional Hilbert spaces, showing the efficiency of our hybrid methods and their comparison with the work studied in [10,11,12,18].
5. Numerical Examples
Example 1 (Finite dimensional). Let , equipped with the inner product for and and the norm . The operators M, N, and B are defined by
such that M is -inverse strongly monotone and N is -inverse strongly monotone (hence monotone); B is a bounded and linear operator. The nonexpansive mapping Z is defined by , and is a θ-contraction with .
To run our algorithms, we select , , and , and is selected randomly from , where
We compare our algorithms and method (2), method (3), method (4), and method (5) by using the following common parameters: , for all the methods; for method (2), method (3), and method (4); and given by (6) are used in method (5). The stopping condition is , and we consider the two cases of initial values:
Case (a):
Case (b):
It can be seen that our algorithms are efficient and effective and can be implemented easily without calculating . The convergence of to is shown in Figure 1 and Figure 2 using different initial values. It is found that our algorithms approach the solution in fewer steps in comparison to method (2) of [10], method (3) of [11], method (4) of [12], and method (5) of [18].
Example 2. (Infinite dimensional) Let , the space of all square summable sequences with inner product , and the norm is . The mappings M, N, and S are defined by
Clearly, M and N are monotone, Z is nonexpansive, F is a contraction, and B is a bounded linear operator. We choose , , and , and is selected randomly from , where

The common parameters for our algorithms and method (2), method (3), method (4), and method (5) are as follows: , for all the methods; for method (2), method (3), and method (4); and given by (6) is used in method (5). We plot the convergence of the sequences induced by Algorithms 1–4. The stopping condition is for the following two initial values:

Case (a’):
Case (b’):
Our algorithms are effective and efficient in the sense that they are implemented easily without calculating . It can be seen in Figure 3 and Figure 4 that the sequences obtained from our methods estimate the solution in fewer steps compared to method (2) of [10], method (3) of [11], method (4) of [12], and method (5) of [18].
We do not present any result on the rate of convergence of the proposed methods. In the future, it will be interesting to study and compare the convergence rate of our proposed methods and other techniques.
6. Conclusions
We present two hybrid inertial self-adaptive iterative methods for estimating the common solution of () and (SplitVIP). Two strong convergence theorems are established, and some special cases of the proposed methods are noted. We also implement our hybrid methods to explore the solution of split variational inequality problems and split common fixed-point problems. Our algorithms are simple and different in the sense that they compute the viscosity approximation, fixed-point iteration, and inertial extrapolation in the initial step of each iteration. Our methods are also efficient; they involve two self-adaptive step sizes and do not require the pre-estimated norm of a bounded linear operator in the iteration process. The effectiveness and efficiency of the proposed methods are illustrated by numerical examples, Examples 1 and 2, where it is observed that the presented methods are effective and easily implemented. The iterative sequences obtained by our methods estimate the common solution of (SplitVIP) and () in fewer steps in comparison to method (2) of [10], method (3) of [11], method (4) of [12], and method (5) of [18].
Conceptualization, D.F.; methodology, M.D.; validation, M.A.; formal analysis, A.F.Y.A.; investigation, A.F.Y.A.; writing, original draft preparation, M.D. and M.A.; review and editing, M.A. and M.D.; funding acquisition, D.F. and M.D. All authors have read and agreed to the published version of the manuscript.
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
All authors would like to offer thanks to the journal editor and reviewers for their fruitful suggestions and comments, which enhanced the overall quality of the manuscript.
The authors declare no conflicts of interest.
Figure 1 The comparison of our proposed methods with the other methods studied in [10,11,12,18].
Figure 2 The comparison of our proposed methods with the other methods studied in [10,11,12,18].
Figure 3 The comparison of our proposed methods with the other methods studied in [10,11,12,18].
Figure 4 The comparison of our proposed methods with the other methods studied in [10,11,12,18].
1. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc.; 1953; 4, pp. 506-510. [DOI: https://dx.doi.org/10.1090/S0002-9939-1953-0054846-3]
2. Moudafi, A. Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl.; 2000; 241, pp. 46-55. [DOI: https://dx.doi.org/10.1006/jmaa.1999.6615]
3. Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl.; 2005; 21, pp. 2071-2084. [DOI: https://dx.doi.org/10.1088/0266-5611/21/6/017]
4. Censor, Y.; Gibali, A.; Reich, S. Algorithms for the split variational inequality problem. Numer. Algorithms; 2012; 59, pp. 301-323. [DOI: https://dx.doi.org/10.1007/s11075-011-9490-5]
5. Censor, Y.; Bortfeld, T.; Martin, B.; Trofimov, A. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol.; 2006; 51, pp. 2353-2365. [DOI: https://dx.doi.org/10.1088/0031-9155/51/10/001]
6. Cao, Y.; Wang, Y.; Rehman, H.; Shehu, Y.; Yao, J.C. Convergence analysis of a new forward-reflected-backward algorithm for four operators without cocoercivity. J. Optim. Theory Appl.; 2024; 203, pp. 256-284. [DOI: https://dx.doi.org/10.1007/s10957-024-02501-7]
7. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl.; 2002; 18, pp. 441-453. [DOI: https://dx.doi.org/10.1088/0266-5611/18/2/310]
8. Combettes, P.L. The convex feasibility problem in image recovery. Adv. Imaging Electron Phys.; 1996; 95, pp. 155-270.
9. Moudafi, A. Split monotone variational inclusions. J. Optim. Theory Appl.; 2011; 150, pp. 275-283. [DOI: https://dx.doi.org/10.1007/s10957-011-9814-6]
10. Byrne, C.; Censor, Y.; Gibali, A.; Reich, S. Weak and strong convergence of algorithms for split common null point problem. J. Nonlinear Convex Anal.; 2012; 13, pp. 759-775.
11. Kazmi, K.R.; Rizvi, S.H. An iterative method for split variational inclusion problem and fixed point problem for a nonexpansive mapping. Optim. Lett.; 2014; 8, pp. 1113-1124. [DOI: https://dx.doi.org/10.1007/s11590-013-0629-2]
12. Akram, M.; Dilshad, M.; Rajpoot, B.F.; Ahmad, R.; Yao, J.-C. Modified iterative schemes for a fixed point problem and a split variational inclusion problem. Mathematics; 2022; 10, 2098. [DOI: https://dx.doi.org/10.3390/math10122098]
13. Abass, H.A.; Ugwunnadi, G.C.; Narain, O.K. A Modified inertial Halpern method for solving split monotone variational inclusion problems in Banach spaces. Rend. Del Circ. Mat. Palermo Ser. 2; 2023; 72, pp. 2287-2310. [DOI: https://dx.doi.org/10.1007/s12215-022-00795-y]
14. Dilshad, M.; Aljohani, A.F.; Akram, M. Iterative scheme for split variational inclusion and a fixed-point problem of a finite collection of nonexpansive mappings. J. Funct. Spaces; 2020; 2020, 3567648. [DOI: https://dx.doi.org/10.1155/2020/3567648]
15. Deepho, J.; Thounthong, P.; Kumam, P.; Phiangsungnoen, S. A new general iterative scheme for split variational inclusion and fixed point problems of k-strict pseudo-contraction mappings with convergence analysis. J. Comput. Appl. Math.; 2017; 318, pp. 293-306. [DOI: https://dx.doi.org/10.1016/j.cam.2016.09.009]
16. Sitthithakerngkiet, K.; Deepho, J.; Kumam, P. A hybrid viscosity algorithm via modify the hybrid steepest descent method for solving the split variational inclusion in image reconstruction and fixed point problems. Appl. Math. Comput.; 2015; 250, pp. 986-1001. [DOI: https://dx.doi.org/10.1016/j.amc.2014.10.130]
17. López, G.; Martín-Márquez, V.; Wang, F.; Xu, H.K. Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl.; 2012; 28, 085004. [DOI: https://dx.doi.org/10.1088/0266-5611/28/8/085004]
18. Dilshad, M.; Akram, M.; Ahmad, I. Algorithms for split common null point problem without pre-existing estimation of operator norm. J. Math. Inequal.; 2020; 14, pp. 1151-1163. [DOI: https://dx.doi.org/10.7153/jmi-2020-14-75]
19. Ezeora, J.N.; Enyi, C.D.; Nwawuru, F.O.; Richard, C.O. An algorithm for split equilibrium and fixed-point problems using inertial extragradient techniques. Comp. Appl. Math.; 2023; 42, 103. [DOI: https://dx.doi.org/10.1007/s40314-023-02244-7]
20. Tang, Y. New algorithms for split common null point problems. Optimization; 2020; 70, pp. 1141-1160. [DOI: https://dx.doi.org/10.1080/02331934.2020.1782908]
21. Tang, Y.; Gibali, A. New self-adaptive step size algorithms for solving split variational inclusion problems and its applications. Numer. Algorithms; 2019; 83, pp. 305-331. [DOI: https://dx.doi.org/10.1007/s11075-019-00683-0]
22. Tang, Y.; Zhang, Y.; Gibali, A. New self-adaptive inertial-like proximal point methods for the split common null point problem. Symmetry; 2021; 13, 2316. [DOI: https://dx.doi.org/10.3390/sym13122316]
23. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal.; 2001; 9, pp. 3-11. [DOI: https://dx.doi.org/10.1023/A:1011253113155]
24. Alamer, A.; Dilshad, M. Halpern-type inertial iteration methods with self-adaptive step size for split common null point problem. Mathematics; 2024; 12, 747. [DOI: https://dx.doi.org/10.3390/math12050747]
25. Filali, D.; Dilshad, M.; Alyasi, L.S.M.; Akram, M. Inertial Iterative Algorithms for Split Variational Inclusion and Fixed Point Problems. Axioms; 2023; 12, 848. [DOI: https://dx.doi.org/10.3390/axioms12090848]
26. Nwawuru, F.O.; Narain, O.K.; Dilshad, M.; Ezeora, J.N. Splitting method involving two-step inertial for solving inclusion and fixed point problems with applications. Fixed Point Theory Algorithms Sci. Eng.; 2025; 2025, 8. [DOI: https://dx.doi.org/10.1186/s13663-025-00781-w]
27. Reich, S.; Taiwo, A. Fast hybrid iterative schemes for solving variational inclusion problems. Math. Methods Appl. Sci.; 2023; 46, pp. 17177-17198. [DOI: https://dx.doi.org/10.1002/mma.9494]
28. Ugwunnadi, G.C.; Abass, H.A.; Aphane, M.; Oyewole, O.K. Inertial Halpern-type method for solving split feasibility and fixed point problems via dynamical stepsize in real Banach spaces. Ann. Univ. Ferrara; 2024; 70, pp. 307-330. [DOI: https://dx.doi.org/10.1007/s11565-023-00473-6]
29. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Space; Springer: Berlin/Heidelberg, Germany, 2011.
30. Goebel, K.; Kirk, W.A. Topics in Metric Fixed Point Theory; Cambridge University Press: Cambridge, UK, 1990.
31. Xu, H.K. Another control condition in an iterative method for nonexpansive mappings. Bull. Aust. Math. Soc.; 2002; 65, pp. 109-113. [DOI: https://dx.doi.org/10.1017/S0004972700020116]
32. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc.; 1967; 73, pp. 591-597. [DOI: https://dx.doi.org/10.1090/S0002-9904-1967-11761-0]
33. Maingé, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal.; 2008; 16, pp. 899-912. [DOI: https://dx.doi.org/10.1007/s11228-008-0102-z]
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).