This paper introduces a novel Picard-type iterative algorithm for solving general variational inequalities in real Hilbert spaces. The proposed algorithm enhances both the theoretical framework and practical applicability of iterative algorithms by relaxing restrictive conditions on parametric sequences, thereby expanding their scope of use. We establish convergence results, including a convergence equivalence with a previous algorithm, highlighting the theoretical relationship while demonstrating the increased flexibility and efficiency of the new approach. The paper also addresses gaps in the existing literature by offering new theoretical insights into the transformations associated with variational inequalities and the continuity of their solutions, thus paving the way for future research. The theoretical advancements are complemented by practical applications, such as the adaptation of the algorithm to convex optimization problems and its use in real-world contexts like machine learning. Numerical experiments confirm the proposed algorithm’s versatility and efficiency, showing superior performance and faster convergence compared to an existing method.
1. Introduction
In this paper, we adopt the standard notation for a real Hilbert space $\mathcal{H}$. The inner product on $\mathcal{H}$ is denoted by $\langle \cdot, \cdot \rangle$, and the associated norm is represented by $\|\cdot\|$. Let $H$ denote a nonempty, closed, and convex subset of $\mathcal{H}$, and let $T, g \colon \mathcal{H} \to \mathcal{H}$ be two nonlinear operators. The operator $T$ is called
(i). λ-Lipschitzian if there exists a constant $\lambda > 0$, such that
$\|Tx - Ty\| \leq \lambda \|x - y\|$ for all $x, y \in \mathcal{H}$; (1)
(ii). Nonexpansive if
$\|Tx - Ty\| \leq \|x - y\|$ for all $x, y \in \mathcal{H}$; (2)
(iii). α-inverse strongly monotonic if there exists a constant $\alpha > 0$, such that
$\langle Tx - Ty, x - y \rangle \geq \alpha \|Tx - Ty\|^{2}$ for all $x, y \in \mathcal{H}$; (3)
(iv). r-strongly monotonic if there exists a constant $r > 0$, such that
$\langle Tx - Ty, x - y \rangle \geq r \|x - y\|^{2}$ for all $x, y \in \mathcal{H}$;
(v). Relaxed cocoercive if there exist constants $\gamma > 0$ and $r > 0$, such that
$\langle Tx - Ty, x - y \rangle \geq -\gamma \|Tx - Ty\|^{2} + r \|x - y\|^{2}$ for all $x, y \in \mathcal{H}$. (4)
It is evident that the classes of α-inverse strongly monotonic and r-strongly monotonic mappings are subsets of the class of relaxed cocoercive mappings; however, the reverse implication does not hold.
Let and . Clearly, is a Hilbert space with norm induced by the inner product . Define the operator by .
We demonstrate that T is relaxed cocoercive with and . Specifically, we aim to verify that for all ,
First, note that , and
Combining the terms, for all , we see that
Indeed, putting , we conclude that
because for all . Thus, T is relaxed cocoercive.
Since for all , we conclude that there is no positive constant α such that (3) holds. Thus, the operator T is not α-inverse strongly monotonic. Also, it is not r-strongly monotonic.
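To make these inclusion relations concrete, the following sketch numerically checks the relaxed cocoercivity inequality (4) for the illustrative choice $T(x) = -x$ on $\mathbb{R}^{d}$ with the hypothetical constants $\gamma = 2$ and $r = 1$ (a stand-in chosen only for this check); the same choice admits no positive constant satisfying (3) or the strong monotonicity inequality, because $\langle Tx - Ty, x - y \rangle = -\|x - y\|^{2} \leq 0$.

```python
import numpy as np

# Illustrative (hypothetical) operator: T(x) = -x on R^d.
def T(x):
    return -x

rng = np.random.default_rng(0)
gamma, r, d = 2.0, 1.0, 5   # relaxed cocoercivity holds whenever gamma >= r + 1

for _ in range(10_000):
    x, y = rng.standard_normal(d), rng.standard_normal(d)
    lhs = np.dot(T(x) - T(y), x - y)          # <Tx - Ty, x - y> = -||x - y||^2
    rhs = -gamma * np.linalg.norm(T(x) - T(y)) ** 2 + r * np.linalg.norm(x - y) ** 2
    assert lhs >= rhs - 1e-12, "relaxed (gamma, r)-cocoercivity violated"
    # (3) or r-strong monotonicity would require lhs > 0 for x != y,
    # which is impossible here since lhs = -||x - y||^2 <= 0.
print("relaxed (2, 1)-cocoercivity of T(x) = -x verified on 10,000 random pairs")
```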
The theory of variational inequalities, initially introduced by Stampacchia [1] in the context of obstacle problems in potential theory, provides a powerful framework for addressing a broad spectrum of problems in both pure and applied sciences. Stampacchia’s pioneering work revealed that the minimization of differentiable convex functions associated with such problems can be characterized by inequalities, thus establishing the foundation for variational inequality theory. The classical variational inequality problem (VIP) is commonly stated as follows:
Find $u \in H$ such that
$\langle Tu, v - u \rangle \geq 0$ for all $v \in H$, (5)
where $T \colon \mathcal{H} \to \mathcal{H}$ is a given operator. The VI (5) and its solution set are denoted by VI and , respectively. Lions and Stampacchia [2] further expanded this theory, demonstrating its deep connections to other classical mathematical results, including the Riesz–Fréchet representation theorem and the Lax–Milgram lemma. Over time, the scope of variational inequality theory has been extended and generalized, becoming an indispensable tool for the analysis of optimization problems, equilibrium systems, and dynamic processes in a variety of fields. The historical development of variational principles, with contributions from figures such as Euler, Lagrange, Newton, and the Bernoulli brothers, highlights their profound impact on the mathematical sciences. These principles serve as the foundation for solving maximum and minimum problems across diverse disciplines such as mechanics, game theory, economics, general relativity, transportation, and machine learning. Both classical and contemporary studies emphasize the importance of variational methods in solving differential equations, modeling physical phenomena, and formulating unified theories in elementary particle physics. The remarkable versatility of variational inequalities stems from their ability to provide a generalized framework for tackling a wide range of problems, thereby advancing both theoretical insights and computational techniques.
Consequently, the theory of variational inequalities has garnered significant attention over the past three decades, with substantial efforts directed towards its development in various directions [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17]. Building on this rich foundation, Noor [18] introduced a significant extension of variational inequalities known as the general nonlinear variational inequality (GNVI), formulated as follows:
Find $u \in \mathcal{H}$ with $g(u) \in H$ such that
$\langle Tu, g(v) - g(u) \rangle \geq 0$ for all $g(v) \in H$. (6)
The GNVI (6) and its solution set are denoted by GNVI and , respectively. It has been shown in Ref. [18] that problem (6) reduces to VI (5) when $g = I$ (the identity operator). Furthermore, the GNVI problem can be reformulated as a general nonlinear complementarity problem:
Find $u \in \mathcal{H}$ such that
$g(u) \in H, \quad Tu \in H^{*}, \quad \text{and} \quad \langle Tu, g(u) \rangle = 0,$ (7)
where $H^{*}$ is the dual cone of the convex cone $H$ in $\mathcal{H}$. For $g(u) = u - m(u)$, where $m$ is a point-to-point mapping, problem (7) corresponds to the implicit (quasi-)complementarity problem. A wide range of problems arising in various branches of pure and applied sciences have been studied within the unified framework of the GNVI problem (6) (see Refs. [1,19,20,21]). As an illustration of its application in differential equation theory, Noor [22] successfully formulated and studied the following third-order implicit obstacle boundary value problem:
Find such that on
where is an obstacle function and is a continuous function.
The projection operator technique enables the establishment of an equivalence between the variational inequality VI and fixed-point problems, as follows:
Let $P_H$ be the projection of $\mathcal{H}$ onto $H$ (which is also nonexpansive). For a given $z \in \mathcal{H}$, the condition
$\langle u - z, v - u \rangle \geq 0$ for all $v \in H$,
is equivalent to $u = P_H z$. This implies that $u \in H$ solves the VI (5) if and only if
$u = P_H(u - \rho T u),$
where $\rho > 0$ is a constant.
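As a minimal illustration of this fixed-point characterization, the sketch below simply iterates $u_{k+1} = P_H(u_k - \rho T u_k)$; the affine operator, the feasible set $H$ (the nonnegative orthant), and the step size $\rho$ are hypothetical choices made only for this illustration, not objects taken from the paper.

```python
import numpy as np

# Projection onto H = nonnegative orthant (an illustrative feasible set).
def proj_H(x):
    return np.maximum(x, 0.0)

# Hypothetical monotone affine operator T(u) = A u + b with A positive definite.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([-1.0, 2.0])
T = lambda u: A @ u + b

rho = 0.2        # step size, small enough that u -> P_H(u - rho*T(u)) is a contraction
u = np.zeros(2)  # initial guess
for k in range(200):
    u_next = proj_H(u - rho * T(u))   # fixed-point map associated with the VI
    if np.linalg.norm(u_next - u) < 1e-10:
        u = u_next
        break
    u = u_next

print("approximate VI solution:", u)  # satisfies <Tu, v - u> >= 0 for all v >= 0
```

When $T$ is strongly monotonic and Lipschitzian, the map $u \mapsto P_H(u - \rho T u)$ is a contraction for sufficiently small $\rho > 0$, which is what makes this simple iteration converge.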
Applying this lemma to the GNVI problem (6), Noor [18] derived the following equivalence result, which establishes a connection between the GNVI and fixed-point problems:
Let $P_H$ be the projection of $\mathcal{H}$ onto $H$ (which is also nonexpansive). A function $u \in \mathcal{H}$ with $g(u) \in H$ satisfies the GNVI problem (6) if and only if it satisfies the relation
$g(u) = P_H\big(g(u) - \rho T u\big),$ (8)
where $\rho > 0$ is a constant. This equivalence has played a crucial role in the development of efficient methods for solving GNVI problems and related optimization problems. Noor [22] showed that the relation (8) can be rewritten as
which implies that (9)
where $S \colon \mathcal{H} \to \mathcal{H}$ is a nonexpansive operator and $F(S)$ denotes the set of fixed points of $S$. Numerous iterative methods have been proposed for solving variational inequalities and variational inclusions [22,23,24,25,26,27,28,29,30,31]. Among these, Noor [22] introduced an iterative algorithm based on the fixed-point formulation (9) to find a common solution to both the general nonlinear variational inequality GNVI and the fixed-point problem. The algorithm is described as follows:
(10)
where , , and . The convergence of this algorithm was established in [22] under the following conditions:
Let be a relaxed cocoercive and λ-Lipschitzian mapping, be a relaxed cocoercive and -Lipschitzian mapping, and be a nonexpansive mapping such that . Define as the sequence generated by the algorithm in (10), with real sequences , , and , where . Suppose the following conditions are satisfied
where
Then, converges strongly to a solution .
Noor’s algorithm in (10) and its variants have been widely studied and applied to variational inclusions, variational inequalities, and related optimization problems. These algorithms are recognized for their efficiency and flexibility, contributing significantly to the field of variational inequalities. However, there remains considerable potential for developing more robust and broadly applicable iterative algorithms for solving GNVI problems. Motivated by the limitations of existing methods, we propose a novel Picard-type iterative algorithm designed to address general variational inequalities and nonexpansive mappings:
(11)
where and . Algorithm (11) cannot be directly derived from (10) because the update rule for differs fundamentally. In the first algorithm, is updated as whereas in the second algorithm, the update for follows a direct convex combination of previous iterates:
This structural difference in the update step leads to different iterative behaviors, making it impossible to derive (11) directly from (10). Building on these methodological advances, recent research has significantly deepened our understanding of variational inequalities by offering innovative frameworks and solution techniques that address real-world challenges.
The literature has seen significant advancements in variational inequality theory through seminal contributions that extend its applicability to diverse practical problems. Nagurney [32] laid the groundwork by establishing a comprehensive framework for modeling complex network interactions, which has served as a cornerstone for subsequent research in optimization and equilibrium analysis. The edited volume [33], which collects papers mainly from the 3rd International Conference on Dynamics of Disasters (Kalamata, Greece, 5–9 July 2017), offers valuable strategies for optimizing resource allocation under emergency conditions. More recently, Fargetta, Maugeri, and Scrimali [34] expanded the scope of variational inequality methods by formulating a stochastic Nash equilibrium framework to analyze competitive dynamics in medical supply chains, thereby addressing challenges in healthcare logistics.
These developments underscore the dynamic evolution of variational inequality research and its capacity to address complex, real-world problems. In this context, the new Picard-type iterative algorithm proposed in our study builds upon these advances by relaxing constraints on parameter sequences, ultimately providing a more flexible and efficient approach for solving general variational inequalities.
In Section 2, we establish a strong convergence result (Theorem 2) for the proposed algorithm. Unlike Noor’s algorithm, which requires specific conditions on the parametric sequences for convergence, our algorithm eliminates this requirement while maintaining strong convergence properties. Specifically, Theorem 2 refines the convergence criteria in Theorem 1, leading to broader applicability and enhanced theoretical robustness. Furthermore, Theorem 3 demonstrates the equivalence in convergence between the algorithms in (10) and (11), highlighting their inter-relationship and the efficiency of our approach. The introduction of the Collage–Anticollage Theorem 4 within the context of variational inequalities marks a significant innovation, offering a novel perspective on transformations related to the GNVI problem discussed in (6). To the best of our knowledge, this theorem is presented for the first time in this setting. Additionally, Theorems 5 and 6 explore the continuity of solutions to variational inequalities, a topic rarely addressed in the existing literature. These contributions extend the theoretical framework established by Noor [22], offering new insights into general nonlinear variational inequalities. Beyond theoretical advancements, we validate the practical utility of the proposed algorithm by applying it to convex optimization problems and real-world scenarios. Section 3 provides a modification of the algorithm for solving convex minimization problems, supported by numerical examples. In Section 4, we demonstrate the algorithm’s applicability in real-world contexts, including machine learning tasks such as classification and regression. Comparative analysis shows that our algorithm consistently converges to optimal solutions in fewer iterations than the algorithm in (10), highlighting its superior computational efficiency and practical advantages.
The development of the main results in this paper relies on the following lemmas:
([35]). Let for be non-negative sequences of real numbers satisfying
where and . Then, .
([36]). Let for be non-negative real sequences satisfying the following inequality
where for all , , and . Then, .
2. Main Results
Let be a relaxed cocoercive and -Lipschitz operator, be a relaxed cocoercive and -Lipschitz operator, and be a nonexpansive mapping such that . Let be an iterative sequence defined by the algorithm in (11) with real sequences , . Assume the following conditions hold
(12)
where (13)
Then, the sequence converges strongly to with the following estimate for each , where
(14)
Let be a solution to . Then,
(15)
Using (11), (15), and the assumptions that $P_H$ and S are nonexpansive operators, we obtain (16)
Since T is a relaxed cocoercive and λ-Lipschitzian operator, or equivalently (17)
where is defined by (14). Since g is a relaxed cocoercive and -Lipschitzian operator,
(18)
where L is defined by (13). Combining (16), (17), and (18), we have
(19)
and from (12) and (14), we know that (see Appendix A). It follows from (11), (15), and the nonexpansivity of the operators S and $P_H$ that
(20)
Using the same arguments as above gives us the following estimates (21)
where . Combining (19)–(21), we obtain
(22)
As , for all and , we have for all . Using this fact in (22), we obtain , which implies . Taking the limit as , we conclude that . □
Let , H, T, g, S, L, and δ be defined as in Theorem 2, and let the iterative sequences and be generated by (10) and (11), respectively. Assume the conditions in (12) hold, and , , and . Then, the following assertions are true:
(i) If converges strongly to , then also converges strongly to 0. Moreover, the estimate holds for all ,
Furthermore, the sequence converges strongly to .
(ii) If the sequence is bounded and , then the sequence converges strongly to 0. Additionally, the estimate holds for all
Moreover, the sequence converges strongly to .
(i) Suppose that converges strongly to . We aim to show that converges strongly to 0. Using (1), (2), (4), (10), (11), and (15), we deduce the following inequalities
as well as
Combining these inequalities, we get (23)
Since , , and , for all , we have
(24)
By applying the inequalities in (24) to (23), we derive the following result (25)
Define , , and , for all . Given the assumption , it follows that . It is straightforward to verify that (25) satisfies the conditions of Lemma 3. By applying the conclusion of Lemma 3, we obtain . Furthermore, we note the following inequality for all ,
Taking the limit as , we conclude that , since
(ii) Let us assume that the sequence is bounded and . By Theorem 2, it follows that . We now demonstrate that the sequence converges strongly to . Utilizing results from (1), (2), (4), (10), (11), and (15), we derive the following inequalities:
(26)
(27)
(28)
From the proof of Theorem 2, we know that
(29)
Combining (26)–(29), we obtain (30)
Since , , and , for all , we have (31)
Applying the inequalities in (31) to (30) gives (32)
Now, we define the sequences ,
for all . Note that . Since the sequence is bounded, there exists such that for all . For any , since converges to 0 and , there exists such that for all . Consequently, for all , which implies , i.e., . Thus, inequality (32) satisfies the requirements of Lemma 4, and by its conclusion, we deduce that . Since and
it follows that . □
After establishing the strong convergence properties of our proposed algorithm in Theorems 2 and 3, we now present additional results that further illustrate the robustness and practical applicability of our approach. In the following theorems, we first quantify the error estimate between an arbitrary point and the solution via the operator , and then we explore the relationship between and its approximation .
Specifically, Theorem 4 provides rigorous bounds linking the error to the distance between any point and a solution , thereby offering insights into the stability of the method. Building on this result, Theorem 5 establishes an upper bound on the distance between the fixed point of the exact operator and that of its approximation . Finally, Theorem 6 delivers a direct error bound in terms of a prescribed tolerance , which is particularly useful for practical implementations.
Let , H, T, g, S, L, and δ be as defined in Theorem 2, and suppose the conditions in (12) are satisfied. Then, for any solution and for any , the following inequalities hold:
(33)
where the operator is defined as . From equation (9), we know that . If , inequality (33) is trivially satisfied. On the other hand, if for all , we have
(34)
as well as (35)
(36)
Inserting (35) and (36) into (34), we obtain
or equivalently
On the other hand, we have
By employing similar arguments as in (34)–(36), we deduce
or equivalently
Combining the bounds derived from (34)–(36), we finally arrive at
which completes the proof. □
Transitioning from error estimates for the exact operator, Theorem 5 shifts the focus to the interplay between the original operator and its approximation . This theorem establishes an upper bound for the distance between their respective fixed points, thus providing a measure of how closely the approximation tracks the behavior of the exact operator. The theorem is stated as follows:
Let T, g, S, Φ, L, and δ be as defined in Theorem 4. Assume that is a map with a fixed point . Further, suppose the conditions in (12) are satisfied. Then, for a solution , the following holds
(37)
By (9), we know that . If , then inequality (37) is directly satisfied. If , then using the same arguments as in the proof of Theorem 4, we obtain
(38)
as well as (39)
(40)
Combining (38)–(40), we derive
Simplifying further, this yields
which completes the proof. □
Finally, Theorem 6 extends this analysis by providing a direct error bound in terms of a prescribed tolerance . This result is particularly valuable for practical implementations, as it offers a clear metric for the performance of the approximating operator in approximating the fixed point of . The theorem is formulated as follows:
Let T, g, S, Φ, , L, and δ be as defined in Theorem 5. Let be a map with a fixed point . Suppose the conditions stated in (12) hold. Additionally, assume that
(41)
for some fixed . Then, for a fixed point , such that , the following inequality holds
Let . From (38)–(40), we have
Then, using this inequality, as well as (37) and (41), we obtain , which had to be proven. □
3. An Application to the Convex Minimization Problem
Let $\mathcal{H}$ be a Hilbert space, $H$ be a closed and convex subset of $\mathcal{H}$, and $f \colon H \to \mathbb{R}$ be a convex function. The problem of finding the minima of f is referred to as the convex minimization problem, which is formulated as follows
$\min_{x \in H} f(x).$ (42)
Denote the set of solutions to the minimization problem (42) by ℵ. The minimization problem (42) can equivalently be expressed as a fixed-point problem: a point $x^{*} \in H$ is a solution to the minimization problem if and only if $x^{*} = P_H\big(x^{*} - \rho \nabla f(x^{*})\big)$, where $P_H$ is the metric projection onto $H$, $\nabla f$ denotes the gradient of the Fréchet differentiable function f, and $\rho > 0$ is a constant.
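A minimal sketch of this fixed-point characterization, assuming a simple quadratic objective and the box $H = [0,1]^{3}$ (both hypothetical choices, unrelated to the example treated later in this section):

```python
import numpy as np

# Hypothetical convex objective f(x) = 0.5 * ||x - c||^2 with gradient x - c.
c = np.array([1.5, -0.3, 0.4])
grad_f = lambda x: x - c

# Metric projection onto the illustrative box H = [0, 1]^3.
proj_H = lambda x: np.clip(x, 0.0, 1.0)

rho = 0.5        # any rho in (0, 2) works here since grad_f is 1-Lipschitz
x = np.zeros(3)
for _ in range(500):
    x_next = proj_H(x - rho * grad_f(x))   # x* = P_H(x* - rho * grad f(x*)) at a solution
    if np.linalg.norm(x_next - x) < 1e-12:
        x = x_next
        break
    x = x_next

print("minimizer of f over H:", x)          # expected: clip(c, 0, 1) = [1.0, 0.0, 0.4]
```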
Moreover, the minimization problem (42) can also be reformulated as a variational inequality problem:
A point $x^{*} \in H$ is a solution to the minimization problem if and only if $x^{*}$ satisfies the variational inequality $\langle \nabla f(x^{*}), x - x^{*} \rangle \geq 0$ for all $x \in H$.
Now, let $S \colon H \to H$ be a nonexpansive operator, and let $F(S)$ represent the set of fixed points of S. If , then for any , the following holds:
since is both the solution to the problem (42) and a fixed point of S. Based on these observations, if we set $g = I$ (the identity operator) and $T = \nabla f$ in the iterative algorithm (11), we derive the following algorithm, which converges to a point that is both a solution to the minimization problem (42) and a fixed point of S:
(43)
where , .
Let S, L, and δ be defined as in Theorem 2 and . Let be a convex mapping such that its gradient is a relaxed cocoercive and λ-Lipschitz mapping from H to . Assume that . Define the iterative sequence by the algorithm in (43) with real sequences , . In addition to the condition (12) in Theorem 2, assume the following condition is satisfied
(44)
Then, the sequence converges strongly to , and the following estimate holds
Set $g = I$ and $T = \nabla f$ in Theorem 2. The mapping $g = I$ is 1-Lipschitzian and relaxed cocoercive for every , satisfying the condition (44). Consequently, by Theorem 2, it follows that . □
Let
denote a real Hilbert space equipped with the norm , where for . Additionally, the set is a closed and convex subset of .
Now, we consider a function defined by , where . The solution to the minimization problem (42) for this f is the zero vector.
From [37] (Theorem 2.4.1, p. 167), the Fréchet derivative of f at a point is , which is unique. For , we have
from which we deduce
This means that is a relaxed cocoercive operator. Additionally, since is a 14-Lipschitz function.
Let be defined by . The operator S is nonexpansive since
Moreover, . Based on assumptions (12) and (44), we set , , and . Consequently, we calculate and , which yields . Also, we have . It is evident that these parameter choices satisfy conditions (12) and (44).
Next, let for all n. To ensure clarity, we denote a sequence of elements in the Hilbert space H as , where . Under these notations, the iterative algorithms defined in (43) and (11) are reformulated as follows:
(45)
and(46)
where is defined by , when , and , when . Let the initial point for both iterative processes be the sequence . From Table 1 and Table 2, as well as from Figure 1, it is evident that both algorithms (45) and (46) converge strongly to the point . Furthermore, the algorithm in (45) exhibits faster convergence compared to the algorithm in (46).
As a prototype, consider the mapping Φ defined as
and . With these definitions, the results of Theorems 4, 5, and 6 can be straightforwardly verified. All computations in this example were performed using .
4. Numerical Experiments
In this section, we adapt and apply the iterative algorithm (11) within the context of machine learning to demonstrate the practical significance of the theoretical results derived in this study. By doing so, we highlight the real-world applicability of the proposed methods beyond their theoretical foundations. Furthermore, we compare the performance of algorithm (11) with algorithm (10), providing additional support for the validity of the theorems presented in previous sections.
Our focus is on the framework of loss minimization in machine learning, employing two novel projected gradient algorithms to solve related optimization problems. Specifically, we consider a regression/classification setup characterized by a dataset consisting of m samples and d attributes, represented as , with corresponding outcomes (labels) Y. The optimization problem is formulated as follows:
Using the -projection operator (onto the positive quadrant), , , , and , we define two iterative algorithms:
(47)
and(48)
where , , and . To compute the optimal value of the step size , a backtracking algorithm is employed (an illustrative sketch of such a projected-gradient step with backtracking is given after the dataset list below). All numerical implementations and simulations were carried out using . The real-world datasets used in this study are:
- Aligned Dataset (in Swarm Behavior): Swarm behavior refers to the collective dynamics observed in groups of entities such as birds, insects (e.g., ants), fish, or animals moving cohesively in large masses. These entities exhibit synchronized motion at the same speed and direction while avoiding mutual interference. The Aligned dataset comprises pre-classified data relevant to swarm behavior, including 24,017 instances with 2400 attributes.
- COVID-19 Dataset: COVID-19, an ongoing viral epidemic, primarily causes mild to moderate respiratory infections but can lead to severe complications, particularly in elderly individuals and those with underlying conditions such as cardiovascular disease, diabetes, chronic respiratory illnesses, and cancer. The dataset is a digitized collection of patient records detailing symptoms, medical history, and risk classifications. It is designed to facilitate predictive modeling for patient risk assessment, resource allocation, and medical device planning. This dataset includes 1,048,576 instances with 21 attributes.
- Predict Diabetes Dataset: Provided by the National Institute of Diabetes and Digestive and Kidney Diseases (USA), this dataset contains diagnostic metrics for determining the presence of diabetes. It consists of 768 instances with 9 attributes, enabling the development of predictive models for diabetes diagnosis.
- Sobar Dataset: The Sobar dataset focuses on factors related to cervical cancer prevention and management. It includes both personal and social determinants, such as perception, motivation, empowerment, social support, norms, attitudes, and behaviors. The dataset comprises 72 instances with 20 attributes.
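The exact update rules in (47) and (48) involve the paper's specific operators and parameters, so the sketch below only illustrates the shared ingredients mentioned above: a projection onto the positive quadrant combined with an Armijo-type backtracking choice of the step size $\rho$. The least-squares loss, the synthetic data, and all parameter values are hypothetical stand-ins.

```python
import numpy as np

# Hypothetical least-squares loss F(w) = 0.5 * ||X w - Y||^2 on synthetic data;
# the objective actually minimized in the experiments is not reproduced here.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 8))
Y = rng.standard_normal(50)

F      = lambda w: 0.5 * np.linalg.norm(X @ w - Y) ** 2
grad_F = lambda w: X.T @ (X @ w - Y)
proj   = lambda w: np.maximum(w, 0.0)        # projection onto the positive quadrant

def backtracking_step(w, rho0=1.0, beta=0.5, max_halvings=50):
    """Shrink rho until the projected step satisfies a sufficient-decrease test."""
    rho, g, Fw = rho0, grad_F(w), F(w)
    for _ in range(max_halvings):
        w_new = proj(w - rho * g)
        d = w_new - w
        # Accept rho once F lies below its quadratic upper model at w (holds for rho <= 1/L).
        if F(w_new) <= Fw + g @ d + np.dot(d, d) / (2.0 * rho):
            return w_new
        rho *= beta
    return w_new

w = np.zeros(8)
for k in range(1000):
    w_next = backtracking_step(w)
    if abs(F(w) - F(w_next)) < 1e-10:   # stop when successive function values stagnate
        w = w_next
        break
    w = w_next

print("iterations used:", k + 1, " final loss:", F(w))
```

The stopping test on successive function values mirrors the tolerance rule described in the methodology below.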
The methodology for dataset analysis and model evaluation was carried out as follows:
All datasets were split into training and testing subsets. During the analysis, we set the tolerance value (i.e., the difference between two successive function values) to and capped the maximum number of iterations at . To evaluate the performance of the algorithms on these datasets, we recorded the following metrics:
- Function values ;
- The norm of the difference between the optimal function value and the function values at each iteration, i.e., ;
- Computation times (in seconds);
- Prediction and test accuracies, measured using root mean square error (rMSE); a short illustrative snippet for this metric follows the list.
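Since the reported accuracies are rMSE values computed from model predictions, the following minimal snippet shows the metric itself; the labels and predictions are dummy values, not drawn from the datasets above.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between observed labels and model predictions."""
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

# Dummy usage (illustrative values only).
print(rmse([1, 0, 1, 1], [0.9, 0.2, 0.8, 0.7]))
```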
The results and observations are as follows:
- Function Values: In Figure 2, the function values for the evaluated algorithms are presented.
- Convergence Analysis: Figure 3 demonstrates the convergence performance of the algorithms in terms of .
- Prediction Accuracy: Figure 4 showcases the prediction accuracy (rMSE) achieved by the algorithms during the testing phase.
The results, as illustrated in Figure 2, Figure 3 and Figure 4 and summarized in Table 3, clearly indicate that algorithm (47) outperforms algorithm (48) in terms of efficiency and accuracy.
Table 3 clearly demonstrates that algorithm (47) yields significantly better results than algorithm (48) across various datasets. In terms of the number of iterations, (47) converges in far fewer steps (for example, 135 versus 2559 for the Aligned dataset and 116 versus 10,480 for the Diabetes dataset) and achieves the same or lower minimum F values, resulting in superior outcomes. Moreover, (47) shows slightly lower training errors (rMse) and, in most cases—with the exception of COVID-19, where (48) achieves a marginally better test error—comparable or improved test errors (rMse2). Most importantly, the training time for (47) is significantly shorter across all datasets (for instance, 6.28 s versus 118.14 s for the Aligned dataset and 0.022 s versus 2.83 s for the Diabetes dataset), which confers an advantage in computational efficiency. Overall, these results demonstrate that algorithm (47) not only converges faster but also delivers better performance in terms of both accuracy and computational cost compared to algorithm (48).
5. Conclusions
This study presents the development of a novel Picard-S hybrid iterative algorithm designed to address general variational inequalities and nonexpansive mappings within real Hilbert spaces. By relaxing the stringent constraints traditionally imposed on parametric sequences, the proposed algorithm achieves enhanced flexibility and broader applicability while retaining its strong convergence properties. This advancement not only bridges gaps in the existing theoretical framework but also establishes a robust equivalence between the new method and a previously established algorithm, demonstrating its consistency and efficacy.
One of the key contributions of this work is the integration of the Collage–Anticollage Theorem, which provides an innovative perspective on transformations associated with general nonlinear variational inequalities (GNVI). This theorem, explored for the first time in this context, enriches the theoretical toolkit for analyzing and solving variational inequalities. The study also delves into the continuity properties of solutions to variational inequalities, addressing a rarely discussed yet crucial aspect of these problems, thereby offering a more holistic approach to their resolution.
Numerical experiments conducted as part of this research validate the proposed algorithm's superior performance. In comparison to an existing algorithm, the new algorithm consistently converges to optimal solutions with fewer iterations, underscoring its computational efficiency and practical advantages. Applications in areas such as convex optimization and machine learning further highlight its versatility. For example, the algorithm has shown promise in solving real-world problems related to classification, regression, and large-scale optimization tasks, solidifying its relevance in both theoretical and applied domains.
Conceptualization, M.E., F.G. and G.V.M.; data curation, E.H. and M.E.; methodology, M.E., F.G. and G.V.M.; formal analysis, E.H., M.E., F.G. and G.V.M.; investigation, E.H., M.E., F.G. and G.V.M.; resources, E.H., M.E., F.G. and G.V.M.; writing—original draft preparation, M.E. and F.G.; writing—review and editing, E.H., M.E., F.G. and G.V.M.; visualization, E.H., M.E., F.G. and G.V.M.; supervision, F.G., M.E. and G.V.M.; project administration, G.V.M.; funding acquisition, E.H., M.E. and F.G. All authors have read and agreed to the published version of the manuscript.
Data are contained within the article.
The authors declare no conflicts of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 1 Graph in
Figure 2 Comparison of the efficiency of algorithms (47) and (48): function values.
Figure 3 Comparison of the efficiency of algorithms (47) and (48): convergence behavior.
Figure 4 Comparison of the efficiency of algorithms (47) and (48): prediction accuracy (rMSE).
Convergence behavior of algorithm (45).
Convergence behavior of algorithm (46).
Comparison of the efficiency of algorithms (47) and (48).

| | Aligned | | Diabetes | |
|---|---|---|---|---|
| | Algorithm (47) | Algorithm (48) | Algorithm (47) | Algorithm (48) |
| # of iterations | 135 | 2559 | 116 | 10,480 |
| Min F value | 633.5152581 | 633.5407101 | 46.31812832 | 47.514786 |
| rMse (Train.) | 0.355915454 | 0.355922962 | 0.345908859 | 0.3504142 |
| rMse2 (Test) | 0.255097007 | 0.254559006 | 0.272919936 | 0.2745554 |
| Train. time (s) | 6.282856 | 118.1402858 | 0.0217341 | 2.8294408 |

| | COVID-19 | | Sobar | |
|---|---|---|---|---|
| | Algorithm (47) | Algorithm (48) | Algorithm (47) | Algorithm (48) |
| # of iterations | 64,173 | 100,000 | 542 | 2817 |
| Min F value | 449.786 | 490.524 | 4.2837252 | 4.668452 |
| rMse (Train.) | 0.2994 | 0.3132 | 0.3428808 | 0.358791 |
| rMse2 (Test) | 0.16943 | 0.15862 | 0.288059 | 0.295836 |
| Train. time (s) | 187.705 | 420.37 | 0.4419386 | 1.013948 |
Appendix A
Let L and
Using (
1. Stampacchia, G. Formes bilinéaires coercitives sur les ensembles convexes. C. R. Acad. Sci. Paris; 1964; 258, pp. 4413-4416.
2. Lions, J.; Stampacchia, G. Variational inequalities. Commun. Pure Appl. Math.; 1967; 20, pp. 493-519. [DOI: https://dx.doi.org/10.1002/cpa.3160200302]
3. Kinderlehrer, D.; Stampacchia, G. An Introduction to Variational Inequalities and Their Applications; Academic Press: New York, NY, USA, 1980.
4. Glowinski, R.; Lions, J.L.; Trémolières, R. Numerical Analysis of Variational Inequalities; North-Holland: Amsterdam, The Netherlands, 1981.
5. Giannessi, F.; Maugeri, A. (Eds.) Variational Inequalities and Network Equilibrium Problems; Springer: New York, NY, USA, 1995.
6. Atalan, Y.; Hacıoğlu, E.; Ertürk, M.; Gürsoy, F.; Milovanović, G.V. Novel algorithms based on forward-backward splitting technique: Effective methods for regression and classification. J. Glob. Optim.; 2024; 90, pp. 869-890. [DOI: https://dx.doi.org/10.1007/s10898-024-01425-w]
7. Gürsoy, F.; Hacıoğlu, E.; Karakaya, V.; Milovanović, G.V.; Uddin, I. Variational inequality problem involving multivalued nonexpansive mapping in CAT(0) Spaces. Results Math.; 2022; 77, 131. [DOI: https://dx.doi.org/10.1007/s00025-022-01663-y]
8. Keten Çopur, A.; Hacıoğlu, E.; Gürsoy, F.; Ertürk, M. An efficient inertial type iterative algorithm to approximate the solutions of quasi variational inequalities in real Hilbert spaces. J. Sci. Comput.; 2021; 89, 50. [DOI: https://dx.doi.org/10.1007/s10915-021-01657-y]
9. Gürsoy, F.; Ertürk, M.; Abbas, M. A Picard-type iterative algorithm for general variational inequalities and nonexpansive mappings. Numer. Algorithms; 2020; 83, pp. 867-883. [DOI: https://dx.doi.org/10.1007/s11075-019-00706-w]
10. Atalan, Y. On a new fixed point iterative algorithm for general variational inequalities. J. Nonlinear Convex Anal.; 2019; 20, pp. 2371-2386.
11. Maldar, S. Iterative algorithms of generalized nonexpansive mappings and monotone operators with application to convex minimization problem. J. Appl. Math. Comput.; 2022; 68, pp. 1841-1868. [DOI: https://dx.doi.org/10.1007/s12190-021-01593-y]
12. Maldar, S. New parallel fixed point algorithms and their application to a system of variational inequalities. Symmetry; 2022; 14, 1025. [DOI: https://dx.doi.org/10.3390/sym14051025]
13. Konnov, I.V. Combined relaxation methods for variational inequalities. Lecture Notes in Mathematical Economics; Springer: Berlin/Heidelberg, Germany, 2000.
14. Facchinei, F.; Pang, J.-S. Finite Dimensional Variational Inequalities and Complementarity Problems; Springer: New York, NY, USA, Berlin/Heidelberg, Germany, 2003; Volumes I and II.
15. Giannessi, F.; Maugeri, A. (Eds.) Variational Analysis and Applications; Springer: New York, NY, USA, 2005.
16. Ansari, Q.H. (Ed.) Topics in Nonlinear Analysis and Optimization; World Education: Delhi, India, 2012.
17. Ansari, Q.H.; Lalitha, C.S.; Mehta, M. Generalized Convexity. Nonsmooth Variational Inequalities and Nonsmooth Optimization; CRC Press: Boca Raton, FL, USA, London, UK, New York, NY, USA, 2014.
18. Noor, M.A. General variational inequalities. Appl. Math. Lett.; 1988; 1, pp. 119-122. [DOI: https://dx.doi.org/10.1016/0893-9659(88)90054-7]
19. Noor, M.A. Variational inequalities in physical oceanography. Ocean Waves Engineering, Advances in Fluid Mechanics; Rahman, M. WIT Press: Southampton, UK, 1994; Volume 2.
20. Bnouhachem, A.; Liu, Z.B. Alternating direction method for maximum entropy subject to simple constraint sets. J. Optim. Theory Appl.; 2004; 121, pp. 259-277. [DOI: https://dx.doi.org/10.1023/B:JOTA.0000037405.55660.a4]
21. Kocvara, M.; Outrata, J.V. On implicit complementarity problems with application in mechanics. Proceedings of the IFIP Conference on Numerical Analysis and Optimization; Rabat, Morocco, 15–17 December 1993.
22. Noor, M.A. General variational inequalities and nonexpansive mappings. J. Math. Anal. Appl.; 2007; 331, pp. 810-822. [DOI: https://dx.doi.org/10.1016/j.jmaa.2006.09.039]
23. Ahmad, R.; Ansari, Q.H.; Irfan, S.S. Generalized variational inclusions and generalized resolvent equations in Banach spaces. Comput. Math. Appl.; 2005; 29, pp. 1825-1835. [DOI: https://dx.doi.org/10.1016/j.camwa.2004.10.044]
24. Ahmad, R.; Ansari, Q.H. Generalized variational inclusions and H-resolvent equations with H-accretive operators. Taiwan. J. Math.; 2007; 11, pp. 703-716. [DOI: https://dx.doi.org/10.11650/twjm/1500404753]
25. Ahmad, R.; Ansari, Q.H. An iterative algorithm for generalized nonlinear variational inclusions. Appl. Math. Lett.; 2000; 13, pp. 23-26. [DOI: https://dx.doi.org/10.1016/S0893-9659(00)00028-8]
26. Fang, Y.P.; Huang, N.J. H-Monotone operator and resolvent operator technique for variational inclusions. Appl. Math. Comput.; 2003; 145, pp. 795-803. [DOI: https://dx.doi.org/10.1016/S0096-3003(03)00275-3]
27. Huang, N.J.; Fang, Y.P. A new class of general variational inclusions involving maximal η-monotone mappings. Publ. Math. Debrecen; 2003; 62, pp. 83-98. [DOI: https://dx.doi.org/10.5486/PMD.2003.2629]
28. Huang, Z.; Noor, M.A. Equivalency of convergence between one-step iteration algorithm and two-step iteration algorithm of variational inclusions for H-monotone mappings. Comput. Math. Appl.; 2007; 53, pp. 1567-1571. [DOI: https://dx.doi.org/10.1016/j.camwa.2006.08.044]
29. Noor, M.A.; Huang, Z. Some resolvent iterative methods for variational inclusions and nonexpansive mappings. Appl. Math. Comput.; 2007; 194, pp. 267-275. [DOI: https://dx.doi.org/10.1016/j.amc.2007.04.037]
30. Zeng, L.C.; Guu, S.M.; Yao, J.C. Characterization of H-monotone operators with applications to variational inclusions. Comput. Math. Appl.; 2005; 50, pp. 329-337. [DOI: https://dx.doi.org/10.1016/j.camwa.2005.06.001]
31. Gürsoy, F.; Sahu, D.R.; Ansari, Q.H. S-iteration process for variational inclusions and its rate of convergence. J. Nonlinear Convex Anal.; 2016; 17, pp. 1753-1767.
32. Nagurney, A. Network Economics: A Variational Inequality Approach; Springer: Berlin/Heidelberg, Germany, 1999.
33. Kotsireas, I.S.; Nagurney, A.; Pardalos, P.M. Dynamics of Disasters–Algorithmic Approaches and Applications; Springer Optimization and Its Applications 140 Springer: Berlin/Heidelberg, Germany, 2018.
34. Fargetta, G.; Maugeri, A.; Scrimali, L. A stochastic Nash equilibrium problem for medical supply competition. J. Optim. Theory Appl.; 2022; 193, pp. 354-380. [DOI: https://dx.doi.org/10.1007/s10957-022-02025-y]
35. Qihou, L. A convergence theorem of the sequence of Ishikawa iterates for quasi-contractive mappings. J. Math. Anal. Appl.; 1990; 146, pp. 301-305. [DOI: https://dx.doi.org/10.1016/0022-247X(90)90303-W]
36. Weng, X. Fixed point iteration for local strictly pseudocontractive mapping. Proc. Amer. Math. Soc.; 1991; 113, pp. 727-731. [DOI: https://dx.doi.org/10.1090/S0002-9939-1991-1086345-8]
37. Milovanović, G.V. Numerical Analysis and Approximation Theory—Introduction to Numerical Processes and Solving of Equations; Zavod za udžbenike: Beograd, Serbia, 2014; (In Serbian)
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).