1. Introduction
Throughout this paper, let $\mathbb{N}$ and $\mathbb{R}$ be the sets of positive integers and real numbers, respectively. Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, and let $C$ be a nonempty, closed and convex subset of $H$. Let $A: C \to H$ be a nonlinear operator. The variational inequality problem is to find some $x^* \in C$ such that
$$\langle A x^*, x - x^* \rangle \ge 0 \quad \text{for all } x \in C. \qquad (1)$$
The set of solutions of the variational inequality is denoted by $VI(C, A)$. The variational inequality problem has attracted increasing attention from many scholars and is an important branch of nonlinear analysis. Its applications involve many different fields, such as the engineering sciences and medical image processing.
By a standard transformation of (1), the variational inequality problem is equivalent to a fixed point problem. In other words, it can be converted to finding a point $x^* \in C$ such that
$$x^* = P_C(x^* - \lambda A x^*). \qquad (2)$$
where $P_C$ is the metric projection of $H$ onto $C$ and $\lambda$ is a positive real constant. The corresponding iteration is
$$x_{n+1} = P_C(x_n - \lambda A x_n). \qquad (3)$$
This method is an example of the so-called gradient projection method. It is well known that if $A$ is $\eta$-strongly monotone and $L$-Lipschitz continuous, the variational inequality has a unique solution, and for $\lambda \in (0, 2\eta/L^2)$ the sequence generated by (3) converges strongly to this unique solution. If $A$ is $k$-inverse strongly monotone, $\lambda \in (0, 2k)$, and the solution set is nonempty, then the sequence converges weakly to a point of $VI(C, A)$.
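The gradient projection iteration (3) is easy to sketch numerically. The following minimal Python example is ours, not the paper's: the operator $A(x) = x - b$ (which is 1-strongly monotone and 1-Lipschitz) and the unit-ball constraint set are illustrative assumptions, chosen so that the unique VI solution is known in closed form.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Metric projection P_C onto the closed ball of the given radius at 0."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

def gradient_projection(A, proj_C, x0, lam, n_iter=200):
    """Iterate x_{n+1} = P_C(x_n - lam * A(x_n)), i.e., Equation (3)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = proj_C(x - lam * A(x))
    return x

# A(x) = x - b is 1-strongly monotone and 1-Lipschitz, so lam in (0, 2) works.
b = np.array([3.0, 4.0])
x_star = gradient_projection(lambda x: x - b, project_ball, np.zeros(2), lam=1.0)
# For this A, the VI solution over the unit ball is P_C(b) = b/||b||.
```

For this particular choice of $A$, one can check by hand that the iteration reaches $b/\|b\| = (0.6, 0.8)$ immediately and stays there.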
In 1976, Korpelevich [1] proposed an algorithm now known as the extragradient method [1,2]:
$$y_n = P_C(x_n - \lambda A x_n), \qquad x_{n+1} = P_C(x_n - \lambda A y_n), \qquad (4)$$
for every $n \ge 0$, where $\lambda \in (0, 1/L)$ and $A$ is Lipschitz continuous and monotone. Compared with Equation (3), Equation (4) avoids the hypothesis of strong monotonicity of the operator $A$. If $VI(C, A) \neq \emptyset$, the sequence generated by (4) converges weakly to an element of $VI(C, A)$. Although the extragradient method weakens the condition on the operator, it requires two projections onto $C$ in each iteration. Moreover, the extragradient method is practical only when $P_C$ has a closed form, in other words, when $P_C$ has an explicit expression. In some cases $P_C$ is not easy to calculate, which limits the method: when $C$ is a closed ball or a half-space, $P_C$ has an analytical expression, while for a general closed convex set $P_C$ often does not.
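The two closed-form projections just mentioned can be written down directly. The snippet below is our own illustration (the helper names are ours): it implements $P_C$ for a half-space $\{z : \langle a, z\rangle \le \beta\}$ and for a closed ball.

```python
import numpy as np

def project_halfspace(x, a, beta):
    """P_C for the half-space C = {z : <a, z> <= beta}: shift along the normal."""
    violation = float(a @ x) - beta
    if violation <= 0.0:
        return x                      # x already lies in the half-space
    return x - (violation / float(a @ a)) * a

def project_ball(x, center, radius):
    """P_C for the closed ball C = B(center, radius): rescale toward the center."""
    d = x - center
    nrm = np.linalg.norm(d)
    return x if nrm <= radius else center + d * (radius / nrm)

p1 = project_halfspace(np.array([2.0, 0.0]), np.array([1.0, 0.0]), 1.0)
p2 = project_ball(np.array([3.0, 4.0]), np.zeros(2), 1.0)
```

Both formulas are exact, which is what makes ball and half-space constraints so convenient for projection-type methods.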
To overcome this difficulty, many authors have improved the method in various ways. To our knowledge, there are three kinds of methods, all of which modify the second projection in Equation (4), and in all three the operator $A$ is Lipschitz continuous and monotone. The first is the subgradient extragradient method, proposed by Censor et al. [3] in 2011, which iterates by the process:
$$y_n = P_C(x_n - \lambda A x_n), \qquad T_n = \{ w \in H : \langle x_n - \lambda A x_n - y_n,\, w - y_n \rangle \le 0 \}, \qquad x_{n+1} = P_{T_n}(x_n - \lambda A y_n), \qquad (5)$$
where $\lambda \in (0, 1/L)$. The key step of the subgradient extragradient method replaces the second projection onto $C$ of the extragradient method by a projection onto a specially constructed half-space, which clearly reduces the difficulty of the calculation. The second is Tseng's extragradient method, studied by Duong Viet Thong and Dang Van Hieu [4] in 2017:
$$y_n = P_C(x_n - \lambda_n A x_n), \qquad x_{n+1} = y_n - \lambda_n (A y_n - A x_n), \qquad (6)$$
where the step sizes $\{\lambda_n\}$ are generated by a line search rule; in particular, this algorithm does not require knowledge of the Lipschitz constant. The third is the projection and contraction method, studied by Q.L. Dong et al. [5] in 2017:
$$y_n = P_C(x_n - \lambda A x_n), \qquad d_n = (x_n - y_n) - \lambda (A x_n - A y_n), \qquad x_{n+1} = x_n - \gamma \beta_n d_n, \qquad (7)$$
for each $n \ge 0$, where $\lambda \in (0, 1/L)$, $\gamma \in (0, 2)$ and $\beta_n = \langle x_n - y_n, d_n \rangle / \|d_n\|^2$. The sequences generated by Equations (5)–(7) all converge weakly to a solution of the variational inequality.
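To see concretely how (5) trades the second projection onto $C$ for a half-space projection, here is a small Python sketch under our own assumptions (the operator $A(x) = x - b$ and the unit-ball $C$ are illustrative choices, not from the paper):

```python
import numpy as np

def project_ball(x, radius=1.0):
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

def project_halfspace(w, a, beta):
    violation = float(a @ w) - beta
    return w if violation <= 0.0 else w - (violation / float(a @ a)) * a

def subgradient_extragradient(A, proj_C, x0, lam, n_iter=100):
    """Method (5): the second projection onto C is replaced by the projection
    onto the half-space T_n = {w : <x_n - lam*A(x_n) - y_n, w - y_n> <= 0}."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        z = x - lam * A(x)
        y = proj_C(z)
        a = z - y                      # outward normal of the half-space T_n
        w = x - lam * A(y)
        if float(a @ a) > 1e-16:
            x = project_halfspace(w, a, float(a @ y))
        else:                          # z already in C, so T_n is the whole space
            x = w
    return x

b = np.array([3.0, 4.0])               # A(x) = x - b is monotone and 1-Lipschitz
sol = subgradient_extragradient(lambda x: x - b, project_ball, np.zeros(2), lam=0.5)
```

Only the first projection uses $C$; the second is the cheap closed-form half-space projection, exactly the point of the method.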
Comparing the above three methods, we see that the conditions of the algorithm can be relaxed and the variational inequality problem can still be solved. However, calculating projections remains essential in these methods. So, is there a way to solve the variational inequality problem that avoids the calculation of projections altogether?
As is well known, Yamada [6] introduced the so-called hybrid steepest descent method in 2001:
$$x_{n+1} = T x_n - \mu \lambda_{n+1} F(T x_n), \qquad (8)$$
which is essentially an algorithmic solution to the variational inequality problem. It does not require the calculation of $P_C$, but it does require a closed-form expression of a nonexpansive mapping $T$ whose fixed point set is $C$. If $T$ is a nonexpansive mapping with $Fix(T) = C$ and $F$ is $k$-Lipschitz continuous and $\eta$-strongly monotone, the sequence generated by (8) converges strongly to the unique solution of $VI(C, F)$. Inspired by this idea, in 2014 Zhou and Wang [7] proposed a new iterative algorithm based on Yamada's hybrid steepest descent method and the Mann iterative method:
(9)
where $\{\alpha_n\}$ and $\{\lambda_n\}$ satisfy appropriate conditions. In (9), the $T_i$ are nonexpansive mappings whose common fixed point set is nonempty and $F$ is an $\eta$-strongly monotone, $L$-Lipschitz continuous mapping. The sequence generated by (9) then converges strongly to the unique solution of the corresponding variational inequality. In particular, in the case of a single mapping $T$, (9) can be rewritten as (10):
and the sequence generated by (10) also converges strongly to the unique solution. The advantage of Equations (4)–(7) is that the conditions of the algorithms are reduced, while the advantage of Yamada's algorithm is that it avoids projections. So, can we combine the advantages of these several methods to design a new algorithm? This naturally raises a question: if we weaken the conditions of Equation (10), will we obtain a different result? This is the main issue we explore in this paper.
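Yamada's scheme (8) avoids $P_C$ whenever a nonexpansive $T$ with $Fix(T) = C$ is available in closed form. The following is a minimal sketch under our own illustrative assumptions ($T$ is the projection onto the unit ball, $F(x) = x - b$, and the step sizes $\lambda_n = 1/n$ satisfy $\lambda_n \to 0$, $\sum_n \lambda_n = \infty$); none of these choices come from the paper.

```python
import numpy as np

def yamada(T, F, x0, mu=1.0, n_iter=2000):
    """Hybrid steepest descent (8): x_{n+1} = T(x_n) - mu*lam_{n+1}*F(T(x_n)),
    with lam_n -> 0 and sum(lam_n) = infinity (here lam_n = 1/n)."""
    x = np.asarray(x0, dtype=float)
    for n in range(1, n_iter + 1):
        tx = T(x)
        x = tx - (mu / n) * F(tx)
    return x

# T: projection onto the unit ball (nonexpansive, Fix(T) = unit ball = C).
T = lambda x: x if np.linalg.norm(x) <= 1.0 else x / np.linalg.norm(x)
# F(x) = x - b is 1-strongly monotone and 1-Lipschitz.
b = np.array([3.0, 4.0])
x_star = yamada(T, lambda x: x - b, np.zeros(2))
# x_star approaches the unique solution of VI(C, F), namely P_C(b).
```

Note that $T$ appears only through function evaluation; no projection onto a general convex set is ever computed inside the loop beyond evaluating $T$ itself.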
In this paper, motivated and inspired by the above results, we introduce a new iterative algorithm, given in (11) below.
In this algorithm, we weaken the conditions on the operators: we replace the strong monotonicity of $F$ by inverse strong monotonicity. We then prove the weak convergence of our algorithm. It is worth emphasizing that our algorithm requires no projection, while the condition on the operator is also properly weakened.
Finally, the outline of this paper is as follows. In Section 2, we list some useful basic definitions and lemmas which will be used in this paper. In Section 3, we prove the weak convergence theorem for our main algorithm. In Section 4, using the above conclusion, we obtain some new weak convergence theorems for the equilibrium problem and the split feasibility problem. In Section 5, we give a concrete example and numerical results to verify the correctness of our conclusions.
2. Preliminaries
In what follows, $H$ denotes a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, and $C$ denotes a nonempty, closed and convex subset of $H$. We write $x_n \to x$ to denote that the sequence $\{x_n\}$ converges strongly to a point $x$, i.e., $\|x_n - x\| \to 0$, and $x_n \rightharpoonup x$ to denote that $\{x_n\}$ converges weakly to $x$, i.e., $\langle x_n, y \rangle \to \langle x, y \rangle$ for every $y \in H$. If there exists a subsequence of $\{x_n\}$ converging weakly to a point $z$, then $z$ is called a weak cluster point of $\{x_n\}$. We use $\omega_w(x_n)$ to denote the set of all weak cluster points of $\{x_n\}$.
([8]) A mapping $T: C \to C$ is called nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in C$.
The set of fixed points of $T$ is the set $Fix(T) = \{x \in C : Tx = x\}$.
It is well known that if $T$ is nonexpansive and $Fix(T) \neq \emptyset$, then $Fix(T)$ is closed and convex.
A mapping $F: H \to H$ is called
(i) $L$-Lipschitz, where $L > 0$, iff $\|Fx - Fy\| \le L\|x - y\|$ for all $x, y \in H$;
(ii) monotone iff $\langle Fx - Fy, x - y \rangle \ge 0$ for all $x, y \in H$;
(iii) strongly monotone iff $\langle Fx - Fy, x - y \rangle \ge \eta \|x - y\|^2$ for all $x, y \in H$, where $\eta > 0$; in this case, $F$ is said to be $\eta$-strongly monotone;
(iv) inverse strongly monotone iff $\langle Fx - Fy, x - y \rangle \ge k \|Fx - Fy\|^2$ for all $x, y \in H$, where $k > 0$; in this case, $F$ is said to be $k$-inverse strongly monotone.
It is well known that if $F$ is $k$-inverse strongly monotone, then $F$ is also $\frac{1}{k}$-Lipschitz continuous.
([9]) A mapping $T: H \to H$ is said to be averaged if and only if it can be written as a convex combination of the identity $I$ and a nonexpansive mapping, that is,
$$T = (1 - \alpha) I + \alpha S,$$
where $\alpha \in (0, 1)$ and $S: H \to H$ is a nonexpansive mapping. More precisely, we then say that $T$ is $\alpha$-averaged.
Let $H$ be a real Hilbert space. Then for all $x, y \in H$ and $\alpha \in [0, 1]$:
(i) $\|x + y\|^2 = \|x\|^2 + 2\langle x, y \rangle + \|y\|^2$;
(ii) $\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y \rangle$;
(iii) $\|\alpha x + (1 - \alpha) y\|^2 = \alpha \|x\|^2 + (1 - \alpha)\|y\|^2 - \alpha(1 - \alpha)\|x - y\|^2$.
([9]) Let $H$ be a real Hilbert space and let $C$ be a nonempty bounded closed convex subset of $H$. If $T$ is a nonexpansive mapping of $C$ into $C$, then $Fix(T) \neq \emptyset$.
([9]) Let $H$ be a real Hilbert space. Then:
(i) $T$ is nonexpansive if and only if the complement $I - T$ is $\frac{1}{2}$-inverse strongly monotone;
(ii) if $T$ is $\nu$-inverse strongly monotone, then for $\gamma > 0$, $\gamma T$ is $\frac{\nu}{\gamma}$-inverse strongly monotone;
(iii) $T$ is averaged if and only if the complement $I - T$ is $\nu$-inverse strongly monotone for some $\nu > \frac{1}{2}$; indeed, for $\alpha \in (0, 1)$, $T$ is $\alpha$-averaged if and only if $I - T$ is $\frac{1}{2\alpha}$-inverse strongly monotone.
(Demiclosedness Principle [10]): Let $C$ be a closed and convex subset of a real Hilbert space $H$ and let $T: C \to C$ be a nonexpansive mapping with $Fix(T) \neq \emptyset$. If the sequence $\{x_n\} \subset C$ converges weakly to $x$ and $\{(I - T)x_n\}$ converges strongly to $y$, then $(I - T)x = y$.
In particular, if $x_n \rightharpoonup x$ and $(I - T)x_n \to 0$, then $(I - T)x = 0$. In other words, $x \in Fix(T)$.
([9]) Let $C$ be a nonempty closed and convex subset of a real Hilbert space $H$ and let $\{x_n\}$ be a sequence in $H$ such that the following two properties hold:
(i) $\lim_{n\to\infty} \|x_n - c\|$ exists for each $c \in C$;
(ii) $\omega_w(x_n) \subset C$.
Then the sequence $\{x_n\}$ converges weakly to a point in $C$.
([11]) Let $C$ be a nonempty closed and convex subset of a real Hilbert space $H$ and let $\{x_n\}$ be a sequence in $H$. Suppose
$$\|x_{n+1} - c\| \le \|x_n - c\|$$
for every $c \in C$ and every $n$. Then the sequence $\{P_C x_n\}$ converges strongly to a point in $C$.
3. Main Results
In this section, we give the main result of this paper. Based on the iterative scheme (10), we weaken the condition on the operator and present our new algorithm. We prove that the sequence generated by the new algorithm converges weakly, and to the same kind of limit as before: it converges weakly to an element of the intersection of the set of zero points of an inverse strongly monotone mapping and the set of fixed points of a nonexpansive mapping in a real Hilbert space.
Let $H$ be a real Hilbert space and let $T: H \to H$ be a nonexpansive mapping with $Fix(T) \neq \emptyset$. Let $F$ be a $k$-inverse strongly monotone mapping of $H$ into $H$. Assume that $Fix(T) \cap F^{-1}(0) \neq \emptyset$. Let the sequences be generated by $x_1 \in H$ and
(11)
where $\{\alpha_n\}$ and $\{\lambda_n\}$ satisfy the following conditions:
(i) ;
(ii) .
Then the sequence generated by (11) converges weakly to a point $q \in Fix(T) \cap F^{-1}(0)$. At the same time, $q$ is also a solution of the corresponding variational inequality problem.
Let $q \in Fix(T) \cap F^{-1}(0)$. We have
(12)
We also deduce
(13)
Combining (12) and (13), we have
Therefore, there exists such that
and the sequence $\{x_n\}$ is bounded. Meanwhile, we also obtain that the sequence $\{y_n\}$ is bounded. Below, we divide the argument into two steps.
Firstly, let us show that .
So, we obtain that
By (12), we have
Hence,
Since $F$ is $k$-inverse strongly monotone, we can rewrite $I - \lambda_n F$ in the following form:
$$I - \lambda_n F = \Big(1 - \frac{\lambda_n}{2k}\Big) I + \frac{\lambda_n}{2k} S_n, \qquad (14)$$
where $S_n$ is a nonexpansive mapping of $H$ into $H$ for each $n$. So
Obviously, we can obtain that
By (14), we get
so
Hence
So, by Lemma 4, we get .
Secondly, let us show that .
In the following, we first prove that is a nonexpansive mapping.
So, is a nonexpansive mapping.
Since , it follows that .
Because is bounded, we can find a subsequence which converges weakly to and . For each , assume that the subsequence of converges weakly to . We have
Let us assume
Hence, we can obtain
From Lemma 4, we get the conclusion that
According to all of the above, we have
Consequently, from Lemma 5, we get
Hence, by Lemma 6, we obtain that
On the other hand, if , we can write it as .
With regard to , it is expressed as
Obviously, when , the above formula is established.
Therefore,
so is also a point of . This completes the proof.
□
4. Application
In this section, we will illustrate the practical value of our algorithm and give some applications, which are useful in nonlinear analysis and optimization.
In the following, we mainly discuss the equilibrium problem and the split feasibility problem by applying the idea of Theorem 1 to obtain weak convergence theorems in a real Hilbert space.
First of all, let us recall the equilibrium problem.
Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ and let $f: C \times C \to \mathbb{R}$ be a bifunction. Then we consider the equilibrium problem (see [12,13,14,15]), which is to find $x^* \in C$ such that
$$f(x^*, y) \ge 0 \quad \text{for all } y \in C. \qquad (15)$$
We denote the set of all such $x^*$ by $EP(f)$, i.e., $EP(f) = \{x^* \in C : f(x^*, y) \ge 0 \text{ for all } y \in C\}$.
Assume that the bifunction $f$ satisfies the following conditions:
(A1) $f(x, x) = 0$ for all $x \in C$;
(A2) $f(x, y) + f(y, x) \le 0$ for all $x, y \in C$, i.e., $f$ is monotone;
(A3) $\limsup_{t \downarrow 0} f(t z + (1 - t) x, y) \le f(x, y)$ for all $x, y, z \in C$;
(A4) for each $x \in C$, $y \mapsto f(x, y)$ is convex and lower semicontinuous.
If $f$ satisfies the above conditions (A1)–(A4), then for $r > 0$ and $x \in H$ there exists $z \in C$ such that [16]
$$f(z, y) + \frac{1}{r} \langle y - z, z - x \rangle \ge 0 \quad \text{for all } y \in C.$$
([15]) Assume that $f: C \times C \to \mathbb{R}$ satisfies the conditions (A1)–(A4). For $r > 0$ and $x \in H$, define a mapping $T_r: H \to C$, which we also call the resolvent of $f$, as follows:
$$T_r x = \Big\{ z \in C : f(z, y) + \frac{1}{r} \langle y - z, z - x \rangle \ge 0 \ \text{for all } y \in C \Big\}.$$
Then the following hold:
(i) $T_r$ is single-valued;
(ii) $T_r$ is a firmly nonexpansive mapping, i.e., for all $x, y \in H$, $\|T_r x - T_r y\|^2 \le \langle T_r x - T_r y, x - y \rangle$;
(iii) $Fix(T_r) = EP(f)$;
(iv) $EP(f)$ is closed and convex.
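For a concrete feel for the resolvent $T_r$, consider the monotone bifunction $f(x, y) = \langle Bx, y - x\rangle$ with $B$ positive semidefinite and, for simplicity, $C = H = \mathbb{R}^d$; these are our own illustrative assumptions, not a case treated in the paper. The resolvent condition $f(z,y) + \frac{1}{r}\langle y - z, z - x\rangle \ge 0$ for all $y$ then reduces to $Bz + (z - x)/r = 0$, i.e., $T_r x = (I + rB)^{-1}x$:

```python
import numpy as np

def resolvent(B, r, x):
    """T_r for f(x, y) = <Bx, y - x> on C = R^d: solve (I + r B) z = x."""
    return np.linalg.solve(np.eye(B.shape[0]) + r * B, x)

B = np.array([[2.0, 0.0],
              [0.0, 0.0]])             # positive semidefinite; EP(f) = ker(B)
r = 1.0
x, y = np.array([1.0, 3.0]), np.array([-2.0, 5.0])
Tx, Ty = resolvent(B, r, x), resolvent(B, r, y)

# (ii) firm nonexpansiveness: ||Tx - Ty||^2 <= <Tx - Ty, x - y>
firm = float((Tx - Ty) @ (Tx - Ty)) <= float((Tx - Ty) @ (x - y)) + 1e-12
# (iii) points of EP(f) = ker(B) are fixed by T_r
fixed = np.allclose(resolvent(B, r, np.array([0.0, 7.0])), [0.0, 7.0])
```

For this affine $f$, properties (i)–(iii) of the lemma are visible directly: the linear system has a unique solution, the firm nonexpansiveness inequality holds, and exactly the zeros of $B$ are fixed points.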
From Lemma 7, we know that under certain conditions, solving the equilibrium problem can be transformed into solving the fixed point problem. Combined with the idea of Theorem 1, we can get the following result.
Let $H$ be a real Hilbert space and $C$ be a nonempty closed convex subset of $H$. Let $f: C \times C \to \mathbb{R}$ be a bifunction satisfying the conditions (A1)–(A4). Let $F$ be a $k$-inverse strongly monotone mapping of $H$ into $H$. Assume $EP(f) \cap F^{-1}(0) \neq \emptyset$. Let the sequences be generated by $x_1 \in H$ and
(16)
where $\{\alpha_n\}$ and $\{\lambda_n\}$ satisfy the following conditions:
(i) ;
(ii) ;
(iii) $r$ is a positive real number.
Then the sequence converges weakly to a point $q \in EP(f) \cap F^{-1}(0)$. At the same time, $q$ is a solution of the corresponding variational inequality problem.
Letting $T = T_r$ and combining Theorem 1 and Lemma 7, the result is proved. □
Next, we turn to the split feasibility problem.
In 1994, the split feasibility problem was introduced by Censor and Elfving [17]. The split feasibility problem is as follows:
Find $x^*$ such that $x^* \in C$ and $A x^* \in Q$,
(see [17,18,19,20,21,22,23]) where $C$ and $Q$ are nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively, and $A: H_1 \to H_2$ is a bounded linear operator. We usually abbreviate the split feasibility problem as SFP.
In 2002, the so-called CQ algorithm was first introduced by Byrne [21]. It is defined as follows:
$$x_{n+1} = P_C\big(x_n - \gamma A^*(I - P_Q) A x_n\big), \qquad (17)$$
where $\gamma \in (0, 2/\|A\|^2)$ and $P_C$ and $P_Q$ are the metric projections onto $C$ and $Q$. From (17), we can see that the CQ algorithm needs to calculate two projections. So, can we use the idea of the Yamada iteration to improve the algorithm? We regard $C$ as the fixed point set of a nonexpansive mapping $T$, which leads to the following conclusion.
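The CQ iteration (17) is straightforward to implement. In the sketch below, the data are our own illustrative assumptions ($C$ is the nonnegative orthant and $Q = \{b\}$, so the SFP asks for a nonnegative solution of $Ax = b$), not an example from the paper:

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, n_iter=500):
    """CQ iteration (17): x_{n+1} = P_C(x_n - gamma * A^T (I - P_Q)(A x_n)),
    with gamma in (0, 2/||A||^2); here gamma = 1/||A||^2."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # ||A|| is the spectral norm
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        Ax = A @ x
        x = proj_C(x - gamma * (A.T @ (Ax - proj_Q(Ax))))
    return x

# SFP: find x in C = R^2_+ with Ax in Q = {b}; here x* = [2, 1] is feasible.
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])
b = np.array([3.0, 1.0])
x = cq_algorithm(A, lambda v: np.maximum(v, 0.0), lambda v: b, np.zeros(2))
```

Both projections here happen to have closed forms; the point of the development that follows is to remove the projection onto a general $C$.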
Before solving this problem, we give a lemma.
Let $H_1$ and $H_2$ be real Hilbert spaces, let $A: H_1 \to H_2$ be a bounded linear operator and $A^*$ be the adjoint of $A$, let $C$ be a nonempty closed convex subset, and let $G$ be a firmly nonexpansive mapping of $H_2$ into $H_2$. Then $A^*(I - G)A$ is a $\frac{1}{\|A\|^2}$-inverse strongly monotone operator, i.e.,
$$\big\langle A^*(I - G)A x - A^*(I - G)A y,\, x - y \big\rangle \ge \frac{1}{\|A\|^2} \big\| A^*(I - G)A x - A^*(I - G)A y \big\|^2.$$
Since $G$ is a firmly nonexpansive mapping, we have
Let ,
Hence,
So $A^*(I - G)A$ is $\frac{1}{\|A\|^2}$-inverse strongly monotone. □
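Lemma 8 can also be sanity-checked numerically. With $G = P_Q$ for $Q = \{b\}$ (a firmly nonexpansive mapping) and a random matrix $A$ as the bounded linear operator, the snippet below, our check rather than the paper's, verifies the inverse strong monotonicity inequality at a pair of random points:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 2))            # a bounded linear operator R^2 -> R^3
b = rng.normal(size=3)
G = lambda v: b                        # P_Q for Q = {b}: firmly nonexpansive

F = lambda x: A.T @ (A @ x - G(A @ x)) # F = A* (I - G) A
L = np.linalg.norm(A, 2) ** 2          # ||A||^2 (spectral norm squared)

x, y = rng.normal(size=2), rng.normal(size=2)
d = F(x) - F(y)
# Check <F(x) - F(y), x - y> >= (1/||A||^2) ||F(x) - F(y)||^2
ism_holds = float(d @ (x - y)) >= float(d @ d) / L - 1e-10
```

For this choice of $G$ the inequality in fact holds with equality only in degenerate cases; the check passes for any pair of points.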
Below we present the related theorem for the split feasibility problem.
Let $H_1$ and $H_2$ be real Hilbert spaces, let $T$ be a nonexpansive mapping with $Fix(T) = C$, and let $A: H_1 \to H_2$ be a bounded linear operator with adjoint $A^*$. Assume that the solution set of the split feasibility problem is nonempty. Let the sequences be generated by $x_1 \in H_1$ and
(18)
where $\{\alpha_n\}$ and $\{\lambda_n\}$ satisfy the following conditions:
(i) ;
(ii) .
Then the sequence converges weakly to a point which is a solution of the SFP.
We notice that $P_Q$ is a firmly nonexpansive mapping; according to Lemma 8, we know that $A^*(I - P_Q)A$ is $\frac{1}{\|A\|^2}$-inverse strongly monotone.
Putting $F = A^*(I - P_Q)A$ in Theorem 1, the conclusion is obtained. □
5. Numerical Result
In this section, we give a concrete example, solving a system of linear equations, to judge the validity of our algorithm by comparison with Equation (4.7) of Theorem 4.5 in [24].
In the following, we give a system of linear equations that is solved by the iterative algorithm of Theorem 3.
Let us solve the linear equation .
Assume that . In the following, we take
and the parameters are given by and . Consider , where
We clearly know that the SFP can be formulated as the problem of finding a point such that
where , . In other words, is the solution of the system of linear equations , and
Then by Theorem 3, the sequence is generated by
When , the iteration proceeds as shown below. From Table 1 and Table 2 below, we can easily observe that as the iteration number increases, the iterate gets closer and closer to the exact solution and the errors gradually approach zero.
From the above verification, we can see that our algorithm is effective, and that the algorithm of Theorem 3 performs better than the algorithm of Theorem 4.5 of [24].
6. Conclusions
The variational inequality problem is an important branch of mathematical research and plays an important role in nonlinear analysis and optimization. Nowadays, there are many different ways to solve it; the main ones are projection methods and Yamada-type methods. Both, however, have limitations: the projection in projection methods is not easy to calculate in some cases, and the condition on the operator in the Yamada method is rather strong. Each nevertheless has its own advantages. Based on the Yamada algorithm, Zhou and Wang proposed a new iterative algorithm and obtained a strong convergence result. In this paper, we considered how to avoid projections while weakening the conditions of the algorithm: our algorithm requires no projection, the operator is only assumed to be inverse strongly monotone, and we obtain a weak convergence result. Finally, we applied this algorithm to the equilibrium problem and the split feasibility problem and verified its effectiveness.
Author Contributions
All authors contributed equally in writing this article. All authors read and approved the manuscript.
Funding
This work was supported by the Foundation of Tianjin Key Laboratory for Advanced Signal Processing.
Conflicts of Interest
The authors declare that they have no competing interests.
Tables
Table 1. Numerical results as regards the Example.
n | Algorithm in Theorem 3 | |||||
---|---|---|---|---|---|---|
0 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.5675 |
10 | 0.1828 | 0.1858 | 0.2456 | 0.4440 | 0.8835 | 0.1868 |
20 | 0.0728 | 0.1268 | 0.2404 | 0.4664 | 0.9260 | 0.0825 |
50 | 0.0640 | 0.1256 | 0.2485 | 0.4935 | 0.9854 | 0.0161 |
100 | 0.0629 | 0.1254 | 0.2504 | 0.5003 | 1.0003 | 7.8067 × 10 |
Table 2. Numerical results as regards Theorem 4.5 of [24].
n | Algorithm in Theorem 4.5 [24] | |||||
---|---|---|---|---|---|---|
0 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.5675 |
10 | 0.1776 | 0.1704 | 0.2441 | 0.4433 | 0.8775 | 0.1832 |
20 | 0.0817 | 0.1340 | 0.2560 | 0.4841 | 0.9509 | 0.0562 |
50 | 0.0641 | 0.1263 | 0.2512 | 0.4997 | 0.9980 | 0.0031 |
100 | 0.0630 | 0.1257 | 0.2509 | 0.5008 | 1.0014 | 0.0020 |
© 2019 by the authors.
Abstract
In this paper, based on the Yamada iteration, we propose an iteration algorithm to find a common element of the set of fixed points of a nonexpansive mapping and the set of zeros of an inverse strongly-monotone mapping. We obtain a weak convergence theorem in Hilbert space. In particular, the set of zero points of an inverse strongly-monotone mapping can be transformed into the solution set of the variational inequality problem. Further, based on this result, we also obtain some new weak convergence theorems which are used to solve the equilibrium problem and the split feasibility problem.
Affiliations
1 College of Science, Civil Aviation University of China, Tianjin 300300, China; Tianjin Key Laboratory for Advanced Signal Processing, Civil Aviation University of China, Tianjin 300300, China
2 College of Science, Civil Aviation University of China, Tianjin 300300, China