1. Introduction
For a given real separable Hilbert space , we write  to denote an isonormal Gaussian process defined on a probability space . Let  be a sequence of random variables given by functionals of the Gaussian field associated with X. The authors in [1] discovered a central limit theorem (CLT), known as the fourth moment theorem, for sequences of random variables belonging to a fixed Wiener chaos.
[Fourth moment theorem] Let  be a sequence of random variables belonging to the th Wiener chaos with  for all . Then,  if and only if , where Z is a standard normal random variable and the notation  denotes convergence in distribution.
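To make the statement concrete, here is a small numerical illustration (ours, not part of the original text; all names are illustrative). It uses the classical second-chaos example F_n = (1/sqrt(2n)) * sum_{i=1}^n (X_i^2 - 1) with i.i.d. standard normal X_i, for which E[F_n^2] = 1, and checks by Monte Carlo that the fourth moment tends to 3 while the law of F_n approaches the standard normal.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def second_chaos_sample(n, n_paths, rng):
    """Samples of F_n = (1/sqrt(2n)) * sum_i (X_i^2 - 1), a normalized
    element of the second Wiener chaos with E[F_n^2] = 1."""
    x = rng.standard_normal((n_paths, n))
    return (x**2 - 1).sum(axis=1) / np.sqrt(2 * n)

for n in (2, 10, 100):
    f = second_chaos_sample(n, 100_000, rng)
    m4 = np.mean(f**4)                      # empirical fourth moment
    ks = stats.kstest(f, "norm").statistic  # Kolmogorov-Smirnov distance to N(0, 1)
    print(f"n={n:4d}  E[F^4] ~ {m4:.3f} (target 3)   KS distance ~ {ks:.3f}")
```

As n grows, both the excess of the fourth moment over 3 and the distance to the normal law shrink, in line with the equivalence stated in the theorem.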
Such a result provides a remarkable simplification of the method of moments or cumulants. In [2], the fourth moment theorem is expressed in terms of the Malliavin derivative. However, the results given in [1,2] do not provide any estimates, whereas the authors in [3] find an upper bound for various distances by combining Malliavin calculus (see, e.g., [4,5,6]) and Stein’s method for normal approximation (see, e.g., [7,8,9]). Moreover, the authors in [10,11] obtain optimal Berry–Esseen bounds as a further refinement of the main results proven in [3] (see, e.g., [12] for a short survey).
The key step in the proof of the fourth moment theorem is to show the following inequality:
(1)
where  is the Malliavin derivative of F and  is the pseudo-inverse of the Ornstein–Uhlenbeck generator (see Section 2). In the particular case where , , with , the bound in (1) is given by (2)
where  stands for the Kolmogorov distance. Further research along this line can be found in [13] for multiple Wigner integrals in a fixed order of Wigner chaos (the free probability setting), and in [14,15,16] for multi-dimensional vectors of multiple stochastic integrals such that each integral belongs to a Wiener chaos of fixed order. New techniques for proving the fourth moment theorem can also be found in [17,18,19]. In [19], the authors prove this theorem by using the asymptotic independence between blocks of multiple stochastic integrals. It is important to mention that all of these approaches deal only with random variables in a fixed chaos, and thus do not cover random variables that do not belong to a single chaos. For this reason, we are interested in conditions under which the property (2) holds for general random variables that are not confined to a fixed Wiener chaos.
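As a hedged aside (an assumption on our part, since the displayed bound (2) did not survive extraction): a commonly quoted Kolmogorov-distance form of the fourth moment bound for F = I_q(f) with E[F^2] = 1 is d_Kol(F, Z) <= sqrt((q-1)/(3q)) * sqrt(E[F^4] - 3). Assuming that form, the sketch below compares the empirical Kolmogorov distance with the bound for the second-chaos example above (q = 2); names are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
q = 2  # the example lives in the second Wiener chaos

for n in (5, 20, 80):
    x = rng.standard_normal((200_000, n))
    f = (x**2 - 1).sum(axis=1) / np.sqrt(2 * n)   # F_n with E[F_n^2] = 1
    excess = max(np.mean(f**4) - 3.0, 0.0)        # empirical E[F^4] - 3
    bound = np.sqrt((q - 1) / (3 * q) * excess)   # assumed form of the bound (2)
    ks = stats.kstest(f, "norm").statistic        # empirical Kolmogorov distance
    print(f"n={n:3d}  E[F^4]-3 ~ {excess:.3f}  bound ~ {bound:.3f}  d_Kol ~ {ks:.3f}")
```

In this example the empirical distance stays below the assumed bound and both decay as n grows, which is the qualitative content of (2).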
In this paper, we develop a method for finding a bound on the multivariate normal approximation of a random vector F for which the fourth moment theorem holds, even when F is a d-dimensional random vector whose components are general functionals of Gaussian fields. By applying this method to a random vector whose components belong to some Wiener chaos, we derive the fourth moment theorem with a sharper upper bound than the one previously given in Theorem 4.3 of [19].
In contrast to the fourth moment theorems for functionals of Gaussian fields studied so far, our findings represent a further extension and refinement of the fourth moment theorem, in the sense that (i) they do not require the components of the random vector involved to belong to some Wiener chaos, and (ii) the constant part other than the fourth cumulant may be significantly improved. The main aim of this paper is to determine under what conditions the fourth moment bound holds for vector-valued general functionals of Gaussian fields, each component of which need not belong to some Wiener chaos. In the case of vector-valued multiple integrals, the conditions of the fourth moment theorem are quite naturally satisfied.
On the other hand, in the case of , the application of the method developed here shows that, even for general functionals of Gaussian fields, the fourth moment theorem holds without any of the conditions needed in the case of . The only necessary condition is that the fourth cumulant is non-zero. The result in the one-dimensional case is different from the result obtained by substituting d = 1 into the multi-dimensional case. For these reasons, we will show how the random vector case can be reformulated in the one-dimensional case.
Our paper is organized in the following way. Section 2 contains some basic notions of Malliavin calculus. Section 3 is devoted to developing a method for obtaining the fourth moment bound for a -valued random vector whose components are functionals of Gaussian fields. In Section 4, we show the fourth moment theorem by applying the new method developed in Section 3 to vector-valued multiple stochastic integrals. In Section 5, we describe how the random vector case can be reformulated in the one-dimensional case.
2. Preliminaries
In this section, we describe some basic facts on Malliavin calculus for Gaussian processes. For a more detailed explanation of this subject, see [4,5]. Fix a real separable Hilbert space  with an inner product denoted by . Let  be an isonormal Gaussian process, that is, a centered Gaussian family of random variables such that . If  is the qth Hermite polynomial, then the closed linear subspace of , denoted by , generated by  is called the qth Wiener chaos of B.
We define a linear isometric mapping by , where is the symmetric qth tensor product. It is well known that any square integrable random variable , where denotes the -field generated by B, admits a series expansion of multiple stochastic integrals:
where the series converges in  and the functions  and  are uniquely determined with . Let  be a complete orthonormal system of the Hilbert space . For  and , the contraction of f and g, , is the element of  defined by
(3)
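Since the displayed definition (3) was lost in extraction, the following sketch records (as an assumption on our part) the standard coordinate form of the r-th contraction for kernels over a finite orthonormal basis: the last r indices of f are summed against the last r indices of g. Function names are illustrative.

```python
import numpy as np
from itertools import permutations

def contraction(f: np.ndarray, g: np.ndarray, r: int) -> np.ndarray:
    """r-th contraction of kernels f (p indices) and g (q indices), stored as
    p- and q-dimensional arrays over a finite orthonormal basis: sum the last r
    indices of f against the last r indices of g, leaving (p - r) + (q - r)
    free indices."""
    p, q = f.ndim, g.ndim
    assert 0 <= r <= min(p, q)
    if r == 0:
        return np.multiply.outer(f, g)  # plain tensor product
    return np.tensordot(f, g, axes=(list(range(p - r, p)), list(range(q - r, q))))

def symmetrize(h: np.ndarray) -> np.ndarray:
    """Symmetrization of a kernel; needed because the contraction is in
    general not symmetric even when f and g are."""
    perms = list(permutations(range(h.ndim)))
    return sum(np.transpose(h, sigma) for sigma in perms) / len(perms)

# tiny usage example with p = q = 2 and a basis of size 3
rng = np.random.default_rng(0)
a = rng.standard_normal((3, 3)); f = (a + a.T) / 2  # symmetric 2-index kernel
b = rng.standard_normal((3, 3)); g = (b + b.T) / 2
print(contraction(f, g, 1).shape)    # (3, 3)
print(float(contraction(f, g, 2)))   # scalar: the inner product of f and g
```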
The product formula for the multiple stochastic integrals is given below. If  and , then
(4)
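The displayed product formula (4) is also missing from the extracted text. As an illustration only (our choice of special case, not a reproduction of the authors' display): in the simplest one-dimensional situation, the product formula reduces to the classical linearization identity for probabilists' Hermite polynomials, H_p(x) H_q(x) = sum_{r=0}^{min(p,q)} r! C(p,r) C(q,r) H_{p+q-2r}(x), which the following sketch verifies numerically.

```python
import numpy as np
from math import comb, factorial
from numpy.polynomial import hermite_e as He  # probabilists' Hermite polynomials

def hermite_basis_vector(n):
    """Coefficient vector selecting H_n in the Hermite_e basis."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return c

p, q = 3, 4
x = np.linspace(-3.0, 3.0, 7)

lhs = He.hermeval(x, hermite_basis_vector(p)) * He.hermeval(x, hermite_basis_vector(q))
rhs = sum(
    factorial(r) * comb(p, r) * comb(q, r)
    * He.hermeval(x, hermite_basis_vector(p + q - 2 * r))
    for r in range(min(p, q) + 1)
)
print(np.allclose(lhs, rhs))  # True: the product formula in its scalar form
```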
We denote by  the class of smooth and cylindrical random variables F of the form
(5)
where  and , . For these random variables, the Malliavin derivative of F with respect to B is the element of  defined as (6)
Let  be the closure of the class of smooth random variables with respect to the norm . Let  be the adjoint of the Malliavin derivative D. The domain of , denoted by , is composed of those elements  such that there exists a constant C satisfying . If , then  is the element of  defined by the following duality formula, called the integration by parts formula: . Recall that any square integrable random variable F can be expanded as , where , , is the projection of F onto . We say that this random variable belongs to  if . For such a random variable F, we define an operator , which coincides with the infinitesimal generator of the Ornstein–Uhlenbeck semigroup. Then,  if and only if  and , and, in this case, . We also define the operator , called the pseudo-inverse of L, as . Then,  is an operator with values in , and  for all .
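Since the defining relations in this paragraph were stripped, the following sketch records, under the standard conventions we assume here (L acts on the q-th chaos projection as multiplication by -q, and the pseudo-inverse acts as multiplication by -1/q for q >= 1 and vanishes on constants), how L and its pseudo-inverse act diagonally on a chaos decomposition, encoded simply as a list of projections. Names are illustrative.

```python
import numpy as np

def apply_L(projections):
    """Ornstein-Uhlenbeck generator on a chaos decomposition: the q-th
    projection is multiplied by -q."""
    return [-q * Jq for q, Jq in enumerate(projections)]

def apply_L_inverse(projections):
    """Pseudo-inverse: the q-th projection is multiplied by -1/q for q >= 1,
    and the constant (q = 0) term is sent to 0."""
    out = [np.zeros_like(projections[0])]
    out += [-Jq / q for q, Jq in enumerate(projections) if q >= 1]
    return out

# toy example: F = 2 + 3*X + H_2(X)/2 encoded by samples of its projections
rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
projections = [np.full_like(x, 2.0), 3.0 * x, 0.5 * (x**2 - 1)]  # J_0F, J_1F, J_2F

recovered = apply_L(apply_L_inverse(projections))
# L applied to the pseudo-inverse kills the constant term and recovers the rest
print([np.allclose(r, p) for r, p in zip(recovered[1:], projections[1:])])  # [True, True]
```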
3. Main Results
In this section, we will find a sufficient condition for the fourth moment bound to hold for a vector-valued random variable whose components are functionals of Gaussian fields. It is important to note that these functionals of Gaussian fields need not belong to some Wiener chaos. The next lemma will play a fundamental role in this paper.
Suppose that  and . Then, we have that  and
A multi-index is a vector of non-negative integers of the form . Then, we write
where . By convention, we set . For the rest of this section, we fix a random vector , .
Assume that for some . The joint cumulant of order of F is defined by
where is the characteristic function of F.
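For orientation, the one-dimensional specialization of this definition is the familiar scalar cumulant; in particular, for a centered F the fourth cumulant is kappa_4(F) = E[F^4] - 3 (E[F^2])^2, the quantity driving the fourth moment theorem. The sketch below is purely illustrative: the exact value 12/n quoted in the comment is the fourth cumulant of the normalized second-chaos example used earlier, computed under our stated normalization.

```python
import numpy as np

def fourth_cumulant(sample):
    """Empirical fourth cumulant k4 = E[F^4] - 3 E[F^2]^2 of a centered sample
    (the scalar case of the joint cumulant defined above)."""
    s = sample - sample.mean()
    return np.mean(s**4) - 3.0 * np.mean(s**2) ** 2

rng = np.random.default_rng(2)
n = 20
x = rng.standard_normal((200_000, n))
f = (x**2 - 1).sum(axis=1) / np.sqrt(2 * n)  # the second-chaos example, E[F^2] = 1
print(fourth_cumulant(f), 12 / n)  # empirical value vs the exact value 12/n
```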
Suppose that for each . Let be a sequence taking values in , where is the multi-index of length d given by
If , then . Suppose that  is a well-defined random variable of . We define . For the multivariate Gamma operator , see Definition 4.2 in [14]. For simplicity, we will frequently write  and  instead of  and , respectively. Using the Gamma operators of F, we can state a formula for the cumulants of any random vector F (see, e.g., [14,20]).
(Noreddine and Nourdin). Let be a d-dimensional multi-index with the unique decomposition . If for , then
(7)
where the sum is taken over all permutations σ of the set . Obviously, the above lemma can be expressed in the one-dimensional case as follows: Let  be an integer, and suppose that . Then
(8)
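For the reader's convenience, we note (as a hedged recollection of the standard one-dimensional cumulant formula in [22], not a reproduction of the stripped display (8)) that, under the usual integrability assumptions and with the Gamma operators normalized as in [22], the formula typically reads
\[
\kappa_{m+1}(F) \;=\; m!\,\mathbb{E}\big[\Gamma_m(F)\big], \qquad m \ge 0,
\]
so that, in particular, \(\kappa_2(F)=\mathbb{E}[\Gamma_1(F)]=\mathbb{E}\big[\langle DF,-DL^{-1}F\rangle\big]\) recovers the variance of a centered F. The normalization used in the present paper may differ by combinatorial constants.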
Successive applications of Lemma 1 yield that
(9)
Equation (9) gives that (10)
For the forthcoming theorem, first we define a set:
Let , , with  and  for , and let Z be a centered normal random vector with covariance , where . Suppose that, for ,
Assume that Σ is invertible. We have that, for any Lipschitz function ,
(11)
or, as another expression, (12)
where  and  denote the operator norm of a matrix and the Euclidean norm in , respectively, and
Recall that, for a Lipschitz function , Theorem 6.1.1 in [4] shows that
(13)
Since  for , the right-hand side of (13) can be expressed as . By the definition of the operator , we have that, for , (14)
For , we write, using Lemma 1 and the definition of , the first term in (14) as follows: It is obvious that (15)
The above Equation (15) gives (16)
Also using Lemma 1 and the definition of , the terms  and  can be expressed as (17)
(18)
Combining (16)–(18), we obtain, together with (14), that (19)
Now, we choose a, b, and c such that  and . Obviously, we may take , , and . The assumptions  and  yield that the left-hand side of (19) can be bounded by (20)
Therefore, Inequality (20) and the assumption  prove that, if , (21)
Applying (10) in Remark 2 (or Lemma 2) to the right-hand side of (21), we have, together with the assumptions  and , that (22)
Inequality (22) proves the desired conclusion (11). Since , the identity  holds, which gives the alternative expression (12). Hence, the proof of this theorem is completed. □ Our techniques do not require the components of a random vector to belong to a fixed Wiener chaos. Since the assumptions , , and  are satisfied in the case of a random vector whose entries are elements of some Wiener chaos, our result is an extension of Theorem 4.3 in [19]. This fact makes it possible to estimate how restrictive the assumptions given in Theorem 2 are in practice. In addition, for this random vector, the constant of the estimate in Theorem 4.3 in [19] corresponds to  in (12).
4. Vector-Valued Multiple Stochastic Integrals
In this section, we consider a special case of the previous result in which F is a vector of multiple stochastic integrals. First, for an explicit expression of , we introduce the combinatorial constants
recursively defined by the relation  and, for any , . For an explicit expression of , we use the notations
and . Fix . Let , , be positive integers, and let F be a random vector
where for . Let Z be a centered multivariate normal random variable with the covariance , where . For any Lipschitz function , it holds that
(23)
or (24)
where the constant  is given by . Moreover, if , then  is given by
(25)
It is sufficient to prove that F satisfies the assumptions , , and in Theorem 2.
For the condition : By the definition of , we have that
which yields (26)
On the other hand, (27)
Denote by  the length of a vector . To prove , we need to show that, for every , the inner products in (26) . For this, it is sufficient, by the symmetry of , , and the symmetrization of contractions, to show that, for every , (28)
where  and . Since , the integral in (28) can be expressed as (29)
Using (26) and (27) together with (29) yields that, for , . For the condition : Obviously, (30)
The expectation of (30) gives (31)
For , the expectation (31) can be written as (32)
Since  for , we deduce from (32) that . On the other hand, if , then (33)
For the condition : First, write (34)
Next, we compute the three expectations in (34). By the definition of the operator , we obtain
(35)
and (36)
When  and , we have that . Hence, . Taking expectations in (35) and (36) yields that (37)
and (38)
where . Using the definition of the coefficients  and , we compute (39)
where . If  or , then we have, from an estimate similar to that for (29), that, for  and , . Indeed, for , it is sufficient to show that (40)
where . Similarly, we can show that, for  or , . These facts lead us to  and  for ,  or , which implies that . Now, we find a constant  such that . Let us set . From (37) and (38), we have, together with (39), that (41)
where  and . For every  and , we have . This leads us to (42)
where . For the second sum in (41), we change the range of  from the inequality  to , where  is a positive integer. For fixed , (43)
If  for , then, from (43), we have (44)
for every  and . For , we deduce, from (43), for fixed , that (45)
For  and , Inequality (43) yields (46)
On the other hand, if  and , then we obtain, from (43), that (47)
Combining the above results (44)–(47), we obtain (48)
where . Similarly, (49)
where . Inequalities (42), (48), and (49) yield , so that the condition () is satisfied. Hence, applying Theorem 2 gives the desired conclusion. If , the estimate in (42) yields the constant  given in (25). □ 1. Theorem 3 shows that the three assumptions in Theorem 2 are satisfied under the same conditions as in Theorem 4.3 of [19]. To achieve this, we only need to explicitly compute the expected values of the Gamma operators and compare them.
2. The estimate in Theorem 4.3 of [19] corresponds to the estimate (24) with . Hence, our approach improves the constants appearing in the previous estimate given in [19]. If , then , which implies that F has the same distribution as Z.
5. Results in Dimension One ()
In this section, we specialize the results of Sections 3 and 4 to the one-dimensional case. We begin with a one-dimensional version of the Gamma operators  and  (for these operators, see [21,22]). We set  and . If F is a well-defined element of , we set  and  for .
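Because the displayed definitions were stripped here, we record (as an assumption, following the conventions commonly used in [22]) the standard recursive definition of the one-dimensional Gamma operators:
\[
\Gamma_0(F) = F, \qquad \Gamma_{j+1}(F) = \big\langle DF, \, -DL^{-1}\Gamma_j(F) \big\rangle_{\mathfrak{H}}, \quad j \ge 0,
\]
whenever the right-hand sides are well defined; the normalization actually used in the paper may differ by combinatorial constants.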
If , the conditions , , and are satisfied under the assumption .
The assumptions  and  obviously hold. Indeed, the Cauchy–Schwarz inequality proves that
where . A repeated application of Lemma 1 proves that . This shows that . Let . Then, . Since , there exists a constant  such that . This implies that the condition  is satisfied. □ If , it follows from (8) that
(50)
Studies so far have shown that Inequality (1) holds only when F belongs to a fixed Wiener chaos. However, the technique developed here can be applied to prove that the fourth moment theorem (1) holds even if F is not an element of a fixed Wiener chaos. The proof of Theorem 4, together with (50), yields (51)
where the constant  satisfies . Note that the constant given in (12) is three times that in (51). Let ϕ be a linear function in the proof of Theorem 4. Let  with (). Then, there exists a constant  such that , and .
A direct computation yields that
(52)
On the other hand, Theorem 5.1 in [22] shows that (53)
Combining (52) and (53) (or  for  in (41) in the proof of Theorem 3) together with , we obtain that (54)
Inequality (54) shows that . Since  and , it may be possible for  to belong to . □ Substituting  for  in (51), we can derive the fourth moment theorem in (2). By using the new method developed in this paper, we show that the constant term given in (51) is less than or equal to the one in (2). This means that
(55)
Let us take an example that satisfies (55).
We consider the case of . Let  with . A computation similar to that for () proves that
(56)
From (56), it follows that  and . As a consequence of (51), the upper bound is given by
(57)
On the other hand, the estimate (2) () gives (58)
Compare the constant in (57) with that in (58).
6. Conclusions and Future Works
This paper develops a method to obtain the fourth moment bound on the normal approximation of F, where F is a d-dimensional random vector whose components are general functionals of Gaussian fields. In order to prove the fourth moment theorem, all we need to do is show that the conditions , , and  in Theorem 2 are satisfied. A significant feature of our work is that these conditions are naturally satisfied in the specific case where F is a vector of multiple stochastic integrals. In addition, our technique yields a much better estimate than the conventional method. Compared with the studies in the literature [3,14,15,16,19,20], our study not only extends these works, but also naturally recovers their results.
As future research directions, we will apply the approach to the fourth moment theorem developed here to more general processes, including Markov diffusion processes and Poisson processes. The developed approach is expected to unify the fourth moment theorem for many such processes.
Conceptualization, Y.-T.K. and H.-S.P.; methodology, Y.-T.K.; writing and original draft preparation, Y.-T.K. and H.-S.P.; co-review and validation, H.-S.P.; writing—editing and funding acquisition. All authors have read and agreed to the published version of the manuscript.
This research was supported by Hallym University Research Fund 2021 (HRF-202112-005).
We are very grateful to the anonymous Referees for their suggestions and valuable advice.
The authors declare no conflict of interest.
References
1. Nualart, D.; Peccati, G. Central limit theorems for sequences of multiple stochastic integrals. Ann. Probab.; 2005; 33, pp. 177-193. [DOI: https://dx.doi.org/10.1214/009117904000000621]
2. Nualart, D.; Ortiz-Latorre, S. Central limit theorems for multiple stochastic integrals and Malliavin calculus. Stoch. Process. Appl.; 2008; 118, pp. 614-628. [DOI: https://dx.doi.org/10.1016/j.spa.2007.05.004]
3. Nourdin, I.; Peccati, G. Stein’s method on Wiener chaos. Probab. Theory Related Fields; 2009; 145, pp. 75-118. [DOI: https://dx.doi.org/10.1007/s00440-008-0162-x]
4. Nourdin, I.; Peccati, G. Normal Approximations with Malliavin Calculus: From Stein’s Method to Universality; Cambridge Tracts in Mathematics; Cambridge University Press: Cambridge, UK, 2012; Volume 192.
5. Nualart, D. Malliavin Calculus and Related Topics; 2nd ed. Probability and Its Applications Springer: Berlin, Germany, 2006.
6. Nualart, D. Malliavin Calculus and Its Applications; Regional Conference Series in Mathematics Number 110; American Mathematical Society: Providence, RI, USA, 2008.
7. Stein, C. A bound for the error in the normal approximation to the distribution of a sum of dependent random variables. Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability; University of California Press: Berkeley, CA, USA, 1972; Volume II, pp. 583-602.
8. Stein, C. Approximate Computation of Expectations; IMS: Hayward, CA, USA, 1986.
9. Chen, L.H.Y.; Goldstein, L.; Shao, Q.-M. Normal Approximation by Stein’s Method; Springer: Heidelberg/Berlin, Germany, 2011.
10. Nourdin, I.; Peccati, G. Stein’s method and exact Berry-Esseen asymptotics for functionals of Gaussian fields. Ann. Probab.; 2009; 37, pp. 2231-2261. [DOI: https://dx.doi.org/10.1214/09-AOP461]
11. Nourdin, I.; Peccati, G. The optimal fourth moment theorem. Proc. Am. Math. Soc.; 2015; 143, pp. 3123-3133. [DOI: https://dx.doi.org/10.1090/S0002-9939-2015-12417-3]
12. Nourdin, I.; Peccati, G. Stein’s method meets Malliavin calculus: A short survey with new estimates. Recent Development in Stochastic Dynamics and Stochastic Analysis; World Sci. Publ.: Hackensack, NJ, USA, 2010; Volume 8, pp. 207-236.
13. Kemp, T.; Nourdin, I.; Peccati, G.; Speicher, R. Wigner chaos and the fourth moment. Ann. Probab.; 2012; 40, pp. 1577-1635. [DOI: https://dx.doi.org/10.1214/11-AOP657]
14. Noreddine, S.; Nourdin, I. On the Gaussian approximation of vector-valued multiple integrals. J. Multivar. Anal.; 2011; 102, pp. 1008-1017. [DOI: https://dx.doi.org/10.1016/j.jmva.2011.02.001]
15. Nourdin, I.; Peccati, G.; Réveillac, A. Multivariate normal approximation using Stein’s method and Malliavin calculus. Ann. Inst. Henri Poincaré Probab. Stat.; 2010; 46, pp. 45-58.
16. Peccati, G.; Tudor, C. Gaussian limits for vector-valued multiple stochastic integrals. Séminaire de Probabilités XXXVIII; Springer: Berlin, Germany, 2005; Volume 1857, pp. 247-262.
17. Azmoodeh, E.; Campese, S.; Poly, G. Fourth moment theorems for Markov diffusion generators. J. Funct. Anal.; 2014; 9, pp. 473-500. [DOI: https://dx.doi.org/10.1016/j.jfa.2013.10.014]
18. Ledoux, M. Chaos of a Markov operator and the fourth moment theorem. Ann. Probab.; 2012; 40, pp. 2439-2459. [DOI: https://dx.doi.org/10.1214/11-AOP685]
19. Nourdin, I.; Rosinski, J. Asymptotic independence of multiple Wiener-Itô integrals and the resulting limit laws. Ann. Probab.; 2014; 42, pp. 497-526. [DOI: https://dx.doi.org/10.1214/12-AOP826]
20. Campese, S. Optimal convergence rates and one-term Edgeworth expansions for multidimensional functionals of Gaussian fields. ALEA Lat. Am. J. Probab. Math. Stat.; 2013; 10, pp. 881-919.
21. Kim, Y.T.; Park, H.S. An Edgeworth expansion for functionals of Gaussian fields and its applications. Stoch. Process. Appl.; 2018; 44, pp. 312-320.
22. Nourdin, I.; Peccati, G. Cumulants on the Wiener space. J. Funct. Anal.; 2010; 258, pp. 3775-3791. [DOI: https://dx.doi.org/10.1016/j.jfa.2009.10.024]