Jinyu Li,1 Wei Liang,2 and Shuyuan He3
Academic Editor: Charalampos Tsitouras
1 School of Sciences, China University of Mining and Technology, Xuzhou 221116, China
2 School of Mathematical Sciences, Xiamen University, Xiamen 361005, China
3 School of Mathematical Sciences, Capital Normal University, Beijing 100048, China
Received 31 March 2014; Accepted 24 August 2014; Published 2 September 2014
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
Consider the stationary ARMA(p,q) time series {y_t} generated by

y_t = ϕ_1 y_{t-1} + ... + ϕ_p y_{t-p} + ε_t + ϑ_1 ε_{t-1} + ... + ϑ_q ε_{t-q}, (1)

where the innovation process {ε_t} is a sequence of i.i.d. random variables. When E(ε_t^2) = ∞, model (1) is an infinite variance autoregressive moving average (IVARMA) model, which defines a heavy-tailed process {y_t}. Statistical inference for model (1) has been explored in many studies (see, e.g., [1, 2]). Recently, for example, Pan et al. [3] and Zhu and Ling [4] proposed a weighted least absolute deviations estimator (WLADE) for model (1) and established its asymptotic normality.
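To make the model concrete, the following sketch simulates a path from model (1) with i.i.d. standard Cauchy innovations, so that E(ε_t^2) = ∞. This is only an illustration: the function name simulate_arma, the burn-in length, and the seed are our own choices, and the coefficient values simply anticipate the ARMA(1,1) design used in Section 4.

```python
# Hypothetical illustration: simulate y_t from model (1) with heavy-tailed innovations.
import numpy as np

def simulate_arma(phi, vartheta, n, rng, burn=500):
    """y_t = sum_i phi_i * y_{t-i} + eps_t + sum_j vartheta_j * eps_{t-j},
    with i.i.d. standard Cauchy eps_t (infinite variance)."""
    p, q = len(phi), len(vartheta)
    eps = rng.standard_cauchy(n + burn)
    y = np.zeros(n + burn)
    for t in range(max(p, q), n + burn):
        ar = sum(phi[i] * y[t - 1 - i] for i in range(p))
        ma = sum(vartheta[j] * eps[t - 1 - j] for j in range(q))
        y[t] = ar + eps[t] + ma
    return y[burn:]                                   # discard the burn-in segment

rng = np.random.default_rng(0)
y = simulate_arma([0.4], [0.7], n=200, rng=rng)       # ARMA(1,1), phi_1 = 0.4, vartheta_1 = 0.7
```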
However, in building ARMA models we are often interested in inference for only part of the parameter vector. For example, in sparse ARMA models (models in which some of the coefficients are zero), it is necessary to determine which coefficients are zero. For model (1), one traditional approach is to construct confidence regions for the partial parameters of interest by normal approximation, as in [3]. However, since the limit distribution depends on the unknown nuisance parameters and on the density function of the errors, estimating the asymptotic variance is not a trivial task. Motivated by this, this paper puts forward a new method for inference on partial parameters of ARMA models. We propose an empirical likelihood method, a methodology introduced by Owen [5, 6]. Based on the estimating equations of the WLADE, a smoothed profile empirical likelihood ratio statistic is derived, and a nonparametric version of Wilks's theorem is proved. This allows us to construct confidence regions for the partial parameters of interest. Moreover, simulations suggest that, for relatively small samples, the empirical likelihood confidence regions are more accurate than the confidence regions constructed by the normal approximation based on the WLADE of Pan et al. [3].
As an effective nonparametric inference method, the empirical likelihood method produces confidence regions whose shape and orientation are determined entirely by the data, and it therefore avoids secondary estimation. In the past two decades, the empirical likelihood method has been extended to many applications [7]. There are also many studies of the empirical likelihood method for autoregressive models. Monti [8] considered the empirical likelihood in the frequency domain; Chuang and Chan [9] developed the empirical likelihood for unstable autoregressive models whose innovations form a martingale difference sequence with finite variance; Chan et al. [10] applied the empirical likelihood to the near unit root AR(1) model with infinite variance errors; Li et al. [11, 12] applied the empirical likelihood to infinite variance AR(p) models and to model (1), respectively.
The rest of the paper is organized as follows. In Section 2, we propose the profile empirical likelihood for the parameters of interest and state the main results. Section 3 provides the proofs of the main results. Some simulations are conducted in Section 4 to illustrate our approach. Conclusions are given in Section 5.
2. Methodology and Main Results
First, the parameter space is denoted by Θ ⊂ R^{p+q}, which contains the true value θ_0 of the parameter θ as an inner point. For θ = (ϕ_1, ..., ϕ_p, ϑ_1, ..., ϑ_q), put

ε_t(θ) = y_t - ∑_{i=1}^{p} ϕ_i y_{t-i} - ∑_{j=1}^{q} ϑ_j ε_{t-j}(θ), t = 1, ..., n, (2)

where y_t ≡ 0 for all t ≤ 0 (and, accordingly, ε_t(θ) = 0 for t ≤ 0), and note that ε_t(θ_0) ≠ ε_t because of this truncation.
We define the objective function as

S_n(θ) = ∑_{t=u+1}^{n} w̃_t |ε_t(θ)|, (3)

where u ≥ max(p,q) and the weight function is w̃_t = (1 + ∑_{k=1}^{t-1} k^{-α} |y_{t-k}|)^{-4}, which depends on a constant α > 2. The WLADE, denoted by θ̂, is a local minimizer of S_n(θ) in a neighborhood of θ_0 [3]. Denote A_t(θ) = (A_{t,1}(θ), ..., A_{t,p+q}(θ))^τ, where A_{t,i}(θ) = -∂ε_t(θ)/∂θ_i. By (8.11.9) of Brockwell and Davis [13], it holds for t > max(p,q) that [equation (4) omitted; refer to PDF]. Hence, θ̂ satisfies the estimating equation

∑_{t=u+1}^{n} w̃_t A_t(θ) sgn(ε_t(θ)) = 0, (5)

where sgn(x) = -1 for x < 0 and sgn(x) = 1 for x ≥ 0 (see [14]). Note that this estimating equation is not differentiable at any θ for which ε_t(θ) = 0 for some t, which causes problems for our subsequent asymptotic analysis. To overcome this problem, we replace the sign function with a smooth approximation. Let K(·) be a probability density kernel [15] such that ∫_{-∞}^{+∞} x^j K(x) dx equals 0 for j = 1 and κ for j = 2, where κ ≠ 0, and let G_h(x) = ∫_{-x/h}^{x/h} K(u) du for h > 0. Then, a smoothed version of (5) is

∑_{t=u+1}^{n} w̃_t A_t(θ) G_h(ε_t(θ)) = 0. (6)
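The following sketch spells these quantities out in code: the truncated residuals ε_t(θ) from (2), the weights w̃_t, the objective S_n(θ) in (3), and the smoothed sign G_h. The function names are ours, and a Gaussian kernel K is assumed only for concreteness (anticipating Section 4); any kernel satisfying (A4) could be substituted.

```python
# Sketch of eps_t(theta), the weights w~_t, the objective S_n(theta), and G_h.
import numpy as np
from scipy.stats import norm

def residuals(theta, y, p, q):
    """eps_t(theta) from (2); y_t and eps_t(theta) are taken to be 0 for t <= 0."""
    phi, vartheta = theta[:p], theta[p:]
    eps = np.zeros(len(y))
    for t in range(len(y)):
        ar = sum(phi[i] * y[t - 1 - i] for i in range(p) if t - 1 - i >= 0)
        ma = sum(vartheta[j] * eps[t - 1 - j] for j in range(q) if t - 1 - j >= 0)
        eps[t] = y[t] - ar - ma
    return eps

def weights(y, alpha=3.0):
    """w~_t = (1 + sum_{k=1}^{t-1} k^{-alpha} |y_{t-k}|)^{-4}."""
    w = np.empty(len(y))
    for t in range(len(y)):
        s = sum(k ** (-alpha) * abs(y[t - k]) for k in range(1, t + 1))
        w[t] = (1.0 + s) ** (-4)
    return w

def S_n(theta, y, p, q, u, alpha=3.0):
    """Objective (3); the WLADE is a local minimizer of this function."""
    eps, w = residuals(theta, y, p, q), weights(y, alpha)
    return np.sum(w[u:] * np.abs(eps[u:]))

def G_h(x, h, sigma=1.0):
    """G_h(x) = integral of a Gaussian K over (-x/h, x/h); tends to sgn(x) as h -> 0."""
    return 2.0 * norm.cdf(x / h, scale=sigma) - 1.0
```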
Let m_{th}(θ) = w̃_t A_t(θ) G_h(ε_t(θ)). A smoothed empirical log-likelihood ratio is defined as

l_h(θ) = -2 max{ ∑_{t=u+1}^{n} log((n-u) p_t) : p_t ≥ 0, ∑_{t=u+1}^{n} p_t = 1, ∑_{t=u+1}^{n} p_t m_{th}(θ) = 0 }. (7)

Using the Lagrange multiplier method, the optimal value of p_t is derived to be

p_t = 1 / ((n-u)(1 + λ^τ(θ) m_{th}(θ))), (8)

where λ(θ) is a (p+q)-dimensional vector of Lagrange multipliers satisfying

∑_{t=u+1}^{n} m_{th}(θ) / (1 + λ^τ(θ) m_{th}(θ)) = 0. (9)

This gives the smoothed empirical log-likelihood ratio statistic

l_h(θ) = 2 ∑_{t=u+1}^{n} log(1 + λ^τ(θ) m_{th}(θ)). (10)
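As a numerical sketch of (8)-(10): build the rows m_{th}(θ), solve (9) for λ(θ) by a damped Newton iteration on the concave dual, and evaluate (10). This is a generic solver, not the authors' implementation; A_t(θ) is approximated here by central finite differences purely for brevity (rather than via the recursion from [13]), and residuals, weights, and G_h are the hypothetical helpers sketched above.

```python
# Sketch of m_th(theta) and of the statistic l_h(theta) in (10).
import numpy as np

def m_matrix(theta, y, p, q, u, h, alpha=3.0, step=1e-6):
    """Rows m_th(theta) = w~_t * A_t(theta) * G_h(eps_t(theta)), t = u+1, ..., n.
    A_t(theta) = -d eps_t(theta)/d theta, approximated by finite differences."""
    theta = np.asarray(theta, dtype=float)
    eps, w = residuals(theta, y, p, q), weights(y, alpha)
    A = np.zeros((len(y), len(theta)))
    for i in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += step
        tm[i] -= step
        A[:, i] = -(residuals(tp, y, p, q) - residuals(tm, y, p, q)) / (2.0 * step)
    M = w[:, None] * A * G_h(eps, h)[:, None]
    return M[u:]

def el_log_ratio(M, max_iter=100, tol=1e-10):
    """l_h = 2 * sum_t log(1 + lambda' m_th), with lambda solving (9)."""
    d = M.shape[1]
    lam = np.zeros(d)
    for _ in range(max_iter):
        denom = 1.0 + M @ lam
        grad = (M / denom[:, None]).sum(axis=0)       # left-hand side of (9)
        if np.linalg.norm(grad) < tol:
            break
        curv = (M / denom[:, None] ** 2).T @ M        # negative Hessian of the dual
        step = np.linalg.solve(curv, grad)            # Newton direction
        s = 1.0
        while np.any(1.0 + M @ (lam + s * step) <= 1e-8):
            s *= 0.5                                  # backtrack to keep 1 + lambda' m_th > 0
        lam = lam + s * step
    return 2.0 * np.sum(np.log(1.0 + M @ lam)), lam
```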
Let θ = (φ^τ, ω^τ)^τ, where ω ∈ R^m (1 ≤ m ≤ p+q) is the parameter of interest and φ ∈ R^{p+q-m} is the nuisance parameter. Note that m = p+q means that there is no nuisance parameter. Let φ_0 and ω_0 denote the true values of φ and ω, respectively. The profile empirical likelihood is defined as

l_p(ω) = min_{φ} l_h(φ, ω). (11)

That is, l_p(ω) = l_h(φ̃(ω), ω), where φ̃ = φ̃(ω) := argmin_{φ} l_h(φ, ω).
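The profile step (11) can be sketched as a numerical minimization over the nuisance parameter. The splitting helper assemble_theta, the index convention omega_idx, and the use of Nelder-Mead are illustrative assumptions; el_log_ratio and m_matrix are the hypothetical helpers sketched above.

```python
# Sketch of the profile empirical likelihood l_p(omega) in (11).
import numpy as np
from scipy.optimize import minimize

def assemble_theta(varphi, omega, omega_idx, dim):
    """Hypothetical helper: place omega at positions omega_idx and varphi elsewhere."""
    theta = np.empty(dim)
    theta[list(omega_idx)] = omega
    theta[[i for i in range(dim) if i not in set(omega_idx)]] = varphi
    return theta

def profile_el(omega, varphi_init, omega_idx, y, p, q, u, h, alpha=3.0):
    """l_p(omega) = min over varphi of l_h(varphi, omega)."""
    def objective(varphi):
        theta = assemble_theta(varphi, omega, omega_idx, p + q)
        return el_log_ratio(m_matrix(theta, y, p, q, u, h, alpha))[0]
    res = minimize(objective, varphi_init, method="Nelder-Mead")
    return res.fun                                    # l_h(varphi~(omega), omega)
```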
The following conditions are needed.
(A1) The characteristic polynomials φ(z) = 1 - ϕ_1 z - ... - ϕ_p z^p and θ(z) = 1 + ϑ_1 z + ... + ϑ_q z^q have no common zeros, and all roots of φ(z) and θ(z) lie outside the unit circle.
(A2) The innovation {ε_t} has zero median and a differentiable density f(x) satisfying f(0) > 0, sup_{x∈R} |f(x)| < B_1 < ∞, and sup_{x∈R} |f′(x)| < B_2 < ∞. Furthermore, E|ε_t|^δ < ∞ for some δ > 0, and α > max{2, 2/δ}.
(A3) As n → ∞, u → ∞ and u/n → 0.
(A4) The second derivative of K exists on R, and K′(x) and K″(x) are bounded.
(A5) h = 1/n^γ with 1/4 < γ < 1/3.
First, we show the existence and consistency of φ̃(ω_0).
Proposition 1.
Let d_n = 1/n^β with max{1/3, 3γ/2} < β < 1/2. Assume that (A1)-(A5) hold; then, as n → ∞, with probability 1, there exists a local minimizer φ̃ of l_h(φ, ω_0) which lies in the interior of the ball B = {φ : ||φ - φ_0|| ≤ d_n}. Moreover, φ̃ and λ̃ = λ(φ̃, ω_0) satisfy [equation (12) omitted; refer to PDF], where [equation (13) omitted; refer to PDF].
The following theorem presents the asymptotic distribution of the profile empirical likelihood.
Theorem 2.
Under the conditions of Proposition 1, as n → ∞, the random variable l_p(ω_0), with φ̃ given in Proposition 1, converges in distribution to χ²_m.
If c is chosen such that P(χ²_m ≤ c) = a, then Theorem 2 implies that the asymptotic coverage probability of the empirical likelihood confidence region I_{hc} = {ω : l_p(ω) ≤ c} is a; that is, P(ω_0 ∈ I_{hc}) = P(l_p(ω_0) ≤ c) = a + o(1) as n → ∞.
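For a scalar parameter of interest (m = 1), the region I_{hc} can be computed by scanning a grid of candidate values of ω and keeping those with l_p(ω) ≤ c, as in the sketch below. The grid endpoints and resolution are arbitrary illustrative choices, and profile_el is the hypothetical helper sketched in Section 2.

```python
# Sketch of the chi-square calibration: keep the omega values with l_p(omega) <= c.
import numpy as np
from scipy.stats import chi2

def el_confidence_interval(profile_fn, grid, level=0.90, df=1):
    """Invert the profile EL statistic over a grid of scalar omega values."""
    c = chi2.ppf(level, df)                           # P(chi^2_df <= c) = level
    accepted = [om for om in grid if profile_fn(om) <= c]
    return (min(accepted), max(accepted)) if accepted else None

# Example call for an ARMA(1,1) model with phi_1 of interest and vartheta_1 as nuisance:
# grid = np.linspace(-0.95, 0.95, 191)
# ci = el_confidence_interval(
#     lambda om: profile_el(np.array([om]), [0.5], [0], y, 1, 1, 20, h), grid)
```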
3. Proofs of the Main Results
In the following, ||·|| denotes the Euclidean norm of a vector or matrix, and C denotes a positive constant which may differ from place to place. For t = 0, ±1, ±2, ..., define [equation (14) omitted; refer to PDF]. Put Q_t = (U_{t-1}, ..., U_{t-p}, V_{t-1}, ..., V_{t-q})^τ and w_t = 1/(1 + ∑_{k=1}^{∞} k^{-α} |y_{t-k}|)^4, and denote by Q_{1t} the partial vector of Q_t corresponding to φ_0. Let [equation (15) omitted; refer to PDF]. Assumptions (A1) and (A2) imply that, for δ̃ = min(δ, 1), [equation (16) omitted; refer to PDF]. Hence, ∑_{k=1}^{∞} k^{-α/2} |y_{t-k}| < ∞ with probability 1, which ensures that w_t is well defined. Note that ||Q_t|| ≤ C ∑_{j=1}^{∞} r^j |y_{t-j}| for some 0 < r < 1 and [equation (17) omitted; refer to PDF]. Then, Σ, Σ_1, and Ω are well-defined (finite) matrices. For simplicity, in this section we denote (φ, ω_0) and (φ_0, ω_0) by φ and φ_0, respectively. The following notation will be used in the proofs. Let [equation (18) omitted; refer to PDF]. To prove Proposition 1, we first prove the following lemmas.
Lemma 3.
Under the conditions of Proposition 1, as n → ∞, [equation (19) omitted; refer to PDF].
Proof of Lemma 3.
For part (i), we may write [equation (20) omitted; refer to PDF]. For K_1, we have [equation (21) omitted; refer to PDF], where Z_t = w_t Q_t and b_{nt} = G_h(ε_t) - E(G_h(ε_t)). The second term of (21) is O(log n / n) a.s. by the ergodic theorem. Turning to the first term, we suppose, without loss of generality, that Q_t is its first element U_{t-1}. Note that, for each n ≥ u+1, {Z_t b_{nt}, F_t, u+1 ≤ t ≤ n} is a sequence of martingale differences with |Z_t b_{nt}| ≤ C, where F_t = σ(ε_s, s ≤ t). For some C_0 > 0, by the ergodic theorem, we have [equation (22) omitted; refer to PDF]. Set y = C(E(Z_t^2) + C_0)n; by Theorem 1.2A in [16], for all A > 0, we have [equation (23) omitted; refer to PDF]. Choosing A such that A^2 > 2C(E(Z_t^2) + C_0), by the Borel-Cantelli lemma, the first term of (21) is O(log n / n) a.s. Thus, K_1 is O(log n) a.s.

For K_2, by Davis [2], it holds that |ε_t - ε_t(φ_0)| ≤ ξ_t and ||A_t(φ_0) - Q_t|| ≤ ξ_t, where ξ_t = C ∑_{j=t}^{∞} r^j |y_{t-j}| for some 0 < r < 1. Therefore, [equation (24) omitted; refer to PDF]. Thus, K_2 is O(log n) a.s. For K_3, we have [equation (25) omitted; refer to PDF], because ||A_t(φ_0)|| ≤ C ∑_{j=1}^{t-1} r^j |y_{t-j}| (see [2]) and (t+l)^{-α} ≤ 2^{-α} (tl)^{-α/2} for t > 0 and l > 0. Thus, K_3 is also O(log n) a.s. Therefore, part (i) holds.

For the proof of part (ii), we may write [equation (26) omitted; refer to PDF], where A_{1t}(φ) = -∂ε_t(φ)/∂φ. For D_1, we may write [equation (27) omitted; refer to PDF]. Note that [equation (28) omitted; refer to PDF], where T_t = w_t Q_t Q_{1t}^τ and c_{nt} = K(ε_t/h) + K(-ε_t/h) - E[K(ε_t/h) + K(-ε_t/h)]. The second term of (28) converges to -2f(0)Σ_1 a.s. by the ergodic theorem. We now prove that the first term is o(1) a.s. We again suppose, without loss of generality, that Q_t is its first element U_{t-1}. Note that, for each n ≥ u+1, {T_t c_{nt}, F_t, u+1 ≤ t ≤ n} is a sequence of martingale differences with |T_t c_{nt}| ≤ C, and [equation (29) omitted; refer to PDF], where C_0 > 0 is a constant. Set y = nC(f(0)C_0 h + O(h^2)); by Theorem 1.2A in [16], for all ε > 0, we have [equation (30) omitted; refer to PDF]. The result then follows from the Borel-Cantelli lemma. Thus, D_{11} = -2f(0)Σ_1 + o(1) a.s. Similarly to K_2 and K_3, we have [equation (31) omitted; refer to PDF]. Therefore, D_1 = -2f(0)Σ_1 + o(1) a.s.

For D_2, from the definition of A_t(φ), it holds for t > max(p,q) that [equation (32) omitted; refer to PDF], where B is the backshift operator. For t = 0, ±1, ±2, ..., define [equation (33) omitted; refer to PDF], where Q_{t,i} is the i-th component of Q_t. Put X_t = (X_t^{(i,j)}); similarly to [13], we have ||∂A_t(φ_0)/∂φ^τ|| ≤ C ∑_{j=1}^{t-1} r^j |y_{t-j}|, ||X_t|| ≤ C ∑_{j=1}^{∞} r^j |y_{t-j}|, and ||X_t - ∂A_t(φ_0)/∂φ^τ|| ≤ ξ_t. Then, we may write [equation (34) omitted; refer to PDF]. Similarly to D_{11} and D_{12}, we have that D_{21} and D_{22} are o(1) a.s. This completes the proof.
Lemma 4.
Under the conditions of Proposition 1, as n → ∞, [equation (35) omitted; refer to PDF] hold uniformly in B.
Proof of Lemma 4.
For part (i), from [2], we have that ||∂A_t(φ)/∂φ^τ|| ≤ C ∑_{j=1}^{t-1} r^j |y_{t-j}| and ||A_t(φ)|| ≤ C ∑_{j=1}^{t-1} r^j |y_{t-j}| hold uniformly in the ball B for sufficiently large n. Then, for each φ ∈ B, we have [equation (36) omitted; refer to PDF]. Thus, part (i) holds. For part (ii), similarly to the proof of Lemma 3, we have that S(φ_0) = Ω + o(1) a.s. For each φ ∈ B, by a Taylor expansion, we have [equation (37) omitted; refer to PDF], where φ* lies between φ_0 and φ. For T_1, we have [equation (38) omitted; refer to PDF]. Similarly, T_2 → 0 a.s. and T_3 → 0 a.s. This completes the proof.
Proof of Proposition 1.
For φ ∈ B, by a Taylor expansion, [equation (39) omitted; refer to PDF], where φ* lies between φ_0 and φ. Note that the final term on the right-hand side of (39) can be written as [equation (40) omitted; refer to PDF], which is o(δ_n) a.s., where δ_n = ||φ - φ_0||, because d_n^2/h^3 = 1/n^{2β-3γ} → 0 and [equation (41) omitted; refer to PDF]. The third term on the right-hand side of (39) can be written as [equation (42) omitted; refer to PDF], which is also o(δ_n) a.s., because d_n/h = 1/n^{β-γ} → 0 and [equation (43) omitted; refer to PDF], by an argument similar to the proof of Lemma 3. Therefore, [equation (44) omitted; refer to PDF] uniformly in φ ∈ B. Write φ = φ_0 + μ d_n for φ ∈ {φ : ||φ - φ_0|| = d_n}, where ||μ|| = 1. We now give a lower bound for l_h(φ) on the surface of the ball. Similarly to [6], by Lemmas 3 and 4, we have [equation (45) omitted; refer to PDF], where c - ε > 0 and c is the smallest eigenvalue of 4f^2(0) Σ_1^τ Ω^{-1} Σ_1. Similarly, [equation (46) omitted; refer to PDF]. Since l_h(φ) is continuous in φ on the closed ball B, it attains its minimum there; by the lower bound on the boundary, the minimum is attained at some point φ̃ in the interior of the ball, and φ̃ satisfies ∂l_h(φ̃)/∂φ = 0, so (12) holds. This completes the proof.
Proof of Theorem 2.
Similarly to the proof of Theorem 2 of Qin and Lawless [17], we have [equation (47) omitted; refer to PDF], where [equation (48) omitted; refer to PDF]. By the standard arguments used in empirical likelihood proofs (see [6]), we have [equation (49) omitted; refer to PDF], where Δ = 4f^2(0) Σ_1^τ Ω^{-1} Σ_1. Since (n-u)^{-1/2} Q_{nh}(φ_0) →_d N(0, Ω) and [equation (50) omitted; refer to PDF], it follows that l_p(ω_0) →_d χ²_m.
4. Simulation Studies
We generated data from the simple ARMA(1,1) model y_t = ϕ_1 y_{t-1} + ε_t + ϑ_1 ε_{t-1}, with N(0,1), t_2, and Cauchy innovation distributions. We set u = 20, α = 3, and the true value (ϕ_1, ϑ_1) = (0.4, 0.7) or (-0.5, 0.7), where ϕ_1 is the parameter of interest. The sample sizes are n = 50, 100, 150, 200, and 2,000 replications are conducted in all cases. We smooth the estimating equations using the kernel

K(x) = (1/(√(2π) σ)) exp(-x^2/(2σ^2)), (51)

where σ = 0.1; that is, K is a Gaussian kernel. The coverage probabilities of the smoothed empirical likelihood confidence regions I_{hc} with bandwidth h = 1/n^γ are denoted by EL(γ), where γ = 0.27, 0.30, 0.32, respectively.
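A sketch of this Monte Carlo design: the three innovation laws, the ARMA(1,1) recursion, the bandwidth h = 1/n^γ, and the chi-square cutoff. The coverage loop relies on the hypothetical profile_el sketched in Section 2; the kernel scale σ = 0.1 would be threaded through to G_h in a complete implementation, and the burn-in length and seed are our own choices.

```python
# Sketch of the coverage experiment for the smoothed EL confidence intervals.
import numpy as np
from scipy.stats import chi2

SIGMA = 0.1                                           # scale of the Gaussian kernel K in (51)

def innovations(dist, size, rng):
    """Draw i.i.d. innovations from one of the three simulation designs."""
    if dist == "normal":
        return rng.standard_normal(size)
    if dist == "t2":
        return rng.standard_t(2, size=size)           # t_2: infinite variance
    return rng.standard_cauchy(size)                  # standard Cauchy

def coverage(phi1, vartheta1, n, dist, gamma, level, reps, rng, u=20, burn=500):
    """Fraction of replications with l_p(true phi_1) <= the chi^2_1 quantile."""
    h, c, hits = n ** (-gamma), chi2.ppf(level, 1), 0
    for _ in range(reps):
        eps = innovations(dist, n + burn, rng)
        y = np.zeros(n + burn)
        for t in range(1, n + burn):                  # ARMA(1,1) recursion
            y[t] = phi1 * y[t - 1] + eps[t] + vartheta1 * eps[t - 1]
        y = y[burn:]
        lp = profile_el(np.array([phi1]), [vartheta1], [0], y, 1, 1, u, h)
        hits += lp <= c
    return hits / reps

# e.g. coverage(0.4, 0.7, n=100, dist="cauchy", gamma=0.30, level=0.90,
#               reps=2000, rng=np.random.default_rng(0))
```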
As another benchmark for the simulation experiments, we consider confidence regions based on the asymptotic normal distribution of the WLADE proposed by [3]. To construct these confidence regions, we need to estimate f(0), Σ, and Ω. We can estimate f(0) by [equation (52) omitted; refer to PDF], where K̃(x) = exp(-x)/(1+exp(-x))^2 is a kernel function on R, b_n = 1/n^ν is a bandwidth, and σ̂_w = (n-u)^{-1} ∑_{t=u+1}^{n} w̃_t. Σ and Ω can be estimated, respectively, by [equation (53) omitted; refer to PDF], where Q̂_t is defined in the same manner as Q_t with θ_0 replaced by θ̂ and ε_t replaced by ε_t(θ̂); see (14). Based on these estimates, we can construct the NA confidence region, that is, the region based on the normal approximation of the WLADE. The coverage probabilities of the confidence regions I_{NA} with bandwidth b_n = 1/n^ν are denoted by NA(ν), with ν = 0.25, 0.20, respectively. Tables 1, 2, and 3 report the coverage probabilities of the confidence intervals for ϕ_1 at confidence levels 0.9 and 0.95 for the N(0,1), t_2, and Cauchy innovations, respectively.
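The next sketch shows one natural weighted kernel estimate of f(0) built from the quantities just defined (the logistic kernel K̃, the bandwidth b_n = 1/n^ν, and σ̂_w). The exact normalization of the estimator in (52) is not reproduced here, so the formula below is an assumption rather than the paper's estimator; estimating Σ and Ω in (53) would similarly replace the population moments of Q_t by sample averages of Q̂_t, which is not coded.

```python
# Sketch of a weighted kernel estimate of f(0) with the logistic kernel K~.
import numpy as np

def K_tilde(x):
    """K~(x) = exp(-x) / (1 + exp(-x))^2, written in a numerically stable form."""
    e = np.exp(-np.abs(x))                            # K~ is symmetric in x
    return e / (1.0 + e) ** 2

def f0_hat(eps_hat, w, u, nu=0.25):
    """Assumed form: a w~_t-weighted kernel density estimate of f at zero,
    normalized by sigma_hat_w = (n-u)^{-1} * sum_{t=u+1}^{n} w~_t."""
    n = len(eps_hat)
    b_n = n ** (-nu)
    e, ww = eps_hat[u:], w[u:]
    sigma_w = ww.mean()
    return np.sum(ww * K_tilde(e / b_n)) / ((n - u) * b_n * sigma_w)
```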
Table 1: The coverage probability of confidence intervals when ε_t ~ N(0,1).
| a, ϕ_1 | n | EL(0.27) | EL(0.30) | EL(0.32) | NA(0.25) | NA(0.20) |
| a = 0.9, ϕ_1 = 0.4 | 50 | 0.8818 | 0.8820 | 0.8822 | 0.8193 | 0.7875 |
| | 100 | 0.8898 | 0.8896 | 0.8897 | 0.8655 | 0.8431 |
| | 150 | 0.8926 | 0.8927 | 0.8932 | 0.8692 | 0.8395 |
| | 200 | 0.8983 | 0.8983 | 0.8986 | 0.8666 | 0.8363 |
| a = 0.9, ϕ_1 = -0.5 | 50 | 0.8888 | 0.8885 | 0.8892 | 0.8156 | 0.7813 |
| | 100 | 0.8967 | 0.8968 | 0.8973 | 0.8738 | 0.8448 |
| | 150 | 0.8943 | 0.8944 | 0.8946 | 0.8823 | 0.8574 |
| | 200 | 0.8972 | 0.8979 | 0.8977 | 0.8931 | 0.8692 |
| a = 0.95, ϕ_1 = 0.4 | 50 | 0.9347 | 0.9350 | 0.9350 | 0.8724 | 0.8425 |
| | 100 | 0.9424 | 0.9430 | 0.9431 | 0.9123 | 0.8862 |
| | 150 | 0.9467 | 0.9471 | 0.9470 | 0.9157 | 0.8936 |
| | 200 | 0.9494 | 0.9494 | 0.9497 | 0.9160 | 0.8937 |
| a = 0.95, ϕ_1 = -0.5 | 50 | 0.9404 | 0.9404 | 0.9404 | 0.8705 | 0.8407 |
| | 100 | 0.9472 | 0.9474 | 0.9474 | 0.9134 | 0.8931 |
| | 150 | 0.9481 | 0.9479 | 0.9476 | 0.9248 | 0.9052 |
| | 200 | 0.9495 | 0.9495 | 0.9490 | 0.9326 | 0.9152 |
Table 2: The coverage probability of confidence intervals when ε_t ~ t_2.
| a, ϕ_1 | n | EL(0.27) | EL(0.30) | EL(0.32) | NA(0.25) | NA(0.20) |
| a = 0.9, ϕ_1 = 0.4 | 50 | 0.8774 | 0.8776 | 0.8779 | 0.7627 | 0.7290 |
| | 100 | 0.8850 | 0.8849 | 0.8856 | 0.8323 | 0.7995 |
| | 150 | 0.8919 | 0.8917 | 0.8924 | 0.8467 | 0.8157 |
| | 200 | 0.8935 | 0.8932 | 0.8932 | 0.8510 | 0.8229 |
| a = 0.9, ϕ_1 = -0.5 | 50 | 0.8902 | 0.8902 | 0.8899 | 0.7295 | 0.7055 |
| | 100 | 0.8942 | 0.8944 | 0.8950 | 0.8151 | 0.7836 |
| | 150 | 0.8946 | 0.8941 | 0.8937 | 0.8473 | 0.8204 |
| | 200 | 0.8965 | 0.8963 | 0.8961 | 0.8641 | 0.8392 |
| a = 0.95, ϕ_1 = 0.4 | 50 | 0.9331 | 0.9327 | 0.9327 | 0.8163 | 0.7854 |
| | 100 | 0.9412 | 0.9412 | 0.9413 | 0.8834 | 0.8542 |
| | 150 | 0.9447 | 0.9446 | 0.9447 | 0.8965 | 0.8718 |
| | 200 | 0.9456 | 0.9458 | 0.9459 | 0.9001 | 0.8762 |
| a = 0.95, ϕ_1 = -0.5 | 50 | 0.9421 | 0.9418 | 0.9418 | 0.7963 | 0.7658 |
| | 100 | 0.9455 | 0.9462 | 0.9460 | 0.8695 | 0.8430 |
| | 150 | 0.9444 | 0.9442 | 0.9440 | 0.8910 | 0.8695 |
| | 200 | 0.9464 | 0.9463 | 0.9464 | 0.9072 | 0.8863 |
Table 3: The coverage probability of confidence intervals when ε_t ~ Cauchy.
| a, ϕ_1 | n | EL(0.27) | EL(0.30) | EL(0.32) | NA(0.25) | NA(0.20) |
| a = 0.9, ϕ_1 = 0.4 | 50 | 0.8360 | 0.8358 | 0.8361 | 0.6345 | 0.6018 |
| | 100 | 0.8708 | 0.8708 | 0.8702 | 0.7286 | 0.6942 |
| | 150 | 0.8811 | 0.8819 | 0.8823 | 0.7614 | 0.7270 |
| | 200 | 0.8870 | 0.8864 | 0.8868 | 0.7800 | 0.7467 |
| a = 0.9, ϕ_1 = -0.5 | 50 | 0.8673 | 0.8679 | 0.8678 | 0.5818 | 0.5502 |
| | 100 | 0.8869 | 0.8870 | 0.8874 | 0.6756 | 0.6440 |
| | 150 | 0.8972 | 0.8974 | 0.8974 | 0.7232 | 0.6867 |
| | 200 | 0.8956 | 0.8955 | 0.8953 | 0.7510 | 0.7156 |
| a = 0.95, ϕ_1 = 0.4 | 50 | 0.8992 | 0.8994 | 0.8996 | 0.6888 | 0.6557 |
| | 100 | 0.9263 | 0.9261 | 0.9263 | 0.7870 | 0.7530 |
| | 150 | 0.9362 | 0.9366 | 0.9367 | 0.8176 | 0.7857 |
| | 200 | 0.9402 | 0.9398 | 0.9399 | 0.8350 | 0.8044 |
| a = 0.95, ϕ_1 = -0.5 | 50 | 0.9235 | 0.9239 | 0.9240 | 0.6378 | 0.6065 |
| | 100 | 0.9422 | 0.9420 | 0.9420 | 0.7342 | 0.7030 |
| | 150 | 0.9474 | 0.9479 | 0.9481 | 0.7820 | 0.7530 |
| | 200 | 0.9467 | 0.9471 | 0.9470 | 0.8057 | 0.7778 |
The simulation results can be summarized as follows. The coverage probabilities of NA(ν) are much smaller than the nominal levels and are very sensitive to the choice of the bandwidth b_n and to the innovation distribution. In contrast, the coverage probabilities of EL(γ) are much closer to the nominal levels and are less sensitive to the choice of the bandwidth h and to the innovation distribution. As the sample size n increases, the coverage probabilities of both methods increase toward the nominal levels, as one might expect.
5. Conclusions
This paper develops a profile empirical likelihood method to construct confidence regions for the partial parameters of interest in IVARMA models. Starting from the estimating equations of the WLADE, we derived a smoothed profile empirical likelihood ratio and proved that the resulting statistic has an asymptotic standard chi-squared distribution. Hence, there is no need to estimate any additional quantities such as the asymptotic variance. The simulations show that the proposed method has good finite-sample behavior, which confirms the method empirically.
Acknowledgments
Jinyu Li's research is partially supported by the Fundamental Research Funds for the Central Universities (no. 2013XK03). Shuyuan He's work is partially supported by the National Natural Science Foundation of China (nos. 11171230 and 11231010).
Conflict of Interests
The authors declare that they have no conflict of interests.
[1] T. Mikosch, T. Gadrich, C. Klüppelberg, and R. J. Adler, "Parameter estimation for ARMA models with infinite variance innovations," The Annals of Statistics, vol. 23, no. 1, pp. 305-326, 1995.
[2] R. A. Davis, "Gauss-Newton and M-estimation for ARMA processes with infinite variance," Stochastic Processes and their Applications, vol. 63, no. 1, pp. 75-95, 1996.
[3] J. Pan, H. Wang, and Q. Yao, "Weighted least absolute deviations estimation for ARMA models with infinite variance," Econometric Theory, vol. 23, no. 5, pp. 852-879, 2007.
[4] K. Zhu and S. Ling, "The global LAD estimators for finite/infinite variance ARMA(p,q) models," Econometric Theory, vol. 28, no. 5, pp. 1065-1086, 2012.
[5] A. B. Owen, "Empirical likelihood ratio confidence intervals for a single functional," Biometrika, vol. 75, no. 2, pp. 237-249, 1988.
[6] A. B. Owen, "Empirical likelihood ratio confidence regions," The Annals of Statistics, vol. 18, no. 1, pp. 90-120, 1990.
[7] A. B. Owen, Empirical Likelihood, Chapman and Hall, London, UK, 2001.
[8] A. C. Monti, "Empirical likelihood confidence regions in time series models," Biometrika, vol. 84, no. 2, pp. 395-405, 1997.
[9] C. Chuang and N. H. Chan, "Empirical likelihood for autoregressive models, with applications to unstable time series," Statistica Sinica, vol. 12, no. 2, pp. 387-407, 2002.
[10] N. H. Chan, L. Peng, and Y. Qi, "Quantile inference for near-integrated autoregressive time series with infinite variance," Statistica Sinica, vol. 16, no. 1, pp. 15-28, 2006.
[11] J. Li, W. Liang, S. He, and X. Wu, "Empirical likelihood for the smoothed LAD estimator in infinite variance autoregressive models," Statistics & Probability Letters, vol. 80, no. 17-18, pp. 1420-1430, 2010.
[12] J. Li, W. Liang, and S. He, "Empirical likelihood for LAD estimators in infinite variance ARMA models," Statistics & Probability Letters, vol. 81, no. 2, pp. 212-219, 2011.
[13] P. J. Brockwell and R. A. Davis, Time Series: Theory and Methods, 2nd edition, Springer-Verlag, New York, NY, USA, 1991.
[14] P. C. B. Phillips, "A shortcut to LAD estimator asymptotics," Econometric Theory, vol. 7, no. 4, pp. 450-463, 1991.
[15] B. W. Silverman, Density Estimation for Statistics and Data Analysis, Chapman & Hall, London, UK, 1986.
[16] V. H. de la Peña, "A general class of exponential inequalities for martingales and ratios," The Annals of Probability, vol. 27, no. 1, pp. 537-564, 1999.
[17] J. Qin and J. Lawless, "Empirical likelihood and general estimating equations," The Annals of Statistics, vol. 22, no. 1, pp. 300-325, 1994.
Copyright © 2014 Jinyu Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
This paper proposes a profile empirical likelihood method for the partial parameters in ARMA(p,q) models with infinite variance. We introduce a smoothed empirical log-likelihood ratio statistic and prove a nonparametric version of Wilks's theorem. Furthermore, we conduct simulations to illustrate the performance of the proposed method.