In this paper, we consider an inference problem in a tensor regression model with one change-point. Specifically, we consider a general hypothesis testing problem on a tensor parameter, which includes as a special case the problem of testing for the absence of a change-point. To this end, we derive the unrestricted estimator (UE) and the restricted estimator (RE), as well as the joint asymptotic normality of the UE and RE. Thanks to the established asymptotic normality, we derive a test for the hypothesized restriction. We also derive the asymptotic power of the proposed test and prove that the established test is consistent. Beyond the complexity of the testing problem in the tensor model, we consider a very general setting where the tensor error term and the regressors need not be independent, and the dependence structure of the outer product of the tensor error term and the regressors is as weak as that of a mixingale.
Introduction
In this paper, we study an inference problem in a tensor regression model with change-points. Specifically, this work extends the results of Ghannam (2022) for the case where the model has only one change-point. To give some references on estimating and testing for change-points in regression models, we quote Quandt (1958), who suggested a maximum likelihood estimation procedure for a linear regression model with two separate regimes. Further, Bai and Perron (2003) presented a dynamic programming algorithm to efficiently estimate and test for break dates in the linear model with multiple change-points, furthering the results presented in Bai and Perron (1998). Perron and Qu (2006) extended results for linear regression models with structural changes under some arbitrary restrictions imposed on the parameter coefficients. We also quote Qu and Perron (2007), who proposed a likelihood ratio-type test of no change-points versus a given number of change-points in multiple structural change models. More recently, Lee et al. (2015) proposed a lasso estimator of the regression coefficients in a high-dimensional linear regression model with a possible change-point. Zhang et al. (2015) proposed the sparse group lasso to estimate multiple change-points in a linear regression model, and Döring and Jensen (2015) considered the problem of estimating a jump change-point in a linear regression model with a smooth regression function. In addition, Wang and Zhao (2022) proposed a quadratic-form-based CUSUM test to inspect the stability of the regression coefficients in a high-dimensional linear model. Ma et al. (2022) considered the detection of multiple change-points in a high-dimensional multivariate regression model. For other interesting references on change-point inference in heteroscedastic time series or in functional data, we also quote Aue et al. (2006), Aue et al. (2009) and Górecki et al. (2018) and references therein.
Our work differs in several ways. First, unlike the quoted papers, we consider a change-point detection problem in the context of a tensor regression model. In addition, the restriction in Ghannam (2022) motivates the testing problem that includes a test for the non-existence of a change-point as a special case. Second, under some weak regularity conditions, we derive a functional central limit theorem type result to show that the error-regressor terms converge to a Gaussian process and we derive the joint asymptotic distribution of the UE and RE. Third, by using the established joint asymptotic normality of the UE and RE, we derive the asymptotic distribution of the proposed test statistic and we construct a test for testing the restriction. Finally, the proposed methods are applied to some simulation studies as well as an fMRI dataset.
The remainder of this paper is organized as follows. In Sect. 2, we introduce the statistical model and the main regularity conditions. In Sect. 3, we derive the joint asymptotic distribution of the UE and RE. In Sect. 4, we propose a test for a multi-mode hypothesis and we give a test for detecting the non-existence of a change-point. In Sect. 5, we present some numerical results. Namely, we conduct some simulation studies to apply the hypothesis test and to corroborate the theoretical results. We also present in this section a data analysis of an fMRI neuroimaging dataset. Moreover, we present in supplementary file additional theoretical and simulation results.
Statistical model and preliminary results
In this section, we present the statistical model as well as the main regularity conditions. In the following subsection, we define some notations that will be used throughout the paper.
Notations
To set up some notations, let represent the concatenation/stacking of the equal-sized tensors and along the d-th dimension and let represent the mode-(d) matrix product of a tensor by a matrix. Let denote the mode-n matrix of the tensor . For more details about the mode-(d) tensor-matrix product, we refer to Kolda and Bader (2009) and Kolda (2006). Further, let denote the Kronecker product of two matrices A and B. Let be the tensor product of two tensors and . Note that for the special case where and are vectors, this tensor product becomes the vector outer product. For more information on the tensor product, we refer to Kolda (2006) and Kolda and Bader (2009). For a tensor and matrices , let and let . Let be a complete filtration and let denote the space of all k-column vectors of functions which are right continuous with left limits on . For two d-dimensional random tensors and , define , and for the sake of simplicity, for , let and note that . Let denote the Euclidean/Frobenius norm of a tensor, i.e. , and let denote the integer part of a real number x. Furthermore, let be a non-central chi-squared random variable with degrees of freedom and non-centrality parameter , and let be a central chi-squared random variable with l degrees of freedom. We denote by the indicator function of the event A and we define as the th quantile of a random variable for some . In addition, to simplify some asymptotic results, let denote a random variate (r.v.) such that converges in probability to 0, and let denote a r.v. such that is bounded in probability. Similarly, let o(a) denote a non-random quantity such that o(a)/a converges to 0, and O(a) a non-random quantity such that O(a)/a is bounded. Further, the notations , , and stand for convergence in , convergence almost surely, convergence in probability and convergence in distribution, respectively.
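As an aside for readers less familiar with these operations, the mode-(d) tensor-matrix product, the Kronecker product and the Frobenius norm can be illustrated in numpy. This is only an illustration with arbitrary shapes, not part of the paper's notation:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4, 5))   # a third-order tensor
A = rng.standard_normal((2, 3))      # a matrix applied along mode 1

# Mode-1 tensor-matrix product: contract A against the first mode of X
Y = np.einsum('pi,ijk->pjk', A, X)   # shape (2, 4, 5)

# Equivalently, multiply A by the mode-1 unfolding (matricization) of X
assert np.allclose(Y.reshape(2, -1), A @ X.reshape(3, -1))

# Kronecker product of two matrices
B = rng.standard_normal((4, 2))
K = np.kron(A, B)                    # shape (2*4, 3*2) = (8, 6)

# Frobenius norm of a tensor: the Euclidean norm of all its entries
fro = np.linalg.norm(X)
```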
Statistical model
In this paper, we consider the tensor regression model with T observations and one unknown change-point, say, . For convenience, let =1 and and set Let
2.1
where , , , with , and , is a -column vector for . Here, and are random and is deterministic. Further, in order to incorporate some prior knowledge, we consider the case where may satisfy the following restriction
2.2
where for , is a known matrix with rank , is a known matrix with rank , and is a known tensor. The restriction in (2.2) is motivated by previous statistical investigations that may suggest that some components of the tensor parameter are statistically insignificant, or that there is a certain known association between some components of the tensor parameter. Moreover, the restriction in (2.2) is useful for region selection, which can be applied to a vast selection of data including MRI and fMRI neuro-images. This restriction also motivates the testing problem
2.3
Note that for a suitable choice of and , this testing problem covers many interesting cases. In particular, by taking , and , one can test the non-existence of the change-point. We present a practical application of the restrictions on fMRI data in Sect. 5.2.
Remark 2.1
We note that the tensor model in (2.1) is a generalization of the multivariate regression model presented in Chen and Nkurunziza (2016) for the special case where there is only one change-point. Indeed, if we set , then the resulting model is equivalent to the model considered in Chen and Nkurunziza (2016) with . Moreover, the restriction in (2.2) is also a generalization of the restriction considered in Chen and Nkurunziza (2016) for the matrix parameter case. To see this, let and and and let . Then, the above condition reduces to , where
Preliminary results: the known change-points case
The unrestricted estimator and the restricted estimator
In this section, we give some preliminary results in the context where is known. In particular, we present the unrestricted estimator (UE) and the restricted estimator (RE). In the theorem below, we recall Proposition 3.1.1 of Ghannam (2022), which defines the UE and RE of for a known change-point. For simplicity, let denote the UE of when is known. Similarly, let denote the RE of when is known. Let , , let , let for , let , and let .
Theorem 2.1
The UE and the RE are respectively given by .
The proof of Theorem 2.1 is given in Ghannam (2022).
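To make the known-change-point least squares concrete, the following numpy sketch fits a vectorized version of the model, splitting the sample at the known change-point and running ordinary least squares on each regime separately. The function name and array shapes are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def segment_ue(Y, X, tau):
    """Unrestricted least squares with one known change-point at tau.

    Y: (T, p) array of vectorized tensor responses,
    X: (T, q) array of regressors;
    returns one (q, p) coefficient array per regime."""
    return [np.linalg.lstsq(X[s:e], Y[s:e], rcond=None)[0]
            for s, e in ((0, tau), (tau, len(Y)))]
```

With noiseless data, each regime's coefficients are recovered exactly, which gives a quick sanity check on the segmentation.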
Estimation in the case of unknown change-points
In this section, we outline the estimation method for when the location of the change-point is unknown. This follows the estimation method proposed in Ghannam (2022) for the special case where . Nevertheless, for the convenience of the reader, we outline the estimation method for . In a similar way as in Chen and Nkurunziza (2016), one estimates the unknown parameters and by minimizing the least squares objective function. This gives the UE of and . Let and denote the UE and RE of the true change-point from the unrestricted and restricted least squares, respectively. Also, let and be the UE and RE of the regression coefficient tensor , respectively. Let and be the Frobenius norms of the residuals from the unrestricted and restricted least squares fits evaluated at the partition , respectively. We have
2.4
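For a single change-point, this minimization amounts to a one-dimensional search: scan the candidate break dates, fit least squares on each side, and keep the date that minimizes the total squared residual norm. The numpy sketch below illustrates this idea on a vectorized model; it is a simplification (the general multi-break case requires the dynamic programming algorithm of Ghannam (2022)), and the function name and minimum segment length are arbitrary:

```python
import numpy as np

def estimate_changepoint(Y, X, min_seg=5):
    """Scan candidate break dates and return the one minimizing the sum
    of the two per-segment least squares residual norms (squared)."""
    T = len(Y)

    def ssr(s, e):
        b = np.linalg.lstsq(X[s:e], Y[s:e], rcond=None)[0]
        r = Y[s:e] - X[s:e] @ b
        return float((r ** 2).sum())

    return min(range(min_seg, T - min_seg),
               key=lambda t: ssr(0, t) + ssr(t, T))
```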
The minimization of (2.4) needs to be done numerically by using the dynamic programming algorithm provided in Ghannam (2022).
Asymptotic results
About the structure of the noise and the regressors
In this section, we derive some technical results on the structure of the error term and the regressors in a tensor model with one change-point. These results are useful in deriving the joint asymptotic normality of the UE and the RE. In the following, we present some conditions for deriving this joint asymptotic normality, which is an important step for the proposed inference method.
Assumption 1
, a non-random positive definite matrix uniformly in .
There exists an such that for all , the minimum eigenvalues of and are bounded away from 0.
The matrix is invertible for for some .
, where and .
There exist some and such that and we assume and
The first statement in Assumption 1 allows us to overcome the problem of unit-root regressors, while the second statement allows us to avoid the local collinearity problem and ensures the identifiability of the change-point. Assumption 1 (iii) guarantees the existence of the tensor estimate. It should be noted that, in the model without a change-point, this condition corresponds to the classical full-rank assumption on the matrix of regressors. The role of the fourth part of Assumption 1 is to guarantee that, if a change-point exists, the length of each regime is proportional to T; this condition implies that the location of the change-point is asymptotically different from T. Part (v) of Assumption 1 allows us to separate the interval into or cells, which is useful in the derivation of some asymptotic results.
For each segment, set
3.1
where . We make the following assumption on
Assumption 2
There exists a positive and increasing function such that, for some , for all and .
There exist sequences of non-negative real numbers and , such that and
3.2
Assumption 2 defines the dependence structure of the error and the regressors. This condition is so general that it holds for classical (univariate or multivariate) regression models. Note that the role of the condition in (3.2) is to weaken as far as possible the dependence structure of the tensor error term and the regressors. In particular, for the special case where , for some , (3.2) indicates that is an mixingale array of size .
To simplify the derivation of a tensor type central limit theorem for tensor mixingale arrays of size , we make the following two additional assumptions.
Assumption 3
is uniformly integrable, ,
, for some .
.
Moreover, let and set
Assumption 4
, for and with and
, and where , is positive definite matrix for and is an positive definite matrix.
In summary, Assumptions 1–4 are very general and hold for a vast array of statistical models. In particular, these assumptions are such that the proposed test can be applied to classical models where the errors are assumed to be independent and identically distributed, as well as to cases where the errors are neither independent nor identically distributed. Further, there is no requirement on the distribution of the tensor noise, and we do not require independence between the error term and the matrix of regressors . Specifically, Assumption 2 allows a dependence structure of the noise and the regressors that is much weaker than what is typically assumed in the literature. Indeed, this condition admits a vast array of possible applications, including many auto-correlated and heteroskedastic models. For some works with similar and/or related assumptions, we quote McLeish (1977), Qu and Perron (2007), Davidson (1994), Andrews and Pollard (1994), Chen and Nkurunziza (2016) and references therein.
Remark 3.1
We note that . Along with Assumptions 1–4, this decomposition will be used to derive the joint asymptotic distribution of the UE and RE under the weak mixingale dependence condition in Assumption 2.
The condition is equivalent to and thus, under Assumption 1, It follows that , where is a non-random, positive definite matrix.
Under Assumption 4, converges in probability to a non-random matrix .
To simplify some notations, we define the following tensor stochastic process
3.3
where and . In the following lemma, we establish the asymptotic normality of the tensor error term, which is useful in deriving the asymptotic normality of the UE and the RE of the tensor model.
Lemma 3.1
If Assumptions 1–4 hold, then and for each , where
The proof of Lemma 3.1 is outlined in Appendix A. Lemma 3.1 is crucial for the derivation of the asymptotic power of the proposed test statistic for the testing problem in (2.3). Below, we prove that, under additional conditions, where is as defined in (3.3), converges weakly to a Gaussian process. As an intermediate result, we first establish the following lemma.
Lemma 3.2
Under Assumptions 1–4, if
3.4
then is a uniformly integrable set for some sequence S of approaching 0 and a non-random finite-valued function .
The proof of Lemma 3.2 is outlined in Appendix A. From Lemma 3.2, we establish the following functional central limit theorem-type result that generalizes Lemma 3.1.
Theorem 3.1
Suppose that the conditions of Lemma 3.2 hold. Then,
is tight in Stone’s topology on
For each , the set is uniformly integrable;
The weak limit process of any convergent subsequence of is almost surely continuous.
Suppose that for all along with the conditions of Lemma 3.2. Then, where is a (tensor) Gaussian process with almost surely continuous paths and independent increments.
The proof of this result is given in the Appendix A.
Remark 3.2
We note that Theorem 3.1 is a stronger result than Lemma 3.1. Specifically, Lemma 3.1 follows immediately from Theorem 3.1, provided that condition (3.4) is satisfied in addition to Assumptions 1–4. In particular, under Assumptions 1–4 and condition (3.4), we have by Theorem 3.1, where with .
Asymptotic properties of the UE and the RE
In this subsection, we present some asymptotic properties of the UE and RE. The established results are useful in deriving the asymptotic distribution of the proposed test statistic for the testing problem in (2.3). The following proposition gives the asymptotic distribution of the UE. In the sequel, let with as defined in Assumption 4.
Theorem 3.2
Let . Under Assumptions 1–4, we have with
The proof of this result is outlined in Appendix A. It should be noted that if condition (3.4) holds, then Theorem 3.2 can be established directly from Theorem 3.1. Indeed, under condition (3.4), we may simply replace Lemma 3.1 by Remark 3.2 in the proof of Theorem 3.2 and the result would follow. By using Theorem 3.2, we derive the joint asymptotic normality of the RE and the UE. In particular, the joint asymptotic normality is established in the context where the restriction in (2.2) may not hold. The established result plays an important role in deriving an asymptotic test and its asymptotic power. In passing, note that under a fixed alternative hypothesis, converges to infinity as T tends to infinity and then, also converges to infinity as T tends to infinity. Thus, under a fixed alternative, it is impossible to evaluate the asymptotic optimality of the proposed test. As such, we need to consider some neighbourhoods of the null hypothesis in (2.3). More precisely, we consider the following sequence of local alternative restrictions
3.5
where is an tensor with To introduce some notation, let, and let ,
, for ,
,,
,
Theorem 3.3
Under Assumptions 1–4 and (3.5), where
Further, .
The proof of this theorem follows from Theorem 3.2, along with the vectorization operator and some algebraic computations. A detailed proof can also be found in Ghannam (2022).
Main result: the proposed test
In this section, we give a test for the hypothesis in (2.3) based on the joint asymptotic normality of the UE and RE. By using Theorem 3.3, we establish the following theorem, which can be used for testing the restriction in (2.3). To this end, let and for , let , where is a consistent estimator of . Further, let and be consistent estimators of and , respectively, and let . Furthermore, let and let .
Theorem 4.1
Suppose that the conditions of Theorem 3.3 hold and let . If , then . Moreover, if , then .
The proof of Theorem 4.1 is given in Appendix A. By using Theorem 4.1, we construct a test for testing the restriction in (2.3). Specifically, to solve the hypothesis testing problem in (2.3), we suggest using the test statistic defined in Theorem 4.1. Thus, we propose the following test
4.1
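The decision rule in (4.1) is the usual chi-squared rule: reject the null hypothesis when the statistic exceeds the upper quantile of a central chi-squared distribution. A minimal sketch with scipy, in which the degrees of freedom are a placeholder (in the paper they are determined by the restriction matrices):

```python
from scipy.stats import chi2

def chi2_test(stat, df, alpha=0.05):
    """Reject H0 iff the statistic exceeds the (1 - alpha) quantile of
    the central chi-squared distribution with df degrees of freedom."""
    return stat > chi2.ppf(1 - alpha, df)
```

For example, with 5 degrees of freedom the 95% critical value is about 11.07, so a statistic of 20.0 leads to rejection while 3.0 does not.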
Remark 4.1
To test for the non-existence of a change-point, we take , and , for and
From Theorem 4.1, we get the following corollary which establishes the asymptotic power of the test in (4.1).
Corollary 4.1
Suppose that the conditions of Theorem 4.1 hold. Then the asymptotic power function of the test in (4.1) is given by
The proof follows from Theorem 4.1.
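Numerically, the asymptotic power is the probability that a non-central chi-squared variable exceeds the central critical value. A sketch with scipy, in which the non-centrality parameter is a generic stand-in for the quantity derived in the paper:

```python
from scipy.stats import chi2, ncx2

def asymptotic_power(nc, df, alpha=0.05):
    """P( chi-squared(df, noncentrality=nc) > central (1-alpha) quantile )."""
    return ncx2.sf(chi2.ppf(1 - alpha, df), df, nc)
```

The power is increasing in the non-centrality parameter and approaches the significance level as it tends to 0, consistent with the consistency of the test under local alternatives.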
Simulation studies and analysis of a real dataset
In this section, we present some simulation results. In particular, we present simulation results which confirm the performance of the proposed test for small and medium sample sizes. Further, in the second subsection, we analyze an fMRI dataset.
Simulation studies
In this section, we present some simulation results which confirm the performance of the proposed test in small and medium sample sizes. In particular, we test the non-existence of a change-point for some simulated data.
To this end, we set the true tensor parameter, , to be an square-centred image with zero entries at the border, where the centre entries are generated from a uniform distribution with parameters 0 and 1. Thus, we set , , and let be (which is equivalent to an matrix), and we run the following simulation for and 400. We let the covariates be scalars randomly drawn from a uniform distribution on the interval (0, 1.5), so that the resulting matrix of covariates becomes . The error terms are randomly drawn from a normal distribution with mean 0 and variance 1, and the resulting error term stacked along the dimension, , is a -dimensional tensor in which the face corresponds to the error term of the response matrix. Stacking along the dimension gives the response tensor , which is with . Note that the true number of change-points is and hence, we study the empirical power of the test for the non-existence of a change-point as given in (4.1) and Remark 4.1. Hence, we set , and to incorporate some additional prior information, we set and , where denotes the zero matrix with i rows and j columns. is set to be a tensor of zeros. The mode-1 and mode-2 matrices, and , are set as defined above since the parameter signal is concentrated in the middle of the image and the first 8 rows and the first 2 columns are known to be 0. We run the simulation with deviating from 0 by units of at each iteration, i.e. , where and is a tensor of ones. In this case, the hypothesis in (2.3) becomes
For each restriction, we compute the UE and the RE and test the above restriction at significance levels and . For each deviation, , we replicate the simulation 1000 times and obtain the empirical power at each significance level. The results of the simulations are displayed in Fig. 1 (and Figures 1–2 in the supplementary file). As can be seen in Fig. 1 (and Figures 1–2 in the supplementary file), the test is consistent for all significance levels.
However, as pointed out by one of the referees, there is some concern about the size of the test. In particular, for the cases where , the size of the proposed test appears substantially lower than the significance level. Further, for the cases where or , the size of the proposed test appears slightly higher than the significance level. These results suggest that a large value of T is needed for the size of the proposed test to be close to the significance level. Indeed, the visual portrayal in Fig. 2 seems to confirm this. We also note that there was an issue with the consistency of the empirical power function at higher resolutions. Specifically, we found that, even under local alternative hypotheses in which , the null hypothesis was not rejected at higher resolutions. This resulted in a flat plot of the empirical power function versus . Nevertheless, this problem is part of our ongoing research.
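The size behaviour discussed above can be probed with a toy Monte Carlo experiment: under the null hypothesis with Gaussian errors, an idealized Wald-type statistic is exactly chi-squared, so the rejection rate should sit close to the significance level. This is a stylized check, not the paper's simulation design; all settings are arbitrary:

```python
import numpy as np
from scipy.stats import chi2

def empirical_size(df=2, alpha=0.05, reps=2000, seed=0):
    """Monte Carlo rejection rate of an exactly chi-squared statistic
    under H0; the rate should be close to alpha."""
    rng = np.random.default_rng(seed)
    crit = chi2.ppf(1 - alpha, df)
    stats = (rng.standard_normal((reps, df)) ** 2).sum(axis=1)
    return float(np.mean(stats > crit))
```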
[See PDF for image]
Fig. 1
The empirical power versus for and . The data are generated with , under the local alternative with , , and is a tensor of ones. We set and , and for each T and , the empirical power was computed from 1000 data replications
[See PDF for image]
Fig. 2
The empirical power versus for at the three significance levels. The data are generated with , under the local alternative with , , and is a tensor of ones. We set and , and for each and , the empirical power was computed from 1000 data replications
Analysis of an fMRI dataset
In this subsection, we illustrate the application of the proposed method to a real dataset, namely the fMRI neuro-imaging data described in Ghannam (2022). Given this dataset, our goal is to test for the non-existence of a change-point. Specifically, we use the 1000-connectome resting-state functional magnetic resonance imaging (fMRI) data (Biswal et al. 2010). We use the Beijing scan-site data, which consist of 198 resting-state fMRI scans, where three-dimensional images of size voxels are taken over 225 time points and consecutive time points are 2 s apart. We also include age and gender as covariates. Following Remark 4.1, we set and we set to be the tensor of zeros. Here, the hypothesis in (2.3) is
We run the test in (4.1) on the fMRIs of several subjects and find that the test rejects the non-existence of a change-point for all of the subjects we tested. In other words, the test detected non-stationarity in the fMRI brain scans of all of these subjects. This result contradicts some visual conclusions made in Aston and Kirch (2012), where some subjects were found to have no deviations from stationarity. For example, the test in this paper rejected the non-existence of a change-point in the fMRI scan of Subject 69518, which may therefore exhibit deviations from stationarity, whereas Aston and Kirch (2012) claim that there is no deviation from stationarity in the fMRI scan of this subject. In Table 1, we summarize the test results for several subjects, including the test statistic, the p value and the estimated location of the change-point/non-stationarity. Given the nature of the analyzed dataset, the proposed method was applied here with deterministic regressors such as age and gender. Nevertheless, under Assumptions 1–4, the proposed method is applicable to the cases where the regressors are random variables. The application of the proposed method to such cases is part of our ongoing research.
Table 1. Results of testing for the non-existence of a change-point on fMRI neuro-images
Subject number | (value) | |
|---|---|---|
00440 | 126 | |
08455 | 54 | |
17315 | 118 | |
48501 | 150 | |
49782 | 92 | |
11072 | 89 | |
69518 | 92 | |
22661 | 83 | |
55541 | 53 |
The size of each fMRI is and the corresponding restriction matrices are . For each subject, we present the corresponding test statistic (), the p value and the estimated location of the change-point ()
Conclusion
In this paper, we considered a testing problem in the context of a tensor regression model with a change-point. The contributions of this paper include the implementation of testing methods using a restriction on the tensor parameter of interest. In addition, the inference methods were established by deriving the asymptotic normality of the UE and RE under a weakened regressor-error dependence structure known as a mixingale. This dependence structure allows for the inclusion of a vast array of models, including autocorrelated and heteroskedastic models. The joint asymptotic normality of the UE and RE was crucial in deriving the asymptotic distribution of the proposed test statistic. Moreover, using this asymptotic distribution, we established a test for some constraints on the tensor regression parameter. The established test can be used to test for the absence of a change-point; indeed, the constraint considered includes the non-existence of a change-point as a special case. We also derived, under some sequences of local alternative hypotheses, the asymptotic power of the proposed test and proved that the test is consistent. Further, we presented some simulation studies that confirm the consistency of the proposed test.
In the future, the work of this paper can be further expanded in several directions. Namely, tackling the high-dimensionality problem is a crucial step in improving the results of this paper. Specifically, further studies on penalty functions (see, for example, Tibshirani (1996) and Jandhyala et al. (2013)) and/or tensor decomposition algorithms (see, for example, Zhou et al. (2013) and Li et al. (2018)) may prove to be useful in extending the results of this paper to models with high-dimensional data. Another possible extension may involve testing no-change-points versus a fixed number of change-points in the context of tensor regression. Lastly, an interesting challenge that may stem from this paper is to study the case where the number of change-points in the tensor model is unknown and how the inference may be affected by accurate/inaccurate estimation of the true number of change-points.
Acknowledgements
The authors would like to thank the referees and Associate Editor for helpful comments and useful insights. Further, Dr. S. Nkurunziza would like to acknowledge the financial support received from the Natural Sciences and Engineering Research Council of Canada (NSERC).
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
Andrews, DWK; Pollard, D. An introduction to functional central limit theorems for dependent stochastic processes. Int Stat Rev; 1994; 62,
Aston, JA; Kirch, C. Evaluating stationarity via change-point alternatives with applications to fMRI data. Ann Appl Stat; 2012; 3058688 [DOI: https://dx.doi.org/10.1214/12-AOAS565]
Aue, A; Horváth, L; Hušková, M; Kokoszka, P. Change-point monitoring in linear models. Econom J; 2006; 9,
Aue, A; Gabrys, R; Horváth, L; Kokoszka, P. Estimation of a change-point in the mean function of functional data. J Multivar Anal; 2009; 100,
Bai, J; Perron, P. Estimating and testing linear models with multiple structural changes. Econometrica; 1998; 66,
Bai, J; Perron, P. Computation and analysis of multiple structural change models. J Appl Econom; 2003; 18,
Billingsley P (1968) Convergence of probability measures. Wiley
Biswal, BB; Mennes, M; Zuo, X-N; Gohel, S; Kelly, C; Smith, SM; Beckmann, CF; Adelstein, JS; Buckner, RL; Colcombe, S et al. Toward discovery science of human brain function. Proc Natl Acad Sci; 2010; 107,
Chen, F; Nkurunziza, S. A class of Stein-rules in multivariate regression model with structural changes. Scand J Stat; 2016; 43,
Davidson J (1994) Stochastic limit theory: an introduction for econometricians. OUP Oxford
Döring, M; Jensen, U. Smooth change point estimation in regression models with random design. Ann Inst Stat Math; 2015; 67, pp. 595-619.3339193 [DOI: https://dx.doi.org/10.1007/s10463-014-0467-8]
Ghannam M (2022) On estimation methods in tensor regression models. PhD Thesis, University of Windsor
Górecki, T; Horváth, L; Kokoszka, P. Change point detection in heteroscedastic time series. Econom Stat; 2018; 7, pp. 63-88.3824127
Jacod J, Shiryaev A (1987) Limit theorems for stochastic processes, vol 288. Springer Science & Business Media
Jandhyala, V; Fotopoulos, S; MacNeill, I; Liu, P. Inference for single and multiple change-points in time series. J Time Ser Anal; 2013; 34,
Kolda, T; Bader, B. Tensor decompositions and applications. SIAM Rev; 2009; 51,
Kolda TG (2006) Multilinear operators for higher-order decompositions. Technical Report SAND2006-2081, Sandia National Laboratories, April
Lee, S; Seo, MH; Shin, Y. The Lasso for high dimensional regression with a possible change point. J R Stat Soc Series B Stat Methodol; 2015; 78,
Li, X; Xu, D; Zhou, H; Li, L. Tucker tensor regression and neuroimaging analysis. Stat Biosci; 2018; 10,
Ma, X; Zhou, Q; Zi, X. Multiple change points detection in high-dimensional multivariate regression. J Syst Sci Complex; 2022; 35,
Mathai, A; Provost, S. Quadratic forms in random variables: theory and applications; 1992; New York: Marcel Dekker
McLeish, DL. On the invariance principle for nonstationary mixingales. Ann Probab; 1977; 5,
Perron, P; Qu, Z. Estimating restricted structural change models. J Econom; 2006; 134,
Qu, Z; Perron, P. Estimating and testing structural changes in multivariate regressions. Econometrica; 2007; 75,
Quandt, RE. The estimation of the parameters of a linear regression system obeying two separate regimes. J Am Stat Assoc; 1958; 53,
Tibshirani, R. Regression shrinkage and selection via the lasso. J R Stat Soc Ser B (Methodol); 1996; 58,
Wang D, Zhao Z (2022) Optimal change-point testing for high-dimensional linear models with temporal dependence
Zhang, B; Geng, J; Lai, L. Multiple change-points estimation in linear regression models via sparse group lasso. IEEE Trans Signal Process; 2015; 63,
Zhou, H; Li, L; Zhu, H. Tensor regression with applications in neuroimaging data analysis. J Am Stat Assoc; 2013; 108,
© The Author(s) under exclusive licence to Sociedad de Estadística e Investigación Operativa 2024.