1. Introduction
Consider a two-dimensional block-structured discrete-time Markov chain (DTMC) $\{(X_n, Y_n), n \ge 0\}$ on the state space $E = \mathbb{Z}_+ \times \mathbb{R}_+$, where $\mathbb{Z}_+$ and $\mathbb{R}_+$ are the sets of non-negative integers and non-negative real numbers, respectively. Denote by $\mathcal{B}(\mathbb{R}_+)$ the Borel $\sigma$-field of $\mathbb{R}_+$. The transition probability law is time-homogeneous and is characterized by the transition kernel $P((i,x), \{j\} \times A)$,
where $i, j \in \mathbb{Z}_+$, $x \in \mathbb{R}_+$ and $A \in \mathcal{B}(\mathbb{R}_+)$. Recall that a two-dimensional function $K(x, A)$ is called a kernel if it is a measurable function in $x$ for each fixed $A \in \mathcal{B}(\mathbb{R}_+)$, and a non-negative measure on $\mathcal{B}(\mathbb{R}_+)$ for each fixed $x$. We write $P_{ij}(x, A)$ for $P((i,x), \{j\} \times A)$ for simplicity. Note that the kernel is stochastic in the sense that $\sum_{j} P_{ij}(x, \mathbb{R}_+) = 1$ for all $i$ and all $x$. The level and phase of each state are represented by the first component $i$ and the second component $x$, respectively. For any $i \in \mathbb{Z}_+$, define $E_i = \{i\} \times \mathbb{R}_+$ to be the level-$i$ set. Then, the state space can be decomposed as $E = \bigcup_{i \ge 0} E_i$. For $n \ge 1$, the corresponding $n$-step transition kernel is given by the Chapman–Kolmogorov recursion

$$P^{(n)}((i,x), \cdot) = \int_E P((i,x), dz)\, P^{(n-1)}(z, \cdot).$$

Two different types of block-structured discrete-time Markov chains are the focus of this paper. The first one is the discrete-time GI/M/1-type Markov chain, whose transition kernel matrix is level independent and has the following lower-Hessenberg block form:
$$P = \begin{pmatrix} B_0 & A_0 & & & \\ B_1 & A_1 & A_0 & & \\ B_2 & A_2 & A_1 & A_0 & \\ \vdots & \vdots & \vdots & \ddots & \ddots \end{pmatrix} \tag{1}$$
The second one is the discrete-time M/G/1-type Markov chain, whose transition kernel matrix is level independent and has the following upper-Hessenberg block form:
$$P = \begin{pmatrix} B_0 & B_1 & B_2 & B_3 & \cdots \\ A_0 & A_1 & A_2 & A_3 & \cdots \\ & A_0 & A_1 & A_2 & \cdots \\ & & A_0 & A_1 & \cdots \\ & & & \ddots & \ddots \end{pmatrix} \tag{2}$$
These block-structured Markov chains have the special feature that the level transitions are skip-free to the right or skip-free to the left, respectively.
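For intuition, when the phase set is finite the two kernel forms reduce to ordinary block matrices. The following NumPy sketch (with hypothetical random 2×2 blocks, truncated at four levels, and no claim that the rows are normalized) assembles both Hessenberg shapes and checks the skip-free structure:

```python
import numpy as np

m, L = 2, 4  # hypothetical phase dimension and level truncation
rng = np.random.default_rng(0)
A = [rng.random((m, m)) for _ in range(L)]   # placeholder blocks A_0..A_3
B = [rng.random((m, m)) for _ in range(L)]   # placeholder boundary blocks B_0..B_3

def gi_m_1(A, B, L, m):
    """Lower-Hessenberg (GI/M/1-type): the level can rise by at most one."""
    P = np.zeros((L * m, L * m))
    for i in range(L):
        for j in range(L):
            if j == i + 1:
                P[i*m:(i+1)*m, j*m:(j+1)*m] = A[0]
            elif 1 <= j <= i:
                P[i*m:(i+1)*m, j*m:(j+1)*m] = A[i - j + 1]
            elif j == 0:
                P[i*m:(i+1)*m, 0:m] = B[i]
    return P

def m_g_1(A, B, L, m):
    """Upper-Hessenberg (M/G/1-type): the level can fall by at most one."""
    P = np.zeros((L * m, L * m))
    for i in range(L):
        for j in range(L):
            if i == 0:
                P[0:m, j*m:(j+1)*m] = B[j]
            elif j >= i - 1:
                P[i*m:(i+1)*m, j*m:(j+1)*m] = A[j - i + 1]
    return P

P_gi = gi_m_1(A, B, L, m)
P_mg = m_g_1(A, B, L, m)
# skip-free checks: zero blocks beyond the first super/sub block diagonal
assert np.all(P_gi[:m, 2*m:] == 0)   # GI/M/1: cannot jump up by 2+ levels
assert np.all(P_mg[2*m:, :m] == 0)   # M/G/1: cannot jump down by 2+ levels
```

The lower-Hessenberg form forbids upward jumps of two or more levels; the upper-Hessenberg form forbids downward jumps of two or more levels.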
Tweedie [1] proposed the GI/M/1-type Markov chain with a continuous phase set and demonstrated that the positive recurrent GI/M/1-type Markov chain has an operator-geometric stationary distribution. Tweedie [1] thus extended the well-known results of Neuts [2] for the GI/M/1-type Markov chain with a finite phase set. Tweedie's finding was later applied by Breuer [3] to investigate the stationary distribution of the embedded GI/G/k queue with a Lebesgue-dominated inter-arrival time distribution. A positive recurrent tridiagonal block-structured quasi-birth-death (QBD) process with a continuous phase set, as well as a computational framework for its stationary distribution, were investigated by Nielsen and Ramaswami [4], who also demonstrated the motivation for studying models with a continuous phase set. The computational framework was recently extended and improved by Jiang et al. [5] by incorporating the wavelet transform approach.
The GI/M/1-type and M/G/1-type Markov chains with a finite phase set were investigated systematically by Neuts in 1981 [2] and 1989 [6], respectively. Effective software tools for computing the stationary distributions of these chains were developed by Bini et al. [7], based on the algorithms collected in [8]. It is known that the matrices R and G are the key matrices for computing the stationary distributions of GI/M/1-type and M/G/1-type Markov chains, respectively. Since R and G are closely connected through the Ramaswami dual and the Bright dual, the computation of the matrix R for GI/M/1-type chains can be reduced to the computation of the matrix G for M/G/1-type Markov chains ([9,10,11,12]). Several effective algorithms have been developed to compute the matrix G, such as functional iteration, Newton iteration, the invariant subspace method, cyclic reduction and Ramaswami reduction. Please refer to [13] for a detailed description of these algorithms.
As far as we know, the following two issues are still not well addressed in the literature:
(i) For a positive recurrent GI/M/1-type Markov chain with a continuous phase set, numerical algorithms for computing the stationary distribution are missing, although the theoretical framework has been established in [1];
(ii) M/G/1-type Markov chains are of the same importance as GI/M/1-type Markov chains. However, both the theoretical and the computational frameworks are missing for M/G/1-type Markov chains with a continuous phase set.
The current research is motivated by the above two issues. This paper is organized into six sections. We provide an overview of DTMCs on a general state space and the wavelet series expansion in two dimensions in Section 2. The GI/M/1-type Markov chains are introduced in Section 3, most of which is well known in the literature [1], except for the computational analysis. The analysis of stationary distributions for M/G/1-type Markov chains is performed in Section 4. Numerical experiments, including a brief description of the numerical algorithms and two illustrative examples, are presented in Section 5, where the different algorithms are compared with respect to accuracy and speed of calculation. Conclusions are presented in Section 6. Please refer to Table A1 for a summary of frequently used notations.
2. Preliminaries
2.1. Basics about DTMCs on a General State Space
We present some basic concepts for DTMCs on a general state space. Please refer to [14] for more details.
Let $\Phi = \{\Phi_n, n \ge 0\}$ be a DTMC on a general state space $E$ endowed with the countably generated $\sigma$-field $\mathcal{B}(E)$. Define $\tau_A := \min\{n \ge 1 : \Phi_n \in A\}$ to be the first return time to $A$. The chain is called $\varphi$-irreducible if there exists a non-negative nontrivial measure $\varphi$ on $\mathcal{B}(E)$ such that

$$P_x(\tau_A < \infty) > 0$$

for any $x \in E$ and any $A \in \mathcal{B}(E)$ with $\varphi(A) > 0$; in this case there exists a maximal irreducibility measure $\psi$, and the chain is also said to be $\psi$-irreducible. A set $A \in \mathcal{B}(E)$ is called Harris recurrent if $P_x(\tau_A < \infty) = 1$ for all $x \in A$. The chain is called Harris recurrent if it is $\psi$-irreducible and every set $A \in \mathcal{B}(E)$ with $\psi(A) > 0$ is Harris recurrent. A Harris recurrent chain has a unique (up to constant multiples) invariant measure $\pi$ such that

$$\pi(A) = \int_E \pi(dx) P(x, A), \quad A \in \mathcal{B}(E).$$

A Harris recurrent chain with a finite invariant measure is said to be Harris positive recurrent. If $\Phi$ is Harris positive recurrent and aperiodic, then

$$\lim_{n \to \infty} \sup_{A \in \mathcal{B}(E)} |P^{(n)}(x, A) - \pi(A)| = 0,$$

which implies that the limit of the transition kernel exists independently of the initial state $x$. In this case, $\pi$ is called the invariant probability measure or the stationary probability distribution.

We now introduce the censored Markov chain, which will be used later to deal with the invariant probability distributions of block-structured Harris positive recurrent chains. Let $A$ be a non-empty set in $\mathcal{B}(E)$. Let $n_k$ be the $k$-th time that $\Phi$ visits a state in $A$, i.e., $n_0 = \min\{n \ge 0 : \Phi_n \in A\}$ and $n_{k+1} = \min\{n > n_k : \Phi_n \in A\}$. The censored Markov chain on $A$ is defined by $\Phi^A_k = \Phi_{n_k}$, whose one-step transition kernel is denoted by $P^A(x, B)$, $x \in A$, $B \in \mathcal{B}(A)$. Writing $A^c = E \setminus A$ and splitting $P$ into the four blocks $P_{AA}$, $P_{AA^c}$, $P_{A^cA}$ and $P_{A^cA^c}$ according to this partition, the censored chain evolves according to the transition law

$$P^A = P_{AA} + P_{AA^c} \sum_{n=0}^{\infty} (P_{A^cA^c})^n P_{A^cA}.$$
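For a finite chain, the censored transition law can be computed in closed form as the stochastic complement P^A = P_AA + P_AB (I − P_BB)^{-1} P_BA, with B the complement of A (a standard identity in which the geometric series over excursions through B is summed exactly). A sketch with a hypothetical 4-state chain censored on its first two states:

```python
import numpy as np

# hypothetical 4-state stochastic matrix
P = np.array([[0.2, 0.3, 0.4, 0.1],
              [0.1, 0.4, 0.2, 0.3],
              [0.3, 0.2, 0.1, 0.4],
              [0.2, 0.2, 0.5, 0.1]])
A = [0, 1]          # censoring set
B = [2, 3]          # its complement
PAA, PAB = P[np.ix_(A, A)], P[np.ix_(A, B)]
PBA, PBB = P[np.ix_(B, A)], P[np.ix_(B, B)]
# P^A = P_AA + P_AB (I - P_BB)^{-1} P_BA: direct moves plus excursions through B
PA = PAA + PAB @ np.linalg.solve(np.eye(len(B)) - PBB, PBA)
assert np.allclose(PA.sum(axis=1), 1.0)   # the censored kernel is stochastic
```

Solving the linear system replaces the infinite Neumann series; it converges because the block P_BB is strictly substochastic for a recurrent chain.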
2.2. Basics about Wavelet in Two Dimensions
This section is concerned with some basics of wavelets, most of which are taken directly from [5]. Please refer to [15,16] for more details on wavelet analysis.
Denote the scaling function and the wavelet function by $\phi$ and $\psi$, respectively. For all $j, k \in \mathbb{Z}$, let $\phi_{j,k}(x) = 2^{j/2}\phi(2^j x - k)$ and $\psi_{j,k}(x) = 2^{j/2}\psi(2^j x - k)$. Define three two-dimensional wavelets

$$\psi^{(1)}(x, y) = \phi(x)\psi(y), \quad \psi^{(2)}(x, y) = \psi(x)\phi(y), \quad \psi^{(3)}(x, y) = \psi(x)\psi(y),$$

and, for all $j$ in $\mathbb{Z}$, their dilated and translated versions $\psi^{(u)}_{j,k,l}(x, y) = 2^{j}\psi^{(u)}(2^j x - k, 2^j y - l)$, $u = 1, 2, 3$.

Now, we consider the wavelet series expansion of a two-dimensional function. Collect the basis functions into column vectors. By Lemma 3.1 in [5], any function $f \in L^2(\mathbb{R}^2)$ can be expanded as

$$f(x, y) = \boldsymbol{w}(x)^{\top} A_f\, \boldsymbol{w}(y), \tag{3}$$

where $\boldsymbol{w}(\cdot)$ is the column vector of basis functions and $A_f$ is the coefficient matrix, whose diagonal blocks are indexed by scale. Let $K(x, B)$ be a kernel function whose density function is assumed to exist and is denoted by $k(x, y)$. On the one hand, by applying (3) to the density $k$ of the kernel $K$, which is referred to as the wavelet transform (WT), we can find the matrix $A_k$, also known as the associated matrix of $K$. On the other hand, for a given associated matrix $A_k$, we can recover the density function $k$ by reading (3) in the other direction, which is called the inverse wavelet transform (IWT).
As will be seen in the following sections, it is crucial to deal with convolutions of the transition kernels in order to investigate the stationary distributions. The wavelet transform is introduced to turn these convolution operations into matrix operations by expanding the kernels in the wavelet series. For kernel functions $K_1$ and $K_2$, define their convolution by

$$(K_1 * K_2)(x, B) = \int_{\mathbb{R}_+} K_1(x, dz)\, K_2(z, B),$$

and define the $n$-fold convolution $K^{(n)}$ recursively by $K^{(n)} = K^{(n-1)} * K$, where $K^{(1)} = K$ and $K^{(0)}$ is the identity kernel. If $\mu$ is a signed measure on $E$, we write $(\mu K)(B) = \int_E \mu(dx) K(x, B)$. In order to expand the kernel functions through the wavelet transform, we need the following assumption and theorem, which are both taken from [5].
Assumption 1. All the kernel functions $A_i$ and $B_i$, $i \ge 0$, belong to $\mathcal{K}(\Gamma)$, where $\Gamma \subset \mathbb{R}_+^2$ is of finite Lebesgue measure, and $\mathcal{K}(\Gamma)$ is the set of kernel functions having a density function equal to zero outside of $\Gamma$.
Theorem 1 ([5]). Let $\{K_n\}$ be a sequence of kernel functions in $\mathcal{K}(\Gamma)$. Denote their density functions by $k_n$ and their associated matrices by $A_{K_n}$.
(i) For any fixed $n$, the convolution kernel $K_1 * K_2 * \cdots * K_n$ is also in $\mathcal{K}(\Gamma)$, and its associated matrix is $A_{K_1} A_{K_2} \cdots A_{K_n}$;
(ii) For any fixed $n$, the additive kernel $\sum_{m=1}^{n} K_m$ is also in $\mathcal{K}(\Gamma)$, and its associated matrix is $\sum_{m=1}^{n} A_{K_m}$;
(iii) If $K_n$ converges to $K$, then the kernel $K$ is also in $\mathcal{K}(\Gamma)$, and its associated matrix is $\lim_{n \to \infty} A_{K_n}$.
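The practical content of the theorem is that, once kernels are represented by matrices, kernel convolution becomes matrix multiplication. The same identity can be checked on a plain sample grid (a hypothetical pair of separable kernels on [0, 1], with a midpoint-rule discretization standing in for the wavelet basis):

```python
import numpy as np

n = 400
x = (np.arange(n) + 0.5) / n          # midpoint grid on [0, 1]
dx = 1.0 / n
K1 = np.outer(x, x)                   # samples of k1(x, z) = x*z
K2 = np.outer(x, x)                   # samples of k2(z, y) = z*y
K12 = K1 @ K2 * dx                    # discretized convolution: integral over z
exact = np.outer(x, x) / 3.0          # (k1 * k2)(x, y) = x*y * int_0^1 z^2 dz
assert np.abs(K12 - exact).max() < 1e-3
```

The matrix product plus the grid weight dx approximates the integral defining the convolution; the wavelet representation in Theorem 1 removes even the quadrature error, since the product of associated matrices is exact in the basis.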
3. GI/M/1-Type Markov Chains
Consider a GI/M/1-type Markov chain $\{(X_n, Y_n)\}$ whose transition law $P$, given by (1), satisfies, for any $i \in \mathbb{Z}_+$ and $x \in \mathbb{R}_+$,

$$B_i(x, \mathbb{R}_+) + \sum_{k=0}^{i} A_k(x, \mathbb{R}_+) = 1.$$

Define the kernel $R(x, B)$ to be the expected number of visits to the set $\{i+1\} \times B$, starting from the state $(i, x)$, under the taboo of the levels at or below $i$; by level independence, $R$ does not depend on $i$. From [1], we know that the censored Markov chain of the GI/M/1-type Markov chain on the zero level set $E_0$ has the following transition kernel

$$P^{E_0} = \sum_{k=0}^{\infty} R^{(k)} * B_k,$$

where $R^{(k)}$ denotes the $k$-fold convolution of $R$.
The following theorem, which is taken from [1], characterizes the invariant probability measure for GI/M/1-type Markov chains with a continuous phase set.
Theorem 2 ([1]). Suppose that the GI/M/1-type Markov chain with a continuous phase set is ψ-irreducible and Harris positive recurrent. Then, its unique stationary probability measure Π, decomposed by levels as $\Pi = (\pi_0, \pi_1, \pi_2, \ldots)$, satisfies the following recursive formula

$$\pi_{i+1} = \pi_i R, \quad i \ge 0,$$

where the kernel $R$ is the minimal non-negative solution of the equation

$$R = \sum_{k=0}^{\infty} R^{(k)} * A_k,$$

and $\pi_0$ is uniquely determined by

$$\pi_0 = \pi_0 \sum_{k=0}^{\infty} R^{(k)} * B_k, \qquad \sum_{i=0}^{\infty} \pi_i(\mathbb{R}_+) = 1.$$

Applying Theorem 2 and Theorem 1, we can obtain the following theorem directly.
Theorem 3. Suppose that the GI/M/1-type Markov chain with a continuous phase set is ψ-irreducible and Harris positive recurrent and that Assumption 1 holds.
(i) The kernels $R$, $A_k$ and $B_k$ are in $\mathcal{K}(\Gamma)$, and their associated matrices are denoted by $A_R$, $A_{A_k}$ and $A_{B_k}$, respectively.
(ii) The invariant probability measure Π is in $\mathcal{K}(\Gamma)$. Let $\boldsymbol{a}_{\pi_i}$ be the associated row vector of $\pi_i$, i.e., the coefficient vector of the density of $\pi_i$ in the wavelet basis defined in Section 2.2. Then, we have

$$\boldsymbol{a}_{\pi_{i+1}} = \boldsymbol{a}_{\pi_i} A_R, \qquad \boldsymbol{a}_{\pi_0} = \boldsymbol{a}_{\pi_0} \sum_{k=0}^{\infty} A_R^{k} A_{B_k},$$

and the normalization condition

$$\boldsymbol{a}_{\pi_0} (I - A_R)^{-1} \boldsymbol{v}_{\mathbf{1}} = 1,$$

where $\boldsymbol{v}_{\mathbf{1}}$ is the associated column vector of the constant function equal to one, with an appropriate dimension.
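In the finite-phase analogue, the theorem says the stationary vector is matrix-geometric in R. A sketch with a hypothetical 2×2 matrix R of spectral radius below one and a hypothetical boundary vector, keeping only the normalization step (the boundary equation for the level-0 vector is assumed already solved):

```python
import numpy as np

R = np.array([[0.2, 0.1],
              [0.05, 0.3]])              # hypothetical R, spectral radius < 1
assert max(abs(np.linalg.eigvals(R))) < 1.0

pi0 = np.array([0.6, 0.4])               # hypothetical unnormalized boundary vector
v = np.linalg.solve(np.eye(2) - R, np.ones(2))   # (I - R)^{-1} 1
pi0 = pi0 / (pi0 @ v)                    # enforce sum_i pi_0 R^i 1 = 1
pis = [pi0 @ np.linalg.matrix_power(R, i) for i in range(50)]
mass = sum(p.sum() for p in pis)         # total probability over levels 0..49
assert abs(mass - 1.0) < 1e-6            # geometric tail is negligible by level 50
```

Because the spectral radius of R is below one, the normalizing series sums in closed form to (I − R)^{-1}, which is what the linear solve computes.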
4. M/G/1-Type Markov Chains
In this section, we consider an M/G/1-type Markov chain $\{(X_n, Y_n)\}$ whose transition law $P$, given by (2), satisfies, for any $x \in \mathbb{R}_+$,

$$\sum_{k=0}^{\infty} B_k(x, \mathbb{R}_+) = \sum_{k=0}^{\infty} A_k(x, \mathbb{R}_+) = 1.$$

Define $\tau_{\le i}$ to be the first return time to the level set $E_0 \cup \cdots \cup E_i$ for any $i \in \mathbb{Z}_+$. For any $x \in \mathbb{R}_+$ and any $B \in \mathcal{B}(\mathbb{R}_+)$, define the following kernel function

$$G(x, B) = P\big(\tau_{\le i} < \infty,\ (X_{\tau_{\le i}}, Y_{\tau_{\le i}}) \in \{i\} \times B \,\big|\, (X_0, Y_0) = (i+1, x)\big),$$

which is independent of $i$ due to the level-independent structure of the chain. The first result is about the kernel $G$, which plays a key role in analyzing M/G/1-type Markov chains.

Theorem 4. Suppose that the M/G/1-type Markov chain is ψ-irreducible. The kernel $G$ is the minimal non-negative solution of the following equation

$$G = \sum_{i=0}^{\infty} A_i * G^{(i)}, \tag{4}$$

where $G^{(i)}$ is the $i$-fold convolution of the kernel $G$ with itself.

Proof. We first show that the kernel $G$ is a solution of Equation (4).
By conditioning on the state reached at the first transition, the kernel $G$ can be decomposed as follows:

$$G = A_0 + \sum_{k=1}^{\infty} A_k * G_k, \tag{5}$$

where $G_k$ denotes the first-passage kernel from level $i+k$ down to level $i$. We use an inductive argument to show that

$$G_k = G^{(k)}, \quad k \ge 1. \tag{6}$$
Since the chain is level independent, when $k = 1$, we have

$$G_1 = G$$

for any $x$ and $B$. Suppose that $G_k = G^{(k)}$ holds for some $k \ge 1$. By conditioning on the state of the first hitting of level $i+k$ and using the strong Markov property, we have

$$G_{k+1} = G * G_k = G * G^{(k)} = G^{(k+1)}.$$
Substituting (6) into (5), we obtain

$$G = A_0 + \sum_{k=1}^{\infty} A_k * G^{(k)} = \sum_{k=0}^{\infty} A_k * G^{(k)},$$

where we exchange the order of integration and summation by Fubini's theorem. Next, we demonstrate that $G$ is the minimal non-negative solution of (4). We divide the proof into two steps.
Step 1. We first define a sequence of kernels $\{G_{[N]}\}_{N \ge 0}$ by setting $G_{[0]} = 0$ and

$$G_{[N+1]} = \sum_{i=0}^{\infty} A_i * G_{[N]}^{(i)}, \quad N \ge 0.$$
Let $X$ be any non-negative solution of Equation (4). Obviously, $G_{[0]} = 0 \le X$. Suppose that $G_{[N]} \le X$; then $G_{[N]}^{(i)} \le X^{(i)}$ for every $i$. Moreover, we have

$$G_{[N+1]} = \sum_{i=0}^{\infty} A_i * G_{[N]}^{(i)} \le \sum_{i=0}^{\infty} A_i * X^{(i)} = X. \tag{7}$$

Similarly, if we assume inductively that $G_{[N-1]} \le G_{[N]}$, we have

$$G_{[N]} = \sum_{i=0}^{\infty} A_i * G_{[N-1]}^{(i)} \le \sum_{i=0}^{\infty} A_i * G_{[N]}^{(i)} = G_{[N+1]},$$

and so $G_{[N]}$ is monotonically increasing in $N$. Hence, the limit $G_{\infty} = \lim_{N \to \infty} G_{[N]}$ exists, and $G_{\infty} \le X$. By taking the limit on both sides of the defining recursion and using the dominated convergence theorem, we know that the kernel $G_{\infty}$ is a solution of (4), i.e.,

$$G_{\infty} = \sum_{i=0}^{\infty} A_i * G_{\infty}^{(i)}.$$

We further have that $G_{\infty}$ is the minimal solution, since $G_{\infty} \le X$ for any non-negative solution $X$.
Step 2. Next, we need to prove that $G_{\infty} = G$. Define $G^{[M]}$ to be the kernel of first passage from level $i+1$ down to level $i$ within at most $M$ steps. Obviously, we know that $G^{[M]} \uparrow G$ as $M \to \infty$. By conditioning on the state reached at the first transition, we have

$$G^{[M+1]} \le \sum_{k=0}^{\infty} A_k * \big(G^{[M]}\big)^{(k)}. \tag{8}$$

Denote by $G_{[N]}$ the iterates defined in Step 1. By conditioning on the first return time to level $i$ and repeating the same arguments, we have

$$G^{[M]} \le G_{[M]}, \quad M \ge 0. \tag{9}$$

By (8) and (9) and induction, taking the limit as $M \to \infty$ gives $G \le G_{\infty}$. Finally, since $G$ is itself a non-negative solution of (4), the minimality of $G_{\infty}$ gives $G_{\infty} \le G$, and hence $G_{\infty} = G$, as required. □
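The sequence constructed in Step 1 is precisely the functional iteration used in practice; in the finite-phase analogue it reads G_{N+1} = Σ_i A_i G_N^i. A sketch with three hypothetical 2×2 blocks chosen so that the level drift is downward (the chain is then recurrent and G should come out stochastic):

```python
import numpy as np

A = [np.array([[0.4, 0.1], [0.2, 0.3]]),   # A_0: move down one level
     np.array([[0.2, 0.1], [0.1, 0.2]]),   # A_1: stay at the same level
     np.array([[0.1, 0.1], [0.1, 0.1]])]   # A_2: move up one level
assert np.allclose(sum(A).sum(axis=1), 1.0)   # blocks form a stochastic row

G = np.zeros((2, 2))                       # G_[0] = 0, as in the proof
for _ in range(2000):
    G = A[0] + A[1] @ G + A[2] @ G @ G     # G_{N+1} = sum_i A_i G_N^i
residual = np.abs(G - (A[0] + A[1] @ G + A[2] @ G @ G)).max()
assert residual < 1e-12                    # fixed point reached
assert np.allclose(G.sum(axis=1), 1.0, atol=1e-8)  # recurrent => G stochastic
```

Starting from zero and iterating upward mirrors the monotone sequence G_[N] of the proof; the iteration converges linearly for a chain with strictly negative drift, as here.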
In the following, we investigate the numerical computation of the invariant probability distribution for M/G/1-type chains. The key point is to set up the Ramaswami algorithm, a well-known result for M/G/1-type chains with finite phases, for M/G/1-type chains with continuous phases.
Theorem 5. Suppose that the M/G/1-type Markov chain is ψ-irreducible and Harris positive recurrent. Let the unique invariant probability measure be Π with $\Pi = (\pi_0, \pi_1, \ldots)$. Then, the measure Π satisfies the following recursive formula

$$\pi_{i+1} = \Big(\pi_0 \bar{B}_{i+1} + \sum_{k=1}^{i} \pi_k \bar{A}_{i+1-k}\Big) (I - \bar{A}_1)^{-1}, \quad i \ge 0, \tag{10}$$

where

$$\bar{A}_k = \sum_{j=k}^{\infty} A_j * G^{(j-k)}, \qquad \bar{B}_k = \sum_{j=k}^{\infty} B_j * G^{(j-k)}, \tag{11}$$

and $\pi_0$ is the unique (up to normalization) solution of the equation $\pi_0 = \pi_0 \bar{B}_0$.

Proof. By (6), we know that, for any $x$ and $B$, the kernel function $G^{(k)}(x, B)$ is the probability that the Markov chain first returns to level $i$ by hitting the set $\{i\} \times B$, given that it starts from the state $(i+k, x)$. The transition kernel function of the Markov chain embedded at epochs of visits to the set $E_0 \cup \cdots \cup E_n$ is given by the censored kernel on that set.
We now explain how to determine this transition kernel. Its first $k$ block columns are the same as those of $P$, since the chain can only move down by one level at a time. As for the $(k+1)$-th (i.e., last) block column, its first entry is as follows.
The equality (11) can be proved in a similar way.
Since the chain is ψ-irreducible and Harris positive recurrent, starting from any state in $M$, the set $M$ is returned to infinitely often almost surely, so the censored Markov chain on $M$ is well defined. Thus, the censored chain is also ψ-irreducible and Harris positive recurrent. Let $\nu$ be the unique invariant probability measure of the censored chain.
Next, we will demonstrate that is also an invariant measure of the censored chain . Define the measure by
By Proposition 10.4.8 in [14], we know that
(12)
and that it is an invariant measure for the censored chain. Since the censored chain is Harris positive recurrent, its invariant measure is unique up to a constant multiple. This shows that the restriction of Π equals $c\nu$ for some constant $c$; from this and (12), and since Π is a probability measure, we can obtain
Thus, we have proved that the restriction of Π is an invariant measure of the censored chain. Taking into account the last block equation of the invariance relation, we have
(13)
This proves (10).
To determine $\pi_0$, we consider the censored chain on the zero level set $E_0$, whose transition kernel is given by
By (13), we know that $\pi_0$ is an invariant measure of this censored chain. □
Applying Theorem 5 and performing the wavelet series expansion, we can obtain the following theorem directly.
Theorem 6. Suppose that the M/G/1-type Markov chain is ψ-irreducible and Harris positive recurrent and that Assumption 1 holds.
(i) The kernels $G$, $\bar{A}_k$ and $\bar{B}_k$ are in $\mathcal{K}(\Gamma)$, whose associated matrices satisfy

$$A_G = \sum_{i=0}^{\infty} A_{A_i} A_G^{\,i}, \qquad A_{\bar{A}_k} = \sum_{j=k}^{\infty} A_{A_j} A_G^{\,j-k}, \qquad A_{\bar{B}_k} = \sum_{j=k}^{\infty} A_{B_j} A_G^{\,j-k}.$$

(ii) The invariant probability measure Π is in $\mathcal{K}(\Gamma)$. Let $\boldsymbol{a}_{\pi_i}$ be the associated row vector of $\pi_i$. Then, the associated vectors satisfy

$$\boldsymbol{a}_{\pi_{i+1}} = \Big(\boldsymbol{a}_{\pi_0} A_{\bar{B}_{i+1}} + \sum_{k=1}^{i} \boldsymbol{a}_{\pi_k} A_{\bar{A}_{i+1-k}}\Big) \big(I - A_{\bar{A}_1}\big)^{-1}, \quad i \ge 0, \tag{14}$$

where $A_{\bar{A}_k}$ and $A_{\bar{B}_k}$ are as in (i).

Remark 1. (i) We note that the entries in the associated matrix of a kernel function may be negative. Hence, the associated matrices of the transition kernels do not, in general, form a stochastic transition matrix.
(ii) We now consider numerical algorithms for computing the associated matrix $A_G$. In the literature, several algorithms, including functional iteration ([17,18]), Newton iteration ([19]), the invariant subspace method ([20]), cyclic reduction ([21]) and Ramaswami reduction ([22]), have been developed to solve the G-matrix for M/G/1-type chains with a finite phase set. For a collection of these algorithms, please refer to [13]. Similar to what we did in Theorems 4 and 5, we can set up the corresponding algorithms for $A_G$ by modifying these algorithms from the finite phase to the general phase setting. We omit the details in order to avoid a tedious presentation.
5. Numerical Experiments
5.1. Discrete Wavelet Transforms
We need to perform discrete wavelet transforms for the numerical experiments. Without loss of generality, we assume that the phase space is truncated to a finite interval. In the following, we only give a brief presentation of the computational framework; please refer to Section 5 in [5] for more details.
We first consider numerical issues of the M/G/1-type Markov chain with a continuous phase, which are divided into the following steps:
Step 1: Choose appropriate real numbers and a positive integer N, and evenly sample N points from the truncated phase space. Applying the DWT in Algorithm 5.1 in [5] to the kernels produces the associated sample matrices.
Step 2: Solve for the associated sample matrix of G through the algorithms listed in (ii) of Remark 1, such as functional iteration, Newton iteration, the invariant subspace method, cyclic reduction and Ramaswami reduction.
Step 3: Solve the associated sample invariant probability vector using Theorem 6.
Step 4: Applying the IDWT in Algorithm 5.2 in [5] to the solved matrices and the invariant probability vector recovers the corresponding kernels and densities.
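Steps 1 and 4 hinge on the 2D discrete wavelet transform of a sampled kernel matrix K, which for an orthonormal single-level Haar basis is simply W K Wᵀ, inverted by Wᵀ(·)W. A minimal sketch (Haar only, one level, a hypothetical 4-point grid; Algorithms 5.1 and 5.2 of [5] are multi-level versions of the same idea):

```python
import numpy as np

def haar_matrix(n):
    """Single-level orthonormal Haar analysis matrix (n even)."""
    H = np.zeros((n, n))
    for k in range(n // 2):
        H[k, 2*k:2*k+2] = [1, 1]               # average (scaling) rows
        H[n//2 + k, 2*k:2*k+2] = [1, -1]       # difference (wavelet) rows
    return H / np.sqrt(2.0)

n = 4
W = haar_matrix(n)
assert np.allclose(W @ W.T, np.eye(n))         # rows are orthonormal

K = np.arange(n * n, dtype=float).reshape(n, n)  # hypothetical sampled kernel
C = W @ K @ W.T                                # forward 2D DWT (Step 1)
K_back = W.T @ C @ W                           # inverse 2D DWT (Step 4)
assert np.allclose(K_back, K)                  # perfect reconstruction
```

Because W is orthogonal, the transform is lossless, which is why kernel operations can be carried out entirely on the coefficient matrices C.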
Now, we consider the numerical issues of GI/M/1-type Markov chains with a continuous phase, which are also divided into four steps. The first and last steps are the same as those for the M/G/1-type Markov chains. In Step 3, we solve the associated sample invariant probability vector based on Theorem 3. For Step 2, we use the Ramaswami dual to solve the associated sample matrix of R. It is known that the Ramaswami dual [11] enables us to compute the matrix R for a GI/M/1-type chain with a finite phase set by computing the matrix G of a dual M/G/1-type Markov chain. The Ramaswami dual can be modified and extended to the case of M/G/1-type and GI/M/1-type chains with a continuous phase.
5.2. Illustration with Examples
5.2.1. Example 1: An M/G/1-Type Chain
The Markov chain in this example is modified from Example 2 in [5] by extending the tri-diagonal structure to the more general upper-Hessenberg setting.
Denote by the arrival times of a Poisson process with parameter . Let . Define a sequence of i.i.d. random variables , which are distributed with
where are non-negative constants such that . We define(15)
(16)
where . Let and ; then is an M/G/1-type chain, whose phase space is . Its transition kernels are derived as
(17)
and finally
(18)
The marginal invariant probability measures for this M/G/1-type chain have analytical expressions, as follows:
(19)
where and are, respectively, the level and phase marginal invariant probability measures. Take , , . From (19), we obtain the following exact values of the marginal level stationary probabilities
Evenly take 256 samples on the truncated phase interval as values of x and 256 samples as values of y, and choose the Haar wavelet for the wavelet transform. Figure 1 presents the numerical solutions of the kernel functions. The marginal distributions are obtained numerically based on the matrix G solved by functional iteration. The numerical solutions for the level marginal distribution and the phase marginal distribution are shown in Figure 2 and Figure 3, respectively, together with the corresponding analytical solutions. For each method used to derive G, we calculate its mean absolute error, defined as the average of the absolute differences between the numerical and exact marginal level stationary probabilities over the first levels. (We truncate the levels because the stationary probabilities at higher levels are small enough to be considered negligible.) In this example, the different methods of solving the matrix G lead to the same numerical solutions of the level and phase marginal distributions. According to Figure 4, the performances of the various methods are similar in terms of accuracy and computational time.
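The mean absolute error above is computed directly from the truncated level-marginal vectors. A sketch with a hypothetical truncation level and mock values (in the experiment, the exact values come from (19) and the numerical ones from the solver):

```python
import numpy as np

K = 7                                          # hypothetical truncation level
exact = 0.5 ** np.arange(1, K + 1)             # hypothetical exact pi_k values
exact /= exact.sum()
# mock solver output: exact values perturbed by an alternating 1e-4 error
numeric = exact + 1e-4 * np.array([1, -1, 1, -1, 1, -1, 1])
mae = np.abs(numeric - exact).mean()           # (1/K) * sum_k |pi_hat_k - pi_k|
assert abs(mae - 1e-4) < 1e-12                 # every term contributes 1e-4
```

The metric is deliberately restricted to the first K levels, matching the truncation used in the text, since the geometric tail of the level distribution is negligible beyond them.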
5.2.2. Example 2: A GI/M/1-Type Chain
Consider a first-come first-served single-server queueing system, which was considered in [1] for the theoretical analysis of the invariant probability distribution. Here, we consider the computational issue. In this queue, the service times and interarrival times follow general distribution functions. We assume that both the mean interarrival time and the mean service time are finite.
Let be the number of customers right before the arrival time of the nth customer and let be the remaining service time just after the nth arrival. Let be the departure time of the nth customer, and let be the remaining service time at time t of a customer who is receiving the service. Write
Then, is a GI/M/1-type Markov chain with discrete levels and continuous phases, whose transition probabilities are given by (1) with (see [1])
(20)
(21)
From [1], we know that if , then has an invariant probability measure with
where the constant d is a normalizing constant. To illustrate our algorithm, we compare the numerical solutions for the level marginal distribution with its analytical values. For the numerical calculation, let the interarrival time be uniformly distributed and let the service time be exponentially distributed with parameter $\mu$. Then we have, for
and, for . The kernels are calculated as follows. For , by (20),
For
and for . For , by (21), we have
Since and , the queuing system is stable if
It is well known that the level marginal invariant probability distribution is
where c is the solution of the fixed-point equation on the interval (0, 1). For the numerical experiments, we take . The constant c is solved to be approximately 0.2885, so the exact marginal level distribution can be obtained. On the other hand, we can apply the numerical algorithm of the previous section to approximate it. We then compare the numerical results with the analytical results, which provides a verification of the algorithm. Here, we do not consider the marginal phase distribution, since its closed form cannot be obtained.
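With uniform U(0, b) interarrival times and exponential(μ) service, the fixed-point equation for c reads c = E[e^{−μ(1−c)T}] = (1 − e^{−μ(1−c)b}) / (μ(1−c)b). A sketch that solves it by bracketing (the parameters μ and b below are hypothetical, not the ones used in the paper, so the root differs from 0.2885):

```python
import numpy as np
from scipy.optimize import brentq

mu, b = 3.0, 2.0   # hypothetical service rate and U(0, b) arrival support
# stability: mean service 1/mu must be below mean interarrival b/2

def f(c):
    s = mu * (1.0 - c) * b
    return c - (1.0 - np.exp(-s)) / s   # c = E[exp(-mu(1-c) T)], T ~ U(0, b)

c = brentq(f, 1e-9, 1.0 - 1e-9)         # f changes sign on (0, 1) when stable
assert 0.0 < c < 1.0
assert abs(f(c)) < 1e-10                # residual of the fixed-point equation
```

The function f is negative near 0 and positive near 1 whenever the stability condition holds, so Brent's method is guaranteed a root in the open interval.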
Evenly take 256 samples as values of x and 256 samples as values of y, and choose the Haar wavelet for the wavelet transform. The numerical solutions of the kernel functions are shown in Figure 5. The level marginal distribution of this queueing system can be computed by the algorithm in Section 5.1. We show the numerical solutions using G solved by functional iteration, together with the analytical solutions, in Figure 6. Among the five numerical methods mentioned in (ii) of Remark 1, we note that the invariant subspace method fails during the run of the algorithm, which may be caused by some matrices being non-invertible. With the numerical solutions of the marginal invariant probability distributions for the level and phase, we can further estimate the mean and variance of the queue length and of the remaining service time, which are listed in Table 1. This illustrates practical uses of invariant probability distributions.
Since we are not able to obtain analytical solutions of the marginal invariant probability distributions for the phase, only the analytical mean and variance for the level are presented in Table 1. When using Ramaswami reduction, the mean queue length is the most accurate among the four methods, but the variance of the queue length is not close to the analytical variance. In contrast, the mean and variance obtained using functional iteration are both relatively accurate.
We compare the four remaining methods according to their mean absolute errors and computation speeds. From Figure 7 and Table 1, functional iteration performs the best among the four methods, since it is both the fastest and the most accurate. When we raise the sample size from 256 to 512, it takes 20.98 s to solve using functional iteration. The computational times of cyclic reduction, Newton iteration and Ramaswami reduction are 116.35 s, 1997.53 s and 2191.45 s, respectively. The differences between the mean absolute errors of the numerical solutions and those obtained with a sample size of 256, however, are only around .
6. Conclusions
For the invariant probability measures of Harris positive recurrent GI/M/1-type and M/G/1-type Markov chains with discrete levels and a general phase set, we establish wavelet-based computational frameworks in this paper. A theoretical analysis framework is also established for M/G/1-type Markov chains. These results extend the known findings in [4,5] for QBD processes to the more general block-structured Markov chains considered here. Numerical experiments support the effectiveness of our numerical algorithms based on the DWT. An interesting observation in Example 2 is that, among the five adopted algorithms for the G-matrix, functional iteration performs the best, while the invariant subspace method may fail.
For future research, it would be interesting to consider block-structured continuous-time Markov processes with discrete levels and continuous phases. In this case, the processes should be described in terms of their extended generators. The research is expected to be more challenging when setting up these models and performing the theoretical and numerical analysis of their invariant probability measures.
Methodology, Y.L.; Software, S.J.; Writing—original draft, S.J., Y.L. and N.L.; Writing—review and editing, Y.L. and N.L.; Visualization, S.J. and N.L.; Funding acquisition, Y.L. All authors have read and agreed to the published version of the manuscript.
No data is used in this study.
The authors declare no conflict of interest.
Figure 2. Level marginal invariant probability distribution.
Figure 3. Phase marginal invariant probability distribution.
Figure 4. Difference among methods on the level marginal invariant probability distribution. The legend of the first plot includes mean absolute errors, and the legend of the second plot includes computational times. (FI: functional iteration, CR: cyclic reduction, NI: Newton iteration, RR: Ramaswami reduction and IS: invariant subspace.)
Figure 5. Numerical solutions of kernel functions. The right picture shows the kernel, and the left shows its dual kernel.
Figure 6. Level marginal invariant probability distribution.
Figure 7. Difference among methods on the level marginal invariant probability distribution. The legend of the first plot includes mean absolute errors, and the legend of the second plot includes computational times. (FI: functional iteration, CR: cyclic reduction, NI: Newton iteration and RR: Ramaswami reduction.)
Table 1. Mean and variance of level and phase.

| Method | Queue Length (Level) Mean | Queue Length (Level) Variance | Remaining Service Time (Phase) Mean | Remaining Service Time (Phase) Variance |
|---|---|---|---|---|
| FI | 0.3415 | 0.3680 | 0.3125 | 0.1127 |
| CR | 0.6964 | 1.3918 | 0.3345 | 0.1255 |
| NI | 0.6964 | 1.3918 | 0.3345 | 0.1255 |
| RR | 0.3911 | 0.9596 | 0.4385 | 0.1641 |
| Analytical | 0.4054 | 0.5698 | - | - |
Appendix A
Table A1. Summary of frequently used notations.

| Notation | Description |
|---|---|
| $E$ | State space of a Markov chain |
| $E_i$ | The level-$i$ set of the Markov chain |
| $\tau_{\le i}$ | The first return time to the level set $E_0 \cup \cdots \cup E_i$ |
| $P$ | Transition kernel matrix of a GI/M/1-type chain (1) |
| $P$ | Transition kernel matrix of an M/G/1-type chain (2) |
| $P^A$ | One-step transition kernel of a censored Markov chain on set $A$ |
| $\Pi$ | Invariant probability measure |
| $A_f$ | Associated matrix of a kernel function $f$ |
| $\mathcal{K}(\Gamma)$ | Set of kernel functions having a density function equal to zero outside of $\Gamma$ |
References
1. Tweedie, R.L. Operator-geometric stationary distributions for Markov chains, with application to queueing models. Adv. Appl. Probab.; 1982; 14, pp. 368-391. [DOI: https://dx.doi.org/10.2307/1426527]
2. Neuts, M.F. Matrix-Geometric Solutions in Stochastic Models: An Algorithmic Approach; Johns Hopkins University Press: Baltimore, MD, USA, 1981.
3. Breuer, L. Transient and stationary distributions for the GI/G/k Queue with Lebesgue-dominated inter-arrival time distribution. Queueing Syst.; 2003; 45, pp. 47-57. [DOI: https://dx.doi.org/10.1023/A:1025643801208]
4. Nielsen, B.F.; Ramaswami, V. A computational framework for a quasi birth and death process with a continuous phase variable. In ITC 15; Ramaswami, V., Wirth, P., Eds.; Elsevier: Amsterdam, The Netherlands, 1997; pp. 477-486.
5. Jiang, S.; Latouche, G.; Liu, Y. Wavelet transform for quasi-birth-death process with a continuous phase set. Appl. Math. Comput.; 2015; 252, pp. 354-376. [DOI: https://dx.doi.org/10.1016/j.amc.2014.12.023]
6. Neuts, M.F. Structured Stochastic Matrices of M/G/1 type and Their Applications; Marcel Dekker: New York, NY, USA, 1989.
7. Bini, D.A.; Meini, B.; Steffé, S.; Van Houdt, B. Structured Markov chains solver: Software tools. Proceedings of the SMCTOOLS; Pisa, Italy, 10 October 2006.
8. Bini, D.A.; Meini, B.; Steffé, S.; Van Houdt, B. Structured Markov chains solver: Algorithms. Proceedings of the SMCTOOLS; Pisa, Italy, 10 October 2006.
9. Bright, L.W. Matrix-Analytic Methods in Applied Probability; The University of Adelaide: Adelaide, Australia, 1996.
10. Ramaswami, V. Nonlinear matrix equations in applied probability-solution techniques and open problems. Siam Rev.; 1988; 30, pp. 256-263. [DOI: https://dx.doi.org/10.1137/1030046]
11. Ramaswami, V. A duality theorem for the matrix paradigms in queueing theory. Commun. Stat. Stoch. Model.; 1990; 6, pp. 151-161. [DOI: https://dx.doi.org/10.1080/15326349908807141]
12. Taylor, P.G.; Van Houdt, B. On The dual relationship between Markov chains of GI/M/1 and M/G/1 type. Adv. Appl. Probab.; 2010; 42, pp. 210-225. [DOI: https://dx.doi.org/10.1239/aap/1269611150]
13. Bini, D.A.; Latouche, G.; Meini, B. Numerical Methods for Structured Markov Chains; Oxford University Press: New York, NY, USA, 2005.
14. Meyn, S.P.; Tweedie, R.L. Markov Chains And Stochastic Stability; 2nd ed. Cambridge University Press: Cambridge, UK, 2009.
15. Mallat, S. A Wavelet Tour of Signal Processing; Academic Press: New York, NY, USA, 2009.
16. Daubechies, I. Orthonormal bases of compactly supported wavelets. Commun. Pure Appl. Math.; 1988; 41, pp. 909-996. [DOI: https://dx.doi.org/10.1002/cpa.3160410705]
17. Favati, P.; Meini, B. Relaxed functional iteration techniques for the numerical solution of M/G/1 type Markov chains. Bit Numer. Math.; 1998; 38, pp. 510-526. [DOI: https://dx.doi.org/10.1007/BF02510257]
18. Favati, P.; Meini, B. On functional iteration methods for solving nonlinear matrix equations arising in queueing problems. IMA J. Numer. Anal.; 1999; 19, pp. 39-49. [DOI: https://dx.doi.org/10.1093/imanum/19.1.39]
19. Latouche, G. Newton’s iteration for non-linear equations in Markov chains. IMA J. Numer. Anal.; 1994; 14, pp. 583-598. [DOI: https://dx.doi.org/10.1093/imanum/14.4.583]
20. Akar, N.; Sohraby, K. An invariant subspace approach in M/G/1 and G/M/1 type Markov chains. Stoch. Model.; 1997; 13, pp. 381-416. [DOI: https://dx.doi.org/10.1080/15326349708807433]
21. Bini, D.A.; Meini, B. On Cyclic Reduction Applied to a Class of Toeplitz-like Matrices Arising in Queueing Problems, Computations with Markov Chains; Springer: Boston, MA, USA, 1995.
22. Ramaswami, V. The generality of Quasi Birth-and-Death processes. Advances in Matrix Analytic Methods for Stochastic Models; Alfa, A.S.; Chakravarthy, S.R. Notable Publications: Branchburg, NJ, USA, 1998; pp. 93-113.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
We consider the computing issues of the steady probabilities for block-structured discrete-time Markov chains that are of upper-Hessenberg or lower-Hessenberg transition kernels with a continuous phase set. An effective computational framework is proposed based on the wavelet transform, which extends and modifies the arguments in the literature for quasi-birth-death (QBD) processes. A numerical procedure is developed for computing the steady probabilities based on the fast discrete wavelet transform, and several examples are presented to illustrate its effectiveness.
1 School of Traffic and Logistics, Central South University of Forestry and Technology, Changsha 410004, China
2 Department of Statistics and Probability, Michigan State University, East Lansing, MI 48824, USA
3 School of Mathematics and Statistics, HNP-LAMA, New Campus, Central South University, Changsha 410083, China