Quanxin Zhu 1 and Xinsong Yang 2 and Chuangxia Huang 3
Recommended by Nikolaos Papageorgiou
1, Department of Mathematics, Ningbo University, Ningbo 315211, China
2, Department of Mathematics, Honghe University, Mengzi 661100, China
3, The College of Mathematics and Computing Science, Changsha University of Science and Technology, Changsha 410076, China
Received 24 June 2009; Accepted 9 December 2009
1. Introduction
In this paper we study the average reward optimality problem for continuous-time jump Markov decision processes (MDPs) in general state and action spaces. The corresponding transition rates are allowed to be unbounded, and the reward rates may have neither upper nor lower bounds. Our approach to this problem is the well-known policy iteration algorithm (PIA), also known as Howard's policy improvement algorithm.
As is well known, the PIA was originally introduced by Howard [1] in 1960 for finite MDPs (i.e., MDPs with finite state and action spaces). Using the monotonicity of the sequence of iterated average rewards, he showed that the PIA converges in a finite number of steps. When the state space is not finite, however, there are well-known counterexamples showing that the PIA may fail to converge even though the action space is compact (see, e.g., [2-4]). Thus, an interesting problem is to find conditions ensuring that the PIA converges. An extensive literature has been devoted to this question; see, for instance, [1, 5-14] and the references therein. However, most of these references concentrate on discrete-time MDPs; see, for instance, [1, 5, 11] for finite discrete-time MDPs, [10, 15] for discrete-time MDPs with a finite state space and a compact action set, [13] for denumerable discrete-time MDPs, and [8, 9, 12] for discrete-time MDPs in Borel spaces. For continuous-time models, to the best of our knowledge, only Guo and Hernández-Lerma [6], Guo and Cao [7], and Zhu [14] have addressed this issue. In [6, 7, 14], the authors established the average reward optimality equation and the existence of average optimal stationary policies. However, the treatments in [6, 7] are restricted to a denumerable state space. In [14] we used the policy iteration approach to study the average reward optimality problem for continuous-time jump MDPs in general state and action spaces. One of the main contributions of [14] is the proof of the existence of the average reward optimality equation and of average optimal stationary policies. However, the PIA is not stated explicitly in [14], and so the average optimal reward value function and an average optimal stationary policy cannot be computed there. In this paper we further study the average reward optimality problem for this class of continuous-time jump MDPs in general state and action spaces. Our main objective is to use the PIA to compute, or at least approximate (when the PIA takes infinitely many steps to converge), the average optimal reward value function and an average optimal stationary policy. To do this, we first use the so-called "drift" condition, the standard continuity-compactness hypotheses, and an irreducibility and uniform exponential ergodicity condition to establish the average reward optimality equation and present the PIA. Then, under two different extra conditions, we show that the PIA yields the optimal (maximum) reward, an average optimal stationary policy, and a solution to the average reward optimality equation. A key feature of this paper is that the PIA provides a way to compute or at least approximate these quantities.
The remainder of this paper is organized as follows. In Section 2 we introduce the control model and the optimal control problem that we are concerned with. After stating our optimality conditions, some technical preliminaries, and the PIA in Section 3, we show in Section 4 that the PIA yields the optimal (maximum) reward, an average optimal stationary policy, and a solution to the average reward optimality equation. Finally, we conclude in Section 5 with some general remarks.
Notation 1.
If X is a Polish space (i.e., a complete and separable metric space), we denote by ℬ(X) the Borel σ-algebra.
2. The Optimal Control Problem
The material in this section is quite standard (see, e.g., [14, 16, 17]), and we state it only briefly. The control model that we are interested in is a continuous-time jump MDP of the following form:
[figure omitted; refer to PDF] where one has the following.
(i) S is a state space and it is supposed to be a Polish space.
(ii) A is an action space, which is also supposed to be a Polish space, and A(x) is a Borel set which denotes the set of available actions at state x∈S . The set K:={(x,a):x∈S, a∈A(x)} is assumed to be a Borel subset of S×A .
(iii) q(·|x,a) denotes the transition rates, which are supposed to satisfy the following properties: for each (x,a)∈K and D∈ℬ(S),
(Q1): D → q(D|x,a) is a signed measure on ℬ(S), and (x,a) → q(D|x,a) is Borel measurable on K;
(Q2): 0 ≤ q(D|x,a) < ∞ for all D∈ℬ(S) with x∉D;
(Q3): q(S|x,a)=0 and 0 ≤ -q(x|x,a) < ∞;
(Q4): q(x) := sup a∈A(x) (-q(x|x,a)) < ∞ for all x∈S.
It should be noted that property (Q3) shows that the model is conservative, and property (Q4) implies that the model is stable.
(iv) r(x,a) denotes the reward rate, which is assumed to be measurable on K. (Since r(x,a) is allowed to take both positive and negative values, it can also be interpreted as a cost rate.)
To introduce the optimal control problem that we are interested in, we need to introduce the classes of admissible control policies.
Let Πm be the family of functions πt (B|x) such that
(i) for each x∈S and t≥0, B → πt (B|x) is a probability measure on ℬ(A(x)),
(ii) for each x∈S and B∈ℬ(A(x)), t → πt (B|x) is a Borel measurable function on [0,∞).
Definition 2.1.
A family π=(πt ,t≥0)∈Πm is said to be a randomized Markov policy. In particular, if there exists a measurable function f on S with f(x)∈A(x) for all x∈S such that πt ({f(x)}|x)≡1 for all t≥0 and x∈S, then π is called a (deterministic) stationary policy and it is identified with f. The set of all stationary policies is denoted by F.
For each π=(πt ,t≥0)∈Πm , we define the associated transition rates q(D|x,πt ) and the reward rates r(x,πt ), respectively, as follows.
For each x∈S, D∈ℬ(S) and t≥0,
[figure omitted; refer to PDF] In particular, we will write q(D|x,πt ) and r(x,πt ) as q(D|x,f) and r(x,f), respectively, when π:=f∈F.
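The display omitted above presumably takes the usual form (cf. [14, 16]), aggregating the rates over the randomized action:
\[
q(D\,|\,x,\pi_t) := \int_{A(x)} q(D\,|\,x,a)\,\pi_t(da\,|\,x),
\qquad
r(x,\pi_t) := \int_{A(x)} r(x,a)\,\pi_t(da\,|\,x).
\]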
Definition 2.2.
A randomized Markov policy is said to be admissible if q(D|x,πt ) is continuous in t≥0 for all D∈ℬ(S) and x∈S.
The family of all such policies is denoted by Π. Obviously, Π⊇F, so Π is nonempty. Moreover, for each π∈Π, Lemma 2.1 in [16] ensures the existence of a Q-process, that is, a possibly substochastic and nonhomogeneous transition function Pπ (s,x,t,D) with transition rates q(D|x,πt ). As is well known, such a Q-process is not necessarily regular; that is, we might have Pπ (s,x,t,S)<1 for some state x∈S and t≥s≥0. To ensure the regularity of the Q-process, we shall use the following so-called "drift" condition, which is taken from [14, 16-18].
Assumption A.
There exist a (measurable) function w1 ≥1 on S and constants b1 ≥0, c1 >0, M1 >0 and M>0 such that
(1) ∫S w1(y) q(dy|x,a) ≤ -c1 w1(x) + b1 for all (x,a)∈K;
(2) q(x) ≤ M1 w1(x) for all x∈S, with q(x) as in (Q4);
(3) |r(x,a)| ≤ M w1(x) for all (x,a)∈K.
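To illustrate the drift condition, here is a simple example of our own (not from the paper): a controlled birth-death process on S={0,1,2,...} with A(x)=[0,θ], birth rate q({x+1}|x,a)=a, death rate q({x-1}|x,a)=μx for x≥1, q({x}|x,a)=-(a+μx), and reward rate r(x,a)=a-x. Taking w1(x):=x+1, a direct computation gives
\[
\int_S w_1(y)\,q(dy\,|\,x,a) \;=\; a-\mu x \;\le\; -\mu\,w_1(x) + (\theta+\mu),
\]
so Assumption A(1) holds with c1=μ and b1=θ+μ, while q(x)=θ+μx≤(θ+μ)w1(x) and |r(x,a)|≤θ+x≤(θ+1)w1(x) give Assumptions A(2) and A(3). Note that the transition rates in this example are unbounded in x.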
Remark 2.1 in [16] gives a discussion of Assumption A. In fact, Assumption A(1) is similar to conditions in the previous literature (see, e.g., [19, equation (2.4)]), and together with Assumption A(3) it is used to ensure the finiteness of the average expected reward criterion (2.5) below. In particular, Assumption A(2) is not required when the transition rates are uniformly bounded, that is, when sup x∈S q(x)<∞.
For each initial state x∈S at time s≥0 and each π∈Π, we denote by Ps,xπ and Es,xπ the probability measure determined by Pπ (s,x,t,D) and the corresponding expectation operator, respectively. Thus, for each π∈Π, by [20, pages 107-109] there exists a Borel measurable Markov process {xtπ } (we shall write {xtπ } as {xt } for simplicity when there is no risk of confusion) with values in S and transition function Pπ (s,x,t,D), which is completely determined by the transition rates q(D|x,πt ). In particular, if s=0, we write E0,xπ and P0,xπ as Exπ and Pxπ , respectively.
If Assumption A holds, then from [17, Lemma 3.1 ] we have the following facts.
Lemma 2.3.
Suppose that Assumption A holds. Then the following statements hold.
(a) For each x∈S , π∈Π and t≥0 , [figure omitted; refer to PDF] where the function w1 and constants b1 and c1 are as in Assumption A.
(b) For each u∈Bw1 (S) , x∈S and π∈Π , [figure omitted; refer to PDF]
For each x∈S and π∈Π, the expected average reward V(x,π) and the corresponding optimal reward value function V* (x) are defined as
[figure omitted; refer to PDF]
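The displays defining V(x,π) and V*(x) are omitted in this extraction; assuming the standard definitions used in the closely related literature (e.g., [14, 16, 17]), they read
\[
V(x,\pi) := \liminf_{T\to\infty}\frac{1}{T}\,E_x^{\pi}\!\left[\int_0^T r(x_t,\pi_t)\,dt\right],
\qquad
V^*(x) := \sup_{\pi\in\Pi} V(x,\pi), \qquad x\in S. \tag{2.5}
\]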
As a consequence of Assumption A(3 ) and Lemma 2.3(a), the expected average reward V(x,π) is well defined.
Definition 2.4.
A policy π* ∈Π is said to be average optimal if V(x,π* )=V* (x) for all x∈S .
The main goal of this paper is to give conditions for ensuring that the policy iteration algorithm converges.
3. Optimality Conditions and Preliminaries
In this section we state conditions for ensuring that the policy iteration algorithm (PIA) converges and give some preliminary lemmas that are needed to prove our main results.
To guarantee that the PIA converges, we need to establish the average reward optimality equation. To do this, in addition to Assumption A, we need two more assumptions. The first one is the following standard continuity-compactness hypothesis, which is taken from [14, 16-18]. It is similar to the corresponding version for discrete-time MDPs; see, for instance, [3, 8, 21-23] and their references. In particular, Assumption B(3) is not required when the transition rates are uniformly bounded, since it is only used to justify the application of the Dynkin formula.
Assumption B.
For each x∈S ,
(1) A(x) is compact;
(2) r(x,a) is continuous in a∈A(x), and the function ∫S u(y)q(dy|x,a) is continuous in a∈A(x) for each bounded measurable function u on S, and also for u:=w1 as in Assumption A;
(3) there exist a nonnegative measurable function w2 on S , and constants b2 ≥0, c2 >0 and M2 >0 such that [figure omitted; refer to PDF] for all (x,a)∈K .
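The display omitted in Assumption B(3) presumably takes the form used in the closely related continuous-time literature (e.g., [16, 17]), namely
\[
q(x)\,w_1(x) \le M_2\,w_2(x), \qquad \int_S w_2(y)\,q(dy\,|\,x,a) \le c_2\,w_2(x)+b_2 \quad \text{for all } (x,a)\in K,
\]
which is what allows the Dynkin formula to be applied to functions in Bw1 (S).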
The second one is an irreducibility and uniform exponential ergodicity condition. To state this condition, we need to introduce the weighted norm used in [8, 14, 22]. For the function w1 ≥1 in Assumption A, we define the weighted supremum norm ||·||w1 for real-valued functions u on S by
[figure omitted; refer to PDF] and the Banach space
[figure omitted; refer to PDF]
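The two omitted displays presumably give the usual weighted norm and weighted-norm space (cf. [8, 14, 22]):
\[
\|u\|_{w_1} := \sup_{x\in S}\frac{|u(x)|}{w_1(x)},
\qquad
B_{w_1}(S) := \big\{\, u : S\to\mathbb{R} \text{ measurable} : \|u\|_{w_1}<\infty \,\big\}.
\]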
Definition 3.1.
For each f∈F, the Markov process {xt }, with transition rates q(·|x,f), is said to be uniform w1 -exponentially ergodic if there exists an invariant probability measure μf on S such that [figure omitted; refer to PDF] for all x∈S, u∈Bw1 (S) and t≥0, where the positive constants R and ρ do not depend on f, and where μf (u):=∫S u(y)μf (dy).
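The omitted inequality in Definition 3.1 presumably takes the standard form (cf. [8, 14, 16]):
\[
\big|\,E_x^{f}[u(x_t)]-\mu_f(u)\,\big| \;\le\; R\,e^{-\rho t}\,\|u\|_{w_1}\,w_1(x).
\]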
Assumption C.
For each f∈F, the Markov process {xt }, with transition rates q(·|x,f), is uniform w1 -exponentially ergodic and λ-irreducible, where λ is a nontrivial σ-finite measure on ℬ(S) independent of f.
Remark 3.2.
(a) Assumption C is taken from [14] and is used to establish the average reward optimality equation. (b) Assumption C is similar to the uniform w1 -exponential ergodicity hypothesis for discrete-time MDPs; see [8, 22], for instance. (c) Some sufficient conditions, as well as examples, for verifying Assumption C are given in [6, 16, 19]. (d) Under Assumptions A, B, and C, for each f∈F the Markov process {xt }, with transition rates q(·|x,f), has a unique invariant probability measure μf such that [figure omitted; refer to PDF] (e) As in [9], for any given stationary policy f∈F, we consider two functions in Bw1 (S) to be equivalent, and do not distinguish between them, if they are equal μf -almost everywhere (a.e.). In particular, if u(x)=0 μf -a.e., then the function u is taken to be identically zero.
Under Assumptions A, B, and C, we can obtain several lemmas, which are needed to prove our main results.
Lemma 3.3.
Suppose that Assumptions A, B, and C hold, and let f∈F be any stationary policy. Then one has the following facts.
(a) For each x∈S , the function [figure omitted; refer to PDF] belongs to Bw1 (S) , where g(f):=∫S r(y,f)μf (dy) and w1 is as in Assumption A.
(b) (g(f),hf ) satisfies the Poisson equation [figure omitted; refer to PDF] for which the μf -expectation of hf is zero, that is, [figure omitted; refer to PDF]
(c) For all x∈S , |V(x,f)|≤Mb1 /c1 .
(d) For all x∈S , |g(f)|=|V(x,f)|≤Mb1 /c1 .
Proof.
Parts (a) and (b) follow from [14, Lemma 3.2]. We now prove (c). Indeed, from the definition of V(x,f) in (2.5), Assumption A(3), and Lemma 2.3(a) we have [figure omitted; refer to PDF] which gives (c). Finally, we verify part (d). By Assumption A(3) and Assumption C we easily obtain g(f)=V(x,f) for all x∈S, which together with part (c) yields the desired result.
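For the reader's convenience, the displays omitted in Lemma 3.3(a)-(b) presumably take the standard form (cf. [14, Lemma 3.2]): the bias
\[
h_f(x) := \int_0^\infty \big( E_x^{f}[r(x_t,f)] - g(f) \big)\,dt
\]
belongs to Bw1 (S), and the Poisson equation together with its normalization reads
\[
g(f) = r(x,f) + \int_S h_f(y)\,q(dy\,|\,x,f) \quad \text{for all } x\in S, \qquad \mu_f(h_f)=\int_S h_f(y)\,\mu_f(dy)=0.
\]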
The next result establishes the average reward optimality equation . For the proof, see [14, Theorem 4.1 ].
Theorem 3.4.
Under Assumptions A, B, and C, the following statements hold.
(a) There exist a unique constant g* , a function h* ∈Bw1 (S) , and a stationary policy f* ∈F satisfying the average reward optimality equation
[figure omitted; refer to PDF] [figure omitted; refer to PDF]
(b) g* =sup π∈Π V(x,π) for all x∈S.
(c) Any stationary policy f∈F realizing the maximum of (3.10) is average optimal, and so f* in (3.11) is average optimal.
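The omitted displays (3.10) and (3.11) presumably take the standard form of the average reward optimality equation (cf. [14, Theorem 4.1]):
\[
g^* = \max_{a\in A(x)}\Big[ r(x,a) + \int_S h^*(y)\,q(dy\,|\,x,a) \Big] \quad \text{for all } x\in S, \tag{3.10}
\]
\[
g^* = r(x,f^*(x)) + \int_S h^*(y)\,q(dy\,|\,x,f^*(x)) \quad \text{for all } x\in S. \tag{3.11}
\]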
Next, under Assumptions A, B, and C, we present the PIA that we are concerned with. To this end, we first give the following definition.
For any real-valued function u on S , we define the dynamic programming operator T as follows:
[figure omitted; refer to PDF]
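The omitted display presumably defines T in the standard way:
\[
Tu(x) := \sup_{a\in A(x)}\Big[ r(x,a) + \int_S u(y)\,q(dy\,|\,x,a) \Big], \qquad x\in S, \tag{3.12}
\]
where, since A(x) is compact by Assumption B(1), the supremum becomes a maximum whenever the bracketed expression is continuous in a.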
Algorithm A (policy iteration)
Step 1 (initialization).
Take n=0 and choose a stationary policy fn ∈F .
Step 2 (policy evaluation).
Find a constant g(fn ) and a real-valued function hfn on S satisfying the Poisson equation (3.7), that is, [figure omitted; refer to PDF] Obviously, by (3.12) and (3.13) we have [figure omitted; refer to PDF]
Step 3 (policy improvement).
Set fn+1 (x):=fn (x) for all x∈S for which [figure omitted; refer to PDF] otherwise (i.e., when (3.15) does not hold), choose fn+1 (x)∈A(x) such that [figure omitted; refer to PDF]
Step 4.
If fn+1 satisfies (3.15) for all x∈S , then stop (because, from Proposition 4.1 below, fn+1 is average optimal); otherwise, replace fn with fn+1 and go back to Step 2.
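To make Algorithm A concrete, the following is a minimal Python sketch for the special case of a finite state space and finite action sets, in which policy evaluation (Step 2) reduces to solving a linear system. It is only an illustration of the scheme above, not of the general Polish-space setting of this paper; the array layout, the normalization hfn(0)=0 (used in place of μfn(hfn)=0, which only shifts hfn by a constant), and the numerical tolerance are our own choices.

```python
import numpy as np

def policy_evaluation(Q, r, f):
    """Solve the Poisson equation r(x, f) + sum_y q(y|x, f) h(y) = g (Step 2)
    for the gain g(f) and bias h_f, normalized by h_f(0) = 0 instead of
    mu_f(h_f) = 0 (this changes h_f only by an additive constant)."""
    n = Q.shape[0]
    Qf = Q[np.arange(n), f, :]   # generator under policy f, shape (n, n)
    rf = r[np.arange(n), f]      # reward rates under policy f, shape (n,)
    # Unknowns (h(0), ..., h(n-1), g); equations Qf h - g*1 = -rf and h(0) = 0.
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    A[:n, :n] = Qf
    A[:n, n] = -1.0
    b[:n] = -rf
    A[n, 0] = 1.0                # normalization h(0) = 0
    sol = np.linalg.solve(A, b)  # nonsingular when Qf is irreducible
    return sol[n], sol[:n]       # g(f), h_f

def policy_iteration(Q, r, f0, tol=1e-10, max_iter=1000):
    """Q[x, a, y]: transition rates (each row Q[x, a, :] sums to zero);
    r[x, a]: reward rates; f0[x]: initial stationary policy (Step 1)."""
    f = np.array(f0, dtype=int)
    for _ in range(max_iter):
        g, h = policy_evaluation(Q, r, f)
        # Step 3 (policy improvement): maximize r(x, a) + sum_y q(y|x, a) h(y),
        # keeping the current action whenever it already attains the maximum.
        scores = r + Q @ h       # shape (n_states, n_actions)
        f_new = f.copy()
        for x in range(Q.shape[0]):
            if scores[x, f[x]] < scores[x].max() - tol:
                f_new[x] = int(scores[x].argmax())
        if np.array_equal(f_new, f):
            return g, h, f       # Step 4: stop; the current policy is optimal
        f = f_new
    return g, h, f
```

In this finite setting the tie-breaking rule of Step 3 (keep the current action when it already attains the maximum) prevents cycling, and the iteration is known to terminate after finitely many steps in the unichain case (cf. [1, 10]).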
Definition 3.5.
The policy iteration Algorithm A is said to converge if the sequence {g(fn )} converges to the average optimal reward value function in (2.5), that is, [figure omitted; refer to PDF] where g* is as in Theorem 3.4.
Under Assumptions A, B, and C, it follows from Proposition 4.1 below that the sequence {g(fn )} is nondecreasing; that is, g(fn )≤g(fn+1 ) holds for all n≥1. On the other hand, by Lemma 3.3(d), the sequence {g(fn )} is bounded. Therefore, there exists a constant g such that
[figure omitted; refer to PDF] Note that, in general, we only have g≤g*. In order to ensure that the policy iteration Algorithm A converges, that is, g=g*, we need, in addition to Assumptions A, B, and C, one further condition (Assumption D or D′ below).
Assumption D.
There exist a subsequence {hfm } of {hfn } and a measurable function h on S such that [figure omitted; refer to PDF]
Remark 3.6.
(a) Assumption D is the same as hypothesis H1 in [9], and Remark 4.6 in [9] gives a detailed discussion of it. (b) In particular, Assumption D trivially holds when the state space S is a countable set (with the discrete topology). (c) When the state space S is not countable, Assumption D also holds if the sequence {hfn } is equicontinuous.
Assumption D'.
There exists a stationary policy f* ∈F such that [figure omitted; refer to PDF]
Remark 3.7.
Assumption D′ is the same as hypothesis H2 in [9]. Obviously, Assumption D′ trivially holds when the state space S is a countable set (with the discrete topology) and A(x) is compact for all x∈S.
Finally, we conclude this section with a lemma (Lemma 3.8), which is needed to prove Theorem 4.2. For a proof, see, for instance, [24, Proposition 12.2].
Lemma 3.8.
Suppose that A(x) is compact for all x∈S , and let {fn } be a stationary policy sequence in F . Then there exists a stationary policy f∈F such that f(x)∈A(x) is an accumulation point of {fn (x)} for each x∈S .
4. Main Results
In this section we present our main results, Theorems 4.2 and 4.3. Before stating them, we first give the following proposition, which is needed in their proofs.
Proposition 4.1.
Suppose that Assumptions A, B, and C hold, and let f∈F be an arbitrary stationary policy. If a policy f̃∈F is such that [figure omitted; refer to PDF] then (a) [figure omitted; refer to PDF]
(b) if g(f)=g(f̃), then [figure omitted; refer to PDF]
(c) if f is average optimal, then [figure omitted; refer to PDF] where h* is as in Theorem 3.4;
(d) if g(f)=g(f̃), then (g(f),hf ) satisfies the average reward optimality equation (3.10), and so f is average optimal.
Proof.
(a) Combining (3.7) and (4.1) we have [figure omitted; refer to PDF] Integrating both sides of (4.5) with respect to μf̃ and using Remark 3.2(d), we obtain the desired result.
(b) If g(f)=g(f̃), we may rewrite the Poisson equation for f̃ as
[figure omitted; refer to PDF] Then, combining (4.5) and (4.6) we obtain [figure omitted; refer to PDF] Thus, from (4.7) and the Dynkin formula we get [figure omitted; refer to PDF] Letting t→∞ in (4.8) and using Assumption C, we have [figure omitted; refer to PDF] Now take k:=sup x∈S [hf (x)-hf̃ (x)]. Then take the supremum over x∈S in (4.9) to obtain [figure omitted; refer to PDF] and so [figure omitted; refer to PDF] which implies [figure omitted; refer to PDF] Hence, from Remark 3.2(e) and (4.12) we obtain (4.3).
(c) Since f is average optimal, by Definition 2.4 and Theorem 3.4(b) we have
[figure omitted; refer to PDF] Hence, the Poisson equation (3.7) for f becomes [figure omitted; refer to PDF] On the other hand, by (3.10) we obtain [figure omitted; refer to PDF] which together with (4.14) gives [figure omitted; refer to PDF] Thus, as in the proof of part (b), from (4.16) we see that (4.4) holds with k′ :=sup x∈S [hf (x)-h* (x)].
(d) By (3.7), (4.1), (4.3), and Q3 we have
[figure omitted; refer to PDF] which gives [figure omitted; refer to PDF] that is, [figure omitted; refer to PDF] Thus, as in the proof of Theorem 4.1 in [14], from Lemma 2.3(b), (3.7), and (4.19) we show that f is average optimal, that is, g(f)=g* . Hence, we may rewrite (4.19) as [figure omitted; refer to PDF] Thus, from (4.20) and part (c) we obtain the desired conclusion.
Theorem 4.2.
Suppose that Assumptions A, B, C, and D hold. Then the policy iteration Algorithm A converges.
Proof.
From Lemma 3.3(a) we see that the function hfn in (3.13) belongs to Bw1 (S), and so the function h in (3.19) also belongs to Bw1 (S). Now let h and the subsequence {hfm } of {hfn } be as in Assumption D. Then by Assumption D we have [figure omitted; refer to PDF] Moreover, from Lemma 3.8 there is a stationary policy f∈F such that f(x)∈A(x) is an accumulation point of {fm (x)} for each x∈S; that is, for each x∈S there exists a subsequence {mi } (depending on the state x) such that [figure omitted; refer to PDF] Also, by (3.13) we get [figure omitted; refer to PDF] On the other hand, take any real-valued measurable function m on S such that m(x)>q(x)≥0 for all x∈S. Then, for each x∈S and a∈A(x), by properties (Q1)-(Q3) we can define P(·|x,a) as follows: [figure omitted; refer to PDF] Obviously, P(·|x,a) is a probability measure on S. Thus, combining (4.23) and (4.24) we have [figure omitted; refer to PDF] Letting i→∞ in (4.25), by (3.18), (4.21), and (4.22), as well as the "extension of Fatou's lemma" 8.3.7 in [8], we obtain [figure omitted; refer to PDF] To complete the proof of Theorem 4.2, by Proposition 4.1(d) we only need to prove that g, h, and f satisfy the average reward optimality equations (3.10) and (3.11), that is, [figure omitted; refer to PDF] From (4.26) and the definition of T in (3.12) we obtain [figure omitted; refer to PDF] It remains to prove the reverse inequality, that is, [figure omitted; refer to PDF] By (3.19) we have [figure omitted; refer to PDF] Moreover, from Lemma 3.3(a) again we see that there exists a constant k such that [figure omitted; refer to PDF] which gives [figure omitted; refer to PDF] Thus, by (4.24), (4.31), (4.32), and the "extension of Fatou's lemma" 8.3.7 in [8], we obtain [figure omitted; refer to PDF] which implies [figure omitted; refer to PDF] Also, from (3.7), (3.16), and the definition of T in (3.12) we get [figure omitted; refer to PDF] Letting i→∞ in (4.35), by (3.18), (4.21), (4.22), (4.34), and the "extension of Fatou's lemma" 8.3.7 in [8], we obtain [figure omitted; refer to PDF] which gives [figure omitted; refer to PDF] This completes the proof of Theorem 4.2.
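For the reader's convenience, the probability measure in (4.24), whose display is omitted, is presumably the standard construction
\[
P(D\,|\,x,a) := \delta_x(D) + \frac{q(D\,|\,x,a)}{m(x)}, \qquad D\in\mathcal{B}(S),
\]
where δx denotes the Dirac measure at x; properties (Q1)-(Q3) together with m(x)>q(x)≥0 guarantee that P(·|x,a) is indeed a probability measure on S.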
Theorem 4.3.
Suppose that Assumptions A, B, C, and D′ hold. Then the policy iteration Algorithm A converges.
Proof.
To prove Theorem 4.3, from the proof of Theorem 4.2 we only need to verify that (4.26) and (4.27) hold for f* as in Assumption D′ and some function h in Bw1 (S). To do this, we first define two functions h, h′ in Bw1 (S) as follows: [figure omitted; refer to PDF] Then by (3.7) we get [figure omitted; refer to PDF] which together with (4.24) yields [figure omitted; refer to PDF] Applying the "extension of Fatou's lemma" 8.3.7 in [8] and letting n→∞ in (4.40), by (3.18), (4.38), and Assumption D′ we obtain [figure omitted; refer to PDF] which implies [figure omitted; refer to PDF] [figure omitted; refer to PDF] Thus, combining (4.42) and (4.43) we get [figure omitted; refer to PDF] Then, from the proof of Proposition 4.1(b) and (4.44) we have [figure omitted; refer to PDF] which together with (4.42), (4.43), and the definition of T in (3.12) gives [figure omitted; refer to PDF] It remains to prove the reverse inequality, that is, [figure omitted; refer to PDF] By (3.16) and (4.24) we get [figure omitted; refer to PDF] Then, letting n→∞ in (4.48), by (4.38), Assumption D′, and the "extension of Fatou's lemma" 8.3.7 in [8], we obtain [figure omitted; refer to PDF] which implies [figure omitted; refer to PDF] and so [figure omitted; refer to PDF] Thus, combining (4.46) and (4.51) we see that (4.47) holds, and so Theorem 4.3 follows.
5. Concluding Remarks
In the previous sections we have studied the policy iteration algorithm (PIA) for average reward continuous-time jump MDPs in Polish spaces. Under two slightly different sets of conditions we have shown that the PIA yields the optimal (maximum) reward, an average optimal stationary policy, and a solution to the average reward optimality equation. It should be mentioned that the approach presented here differs from the policy iteration approach used in [14], because the PIA in this paper provides a way to compute, or at least approximate (when the PIA takes infinitely many steps to converge), the average optimal reward value function and an average optimal stationary policy.
Acknowledgments
The authors would like to thank the editor and the anonymous referees for their comments and valuable suggestions, which have helped us improve the paper. This work was jointly supported by the National Natural Science Foundation of China (10801056), the Natural Science Foundation of Ningbo (201001A6011005), the Scientific Research Fund of Zhejiang Provincial Education Department, the K. C. Wong Magna Fund in Ningbo University, the Natural Science Foundation of Yunnan Provincial Education Department (07Y10085), the Natural Science Foundation of Yunnan Province (2008CD186), and the Foundation of the Chinese Society for Electrical Engineering (2008).
[1] R. A. Howard Dynamic Programming and Markov Processes , pp. viii+136, The Technology Press of M.I.T., Cambridge, Mass, USA, 1960.
[2] R. Dekker, "Counter examples for compact action Markov decision chains with average reward criteria," Communications in Statistics , vol. 3, no. 3, pp. 357-368, 1987.
[3] M. L. Puterman Markov Decision Processes: Discrete Stochastic Dynamic Programming , of Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics, pp. xx+649, John Wiley & Sons, New York, NY, USA, 1994.
[4] P. J. Schweitzer, "On undiscounted Markovian decision processes with compact action spaces," RAIRO--Operations Research , vol. 19, no. 1, pp. 71-86, 1985.
[5] E. V. Denardo, B. L. Fox, "Multichain Markov renewal programs," SIAM Journal on Applied Mathematics , vol. 16, pp. 468-487, 1968.
[6] X. P. Guo, O. Hernández-Lerma, "Drift and monotonicity conditions for continuous-time controlled Markov chains with an average criterion," IEEE Transactions on Automatic Control , vol. 48, no. 2, pp. 236-245, 2003.
[7] X. P. Guo, X. R. Cao, "Optimal control of ergodic continuous-time Markov chains with average sample-path rewards," SIAM Journal on Control and Optimization , vol. 44, no. 1, pp. 29-48, 2005.
[8] O. Hernández-Lerma, J. B. Lasserre Further Topics on Discrete-Time Markov Control Processes , vol. 42, of Applications of Mathematics, pp. xiv+276, Springer, New York, NY, USA, 1999.
[9] O. Hernández-Lerma, J. B. Lasserre, "Policy iteration for average cost Markov control processes on Borel spaces," Acta Applicandae Mathematicae , vol. 47, no. 2, pp. 125-154, 1997.
[10] A. Hordijk, M. L. Puterman, "On the convergence of policy iteration in finite state undiscounted Markov decision processes: the unichain case," Mathematics of Operations Research , vol. 12, no. 1, pp. 163-176, 1987.
[11] J. B. Lasserre, "A new policy iteration scheme for Markov decision processes using Schweitzer's formula," Journal of Applied Probability , vol. 31, no. 1, pp. 268-273, 1994.
[12] S. P. Meyn, "The policy iteration algorithm for average reward Markov decision processes with general state space," IEEE Transactions on Automatic Control , vol. 42, no. 12, pp. 1663-1680, 1997.
[13] M. S. Santos, J. Rust, "Convergence properties of policy iteration," SIAM Journal on Control and Optimization , vol. 42, no. 6, pp. 2094-2115, 2004.
[14] Q. X. Zhu, "Average optimality for continuous-time Markov decision processes with a policy iteration approach," Journal of Mathematical Analysis and Applications , vol. 339, no. 1, pp. 691-704, 2008.
[15] A. Y. Golubin, "A note on the convergence of policy iteration in Markov decision processes with compact action spaces," Mathematics of Operations Research , vol. 28, no. 1, pp. 194-200, 2003.
[16] X. P. Guo, U. Rieder, "Average optimality for continuous-time Markov decision processes in Polish spaces," The Annals of Applied Probability , vol. 16, no. 2, pp. 730-756, 2006.
[17] Q. X. Zhu, "Average optimality inequality for continuous-time Markov decision processes in Polish spaces," Mathematical Methods of Operations Research , vol. 66, no. 2, pp. 299-313, 2007.
[18] Q. X. Zhu, T. Prieto-Rumeau, "Bias and overtaking optimality for continuous-time jump Markov decision processes in Polish spaces," Journal of Applied Probability , vol. 45, no. 2, pp. 417-429, 2008.
[19] R. B. Lund, S. P. Meyn, R. L. Tweedie, "Computable exponential convergence rates for stochastically ordered Markov processes," The Annals of Applied Probability , vol. 6, no. 1, pp. 218-237, 1996.
[20] I. I. Gihman, A. V. Skorohod Controlled Stochastic Processes , pp. vii+237, Springer, New York, NY, USA, 1979.
[21] Q. X. Zhu, X. P. Guo, "Markov decision processes with variance minimization: a new condition and approach," Stochastic Analysis and Applications , vol. 25, no. 3, pp. 577-592, 2007.
[22] Q. X. Zhu, X. P. Guo, "Another set of conditions for Markov decision processes with average sample-path costs," Journal of Mathematical Analysis and Applications , vol. 322, no. 2, pp. 1199-1214, 2006.
[23] Q. X. Zhu, X. P. Guo, "Another set of conditions for strong n(n=-1,0) discount optimality in Markov decision processes," Stochastic Analysis and Applications , vol. 23, no. 5, pp. 953-974, 2005.
[24] M. Schäl, "Conditions for optimality in dynamic programming and for the limit of n -stage optimal policies to be optimal," Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete , vol. 32, no. 3, pp. 179-196, 1975.
Copyright © 2009 Quanxin Zhu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
We study the policy iteration algorithm (PIA) for continuous-time jump Markov decision processes in general state and action spaces. The corresponding transition rates are allowed to be unbounded, and the reward rates may have neither upper nor lower bounds. The criterion that we are concerned with is expected average reward. We propose a set of conditions under which we first establish the average reward optimality equation and present the PIA. Then under two slightly different sets of conditions we show that the PIA yields the optimal (maximum) reward, an average optimal stationary policy, and a solution to the average reward optimality equation.