
Abstract

Reinforcement Learning (RL) has gained substantial attention across diverse application domains and theoretical investigations. Existing literature on RL theory largely focuses on risk-neutral settings where the decision-maker learns to maximize the expected cumulative reward. However, in practical scenarios such as portfolio management and e-commerce recommendations, decision-makers often exhibit heterogeneous risk preferences toward outcome uncertainties, which cannot be well captured by the risk-neutral framework. Incorporating these preferences can be approached through utility theory, yet the development of risk-sensitive RL under general utility functions remains an open question for theoretical exploration. In this paper, we consider a scenario where the decision-maker seeks to optimize a general utility function of the cumulative reward in the framework of a Markov decision process (MDP). To facilitate the Dynamic Programming Principle and the Bellman equation, we enlarge the state space with an additional dimension that accounts for the cumulative reward. We propose a discretized approximation scheme to the MDP under the enlarged state space, which is tractable and key to algorithmic design. We then propose a modified value iteration algorithm that employs an epsilon-covering over the space of cumulative rewards. When a simulator is accessible, our algorithm efficiently learns a near-optimal policy with guaranteed sample complexity. In the absence of a simulator, our algorithm, designed with an upper-confidence-bound exploration approach, identifies a near-optimal policy while ensuring a guaranteed regret bound. Finally, we establish a novel theoretical regret lower bound for the risk-sensitive setting, and show that the regret of our algorithm matches this lower bound up to a small polynomial factor.
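To make the state-augmentation idea in the abstract concrete, the following is a minimal illustrative sketch (not the paper's implementation) of finite-horizon value iteration on an enlarged state (s, y), where y tracks the accumulated reward on an epsilon-grid and the terminal value is the utility of the total reward. The toy MDP, horizon, utility function U, epsilon, and all variable names are assumptions made for illustration only.

```python
import numpy as np

# Illustrative sketch: value iteration on an enlarged state space (s, y),
# where y is the cumulative reward tracked on an epsilon-grid.
# The MDP (P, r), horizon H, utility U, and epsilon are toy assumptions.

S, A, H = 3, 2, 4                      # states, actions, horizon
eps = 0.1                              # grid resolution for cumulative reward
rng = np.random.default_rng(0)

P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] is a distribution over next states
r = rng.uniform(0.0, 1.0, size=(S, A))       # per-step rewards in [0, 1]

U = lambda y: 1.0 - np.exp(-2.0 * y)         # an example risk-sensitive utility of the total reward

# Epsilon-cover of the range of cumulative rewards [0, H] (per-step rewards lie in [0, 1]).
grid = np.arange(0.0, H + eps, eps)

def snap(y):
    """Index of the grid point closest to cumulative reward y."""
    return int(np.clip(round(y / eps), 0, len(grid) - 1))

# V[h][s, i]: value at step h in state s with accumulated reward grid[i].
V = [np.zeros((S, len(grid))) for _ in range(H + 1)]
V[H] = np.tile(U(grid), (S, 1))              # terminal value is the utility of the total reward
pi = [np.zeros((S, len(grid)), dtype=int) for _ in range(H)]

for h in range(H - 1, -1, -1):
    for s in range(S):
        for i, y in enumerate(grid):
            q = np.empty(A)
            for a in range(A):
                j = snap(y + r[s, a])        # accumulated reward after taking action a
                q[a] = P[s, a] @ V[h + 1][:, j]
            V[h][s, i] = q.max()
            pi[h][s, i] = q.argmax()         # policy depends on (state, accumulated reward)

print("Value at state 0 with zero accumulated reward:", V[0][0, snap(0.0)])
```

Note that the resulting policy is a function of both the environment state and the accumulated reward, which is exactly why the enlarged state space restores the Bellman recursion for a general (non-linear) utility of the cumulative reward.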

Details

Business indexing term
1009240
Title
Risk-sensitive Markov Decision Process and Learning under General Utility Functions
Publication title
arXiv.org; Ithaca
Publication year
2024
Publication date
Dec 21, 2024
Section
Computer Science; Mathematics
Publisher
Cornell University Library, arXiv.org
Source
arXiv.org
Place of publication
Ithaca
Country of publication
United States
University/institution
Cornell University Library arXiv.org
e-ISSN
2331-8422
Source type
Working Paper
Language of publication
English
Document type
Working Paper
Publication history
Online publication date
2024-12-24
Milestone dates
2023-11-22 (Submission v1); 2024-12-21 (Submission v2)
First posting date
24 Dec 2024
ProQuest document ID
2892799011
Document URL
https://www.proquest.com/working-papers/risk-sensitive-markov-decision-process-learning/docview/2892799011/se-2?accountid=208611
Copyright
© 2024. This work is published under http://creativecommons.org/licenses/by-nc-nd/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Last updated
2024-12-25
Database
ProQuest One Academic