Abstract

In multi-period algorithmic trading, identifying algorithms well suited to risk-averse strategies is a challenging task. This study explored the application of model-free reinforcement learning (RL) to algorithmic trading and analyzed the relationship between risk-averse strategies and the implementation of RL algorithms, including Q-learning, Greedy-GQ, and SARSA. The data for this quantitative research comprised one year of E-mini NASDAQ-100 futures (2023-2024). Over 7,500 simulation results substantiated a proof of concept that Q-learning can successfully generate risk-adjusted trading signals in the highly liquid, technology-focused futures market. With an optimized configuration of hyperparameters, including the look-back period and the basis and reward functions, Q-learning delivered nearly twice the returns of the competing RL algorithms. Beyond absolute returns, Q-learning exhibited lower volatility across key risk metrics and outperformed the NASDAQ-100 benchmark by approximately 75 percentage points. These findings suggest that reinforcement learning is a promising artificial intelligence and machine learning framework for alpha-generating strategies in systematic trading.
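To make the abstract's core technique concrete, the sketch below shows tabular Q-learning applied to a trading-signal setting. It is an illustrative reconstruction, not the dissertation's actual method: the three-bucket return state, the action set {short, flat, long}, the reward (position times next-period return), and all hyperparameter values are assumptions chosen for clarity.

```python
import random

def discretize(ret, thresh=0.001):
    """Map a one-period return to a coarse state: 0 = down, 1 = flat, 2 = up.
    The three-bucket scheme and threshold are illustrative assumptions."""
    if ret < -thresh:
        return 0
    if ret > thresh:
        return 2
    return 1

def train_q(prices, alpha=0.1, gamma=0.95, epsilon=0.1, episodes=50, seed=0):
    """Tabular Q-learning over (state, action) pairs.
    Actions: 0 = short, 1 = flat, 2 = long; position = action - 1.
    Reward is the position times the next period's return."""
    rng = random.Random(seed)
    rets = [(b - a) / a for a, b in zip(prices, prices[1:])]
    q = {(s, a): 0.0 for s in range(3) for a in range(3)}
    for _ in range(episodes):
        for t in range(len(rets) - 1):
            s = discretize(rets[t])
            # Epsilon-greedy action selection.
            if rng.random() < epsilon:
                a = rng.randrange(3)
            else:
                a = max(range(3), key=lambda x: q[(s, x)])
            reward = (a - 1) * rets[t + 1]
            s2 = discretize(rets[t + 1])
            # Q-learning update: bootstrap off the best next-state action.
            best_next = max(q[(s2, x)] for x in range(3))
            q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
    return q
```

On a steadily rising price series, the learned table assigns the "up" state a higher value for going long than for going short, which is the behavior a trend-following signal generator would need before any risk adjustment is layered on.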

Details

Title
Reinforcement Learning for Algorithmic Trading in Financial Markets
Author
Gityforoze, Soheil
Publication year
2025
Publisher
ProQuest Dissertations & Theses
ISBN
9798288882517
Source type
Dissertation or Thesis
Language of publication
English
ProQuest document ID
3233860065
Copyright
Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.