Abstract

Limited research exists on which machine learning algorithms are best suited to scenarios of strategic interest. By comparing the performance of two popular approaches, genetic algorithms and model-based reinforcement learning, this thesis demonstrates which performs best in particular strategic environments.

To maintain generality, performance is measured along four axes across varying environments: theoretical guaranteed reward, fewest iterations needed to achieve acceptable reward, highest limit of learning, and tolerance of heterogeneous opponent environments. Accordingly, the environments are two-person games with varying degrees of opponent-strategy heterogeneity. Measurements are obtained by comparing reward against learning iterations while the algorithms compete with statically designed opponents.
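The measurement setup described above can be sketched as follows. This is an illustrative toy only, not the thesis's actual implementation: the payoff matrix (Prisoner's Dilemma), the always-cooperate static opponent, and the mutation-and-selection loop are all assumptions chosen to show how a genetic-style learner's reward can be tracked per iteration against a fixed opponent.

```python
import random

# Assumed payoff matrix (row player's payoffs in a Prisoner's Dilemma);
# the thesis's actual games may differ.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0,
          ("D", "C"): 5, ("D", "D"): 1}

def static_opponent():
    """A statically designed opponent: always cooperates (an assumption)."""
    return "C"

def avg_payoff(defect_prob, rounds=50):
    """Average payoff of a strategy (probability of defecting)
    over repeated rounds against the static opponent."""
    total = 0
    for _ in range(rounds):
        my_move = "D" if random.random() < defect_prob else "C"
        total += PAYOFF[(my_move, static_opponent())]
    return total / rounds

def genetic_style_learner(iterations=20, pop_size=8):
    """Toy mutation-and-selection loop; records the best strategy's
    payoff at each learning iteration (the reward-vs-iterations curve)."""
    population = [random.random() for _ in range(pop_size)]
    curve = []
    for _ in range(iterations):
        scored = sorted(population, key=avg_payoff, reverse=True)
        curve.append(avg_payoff(scored[0]))
        # keep the top half, mutate survivors to refill the population
        survivors = scored[: pop_size // 2]
        population = survivors + [min(1.0, max(0.0, s + random.gauss(0, 0.1)))
                                  for s in survivors]
    return curve

curve = genetic_style_learner()
print(curve)  # reward per learning iteration
```

Plotting such a curve for each algorithm against each static opponent is, in spirit, how the reward-versus-iterations comparison described above could be carried out.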

Experimental results indicate that genetic learning outperforms model-based learning in total average payoff, while model-based agents reach acceptable reward in fewer iterations and carry better performance guarantees. Neither method appears more affected than the other by increasing opponent heterogeneity.

Details

Title
A game-theoretic comparison of genetic and model-based agents in learning strategic interactions
Author
Buntain, Cody
Year
2010
Publisher
ProQuest Dissertations & Theses
ISBN
978-1-124-04741-6
Source type
Dissertation or Thesis
Language of publication
English
ProQuest document ID
577642340
Copyright
Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.