Abstract

This paper compares two deep reinforcement learning approaches to cyber security in software-defined networking. Neural Episodic Control to Deep Q-Network is implemented and compared with Double Deep Q-Networks. The two algorithms are set against each other in a format similar to a zero-sum game. A two-tailed t-test is applied to the results of the two games, comparing the number of turns the defender takes to win; a further comparison is made on the agents' scores in their respective games. The analysis determines which algorithm is the better in-game performer and whether the difference between them is statistically significant, i.e., whether one should be preferred over the other. It was found that there is no statistically significant difference between the two approaches.
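The abstract describes a two-tailed t-test on the number of turns the defender needs to win under each agent. As an illustration only, the sketch below runs a Welch two-sample t-test on made-up turn counts (the sample sizes, means, and variances are hypothetical, not taken from the paper), using only the Python standard library and a normal approximation for the two-tailed p-value, which is reasonable for large degrees of freedom:

```python
import math
import random
from statistics import mean, variance

def welch_t(a, b):
    """Two-sample Welch t statistic and its degrees of freedom."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)
    se2 = va / na + vb / nb
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

def two_tailed_p(t):
    """Two-tailed p-value via the normal approximation (large-df case)."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(t) / math.sqrt(2.0))))

random.seed(0)
# Hypothetical samples: turns taken for the defender to win under each agent.
# These numbers are illustrative, not the paper's data.
nec2dqn_turns = [random.gauss(30.0, 5.0) for _ in range(100)]
ddqn_turns = [random.gauss(30.5, 5.0) for _ in range(100)]

t, df = welch_t(nec2dqn_turns, ddqn_turns)
p = two_tailed_p(t)
print(f"t = {t:.3f}, df = {df:.1f}, p = {p:.3f}")
# If p exceeds the chosen significance level (commonly 0.05), the test
# fails to reject the null hypothesis of no difference between the agents.
```

Welch's variant is used here rather than the pooled-variance t-test because it does not assume the two agents produce equally variable turn counts; the abstract does not specify which variant the paper used.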

Details

Title
Model-Free Deep Reinforcement Learning in Software-Defined Networks
Publication title
arXiv.org; Ithaca
Publication year
2022
Publication date
Sep 3, 2022
Section
Computer Science
Publisher
Cornell University Library, arXiv.org
Source
arXiv.org
Place of publication
Ithaca
Country of publication
United States
University/institution
Cornell University Library arXiv.org
e-ISSN
2331-8422
Source type
Working Paper
Language of publication
English
Document type
Working Paper
Publication history
Online publication date
2022-09-07
Milestone dates
2022-09-03 (Submission v1)
First posting date
07 Sep 2022
ProQuest document ID
2711105732
Document URL
https://www.proquest.com/working-papers/model-free-deep-reinforcement-learning-software/docview/2711105732/se-2?accountid=208611
Copyright
© 2022. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Last updated
2024-06-26
Database
ProQuest One Academic