Abstract

In multi-agent systems, goal achievement is challenging when agents operate in ever-changing environments and face unseen situations in which not all goals are known or predefined. In such cases, agents need to identify the changes and adapt their behaviour by evolving their goals or even generating new goals to address the emerging requirements. Learning and practical reasoning techniques have been used to enable agents with limited knowledge to adapt to new circumstances. However, they depend on the availability of large amounts of data, require long exploration periods, and cannot help agents set new goals. Furthermore, while the accuracy of agents' actions can be improved by integrating conceptual features extracted from ontologies, such approaches do not address how to take suitable actions when unseen situations occur. This paper proposes a new Automatic Goal Generation Model (AGGM) that enables agents to create new goals to handle unseen situations and to adapt to their ever-changing environment in real time. AGGM is compared to Q-learning, SARSA, and Deep Q-Network (DQN) in a Traffic Signal Control System case study. The results show that AGGM outperforms the baseline algorithms in unseen situations while handling seen situations as well as the baselines do.

Details

Title
Using ontology to guide reinforcement learning agents in unseen situations
Author
Ghanadbashi, Saeedeh 1; Golpayegani, Fatemeh 2

1 University College Dublin, Room G34, Computer Science Department, Dublin, Ireland (GRID:grid.7886.1) (ISNI:0000 0001 0768 2743)
2 University College Dublin, Room 207, Computer Science Department, Dublin, Ireland (GRID:grid.7886.1) (ISNI:0000 0001 0768 2743)
Pages
1808-1824
Publication year
2022
Publication date
Jan 2022
Publisher
Springer Nature B.V.
ISSN
0924-669X
e-ISSN
1573-7497
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2622095075
Copyright
© The Author(s) 2021. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.