Robust Multi-agent Patrolling Strategies Using Reinforcement Learning
Title | Robust Multi-agent Patrolling Strategies Using Reinforcement Learning |
Publication Type | Conference Paper |
Year of Publication | 2014 |
Authors | Lauri F, Koukam A |
Editors | Siarry P, Idoumghar L, Lepagnot J |
Conference Name | SWARM INTELLIGENCE BASED OPTIMIZATION (ICSIBO 2014) |
Conference Sponsors | Univ Haute Alsace, Faculte Sci Tech; ROADEF; GDR MACS |
Publisher | Springer (Heidelberger Platz 3, D-14197 Berlin, Germany) |
ISBN Number | 978-3-319-12970-9; 978-3-319-12969-3 |
Keywords | Extended-GBLA, Multi-agent patrolling, Reinforcement Learning, robustness |
Abstract | Patrolling an environment involves a team of agents whose goal is usually to visit the most relevant areas continuously and as quickly as possible. In this paper, we follow up on the work of Santana et al., who formulated this problem as a reinforcement learning problem in which agents individually learn an MDP using Q-Learning to patrol their environment. We propose an alternative definition of the state space and of the reward function associated with an agent's MDP. Experimental evaluation shows that our approach substantially improves on the previous RL method across several instances (graph topologies and numbers of agents). Moreover, the RL approach proves robust, as it copes efficiently with most of the situations caused by the removal of agents during a patrolling simulation. |
DOI | 10.1007/978-3-319-12970-9_17 |
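Note | The abstract describes agents that individually learn an MDP with Q-Learning to patrol a graph. As an illustration only, the sketch below shows a generic tabular Q-Learning patroller on a small hypothetical graph, rewarded by node idleness; the paper's actual state-space and reward definitions (and the Extended-GBLA architecture) are not reproduced here, so all names and parameter values below are assumptions. |

```python
# Illustrative sketch only: a single tabular Q-Learning patroller on a small
# graph, rewarded by the idleness of the node it visits. The state here is
# simply the agent's current node; this is NOT the state space or reward
# function proposed in the paper.
import random
from collections import defaultdict

# Hypothetical patrol graph as an adjacency list (node -> neighbours).
GRAPH = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration


def q_learning_patrol(steps=10_000, seed=0):
    rng = random.Random(seed)
    q = defaultdict(float)                 # Q[(node, next_node)]
    idleness = {n: 0 for n in GRAPH}       # time since each node was last visited
    node = rng.choice(list(GRAPH))
    for _ in range(steps):
        neighbours = GRAPH[node]
        # Epsilon-greedy choice among outgoing edges.
        if rng.random() < EPSILON:
            nxt = rng.choice(neighbours)
        else:
            nxt = max(neighbours, key=lambda n: q[(node, n)])
        # Reward: idleness of the node being visited (an assumption used for
        # this sketch, not the paper's reward definition).
        reward = idleness[nxt]
        best_next = max(q[(nxt, n)] for n in GRAPH[nxt])
        q[(node, nxt)] += ALPHA * (reward + GAMMA * best_next - q[(node, nxt)])
        # Advance time: every node ages by one step, the visited node resets.
        for n in idleness:
            idleness[n] += 1
        idleness[nxt] = 0
        node = nxt
    return q


if __name__ == "__main__":
    q_table = q_learning_patrol()
    print({edge: round(value, 2) for edge, value in q_table.items()})
```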