PBCS: Efficient Exploration and Exploitation Using a Synergy Between Reinforcement Learning and Motion Planning
Conference paper, 2020


Guillaume Matheron
Nicolas Perrin
Olivier Sigaud

Abstract

The exploration-exploitation trade-off is at the heart of reinforcement learning (RL). However, most continuous control benchmarks used in recent RL research only require local exploration. This has led to the development of algorithms with only basic exploration capabilities, which behave poorly in benchmarks that require more versatile exploration. For instance, as demonstrated in our empirical study, state-of-the-art RL algorithms such as DDPG and TD3 are unable to steer a point mass through even small 2D mazes. In this paper, we propose a new algorithm called “Plan, Backplay, Chain Skills” (PBCS) that combines motion planning and reinforcement learning to solve hard exploration environments. In a first phase, a motion planning algorithm is used to find a single good trajectory; an RL algorithm is then trained using a curriculum derived from that trajectory, by combining a variant of the Backplay algorithm with skill chaining. We show that this method outperforms state-of-the-art RL algorithms in 2D maze environments of various sizes, and is able to improve on the trajectory obtained in the motion planning phase.
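The Backplay-style curriculum described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the success-rate threshold, and the one-step-back schedule are all assumptions made for the example. The idea is that training episodes initially reset the agent near the end of the trajectory found by motion planning, and the reset point moves progressively earlier as the agent masters each stage.

```python
# Illustrative sketch (not the paper's code) of a Backplay-style curriculum
# over a demonstration trajectory. `threshold` and the backward step size
# are hypothetical parameters chosen for this example.

def backplay_reset_index(demo, stage_success_rates, threshold=0.8):
    """Return the index of the reset state along the demonstration `demo`.

    Training starts near the goal (the last state of `demo`); each time the
    agent's success rate from the current reset state exceeds `threshold`,
    the reset point moves one step earlier, until it reaches the true start.
    """
    idx = len(demo) - 1  # begin one state before the goal region
    for rate in stage_success_rates:
        if rate >= threshold and idx > 0:
            idx -= 1  # agent mastered this stage: move the start back
    return idx

demo = ["s0", "s1", "s2", "s3", "s4"]  # states of the planned trajectory
# After three training stages that each exceed the threshold,
# episodes reset three steps earlier along the trajectory:
print(backplay_reset_index(demo, [0.9, 0.85, 0.95]))  # -> 1
```

In PBCS this curriculum is combined with skill chaining, so that each segment of the trajectory can be handled by a separate policy rather than a single one; the sketch above only illustrates the backward-moving reset schedule.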


Main file: article.pdf (1.41 MB)
Origin: files produced by the author(s)

Dates and versions

hal-03080918, version 1 (08-10-2024)


Cite

Guillaume Matheron, Nicolas Perrin, Olivier Sigaud. PBCS: Efficient Exploration and Exploitation Using a Synergy Between Reinforcement Learning and Motion Planning. Artificial Neural Networks and Machine Learning – ICANN 2020, Sep 2020, Bratislava, Slovakia. pp.295-307, ⟨10.1007/978-3-030-61616-8_24⟩. ⟨hal-03080918⟩
82 views, 4 downloads
