
Published in

2010 Latin American Robotics Symposium and Intelligent Robotics Meeting

DOI: 10.1109/lars.2010.24


Using Transfer Learning to Speed-Up Reinforcement Learning: A Case-Based Approach

This paper is available in a repository.


Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

Reinforcement Learning (RL) is a well-known technique for solving problems in which agents must learn, through trial and error, to act successfully in an unknown environment. However, the time the agent needs to learn makes this technique too inefficient for applications with real-world demands. This paper investigates the use of Transfer Learning (TL) between agents to speed up the well-known Q-learning RL algorithm. The approach presented here uses the cases in a case base as heuristics to accelerate the Q-learning algorithm, combining Case-Based Reasoning (CBR) and Heuristically Accelerated Reinforcement Learning (HARL) techniques. A set of empirical evaluations was conducted in the Mountain Car problem domain, where the actions learned while solving the 2D version of the problem can be used to speed up the learning of policies for its 3D version. The experiments compared the Q-learning algorithm, the Heuristically Accelerated Q-Learning (HAQL) algorithm, and the TL-HAQL algorithm proposed here. The results show that using a case base for transfer learning can significantly improve the agent's performance, making it learn faster than with either RL or HARL methods alone.
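To make the combination concrete, below is a minimal sketch of how a HARL-style agent might bias Q-learning action selection with cases retrieved from a source-task case base, in the spirit of the TL-HAQL approach the abstract describes. This is not the paper's actual implementation: the class names, the nearest-neighbour retrieval, and the parameters xi (heuristic weight) and eta (heuristic bonus) are all illustrative assumptions.

import random
from collections import defaultdict

class Case:
    """A stored (state, action) pair learned in the source task (e.g. 2D Mountain Car)."""
    def __init__(self, state, action):
        self.state, self.action = state, action

class CaseBase:
    """Toy case base: retrieval by nearest stored state (squared Euclidean distance)."""
    def __init__(self, cases):
        self.cases = cases

    def retrieve(self, state):
        if not self.cases:
            return None
        return min(self.cases,
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(c.state, state)))

class CaseBasedHAQL:
    """Q-learning whose action selection is biased by a case-derived heuristic H(s, a)."""
    def __init__(self, actions, alpha=0.1, gamma=0.99, epsilon=0.1, xi=1.0, eta=10.0):
        self.Q = defaultdict(float)   # tabular Q(s, a), states as tuples of floats
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.xi = xi                  # weight of the heuristic term in action selection
        self.eta = eta                # bonus given to the action suggested by a case

    def heuristic(self, state, action, case_base):
        # H(s, a) rewards the action recommended by the most similar source-task case.
        case = case_base.retrieve(state)
        return self.eta if case is not None and case.action == action else 0.0

    def select_action(self, state, case_base):
        # Epsilon-greedy over Q(s, a) + xi * H(s, a): the heuristic only steers
        # exploration toward case-suggested actions; it never changes the Q-values.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions,
                   key=lambda a: self.Q[(state, a)]
                   + self.xi * self.heuristic(state, a, case_base))

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update, unchanged by the case base.
        best_next = max(self.Q[(next_state, a)] for a in self.actions)
        td_error = reward + self.gamma * best_next - self.Q[(state, action)]
        self.Q[(state, action)] += self.alpha * td_error

In the 2D-to-3D transfer scenario, cases recorded from the learned 2D policy would first have to be mapped into the 3D state space before retrieval; how that mapping is defined is specific to the paper's method and is not shown here.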