Curiosity-driven algorithm for reinforcement learning


Bibliographic Details
Main Author: Tsybulko, Vitalii
Other Authors: Faculty of Information Technology, Information Technology, University of Jyväskylä
Format: Master's thesis
Language: eng
Published: 2019
Online Access: https://jyx.jyu.fi/handle/123456789/64268
Description
Summary: One problem with current Reinforcement Learning algorithms is finding a balance between exploiting existing knowledge and exploring for new experience. A curiosity exploration bonus has been proposed to address this problem, but current implementations are vulnerable to stochastic noise inside the environment. The new approach presented in this thesis utilises an exploration bonus based on the predicted novelty of the next state, which protects exploration from noise during training. This work also introduces a new way of combining extrinsic and intrinsic rewards. Together, these two improvements address several problems that Reinforcement Learning algorithms have faced until now.
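
The abstract does not spell out the algorithm, so the following is only a minimal, illustrative sketch of the general family it belongs to: forward-model curiosity, where the prediction error of a learned dynamics model serves as an intrinsic "novelty" reward, combined with the extrinsic reward by a weighted sum. It does not reproduce the thesis's specific contributions (predicting the novelty of the next state, or its particular reward-combination scheme); the class name CuriosityBonus and the hyperparameters lr and beta are assumptions made for illustration.

    import numpy as np

    class CuriosityBonus:
        # Illustrative forward-model curiosity module; not the thesis's exact method.
        def __init__(self, state_dim, action_dim, lr=1e-2):
            rng = np.random.default_rng(0)
            # Linear forward model: predicts next-state features from (state, action).
            self.w = rng.normal(scale=0.1, size=(state_dim + action_dim, state_dim))
            self.lr = lr

        def intrinsic_reward(self, state, action, next_state):
            x = np.concatenate([state, action])
            pred = x @ self.w                       # predicted next state
            error = next_state - pred
            self.w += self.lr * np.outer(x, error)  # one online SGD step on squared error
            return float(np.mean(error ** 2))       # high error -> novel transition

    def combined_reward(extrinsic, intrinsic, beta=0.1):
        # Simple weighted sum; the thesis proposes its own combination scheme.
        return extrinsic + beta * intrinsic

    # Usage: reward a transition more when the forward model predicts it poorly.
    bonus = CuriosityBonus(state_dim=4, action_dim=2)
    s, a, s_next = np.zeros(4), np.array([1.0, 0.0]), np.ones(4)
    r_total = combined_reward(extrinsic=1.0,
                              intrinsic=bonus.intrinsic_reward(s, a, s_next))

Plain prediction-error bonuses of this kind are exactly what the abstract criticises as noise-sensitive: in a stochastic environment the prediction error never vanishes, so the agent is drawn to noise rather than genuine novelty, which is what motivates the thesis's predicted-novelty bonus.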