
AttA2C - Attention-based Curiosity-driven Exploration in Deep Reinforcement Learning

Author: Patrik Reizinger, MSc student in Electrical Engineering

Supervisor and Co-author: Márton Szemenyei, lecturer

Organization: Budapest University of Technology and Economics, Department of Control Engineering and Information Technology

Supplementary material for the paper Attention-based Curiosity-driven Exploration in Deep Reinforcement Learning submitted to ICASSP 2020. Preprint available at https://arxiv.org/abs/1910.10840.

Table of contents

- General
- Proposed methods
- Results
- Cite

General

The aim of this project is to develop new exploration strategies for Reinforcement Learning that help agents generalize better. The focus is on curiosity-based methods, in particular the Intrinsic Curiosity Module (ICM) from the paper Curiosity-driven Exploration by Self-supervised Prediction, which this work builds upon.
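For context, the ICM derives an intrinsic reward from the prediction error of a learned forward model operating in a feature space trained jointly with an inverse model. The following is a minimal sketch of that idea in PyTorch; the class, layer sizes, and names are illustrative and do not reflect this repository's actual modules.

```python
import torch
import torch.nn as nn

# Minimal ICM sketch (after Pathak et al., 2017). All names and sizes are
# illustrative; this is not the module defined in this repository.
class ICM(nn.Module):
    def __init__(self, obs_dim, action_dim, feat_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, feat_dim), nn.ReLU())
        # Forward model: predicts next-state features from (features, action).
        self.forward_model = nn.Linear(feat_dim + action_dim, feat_dim)
        # Inverse model: predicts the taken action from consecutive features.
        self.inverse_model = nn.Linear(2 * feat_dim, action_dim)

    def forward(self, obs, next_obs, action_onehot):
        phi, phi_next = self.encoder(obs), self.encoder(next_obs)
        phi_next_pred = self.forward_model(torch.cat([phi, action_onehot], dim=-1))
        # Intrinsic reward: forward-model prediction error in feature space.
        intrinsic_reward = 0.5 * (phi_next_pred - phi_next.detach()).pow(2).sum(dim=-1)
        action_logits = self.inverse_model(torch.cat([phi, phi_next], dim=-1))
        return intrinsic_reward, action_logits
```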

This project is implemented in PyTorch, using the stable-baselines package for benchmarking.
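As a hypothetical example of such a baseline run (environment ID, policy, and timestep budget chosen for illustration, not taken from the paper), stable-baselines' A2C can be trained on an Atari game like so:

```python
from stable_baselines import A2C
from stable_baselines.common.cmd_util import make_atari_env
from stable_baselines.common.vec_env import VecFrameStack

# Train a vanilla A2C baseline on Breakout; all settings are illustrative.
env = VecFrameStack(make_atari_env("BreakoutNoFrameskip-v4", num_env=4, seed=0), n_stack=4)
model = A2C("CnnPolicy", env, verbose=1)
model.learn(total_timesteps=1_000_000)
```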

Proposed methods

The paper proposes two attention-based extensions: AttA2C, which incorporates attention into the A2C agent, and the Rational Curiosity Module (RCM), an attention-augmented variant of the ICM. See the preprint for architectural details.
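As a rough illustration of the attention idea, here is a minimal sketch of feature-wise soft gating; the actual architectures are described in the preprint, and this snippet is not taken from the repository.

```python
import torch.nn as nn

# Feature-wise soft attention gate: learns a weight in (0, 1) per feature.
# A minimal sketch of the general idea, not the paper's exact architecture.
class FeatureAttention(nn.Module):
    def __init__(self, feat_dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Sigmoid())

    def forward(self, features):
        return features * self.gate(features)
```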

Results

Experiments were carried out on three Atari games: Breakout, Pong and Seaquest, each in its v0 and v4 variants (the former is stochastic, as it repeats the previous action with probability 0.25).
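For reference, the two variants can be created with OpenAI Gym as follows (assuming the classic Atari registrations, where v0 sets repeat_action_probability=0.25 and v4 sets it to 0):

```python
import gym

stochastic_env = gym.make("Breakout-v0")     # sticky actions with p = 0.25
deterministic_env = gym.make("Breakout-v4")  # no action-repeat stochasticity
```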

Breakout

Pong

Seaquest

Cite

If you find this work useful, please cite the following paper:

@article{reizinger2019attention,
  title={Attention-based Curiosity-driven Exploration in Deep Reinforcement Learning},
  author={Reizinger, Patrik and Szemenyei, M{\'a}rton},
  journal={arXiv preprint arXiv:1910.10840},
  year={2019}
}