NicolaBernini / PapersAnalysis

Analysis, summaries, cheatsheets about relevant papers

Reinforcement Learning, Fast and Slow #20

Open NicolaBernini opened 5 years ago

NicolaBernini commented 5 years ago

Overview

A read-through of the original paper:

Reinforcement Learning, Fast and Slow

Index

NicolaBernini commented 5 years ago

DRL

Paper Key Points

The comparison between humans and current DRL algorithms shows a huge difference in sample efficiency (how many samples are needed to reach a given level of performance): humans learn far faster than current DRL algorithms, which raises many interesting scientific questions:

Learning speed is a key limiting factor to overcome in order to move DRL beyond the niche of games and into more realistic settings

Current DRL Algos Learning Performance

To attain expert human-level performance on tasks such as Atari video games or chess, deep RL systems have required many orders of magnitude more training data than human experts themselves [22]. The critique is indeed applicable to the first wave of deep RL methods, reported beginning around 2013 (e.g., [25]). However, even in the short time since then, important innovations have occurred in deep RL research, which show how the sample efficiency of deep RL can be dramatically increased.

DRL Learning

Optimization Approach

How to plan

State Space
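
For reference on the optimization view sketched above (standard RL notation of my own, not quoted from the paper): learning searches over policy parameters for the policy that maximizes expected discounted return, typically via gradient ascent on that objective.

```latex
J(\theta) = \mathbb{E}_{\pi_\theta}\left[ \sum_{t \ge 0} \gamma^{t} r_{t} \right],
\qquad
\theta \leftarrow \theta + \alpha \, \nabla_{\theta} J(\theta)
```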

NicolaBernini commented 5 years ago

Slow DRL

Source of slow learning

Gradient-based methods
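
A toy sketch of my own (not code from the paper) of why gradient-based learning is slow: updates must stay small and incremental to remain stable, so each sample moves the parameters only slightly and many samples are needed.

```python
import numpy as np

# Toy illustration (my assumption, not the paper's setup): estimate a single
# weight by stochastic gradient descent on a squared error.
rng = np.random.default_rng(0)
w_true, w = 3.0, 0.0
lr = 1e-2  # small step size: stability at the price of learning speed
for step in range(1_000):
    x = rng.normal()
    grad = 2 * x * (w * x - w_true * x)  # d/dw of (w*x - w_true*x)^2
    w -= lr * grad                       # each sample nudges w only slightly
print(f"estimate after 1000 samples: {w:.3f}")
```

Raising the learning rate would cut the number of samples needed, but at the risk of unstable, interfering updates; that trade-off is what keeps gradient-based DRL incremental.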

Inductive Bias: Generality vs Learning Speed Trade-off
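
A toy illustration of my own for this trade-off: a learner whose inductive bias matches the task (here, "the rule is linear") reaches good performance from a handful of samples, while a weakly biased learner (here, 1-nearest-neighbour) stays more general but extrapolates poorly however much data it sees.

```python
import numpy as np

rng = np.random.default_rng(1)

def errors(n_train):
    # Training data from a linear rule y = 3x; test at x = 2, outside [-1, 1].
    x = rng.uniform(-1, 1, n_train)
    y = 3 * x + rng.normal(0, 0.1, n_train)
    x_test, y_test = 2.0, 6.0
    # Strong inductive bias: assume linearity and fit only a slope.
    slope = (x @ y) / (x @ x)
    # Weak inductive bias: 1-nearest-neighbour, no assumption on the rule.
    knn = y[np.argmin(np.abs(x - x_test))]
    return abs(slope * x_test - y_test), abs(knn - y_test)

for n in (5, 50, 500):
    lin_err, knn_err = errors(n)
    print(f"n={n:3d}  linear err={lin_err:.2f}  1-NN err={knn_err:.2f}")
```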

NicolaBernini commented 4 years ago

Definitions

Sample Efficiency

Sample efficiency refers to the amount of data required for a learning system to attain any chosen target level of performance.
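
One way to operationalise this definition (a sketch of my own; the learning curves below are made up for illustration): count the samples consumed until evaluation performance first reaches the chosen target.

```python
def samples_to_threshold(eval_return, target, budget=100_000, eval_every=1_000):
    """Smallest number of training samples after which performance first
    reaches `target`, or None if the sample budget runs out."""
    for n in range(eval_every, budget + 1, eval_every):
        if eval_return(n) >= target:
            return n
    return None

# Made-up learning curves: return as a function of samples consumed.
fast = lambda n: 1 - 2.718 ** (-n / 5_000)    # sample-efficient learner
slow = lambda n: 1 - 2.718 ** (-n / 50_000)   # sample-inefficient learner
print(samples_to_threshold(fast, 0.9))  # reaches the target quickly
print(samples_to_threshold(slow, 0.9))  # None: exhausts the budget
```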

Policy Learning
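
A minimal reminder of my own (not the paper's code): a policy maps states to a distribution over actions, and policy learning adjusts the parameters of that mapping so that high-return actions become more probable.

```python
import numpy as np

def softmax_policy(theta, state):
    """Action distribution for a linear-softmax policy (illustrative only)."""
    logits = theta @ state               # one score per action
    z = np.exp(logits - logits.max())    # subtract max for numerical stability
    return z / z.sum()

theta = np.zeros((2, 3))                 # 2 actions, 3 state features
print(softmax_policy(theta, np.array([1.0, 0.5, -0.2])))  # uniform at init
```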

NicolaBernini commented 4 years ago

High-Level Analysis


NicolaBernini commented 4 years ago

Task Complexity


NicolaBernini commented 4 years ago

Episodic Memory

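A minimal sketch of the episodic-RL idea discussed in the paper: store embeddings of past states alongside the returns that followed them, and value a new state by averaging the returns of its nearest stored neighbours. The k-NN scheme and parameters here are my assumptions.

```python
import numpy as np

class EpisodicValueMemory:
    """Value estimates from stored episodes via k-nearest neighbours."""

    def __init__(self, k=5):
        self.k = k
        self.keys, self.returns = [], []

    def store(self, embedding, episodic_return):
        """Record a past state embedding and the return obtained after it."""
        self.keys.append(np.asarray(embedding, dtype=float))
        self.returns.append(float(episodic_return))

    def estimate(self, embedding):
        """Average return of the k stored states closest to `embedding`."""
        if not self.keys:
            return 0.0
        dists = np.linalg.norm(np.stack(self.keys) - embedding, axis=1)
        nearest = np.argsort(dists)[: self.k]
        return float(np.mean(np.asarray(self.returns)[nearest]))

mem = EpisodicValueMemory(k=2)
mem.store([0.0, 0.0], 1.0)
mem.store([0.1, 0.0], 1.0)
mem.store([1.0, 1.0], 0.0)
print(mem.estimate(np.array([0.05, 0.0])))  # ~1.0: near the rewarding states
```

Because values are read out directly from stored experience rather than distilled into weights over many gradient steps, this kind of memory supports the fast learning the paper highlights.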