mpatacchiola / dissecting-reinforcement-learning

Python code, PDFs and resources for the series of posts on Reinforcement Learning which I published on my personal blog
https://mpatacchiola.github.io/blog/
MIT License

This repository contains the code and PDFs of the series of blog posts called "Dissecting Reinforcement Learning", which I published on my blog mpatacchiola.github.io/blog. It also collects links to resources that can be useful for reinforcement learning practitioners. If you have good references that may be of interest, please send me a pull request and I will integrate them into the README.

The source code is contained in src, with the subfolders named after the post number. The pdf folder contains the A3 document of each post for offline reading. The images folder contains the raw SVG files used in each post.

Installation

The source code does not require any particular installation procedure. The code can be used on Linux, Windows, OS X, and embedded devices such as the Raspberry Pi, BeagleBone, and Intel Edison. The only requirement is NumPy, which is available in the package repositories of most Linux distributions and can be easily installed on Windows and OS X through Anaconda or Miniconda. Some examples also require Matplotlib for data visualization and animations.
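
NumPy is the only strict dependency, and Matplotlib is needed only for the examples that produce plots or animations, so a quick sanity check of the interpreter you plan to use can be as simple as the minimal sketch below (the printed messages and the suggested install commands are only illustrative):

# Quick check of the two libraries used across the examples
try:
    import numpy
    print("NumPy available, version:", numpy.__version__)
except ImportError:
    print("NumPy not found: install it, e.g. via 'conda install numpy' or 'pip install numpy'")

try:
    import matplotlib
    print("Matplotlib available, version:", matplotlib.__version__)
except ImportError:
    print("Matplotlib not found (only needed for the plotting and animation examples)")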

Posts Content

  1. [Post one] [code] [pdf] - Markov chains. Markov Decision Process. Bellman Equation. Value and Policy iteration algorithms.

  2. [Post two] [code] [pdf] - Monte Carlo methods for prediction and control. Generalised Policy Iteration. Action Values and Q-function.

  3. [Post three] [code] [pdf] - Temporal Differencing Learning, Animal Learning, TD(0), TD(λ) and Eligibility Traces, SARSA, Q-Learning (a minimal Q-learning sketch is given after this list).

  4. [Post four] [code] [pdf] - Neurobiology behind Actor-Critic methods, computational Actor-Critic methods, Actor-only and Critic-only methods.

  5. [Post five] [code] [pdf] - Evolutionary Algorithms introduction, Genetic Algorithm in Reinforcement Learning, Genetic Algorithms for policy selection.

  6. [Post six] [code] [pdf] - Reinforcement learning applications, Multi-Armed Bandit, Mountain Car, Inverted Pendulum, Drone landing, Hard problems.

  7. [Post seven] [code] [pdf] - Function approximation, Intuition, Linear approximator, Applications, High-order approximators.

  8. [Post eight] [code] [pdf] - Non-linear function approximation, Perceptron, Multi Layer Perceptron, Applications, Policy Gradient.
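
To give a concrete flavour of the tabular methods covered in posts two and three, the following is a minimal Q-learning sketch on a toy problem. The 3-state/2-action reward and transition tables, the hyperparameters, and the variable names are made-up assumptions for illustration only and do not come from the posts or from the code in src:

import numpy as np

# Hypothetical toy MDP: 3 states, 2 actions (tables invented for illustration)
reward_matrix = np.array([[0.0, 1.0],
                          [0.0, 0.0],
                          [1.0, 0.0]])   # reward obtained for each (state, action)
transition_matrix = np.array([[1, 2],
                              [0, 2],
                              [0, 1]])   # next state reached from each (state, action)

alpha = 0.1    # learning rate
gamma = 0.9    # discount factor
epsilon = 0.1  # exploration probability
q = np.zeros((3, 2))  # tabular Q-function, one entry per (state, action)

state = 0
for step in range(5000):
    # Epsilon-greedy action selection
    if np.random.uniform() < epsilon:
        action = np.random.randint(2)
    else:
        action = int(np.argmax(q[state]))
    reward = reward_matrix[state, action]
    new_state = transition_matrix[state, action]
    # Q-learning update: move Q(s,a) towards reward + gamma * max_a' Q(s',a')
    q[state, action] += alpha * (reward + gamma * np.max(q[new_state]) - q[state, action])
    state = new_state

print(q)  # estimated state-action values after training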

Environments

The folder called environments contains all the environments used in the series. Unlike other libraries (such as OpenAI Gym), the environments are stand-alone Python files that do not require any installation procedure. To use an environment, copy its file into the same folder as your project and load it from a Python script: from environmentname import EnvironmentName. The environment can then be used following the same convention adopted by OpenAI Gym:

from random import randint #to generate random integers
from inverted_pendulum import InvertedPendulum #importing the environment

#Generating the environment
env = InvertedPendulum(pole_mass=2.0, cart_mass=8.0, pole_lenght=0.5, delta_t=0.1)
#Reset the environment before the episode starts
observation = env.reset(exploring_starts=True) 

for step in range(100):
    action = randint(0, 2) #generate a random integer/action
    observation, reward, done = env.step(action) #one step in the environment
    if done: break  # exit if the episode is finished

#Saving the episode in a GIF
env.render(file_path='./inverted_pendulum.gif', mode='gif')

The snippet above generates an inverted pendulum environment. The pole is controlled through three actions (0=left, 1=noop, 2=right), which are generated at random here by the randint() method. The maximum number of steps allowed is 100, which with delta_t=0.1 corresponds to 10 seconds. The episode can finish earlier if the pole falls down, in which case the environment returns done = True. Examples for each environment are available here.
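
Because every environment follows the same reset()/step() convention, the single-episode loop above generalises directly to multiple episodes. The following is a minimal sketch, using only the calls shown in the snippet above, that runs a random policy for ten episodes on the inverted pendulum and stores the undiscounted return of each one (the episode count and the 100-step limit are arbitrary choices):

from random import randint
from inverted_pendulum import InvertedPendulum

env = InvertedPendulum(pole_mass=2.0, cart_mass=8.0, pole_lenght=0.5, delta_t=0.1)
return_list = []  # undiscounted return collected in each episode

for episode in range(10):
    observation = env.reset(exploring_starts=True)
    cumulated_reward = 0
    for step in range(100):
        action = randint(0, 2)  # random policy: 0=left, 1=noop, 2=right
        observation, reward, done = env.step(action)
        cumulated_reward += reward
        if done: break  # the pole fell down before the step limit
    return_list.append(cumulated_reward)

print(return_list)

The following is a description of the available environments with a direct link to the Python code: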

Resources

Software:

Books and Articles:

License

The MIT License (MIT) Copyright (c) 2017 Massimiliano Patacchiola Website: http://mpatacchiola.github.io/blog

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.