This repository contains PyTorch implementations of deep reinforcement learning algorithms and environments.
All implementations are able to quickly solve Cart Pole (discrete actions), Mountain Car Continuous (continuous actions), Bit Flipping (discrete actions with dynamic goals), and Fetch Reach (continuous actions with dynamic goals). I plan to add more hierarchical RL algorithms soon.
Below shows various RL algorithms successfully learning the discrete-action game Cart Pole or the continuous-action game Mountain Car. Each curve is the mean result from running the algorithm with 3 random seeds, with the shaded area representing plus and minus 1 standard deviation. The hyperparameters used can be found in results/Cart_Pole.py and results/Mountain_Car.py.
Below shows the performance of DQN and DDPG with and without Hindsight Experience Replay (HER) in the Bit Flipping (14 bits) and Fetch Reach environments described in the papers Hindsight Experience Replay 2018 and Multi-Goal Reinforcement Learning 2018. The results replicate those found in the papers and show how adding HER can allow an agent to solve problems that it otherwise would not be able to solve at all. Note that the same hyperparameters were used within each pair of agents, so the only difference between them was whether hindsight was used or not.
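To make the idea concrete, below is a minimal sketch of the "final" goal-relabelling strategy at the heart of HER: failed episodes are copied into the replay buffer with the goal replaced by the goal that was actually achieved, so the sparse reward signal becomes informative. The transition keys and the `compute_reward` callback are illustrative assumptions, not the repository's actual API.

```python
def relabel_episode_with_her(episode, compute_reward):
    """Return the episode plus copies of each transition in which the desired
    goal is replaced by the goal actually achieved at the end of the episode.

    `episode` is assumed to be a list of dicts with keys "goal",
    "achieved_goal" and "reward"; `compute_reward` maps
    (achieved_goal, desired_goal) to a reward, e.g. 0 on success and -1 otherwise.
    """
    final_achieved_goal = episode[-1]["achieved_goal"]
    relabelled = []
    for transition in episode:
        new_transition = dict(transition)
        new_transition["goal"] = final_achieved_goal
        # Recompute the reward with respect to the substituted goal
        new_transition["reward"] = compute_reward(
            new_transition["achieved_goal"], final_achieved_goal)
        relabelled.append(new_transition)
    return episode + relabelled
```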
The results on the left below show the performance of DQN and the hierarchical-DQN algorithm from Kulkarni et al. 2016 on the Long Corridor environment, also explained in Kulkarni et al. 2016. The environment requires the agent to go to the end of a corridor before coming back in order to receive a larger reward. This delayed gratification and the aliasing of states make the game practically impossible for DQN to learn, but if we introduce a meta-controller (as in h-DQN) which directs a lower-level controller how to behave, we are able to make more progress. This aligns with the results found in the paper.
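The following sketch illustrates the two-level loop described above: a meta-controller chooses a goal, and a lower-level controller acts to reach it, learning from an intrinsic reward while the meta-controller learns from the accumulated environment reward. The class and method names are illustrative assumptions, not the repository's actual agent classes.

```python
def run_hdqn_episode(env, meta_controller, controller, goal_reached):
    """One episode of a rough h-DQN-style interaction loop (illustrative only)."""
    state = env.reset()
    done = False
    while not done:
        goal = meta_controller.pick_goal(state)            # high-level decision
        goal_start_state = state
        extrinsic_reward = 0.0
        while not done and not goal_reached(state, goal):
            action = controller.pick_action(state, goal)   # low-level decision
            next_state, reward, done, _ = env.step(action)
            intrinsic_reward = 1.0 if goal_reached(next_state, goal) else 0.0
            controller.learn(state, goal, action, intrinsic_reward, next_state, done)
            extrinsic_reward += reward
            state = next_state
        # The meta-controller is rewarded with the environment reward collected
        # while the controller was pursuing the chosen goal
        meta_controller.learn(goal_start_state, goal, extrinsic_reward, state, done)
```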
The results on the right show the performance of DDQN and the algorithm Stochastic Neural Networks for Hierarchical Reinforcement Learning (SNN-HRL) from Florensa et al. 2017. DDQN is used as the comparison because the SNN-HRL implementation uses 2 DDQN algorithms within it. Note that the first 300 episodes of training for SNN-HRL were used for pre-training, which is why there is no reward for those episodes.
The repository's high-level structure is:
```
├── agents
│   ├── actor_critic_agents
│   ├── DQN_agents
│   ├── policy_gradient_agents
│   └── stochastic_policy_search_agents
├── environments
├── results
│   └── data_and_graphs
├── tests
├── utilities
│   └── data structures
```
To watch all the different agents learn Cart Pole, follow these steps:

```
git clone https://github.com/p-christ/Deep_RL_Implementations.git
cd Deep_RL_Implementations

conda create --name myenvname
y                                # confirm when conda prompts
conda activate myenvname
pip3 install -r requirements.txt

python results/Cart_Pole.py
```
For other games, change the last line to one of the other files in the results folder.
Most OpenAI Gym environments should work. All you would need to do is change the config.environment field (see results/Cart_Pole.py for an example of this).
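As a hedged sketch of that change, the snippet below points the config at a different Gym environment. The Config import path and the other attribute names are assumptions based on the repository layout; check results/Cart_Pole.py for the fields it actually sets.

```python
import gym
from utilities.data_structures.Config import Config  # assumed location of the Config class

config = Config()
config.seed = 1                                   # assumed attribute name
config.environment = gym.make("MountainCar-v0")   # any classic-control Gym environment
config.num_episodes_to_run = 450                  # assumed attribute name
```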
You can also play with your own custom game if you create a separate class that inherits from gym.Env. See environments/Four_Rooms_Environment.py for an example of a custom environment, and then see the script results/Four_Rooms.py for how to have agents play the environment.
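For reference, here is a minimal sketch of a custom environment that inherits from gym.Env. The dynamics are purely illustrative (a one-step coin-flip guessing game); see environments/Four_Rooms_Environment.py for a full example.

```python
import gym
from gym import spaces
import numpy as np

class CoinFlipEnvironment(gym.Env):
    """Toy custom environment: guess the outcome of a coin flip."""

    def __init__(self):
        self.action_space = spaces.Discrete(2)       # guess heads (0) or tails (1)
        self.observation_space = spaces.Discrete(1)  # single dummy state
        self.state = 0
        self.coin = 0

    def reset(self):
        self.coin = np.random.randint(2)
        return self.state

    def step(self, action):
        reward = 1.0 if action == self.coin else -1.0
        done = True                                  # episodes last one step
        return self.state, reward, done, {}
```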