Looking at reinforcement learning, there are two kinds of action space: discrete and continuous. A continuous action space represents the continuous movements a robot can make when actuating. I was biased towards the continuous one when I had the idea to write this DDPG implementation: continuous control can provide smoother movement, which may be beneficial for controlling robotic actuators, and DDPG is one approach to achieve it. The source code is available here: https://github.com/samuelmat19/DDPG-tf2
My implementation of DDPG is based on the paper https://arxiv.org/abs/1509.02971, but is also highly inspired by https://spinningup.openai.com/en/latest/algorithms/ddpg.html . This implementation is simple and can be used as a boilerplate for your needs. It also modifies the original algorithm a bit, mainly to speed up the training process. I would highly recommend using the Spinning Up library, as it provides more algorithm options. This repository is suitable if direct modification of the TensorFlow 2 model or a simple training API is preferable.
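For readers unfamiliar with the algorithm's core mechanics, the DDPG paper maintains slowly-moving target networks that are soft-updated toward the online networks (Polyak averaging). Below is a minimal NumPy sketch of that update rule; the function name and toy weight arrays are illustrative only, not this repository's API:

```python
import numpy as np

def polyak_update(target_weights, online_weights, tau=0.005):
    """Soft-update target weights toward the online weights:
    theta_target <- tau * theta_online + (1 - tau) * theta_target."""
    return [tau * w + (1.0 - tau) * wt
            for w, wt in zip(online_weights, target_weights)]

# Toy arrays standing in for a network's layer weights
online = [np.ones((2, 2)), np.ones(2)]
target = [np.zeros((2, 2)), np.zeros(2)]
target = polyak_update(target, online, tau=0.1)
print(target[0][0, 0])  # → 0.1
```

With a small tau the target networks change slowly, which stabilizes the bootstrapped Q-learning targets.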
Several proof-of-concept videos:
Reinforcement learning is important when it comes to real environments. As there is no single definitively right way to achieve a goal, the AI can be optimized with a reward function instead of being continuously supervised by a human.
In continuous action spaces, the DDPG algorithm shines as one of the best in the field. In contrast to a discrete action space, a continuous one mimics the reality of the physical world.
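To make the continuous-control idea concrete: a deterministic policy commonly ends in a tanh layer, so its raw outputs lie in (-1, 1) and are then rescaled to the actuator's physical bounds. A small sketch of that rescaling (the function name and the bounds are hypothetical, not taken from this repository):

```python
import numpy as np

def scale_action(raw_action, low, high):
    """Map a tanh-squashed action in [-1, 1] to the bounds [low, high]."""
    return low + 0.5 * (raw_action + 1.0) * (high - low)

# e.g. an actuator limited to [-2.0, 2.0] (illustrative bounds):
# raw outputs 0.0, 1.0, -1.0 map to 0.0, 2.0, -2.0
actions = scale_action(np.array([0.0, 1.0, -1.0]), -2.0, 2.0)
```

A discrete policy would instead pick one of a fixed set of actions, which is why continuous policies tend to produce smoother actuation.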
The original implementation is in PyTorch. Additionally, there are several modifications of the original algorithm that may improve it.
As mentioned above, there are several changes with different aims:
```
pip3 install ddpg-tf2
```
```
ddpg-tf2 --train True --use-noise True
```
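The `--use-noise` flag suggests exploration noise is added to the deterministic actions during training; the original DDPG paper used an Ornstein-Uhlenbeck process for this. Here is a self-contained NumPy sketch of such a process (the class name and hyperparameters are illustrative, not necessarily what this repository uses):

```python
import numpy as np

class OrnsteinUhlenbeckNoise:
    """Temporally correlated exploration noise, as in the DDPG paper:
    dx = theta * (mu - x) * dt + sigma * sqrt(dt) * N(0, 1)"""
    def __init__(self, size, mu=0.0, theta=0.15, sigma=0.2, dt=1e-2, seed=0):
        self.mu, self.theta, self.sigma, self.dt = mu, theta, sigma, dt
        self.rng = np.random.default_rng(seed)
        self.x = np.full(size, mu, dtype=np.float64)

    def sample(self):
        # Mean-reverting step plus Gaussian diffusion
        self.x += (self.theta * (self.mu - self.x) * self.dt
                   + self.sigma * np.sqrt(self.dt)
                   * self.rng.standard_normal(self.x.shape))
        return self.x

noise = OrnsteinUhlenbeckNoise(size=2)
samples = np.array([noise.sample().copy() for _ in range(100)])
```

The temporal correlation makes consecutive exploratory actions drift smoothly rather than jitter, which suits physical control tasks.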
After every epoch, the network's weights will be stored in the checkpoints directory defined in common_definitions.py.
There are four weight files, one for each network: the critic, the actor, the target critic, and the target actor.
Additionally, TensorBoard is used to track the resulting losses and rewards.
The pretrained weights can be retrieved from these links:
Testing is done with the same executable, but with the specific parameters shown below. If weights are available in the checkpoint folder, they will be loaded automatically.
```
ddpg-tf2 --train False --use-noise False
```
To contribute to the project, follow the steps below. Anyone who contributes will be recognized and mentioned here!
Contributions to the project are made using the "Fork & Pull" model. The typical steps would be:
Commit your changes: `git commit -m "my message"`

Push to your GitHub account: `git push origin`
This open-source project is licensed under MIT License.