A lightweight library to build and train deep reinforcement learning and custom recurrent networks using Theano+Lasagne
<img src='http://s33.postimg.org/ytx63kwcv/whatis_agentnet_png.png' alt='agentnet structure' title='agentnet structure' width=600 />
No time to play games? Let machines do this for you!
AgentNet is a deep reinforcement learning framework designed for easy research and prototyping of deep learning models for Markov Decision Processes.
All technobabble aside, you can use it to train your pet neural network to play games, e.g. from OpenAI Gym!
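For reference, the only thing your agent ultimately has to do is drive the standard OpenAI Gym interaction loop below; the random policy is a stand-in for a trained network. This is a minimal sketch assuming the classic gym API and the CartPole-v0 environment, neither of which is specific to AgentNet:

```python
import gym

# classic gym interaction loop with a random policy standing in for a trained agent
env = gym.make("CartPole-v0")           # any discrete-action environment works here
observation = env.reset()
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()  # a trained network would pick the action instead
    observation, reward, done, info = env.step(action)
    total_reward += reward
print("episode reward:", total_reward)
```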
We have full in-and-out support for the Lasagne deep learning library, giving you access to all of its convolutions, maxouts, poolings, dropouts, and so on.
AgentNet handles both discrete and continuous control problems and supports hierarchical reinforcement learning [experimental].
A list of already implemented reinforcement learning techniques:
As a side quest, we also provide boilerplate for building custom long-term memory network architectures (see the examples).
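For a rough idea of what such a network looks like, here is a tiny recurrent Q-network written in plain Lasagne. This is ordinary Lasagne code rather than AgentNet's own API; the GRU memory, layer sizes, and variable names are made up for illustration:

```python
import theano
import theano.tensor as T
from lasagne.layers import InputLayer, GRULayer, DenseLayer, get_output

# (batch, time, features) observation sequences -> GRU memory -> one Q-value per action
obs_seq = T.tensor3("observation sequences")
l_obs = InputLayer((None, None, 4), input_var=obs_seq)             # 4 features per step
l_memory = GRULayer(l_obs, num_units=64, only_return_final=True)   # long-term memory
l_qvalues = DenseLayer(l_memory, num_units=2, nonlinearity=None)   # 2 possible actions

# compile a function that maps observation sequences to Q-values
get_qvalues = theano.function([obs_seq], get_output(l_qvalues))
```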
To install the library from source:

```
git clone https://github.com/yandexdataschool/AgentNet.git && cd AgentNet
pip install -r requirements.txt
pip install -e .
```
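After installing, a quick and entirely optional smoke test is to import the stack; nothing AgentNet-specific is exercised beyond the import itself:

```python
# if these imports succeed, Theano, Lasagne and AgentNet are all on the path
import theano
import lasagne
import agentnet
print("theano", theano.__version__, "| lasagne", lasagne.__version__)
```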
On Windows/OSX, install Docker Kitematic, then run the justheuristic/agentnet container and click 'web preview'.
On other Linux/Unix systems, make sure the docker daemon is running (`sudo service docker start`), then launch the container and open localhost:1234 in your browser:

```
[sudo] docker run -d -p 1234:8888 justheuristic/agentnet
```
A quick dive-in can be found here:
Documentation pages (still incomplete) can be found here.
AgentNet also has full embedded documentation: calling `help(some_function_or_object)` or pressing Shift+Tab in IPython yields a description of the object or function.
A standard pipeline of an AgentNet experiment is shown in the following examples (and sketched in code right after the list):
- Simple Deep Recurrent Reinforcement Learning setup
- Playing Atari SpaceInvaders with Convolutional NN via OpenAI Gym
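If you prefer reading code to notebooks, the gist of such a pipeline is: run the network on recorded observations, build a temporal-difference target, and update the Lasagne weights. The sketch below spells out a generic one-step Q-learning update directly in Theano/Lasagne; it is not AgentNet's API, and all names, shapes, and the precomputed next-state Q-values are illustrative assumptions:

```python
import theano
import theano.tensor as T
import lasagne
from lasagne.layers import InputLayer, DenseLayer, get_output, get_all_params

# toy feed-forward Q-network: state -> one Q-value per action (sizes are arbitrary)
states = T.matrix("states")          # (batch, state_dim)
actions = T.ivector("actions")       # (batch,) ids of the actions actually taken
rewards = T.vector("rewards")        # (batch,)
next_qmax = T.vector("next qmax")    # (batch,) max-Q of the next state, precomputed elsewhere
is_done = T.vector("is done")        # (batch,) 1.0 where the episode ended

l_in = InputLayer((None, 4), input_var=states)
l_hidden = DenseLayer(l_in, 64, nonlinearity=lasagne.nonlinearities.rectify)
l_qvalues = DenseLayer(l_hidden, 2, nonlinearity=None)

qvalues = get_output(l_qvalues)
q_chosen = qvalues[T.arange(actions.shape[0]), actions]

# one-step Q-learning target: r + gamma * max_a' Q(s', a'), cut off at episode end
gamma = 0.99
target = rewards + gamma * next_qmax * (1.0 - is_done)
loss = T.mean((q_chosen - theano.gradient.disconnected_grad(target)) ** 2)

# update all trainable Lasagne parameters with Adam
params = get_all_params(l_qvalues, trainable=True)
updates = lasagne.updates.adam(loss, params)
train_step = theano.function([states, actions, rewards, next_qmax, is_done],
                             loss, updates=updates)
```

Running the agent through an environment, recording sessions, and handling recurrent state are the parts the examples above walk through.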
AgentNet is under active construction, so expect things to change. If you wish to join the development, we'd be happy to accept your help.