The Pepper social scenarios are implemented using Unity ML-Agents and are still under development. This repository accompanies the paper *Social Behavior Learning with Realistic Reward Shaping*. Please do not hesitate to reach out if you run into problems; you can report them in the Issues section.
Tested Unity version: 2018.1.0b13 (beta)
Tested Unity ML-Agents version: 0.3.1b
Pepper robot approaches people: this environment trains the Pepper robot to approach a group from different angles.
The robot learns to approach from the left and the right side while respecting personal, social, and public space (red circles represent the personal spaces of the agents). The learned policy enables the robot to approach from any point in the space.
The TensorFlowSharp plugins folder was omitted from this project because of its large file size. You will need to import this set of Unity plugins yourself; you can download the TensorFlowSharp plugin as a Unity package here.
We strongly recommend getting familiar with Unity ML-Agents first.
We recommend using a Python virtual environment to manage the Python dependencies. For this we suggest Anaconda, a powerful virtual environment and package management tool.
The Unity game engine is required (Linux installation download link).
(Optional) The vision module can be found here.
Inside the `ml-agents/python/` directory, run `conda create -n myenv python=3.6`, then activate the environment with `source activate myenv`.
Install the dependencies listed in `requirements.txt` by running `pip install -r requirements.txt`.
If `grpcio` is not included in `requirements.txt`, please install the dependency using `pip install grpcio`.
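Put together, the setup looks like the sketch below (`myenv` is just the example environment name from above; adjust paths to your checkout):

```bash
# Create and activate a Python 3.6 environment for ML-Agents 0.3.1b
cd ml-agents/python
conda create -n myenv python=3.6
source activate myenv

# Install the pinned Python dependencies
pip install -r requirements.txt

# grpcio is needed for the Unity <-> Python communication;
# install it manually if requirements.txt does not pull it in
pip install grpcio
```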
Set the Scripting Runtime Version to `.NET 4.x Equivalent` inside File -> Build Settings -> Player Settings -> Other Settings -> Scripting Runtime Version.
Add `ENABLE_TENSORFLOW` inside File -> Build Settings -> Player Settings -> Other Settings -> Scripting Define Symbols.
Make sure the `Brain`s are set to External in the Inspector.
Use the Unity Editor to open the project folder, then press Ctrl+O to open the scene file at `PepperSocial/Assets/Scenarios/PepperSocial/PepperSocial.unity`.
Building the environment produces `<environmentName>_Data/` and `<environmentName>.x86_64`.
We strongly recommend moving these files into an `environments/` directory inside the ml-agents `python/` directory, so that you get `python/environments/<environmentName>_Data/` and `python/environments/<environmentName>.x86_64`.
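As a concrete sketch (assuming a build named `PepperSocial` produced somewhere outside the repo; both the name and the build path are placeholders):

```bash
# Move the Unity build outputs into ml-agents/python/environments/
cd ml-agents/python
mkdir -p environments
mv /path/to/build/PepperSocial_Data environments/
mv /path/to/build/PepperSocial.x86_64 environments/
```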
Inside the `ml-agents/python/` directory, run the command:
`python learn.py environments/<environmentName>.x86_64 --train`
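For reference, a full invocation might look like this (`PepperSocial` and `pepper_approach` are placeholder names; `--run-id` is the standard ML-Agents flag for naming a training run, but check `python learn.py --help` for the exact options in your version):

```bash
cd ml-agents/python
# --train switches learn.py from inference to training mode
python learn.py environments/PepperSocial.x86_64 --train --run-id=pepper_approach
```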
We use branches to keep the experiments clean. The following table shows the configurations and their corresponding branches.

Configuration | Branch |
---|---|
Vector + LSTM (Baseline) | [Link] |
CameraOnly + SAEV + FF | [Link] |
CameraOnly + SAEV + LSTM | [Link] |
CameraOnly + conv + FF | [Link] |
CameraOnly + conv + LSTM | [Link] |
CameraSpeed + SAEV + FF | [Link] |
CameraSpeed + SAEV + LSTM | [Link] |
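To reproduce a configuration, check out its branch before building and training. A minimal sketch (the branch name below is a placeholder; use the actual branch linked in the table):

```bash
# List the remote experiment branches and switch to one
git fetch --all
git branch -r
git checkout camera-saev-lstm
```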