JPedroRBelo / SocialDQN

Deep Q-Network for Social Robotics
GNU General Public License v3.0

SocialDQN: Deep Q-Network for Human-Robot Interaction and Social Signals

SocialDQN is a deep reinforcement learning system based on Deep Q-Learning (DQN) for robots that interact directly with humans. Currently, the main objective of SocialDQN is to enable a robot to learn to identify human interactive behaviors and, from them, to take socially acceptable actions.
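As a rough illustration of the learning rule a DQN optimizes toward, the sketch below uses a tabular stand-in for the Q-network with epsilon-greedy action selection; the action names, reward value, and hyperparameters are hypothetical and not taken from this project.

```python
import random

# Illustrative sketch of the Q-learning target behind a DQN.
# A dict stands in for the neural network; all names here are assumptions.
GAMMA = 0.9      # discount factor
ALPHA = 0.1      # learning rate
ACTIONS = ["wait", "look", "wave", "handshake"]  # example action set

Q = {}  # maps (state, action) -> estimated value

def q(state, action):
    return Q.get((state, action), 0.0)

def update(state, action, reward, next_state):
    # TD target: r + gamma * max_a' Q(s', a')
    target = reward + GAMMA * max(q(next_state, a) for a in ACTIONS)
    Q[(state, action)] = q(state, action) + ALPHA * (target - q(state, action))

def select_action(state, epsilon=0.1):
    # epsilon-greedy: explore with probability epsilon, else act greedily
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q(state, a))

# One simulated transition: the robot waved and the human engaged (reward +1).
update("human_nearby", "wave", 1.0, "human_engaged")
print(round(q("human_nearby", "wave"), 2))  # 0.1 after one update
```

In the full system a convolutional network replaces the table and transitions are replayed from a buffer, but the target computation has this same shape.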

This project builds on the work of Qureshi et al. (2016) [1], which introduced the Multimodal Deep Q-Network for Social Human-Robot Interaction (MDQN). Unlike MDQN, SocialDQN is developed in Python 3.8 and adds support for social signals (emotions, focus of attention, visible human face), additional rewards, and training and validation in the SimDRLSR (Deep Reinforcement Learning Simulator for Social Robotics) simulator.
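One way the "additional rewards" from social signals could work is reward shaping: the environment reward is augmented with bonuses or penalties derived from perceived signals. The sketch below is a hypothetical illustration only; the signal names and weights are assumptions, not the project's actual values.

```python
# Hypothetical reward-shaping sketch for social signals.
# Signal keys and bonus weights are illustrative assumptions.

def social_reward(base_reward, signals):
    """Augment the environment reward with social-signal bonuses."""
    bonus = 0.0
    if signals.get("face_visible"):
        bonus += 0.1               # a visible face suggests engagement
    if signals.get("focus_on_robot"):
        bonus += 0.2               # human attention directed at the robot
    if signals.get("emotion") == "happy":
        bonus += 0.2               # positive affect reinforces the action
    elif signals.get("emotion") == "angry":
        bonus -= 0.3               # negative affect penalizes the action
    return base_reward + bonus

r = social_reward(1.0, {"face_visible": True,
                        "focus_on_robot": True,
                        "emotion": "happy"})
print(round(r, 2))  # 1.5
```

Shaping like this lets the agent receive denser feedback during an interaction than the sparse handshake-success reward alone.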

Actions

States

Reward

How to use SocialDQN and SimDRLSR

SocialDQN

git clone git@github.com:JPedroRBelo/SocialDQN.git

Under construction...

Models

https://drive.google.com/drive/folders/1OqJ09NYZXrRQY2g_Ph-M7mOZtRxYMuA8?usp=sharing

[1] Ahmed Hussain Qureshi, Yutaka Nakamura, Yuichiro Yoshikawa, and Hiroshi Ishiguro, "Robot gains social intelligence through Multimodal Deep Reinforcement Learning", Proceedings of the IEEE-RAS International Conference on Humanoid Robots (Humanoids), pp. 745-751, Cancun, Mexico, 2016.