michal-stolarz opened this issue 1 year ago

I find the project really interesting and would like to say thank you for making it public! I would like to run it without the simulation, that is, on the dataset collected from the simulation. Would it be possible to share the dataset that was used to evaluate the proposed SocialDQN? If not, it would be very helpful to know how different human emotional expressions can be simulated in the simDRLSR simulator. Thank you in advance!
Hello! I apologize for the delay in responding to your question.
Unfortunately, when carrying out the experiments I did not collect the simulation images and data. Since this is a reinforcement learning algorithm, where the agent learns by interacting with the environment in real time, there was no reason to collect and store data for training.
Training can be done through the simDRLSR simulator. Currently, the simulator supports the facial emotions happy, sad, fear, disgust, surprise, anger, and neutral. However, these emotions are aggregated into groups of positive and negative emotions, plus the neutral emotion. This grouping simplifies the mapping of the human-robot interaction probability tables.
The idea is that these emotions affect the human avatar's behavior according to the tables I mentioned; a rough sketch of what such a table could look like follows below.
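Just to illustrate the idea (the group assignments, probability values, and function below are placeholders, not the actual tables from the simulator):

```python
import random

# Illustrative grouping of the simulator's facial emotions into the
# positive/negative/neutral categories described above.
EMOTION_GROUPS = {
    "happy": "positive",
    "surprise": "positive",
    "sad": "negative",
    "fear": "negative",
    "disgust": "negative",
    "anger": "negative",
    "neutral": "neutral",
}

# Placeholder probability table: for each emotion group, the chance that
# the human avatar engages with the robot. The values are made up.
ENGAGEMENT_PROBABILITY = {
    "positive": 0.8,
    "neutral": 0.5,
    "negative": 0.2,
}

def human_engages(emotion: str) -> bool:
    """Sample whether the avatar engages, given its current emotion."""
    group = EMOTION_GROUPS[emotion]
    return random.random() < ENGAGEMENT_PROBABILITY[group]

print(human_engages("happy"))  # True roughly 80% of the time
```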
The simDRLSR simulator aims to automate the behavior of the human avatar and also to provide mechanisms for RL algorithms to act on the environment and capture its states.
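To give an idea of how an RL algorithm interacts with the simulator, here is a minimal, self-contained sketch of that loop; the environment class, method names, actions, and dynamics below are dummy stand-ins, not the simulator's real API:

```python
import random

class FakeSimDRLSREnv:
    """Dummy environment standing in for the real simulator connection."""

    ACTIONS = ["wait", "look", "wave", "handshake"]

    def reset(self):
        # The real state could be e.g. a camera frame plus the avatar's emotion.
        return {"emotion_group": "neutral"}

    def step(self, action):
        # Made-up dynamics: a wave sometimes elicits engagement.
        reward = 1.0 if (action == "wave" and random.random() < 0.5) else 0.0
        next_state = {"emotion_group": random.choice(["positive", "negative", "neutral"])}
        done = random.random() < 0.1
        return next_state, reward, done

def run_episode(env, select_action, max_steps=100):
    """Standard agent-environment loop: observe the state, act, receive a reward."""
    state = env.reset()
    total = 0.0
    for _ in range(max_steps):
        action = select_action(state)
        next_state, reward, done = env.step(action)
        # A DQN-style agent would store (state, action, reward, next_state, done)
        # in a replay buffer here and periodically update its Q-network.
        total += reward
        state = next_state
        if done:
            break
    return total

env = FakeSimDRLSREnv()
print(run_episode(env, lambda s: random.choice(env.ACTIONS)))
```

In SocialDQN, the transition collected at each step feeds a replay buffer for Q-network updates rather than being stored as a fixed dataset, which is why no offline dataset was produced.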
Sorry, I can't help with the dataset, but I hope I have given you some helpful information.