hill-a / stable-baselines

A fork of OpenAI Baselines, implementations of reinforcement learning algorithms
http://stable-baselines.readthedocs.io/
MIT License

Custom model for multisensor environments #631

Closed. Benatti1991 closed this issue 4 years ago.

Benatti1991 commented 4 years ago

Hello everyone, I would like to create an algorithm to train a multi-sensor agent using your DRL framework. What I have in mind is concatenating the output of one or more convolutional layers, whose input could be cameras or a lidar sensor, with 1D arrays from other sensors (such as GPS). It looks like I should add an option to inputs.py and a custom model to handle this kind of environment. Would this be enough? Do you have any suggestions? Thanks, Simone

araffin commented 4 years ago

Related to #133

Miffyli commented 4 years ago

araffin beat me to it again <.<

Just for a more direct link: here is an example of how to combine a visual observation with a 1D vector: https://github.com/hill-a/stable-baselines/issues/133#issuecomment-561805417
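
For readers landing here later, a minimal sketch in the spirit of that linked comment, using the custom `cnn_extractor` hook of stable-baselines. The image size, vector length, and the packing scheme below are illustrative assumptions, not the exact code from #133:

```python
import numpy as np
import tensorflow as tf

from stable_baselines.a2c.utils import conv, linear, conv_to_fc

# Assumed packing scheme (for illustration only): the environment flattens the
# camera image and appends the 1D sensor vector, so the observation is a single
# Box of shape (IMG_H * IMG_W * IMG_C + N_VECTOR,).
IMG_H, IMG_W, IMG_C = 64, 64, 3
N_VECTOR = 8


def multi_input_cnn(scaled_images, **kwargs):
    """Custom feature extractor: run a CNN on the image part of the observation,
    then concatenate the 1D sensor part before the final fully connected layer."""
    activ = tf.nn.relu
    n_img = IMG_H * IMG_W * IMG_C

    # Split the flat observation back into image and vector parts
    flat_obs = tf.layers.flatten(scaled_images)
    img = tf.reshape(flat_obs[:, :n_img], (-1, IMG_H, IMG_W, IMG_C))
    vec = flat_obs[:, n_img:]

    # Nature-CNN style conv stack on the image part
    layer_1 = activ(conv(img, 'c1', n_filters=32, filter_size=8, stride=4,
                         init_scale=np.sqrt(2), **kwargs))
    layer_2 = activ(conv(layer_1, 'c2', n_filters=64, filter_size=4, stride=2,
                         init_scale=np.sqrt(2), **kwargs))
    layer_3 = conv_to_fc(activ(conv(layer_2, 'c3', n_filters=64, filter_size=3,
                                    stride=1, init_scale=np.sqrt(2), **kwargs)))

    # Concatenate image features with the 1D sensor data (e.g. GPS)
    combined = tf.concat([layer_3, vec], axis=1)
    return activ(linear(combined, 'fc1', n_hidden=512, init_scale=np.sqrt(2)))
```

The extractor can then be plugged in via `policy_kwargs`, e.g. `model = PPO2('CnnPolicy', env, policy_kwargs=dict(cnn_extractor=multi_input_cnn))`. Note that `CnnPolicy` may rescale the whole observation to [0, 1] using the Box bounds, so the vector part would be rescaled too; that detail is glossed over here.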

Benatti1991 commented 4 years ago

araffin, Miffyli, thank you for your answers. I actually knew about the issue you linked, but I opened a new one for two reasons

Miffyli commented 4 years ago

As discussed in #133, true multi-modal observations are not currently possible, and you have to resort to this kind of dirty hack for now. However, this is the very next item on the to-do list after TF2 support, which is slowly getting there but is currently on hiatus due to the holidays :)
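
For completeness, the environment-side half of such a hack could look roughly like the following gym wrapper, which flattens a dict-style observation (image plus GPS vector) into a single Box that a custom extractor like the one sketched above can split apart again. The key names and dtypes are assumptions for illustration:

```python
import gym
import numpy as np


class PackObservation(gym.ObservationWrapper):
    """Hypothetical wrapper: flatten the camera image and append the 1D sensor
    vector, producing a single Box observation that a custom cnn_extractor
    can split apart again."""

    def __init__(self, env):
        super(PackObservation, self).__init__(env)
        # Assumes the wrapped env exposes a Dict space with 'image' and 'gps' keys
        img_space = env.observation_space.spaces['image']
        vec_space = env.observation_space.spaces['gps']
        low = np.concatenate([img_space.low.ravel(), vec_space.low.ravel()])
        high = np.concatenate([img_space.high.ravel(), vec_space.high.ravel()])
        self.observation_space = gym.spaces.Box(low=low, high=high, dtype=np.float32)

    def observation(self, obs):
        # Flatten image and vector into one 1D float array
        return np.concatenate([obs['image'].ravel(),
                               obs['gps'].ravel()]).astype(np.float32)
```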

araffin commented 4 years ago

Closing in favor of #133