Related to #133
araffin beat me to it again <.<
Just for a more direct link: here is an example of how to combine a visual observation with a 1D vector: https://github.com/hill-a/stable-baselines/issues/133#issuecomment-561805417
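For reference, here is a minimal sketch of that kind of hack (untested; `mixed_extractor`, `N_VECTOR`, and the packing convention are illustrative assumptions, not the exact code from the linked comment). The idea is that the environment stores the 1D vector inside the image observation, and a custom `cnn_extractor` splits it back out before concatenating it with the CNN features:

```python
import numpy as np
import tensorflow as tf
from stable_baselines import PPO2
from stable_baselines.a2c.utils import conv, linear, conv_to_fc

N_VECTOR = 4  # hypothetical length of the 1D sensor vector packed into the image

def mixed_extractor(scaled_images, **kwargs):
    """Split the packed observation into image and vector parts, encode the
    image with conv layers, then concatenate both before the final layer."""
    activ = tf.nn.relu
    # Assumption: the env stored the vector in the first N_VECTOR pixels of the
    # last row, channel 0. Note that CnnPolicy divides the whole input by 255,
    # so the env must encode the vector as uint8 values in [0, 255].
    vector = scaled_images[:, -1, :N_VECTOR, 0]   # (batch, N_VECTOR)
    image = scaled_images[:, :-1, :, :]           # (batch, H - 1, W, C)
    layer_1 = activ(conv(image, 'c1', n_filters=32, filter_size=8, stride=4,
                         init_scale=np.sqrt(2), **kwargs))
    layer_2 = activ(conv(layer_1, 'c2', n_filters=64, filter_size=4, stride=2,
                         init_scale=np.sqrt(2), **kwargs))
    flat = conv_to_fc(layer_2)
    img_latent = activ(linear(flat, 'fc1', n_hidden=256))
    # Concatenate the image embedding with the recovered 1D vector
    mixed = tf.concat([img_latent, vector], axis=1)
    return activ(linear(mixed, 'fc2', n_hidden=256))

# model = PPO2('CnnPolicy', env, policy_kwargs=dict(cnn_extractor=mixed_extractor))
```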
Araffin, Miffyli, thank you for your answers. Actually, I knew about the issue you linked, but I opened a new one for two reasons
As discussed in #133, true multi-modal observations are not currently possible, and you have to resort to this kind of dirty hack for now. However, this is the very next thing on the to-do list after TF2 support, which is slowly getting there but is currently on hiatus due to the holidays :)
Closing in favor of #133
Hello everyone, I would like to create an algorithm to train a multi-sensor agent using your DRL framework. What I have in mind is concatenating one or more convolutional layers, whose inputs could be cameras or a lidar sensor, with 1D arrays from other sensors (such as GPS). It looks like I should add an option to inputs.py and a custom model to manage this kind of environment. Would this be enough? Do you have any suggestions? Thanks, Simone
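As an env-side counterpart to the extractor sketch above, a gym `ObservationWrapper` along these lines could do the packing (a sketch only; `PackVectorWrapper` is hypothetical, and it assumes the wrapped env returns a dict observation with `image` and `vector` entries, with the vector already rescaled to [0, 1]):

```python
import gym
import numpy as np

class PackVectorWrapper(gym.ObservationWrapper):
    """Hypothetical wrapper: pack a 1D sensor vector into an extra image row
    so a standard CnnPolicy observation space can carry both modalities."""

    def __init__(self, env, n_vector=4):
        super().__init__(env)
        self.n_vector = n_vector
        h, w, c = env.observation_space.spaces['image'].shape
        # One extra row at the bottom stores the quantized vector.
        self.observation_space = gym.spaces.Box(
            low=0, high=255, shape=(h + 1, w, c), dtype=np.uint8)

    def observation(self, obs):
        image, vector = obs['image'], obs['vector']
        h, w, c = image.shape
        packed = np.zeros((h + 1, w, c), dtype=np.uint8)
        packed[:h] = image
        # Quantize to uint8; assumes vector values are already in [0, 1].
        packed[h, :self.n_vector, 0] = (np.clip(vector, 0.0, 1.0) * 255).astype(np.uint8)
        return packed
```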