-
Hi,
Does this work with vectorized environments (e.g. those created with gym.vector.AsyncVectorEnv)?
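For context, a vectorized environment batches several independent environment instances behind a single reset/step API that takes and returns lists of values. The toy sketch below (plain Python, not Gym's actual AsyncVectorEnv implementation; `ToyEnv` and `ToyVectorEnv` are hypothetical names for illustration) shows the batching and auto-reset behavior that such wrappers provide:

```python
# Toy illustration of the vectorized-environment idea: several
# independent env instances stepped behind one batched API.
# Not gym.vector.AsyncVectorEnv itself, just the core concept.

class ToyEnv:
    """Trivial counter environment: the episode ends after 3 steps."""
    def reset(self):
        self.t = 0
        return self.t  # observation

    def step(self, action):
        self.t += 1
        done = self.t >= 3
        return self.t, 1.0, done, {}  # obs, reward, done, info

class ToyVectorEnv:
    """Steps a list of envs in lockstep and returns batched results.

    Finished sub-environments are reset automatically, mirroring the
    auto-reset behavior of Gym's vector environments."""
    def __init__(self, env_fns):
        self.envs = [fn() for fn in env_fns]

    def reset(self):
        return [env.reset() for env in self.envs]

    def step(self, actions):
        obs, rewards, dones = [], [], []
        for env, action in zip(self.envs, actions):
            o, r, d, _ = env.step(action)
            if d:
                o = env.reset()  # auto-reset on episode end
            obs.append(o)
            rewards.append(r)
            dones.append(d)
        return obs, rewards, dones

vec = ToyVectorEnv([ToyEnv for _ in range(2)])
batched_obs = vec.reset()               # [0, 0]
obs, rew, done = vec.step([None, None])  # obs [1, 1], done [False, False]
```

Whether a given training loop supports this depends on whether it expects single observations or batches of them from `step`.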
Best,
Raymond
-
Thanks for creating this easy-to-use environment for urban scenarios.
I wanted to use this environment for multi-agent learning. Currently, only single-agent learning is supported. Are there any plans for…
-
## 🚀 Feature
Implementation of the PPO RL algorithm
### Motivation
As brought up in issue [#186](https://github.com/PyTorchLightning/pytorch-lightning-bolts/issues/186), the RL section of bolts cur…
-
**This issue is meant to be updated, as the list of changes is not exhaustive**
Dear all,
Stable-Baselines3 beta is now out :tada:! This issue is meant to reference what is implemented and what …
-
Hi all,
I was wondering whether you would be interested in adding an async Rainbow DQN that fits into this framework. I would like to contribute toward such a feature and ideally have it be there f…
-
_Problem description:_
Suppose we can instantiate several environment simulators with predefined dynamics (source, or train, tasks) and an instance of an environment with slightly modified dynamics (targ…
-
### New Issue Checklist
- [x] I have read the [Contribution Guidelines](https://github.com/tensorlayer/tensorlayer/blob/master/CONTRIBUTING.md)
- [x] I searched for [existing GitHub issues](http…
-
Hi @hsahovic ,
I have been working with the Poke-Env environment for a couple of months as the experimental basis for my Bachelor's thesis; I very much appreciate the work done here, as it's a novel…
-
## Summary
Implementation of more RL algorithms.
## Motivation
The only out-of-the-box algorithm provided in v0.1 is DQN, and it still leaves something to be desired, as there is…
-
Hi, thank you for your great work!
I'm interested in contributing to Stable-Baselines3.
I want to implement SAC-Discrete ([paper](https://arxiv.org/abs/1910.07207), [my implementation](https://git…