-
### Describe the bug
I guess it's a bug?
(using a custom environment)
I had an environment that is old-API compatible (e.g. `reset` returns just `obs` instead of `(obs, {})`, etc.), and registered through g…
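The mismatch described above is the Gym/Gymnasium `reset` signature change: the old API returned only the observation, while the new API returns `(observation, info)`. A minimal sketch of a compatibility shim, using hypothetical classes rather than the actual environment from this report:

```python
class OldStyleEnv:
    """Stand-in for an old-API env: reset() returns only the observation."""
    def reset(self):
        return [0.0, 0.0]

class ResetCompatWrapper:
    """Adapts an old-style env so reset() returns (obs, info),
    matching the new Gym/Gymnasium API."""
    def __init__(self, env):
        self.env = env

    def reset(self, **kwargs):
        obs = self.env.reset()
        return obs, {}  # new API: (observation, info dict)

env = ResetCompatWrapper(OldStyleEnv())
obs, info = env.reset()
print(obs, info)  # [0.0, 0.0] {}
```

In practice, Gymnasium ships its own compatibility wrappers for old-API environments; this sketch only illustrates why code expecting the new two-value return breaks on an old-style `reset`.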
-
Hi everyone,
I'm interested in how the environment in this project performs under other reinforcement learning algorithms. But I'm new to reinforcement learning and not yet capable of implementing other reinf…
-
## Steps to Reproduce
1. `flutter channel dev`
2. `flutter upgrade`
3. Create a new project
4. Add `webview_flutter` to your dependencies:
   ```yaml
   dependencies:
     webview_flutter: ^0.3.14+1
   ```
5. Copy the…
-
Environment:
- Docker with Ubuntu 16.04 & Python 3.6.10 & Ray 0.8.5
- Population Based Training scheduler alongside PPO
- A custom OpenAI Gym environment of mine, registered within RLlib
Hello everyone,
First o…
-
## Steps to Reproduce
1. Run app for iOS simulator device
2. Get the following message in the Debug Console:
```bash
Launching lib/main.dart on iPhone 11 Pro Max in debug mode...
Xcode build …
```
-
### Improvement 🔧
I took a look at the new version of **OpenAI gym**, which is not backwards compatible because it changes the shape of the return value of the step method.
It turns out that they …
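Concretely, the breaking change is that `step` now returns a 5-tuple `(obs, reward, terminated, truncated, info)` instead of the old 4-tuple `(obs, reward, done, info)`. A minimal sketch of an adapter, using an illustrative stand-in env rather than a real Gym environment:

```python
class NewApiEnv:
    """Stand-in env using the new step API (Gym >= 0.26)."""
    def step(self, action):
        obs, reward = 0.0, 1.0
        terminated, truncated = False, False
        return obs, reward, terminated, truncated, {}

def step_old_api(env, action):
    """Collapse the new 5-tuple back into the old 4-tuple:
    done is the OR of terminated and truncated."""
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
    return obs, reward, done, info

obs, reward, done, info = step_old_api(NewApiEnv(), 0)
print(len((obs, reward, done, info)))  # 4
```

Note that collapsing `terminated`/`truncated` into a single `done` flag loses the distinction between an episode ending naturally and being cut off by a time limit, which matters for bootstrapping the value of the final state.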
-
Hi repository owners @araffin @vwxyzjn!
I would like to ask you a few questions about how the environment is set up in your train_ppo.py script.
Let me paste it here for eas…
-
I'm trying to use [examples/python/learning_stable_baselines.py](https://github.com/mwydmuch/ViZDoom/tree/master/examples/python/learning_stable_baselines.py). I have some questions about this example. By the …
-
I noticed that the related work in your group is developed on the Ray framework. Could you please compare the advantages and disadvantages of pymarl versus Ray when using metadrive?
I am new to R…
-
A little explanation of what a transformer is:
https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)#:~:text=The%20Transformer%20is%20a%20deep,as%20translation%20and%20text%20summari…