-
Currently, soccer-twos-env relies on mlagents v0.27.0, which serves as a wrapper for the Unity-based soccer-twos game. However, we have observed that mlagents is an unstable dependency that frequently…
-
### Feature Description
Given the capabilities of Aerostack2, it would be well suited to implement a reinforcement learning API with different default environments for applying RL to control problem…
-
Asynchronous parallel training like A3C is supported by ChainerRL, but synchronous parallel training, where multiple actors interact with their own environments in a synchronous manner, is not support…
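To illustrate the requested mode: in synchronous parallel training the learner steps all actors in lock-step and receives one aligned batch of transitions per step, instead of A3C-style workers updating asynchronously. A minimal sketch of that stepping pattern, using a hypothetical `ToyEnv` and `synchronous_step` helper (not ChainerRL API):

```python
import random

class ToyEnv:
    """Hypothetical stand-in environment: reach state 5 to finish."""
    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        self.state += action            # action is 0 or 1
        done = self.state >= 5
        reward = 1.0 if done else 0.0
        return self.state, reward, done

def synchronous_step(envs, actions):
    """Step every environment in lock-step and collect results.

    Unlike asynchronous workers, all actors advance together, so the
    learner sees one aligned batch of transitions per global step.
    """
    results = []
    for env, action in zip(envs, actions):
        obs, reward, done = env.step(action)
        if done:                        # reset finished envs in place
            obs = env.reset()
        results.append((obs, reward, done))
    return results

envs = [ToyEnv() for _ in range(4)]
observations = [env.reset() for env in envs]
for _ in range(10):
    actions = [random.choice([0, 1]) for _ in envs]
    batch = synchronous_step(envs, actions)
    observations = [obs for obs, _, _ in batch]
```

In a real implementation the per-environment loop would typically be replaced by subprocess workers behind a vectorized-env interface, but the synchronization contract is the same: one batch of observations in, one batch of transitions out.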
-
When `CUDA_VISIBLE_DEVICES='0' catalyst-rl run-trainer --config configs/config.yml` is executed during `run-training.sh`, `catalyst.utils.tools.registry.RegistryException: No factory with name 'CoppeliaS…`
-
We are seeing that users will close and re-open the debugger for every change during their development cycle just to make sure everything works with a complete initialization. This is tedious and we s…
-
Hello!
Using the example code from the [RL tutorial](http://pybrain.org/docs/tutorial/reinforcement-learning.html), I get the following error:
```
xxx/lib/python3.6/site-packages/PyBrain-0.…
-
- [ ] [LlamaGym/README.md at main · KhoomeiK/LlamaGym](https://github.com/KhoomeiK/LlamaGym/blob/main/README.md?plain=1)
# LlamaGym/README.md at main · KhoomeiK/LlamaGym
DESCRIPTION:
Fine-tune LL…
-
Hello,
I am working on an RL project where I want to use the ACER algorithm on continuous action-space problems (PyBullet environments), but I have difficulties implementing it using your framewor…
-
Hi, thanks for sharing this wonderful work. From your code, in all your learning-based algorithms the total-reward calculation is based on `instance_done`, which means your reward is only the rewa…
-
Hi, would it be possible for gym-softrobot to be upgraded from gym to gymnasium? Gymnasium is the maintained version of OpenAI Gym and is compatible with current RL training libraries ([rllib](https:/…
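For context on the requested upgrade, the two Gymnasium API changes that usually matter when porting from legacy gym are: `reset()` returns `(observation, info)` instead of a bare observation, and `step()` returns a 5-tuple in which the old `done` flag is split into `terminated` and `truncated`. A minimal sketch with a dummy environment (the class name and episode logic are illustrative, not gym-softrobot code):

```python
class DummyGymnasiumEnv:
    """Toy environment following Gymnasium's API conventions."""

    def reset(self, seed=None):
        # Gymnasium: reset returns (observation, info), not just observation
        self.t = 0
        return 0, {}

    def step(self, action):
        # Gymnasium: step returns a 5-tuple; the old `done` flag is split
        # into `terminated` (natural end) and `truncated` (e.g. time limit)
        self.t += 1
        obs = self.t
        reward = 1.0
        terminated = False
        truncated = self.t >= 3   # illustrative 3-step time limit
        return obs, reward, terminated, truncated, {}

env = DummyGymnasiumEnv()
obs, info = env.reset(seed=0)
done = False
total_reward = 0.0
while not done:
    obs, reward, terminated, truncated, info = env.step(0)
    total_reward += reward
    done = terminated or truncated
```

Training libraries that target Gymnasium (e.g. current rllib and Stable-Baselines3 releases) expect exactly this tuple shape, which is why legacy-gym environments fail their environment checks without a compatibility shim.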