-
Currently, soccer-twos-env relies on mlagents v0.27.0, which serves as a wrapper for the Unity-based soccer-twos game. However, we have observed that mlagents is an unstable dependency that frequently…
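If pinning turns out to be an acceptable stopgap (an assumption on our part; the `setup.py` layout below is only illustrative, not the project's actual packaging config), the wrapper could declare the exact mlagents release it was developed against:

```python
# Illustrative setup.py sketch: pin mlagents to the release the wrapper was
# developed against, so breakage in newer mlagents versions does not leak
# into fresh installs.
from setuptools import find_packages, setup

setup(
    name="soccer-twos-env",            # placeholder package name
    packages=find_packages(),
    install_requires=[
        "mlagents==0.27.0",            # exact pin to the known-working version
    ],
)
```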
-
Hello everyone,
I ran into an error I can't identify while training my custom Hexapod with reinforcement learning.
First, I get this warning: `[Warning] [omni.ujitso] UJITSO: Build storage validat…
-
Hi, would it be possible for MarsExplorer to be upgraded from gym to gymnasium? Gymnasium is the maintained version of OpenAI Gym and is compatible with current RL training libraries ([rllib](https://…
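For concreteness, here is a rough sketch (not MarsExplorer's actual code; the class and shapes are made up) of what the env-side changes usually involve: gymnasium's `reset` accepts a `seed` keyword and returns `(obs, info)`, and `step` returns a five-tuple with separate `terminated` and `truncated` flags.

```python
import gymnasium as gym
import numpy as np


class MarsExplorerEnvSketch(gym.Env):
    """Hypothetical skeleton showing the gymnasium API surface, not the real env."""

    def __init__(self):
        self.observation_space = gym.spaces.Box(-1.0, 1.0, shape=(4,), dtype=np.float32)
        self.action_space = gym.spaces.Discrete(4)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)                       # seeds self.np_random
        obs = np.zeros(4, dtype=np.float32)
        return obs, {}                                 # (obs, info) instead of just obs

    def step(self, action):
        obs = np.zeros(4, dtype=np.float32)
        reward = 0.0
        terminated = False                             # episode ended by the task itself
        truncated = False                              # episode cut off by a time limit
        return obs, reward, terminated, truncated, {}  # 5-tuple instead of 4-tuple
```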
-
I've been digging into Brax as a potential alternative to some modified dm_control environments I've been using, and I'm really loving the speedup! That said, I feel like I've run into a major issue usi…
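For context, here is roughly how I'm driving the envs (a standard Brax-style sketch; the env name is a placeholder for my modified tasks): reset and step are pure functions that can be jit-compiled, which is where the speedup over dm_control comes from.

```python
import jax
from brax import envs

# Placeholder environment; the real setup replaces my modified dm_control tasks.
env = envs.create(env_name="ant")

jit_reset = jax.jit(env.reset)
jit_step = jax.jit(env.step)

rng = jax.random.PRNGKey(0)
state = jit_reset(rng)                      # functional reset: returns a State pytree

for _ in range(10):
    rng, key = jax.random.split(rng)
    action = jax.random.uniform(key, (env.action_size,), minval=-1.0, maxval=1.0)
    state = jit_step(state, action)         # functional step: new State, no mutation
```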
-
Hello, how can this situation be resolved? It is a problem with the code in Chapter 20.
-
This is a feature request.
I was playing around with rl_coach and noticed that it automatically generates a JSON file containing all of an experiment's parameters. It's really helpful for repeatability and rec…
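Something along these lines would already cover most of what I mean (a rough sketch, not rl_coach's actual schema; the helper name and fields are made up for illustration): dump whatever hyperparameters the experiment was launched with to a JSON file next to the results.

```python
import json
import time
from pathlib import Path


def save_experiment_params(params: dict, out_dir: str) -> Path:
    """Write the experiment's parameters to a JSON file for later repeatability.

    `params` is whatever dict of hyperparameters the run was launched with;
    the file name and layout here are illustrative only.
    """
    out_path = Path(out_dir) / "experiment_params.json"
    out_path.parent.mkdir(parents=True, exist_ok=True)
    record = {"saved_at": time.strftime("%Y-%m-%dT%H:%M:%S"), "params": params}
    out_path.write_text(json.dumps(record, indent=2, sort_keys=True, default=str))
    return out_path


# Example usage with made-up hyperparameters:
save_experiment_params(
    {"algorithm": "SARSA", "learning_rate": 3e-4, "discount": 0.99, "episodes": 500},
    out_dir="results/run_001",
)
```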
-
Hello xuxin, thank you for your open-source work; it has been a great help to me. However, during step 2, the distillation process, I set up 4096 environments and trained for 5000 rounds. Af…
-
Hi, would it be possible for gym-softrobot to be upgraded from gym to gymnasium? Gymnasium is the maintained version of OpenAI Gym and is compatible with current RL training libraries ([rllib](https:/…
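For reference, the consumer side after such an upgrade looks roughly like this (a sketch; the environment ID below is a placeholder, not one of gym-softrobot's registered IDs): training loops unpack the five-tuple and treat `terminated or truncated` as the episode boundary.

```python
import gymnasium as gym

# Placeholder ID; gym-softrobot's real registered IDs would go here after the upgrade.
env = gym.make("CartPole-v1")

obs, info = env.reset(seed=0)               # gymnasium: reset returns (obs, info)
episode_return = 0.0

done = False
while not done:
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    episode_return += reward
    done = terminated or truncated          # either flag ends the episode

env.close()
print(f"episode return: {episode_return:.1f}")
```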
-
Hi,
In most RL implementations, at the start of each episode the environment (in SARSA code, for instance: `state = env.reset()`) is reset to the initial state (i.e. the same start point and goals …
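To make the question concrete, here is the kind of episode loop I mean (a generic sketch, not tied to any particular SARSA implementation): `env.reset()` is called once per episode, and whether it returns the exact same start state or a randomized one depends on the environment and on the seed passed to it.

```python
import gymnasium as gym

env = gym.make("Taxi-v3")        # placeholder tabular task with a randomized reset

for episode in range(5):
    # In older gym code this is just `state = env.reset()`.
    # Passing a fixed seed makes every episode start from the same state;
    # omitting it lets Taxi-v3 sample a new start configuration each episode.
    state, info = env.reset(seed=42)
    terminated = truncated = False
    while not (terminated or truncated):
        action = env.action_space.sample()    # stand-in for the SARSA epsilon-greedy policy
        state, reward, terminated, truncated, info = env.step(action)

env.close()
```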
-
Could you please share the Unity scenes for the static and dynamic Vehicle environments? The environments for these are not compatible with the Unity_ML_Agent code. If we try to train using any of the RL_Al…