-
Hey Kevin,
I hope you are doing well. I noticed a small bug where the step function returns only `obs, reward, done, info` instead of `obs, reward, terminated, truncated, info`. I came across th…
-
### Required prerequisites
- [X] I have read the documentation.
- [X] I have searched the [Issue Tracker](https://github.com/PKU-Alignment/omnisafe/issues) and [Discussions](https://github.com/PKU-A…
-
Hi, I found several environment errors on the official Gym documentation website; take ```half_cheetah``` (https://www.gymlibrary.dev/environments/mujoco/half_cheetah/) as an example:
![screenshot-2024-04-3…
-
Hi,
Congrats on this MuZero implementation.
How can I run a Gym environment using your code? Is it possible?
Or do I need to convert my environment to be NLE-compatible?
Thanks in advance.
-
Hello. I'm using gym-pybullet-drones as my simulation platform to design and test collision avoidance algorithms for agile drones. I would like to create a custom simulation environment filled with cl…
-
Does anyone have a way to rectify this issue? Thanks
-
Hello, Tianying and Yongyuan.
When running ACE with the default parameters in the Adroit Hand environment, I've noticed that even after several rounds of policy learning, it fails to achieve results comp…
-
Hi, I want to change the Panda to a UR3 when using env = gym.make('reach_target-state-v0', render_mode='human'). But it seems that even if I make the change in env = Environment(
action_mode,…
-
I built an OpenAI Gym Environment on top of `sapai` that can be used for reinforcement learning. I thought I would link it here since most people here are probably interested in building ML/AI for Sup…
-
The Gym API supports MultiDiscrete action spaces:
https://github.com/openai/gym/blob/master/gym/spaces/multi_discrete.py
This is useful when you want to discretize a continuous control problem, a tec…
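A minimal sketch of that discretization idea, using `gym.spaces.MultiDiscrete` from the linked module (the bin-to-value mapping and the `[-1, 1]` range are illustrative assumptions, not part of the Gym API):

```python
import numpy as np
from gym.spaces import MultiDiscrete

# Three continuous control axes, discretized into 5, 3, and 2 bins each.
space = MultiDiscrete([5, 3, 2])
action = space.sample()  # e.g. array([2, 0, 1])

# Hypothetical mapping from bin indices back to continuous values in
# [-1, 1]: bin 0 -> low, bin (n - 1) -> high, evenly spaced between.
low, high = -1.0, 1.0
continuous = low + action * (high - low) / (space.nvec - 1)
```

Each dimension of the sampled action is an independent discrete choice, which is what makes MultiDiscrete a natural fit for per-axis discretization of a continuous controller.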