-
What are the equations of motion for the robot in FetchReach-v1 and the other Fetch* environments? I want to apply iLQR to these environments. Can anyone help with that? Thanks
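As far as I know, the Fetch* tasks have no published closed-form equations of motion; the dynamics are whatever MuJoCo computes internally. A common workaround for iLQR is to linearize the simulator numerically around a nominal trajectory via finite differences. The sketch below uses a toy double-integrator as a stand-in for the simulator step; for the real environment you would replace `f` with a function that sets the MuJoCo state, steps the sim, and reads the state back (in old mujoco-py-based gym versions, via `env.sim.get_state()` / `env.sim.set_state()` — version-dependent, so treat that as an assumption). The step size `DT` and the state/control layout here are illustrative.

```python
import numpy as np

DT = 0.05  # integration step (illustrative; the real env uses MuJoCo's dt)

def f(x, u):
    """Stand-in dynamics: 2D point mass, state = [pos, vel], control = accel."""
    pos, vel = x[:2], x[2:]
    return np.concatenate([pos + DT * vel, vel + DT * u])

def linearize(f, x, u, eps=1e-5):
    """Central finite-difference Jacobians A = df/dx, B = df/du at (x, u)."""
    n, m = x.size, u.size
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x + dx, u) - f(x - dx, u)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x, u + du) - f(x, u - du)) / (2 * eps)
    return A, B

# Linearize around a nominal (x0, u0); iLQR repeats this along the trajectory.
x0 = np.array([0.0, 0.0, 1.0, -1.0])
u0 = np.array([0.5, 0.5])
A, B = linearize(f, x0, u0)
```

The resulting `A`, `B` matrices feed directly into the iLQR backward pass; the same `linearize` helper works unchanged once `f` wraps the simulator.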
-
**Describe the bug**
FetchReach-v1 seems to change the target position between step 200 and step 220. I present two images below.
Shouldn't the goal be the same throughout the episode?
**…
-
Rendering alternately in `human` and `rgb_array` mode results in a segfault.
MWE:
```python
import gym
env = gym.make("FetchReach-v1")
env.render("human") # Renders properly
env.rende…
-
TD3 proved to be better than DDPG (NOPE!)
https://towardsdatascience.com/td3-learning-to-run-with-ai-40dfc512f93
-
Dear Jiayi,
Related to issue #532: when the observation space is a Dict, how do I get the input dimension for a network model, since the code line (state_shape = env.observation_space.shape or env.obs…
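For Dict spaces, `env.observation_space.shape` is `None`; the usual approach is to sum the flattened sizes of the sub-spaces (this is what `gym.spaces.flatdim(env.observation_space)` computes for Box sub-spaces). A dependency-free sketch, with shapes mimicking FetchReach-v1's Dict space (the keys are the real ones; the exact sizes may differ by version):

```python
import numpy as np

# Shapes standing in for the Box sub-spaces of a Fetch-style Dict space.
obs_shapes = {
    "observation": (10,),
    "achieved_goal": (3,),
    "desired_goal": (3,),
}

def flat_dim(shapes):
    """Total network input dimension after concatenating all sub-observations."""
    return int(sum(np.prod(s) for s in shapes.values()))

def flatten_obs(obs, keys):
    """Concatenate sub-observations in a fixed key order for the network."""
    return np.concatenate([np.ravel(obs[k]) for k in keys])

state_dim = flat_dim(obs_shapes)  # 16 for the shapes above
```

Keeping the key order fixed in `flatten_obs` matters: the network's input layout must be identical at training and inference time.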
-
### Question
I want to change the distance_threshold value to increase the difficulty of the Fetch task.
As with the Hand environments, I can use `env = gym.make('HandReach-v0', distance_threshold=0.001)`, but `env …
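In versions where `gym.make` does not forward `distance_threshold` for the Fetch tasks, a commonly suggested workaround (version-dependent, so treat it as an assumption) is to set the attribute on the unwrapped env, e.g. `env.unwrapped.distance_threshold = 0.01`. The attribute only enters through the sparse reward, which is simple to state; a sketch mirroring the gym-robotics `compute_reward` logic (names here are illustrative):

```python
import numpy as np

def goal_distance(a, b):
    """Euclidean distance between achieved and desired goal positions."""
    return np.linalg.norm(a - b, axis=-1)

def compute_reward(achieved_goal, desired_goal, distance_threshold=0.05):
    """Sparse reward: 0 within the threshold (success), -1 otherwise."""
    d = goal_distance(achieved_goal, desired_goal)
    return -(d > distance_threshold).astype(np.float32)

g = np.array([0.0, 0.0, 0.0])
near = np.array([0.0, 0.0, 0.04])
# With the default 5 cm threshold, `near` counts as success (reward 0);
# with a 1 cm threshold it does not (reward -1) — a smaller threshold
# makes success rarer, i.e. the task harder.
```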
-
First of all, thank you very much for writing a PyTorch implementation of DDPG+HER.
I found that this implementation works very well for all of the Fetch environments available in gym.
Example: Fe…
-
## Describe the bug
Cannot load pre-trained PPO model in script "train_rl.py"
## System Specifications
- Ubuntu Version: 18.04
- Gym: 0.19
- Stable Baselines 3: 1.2.0
- highway_env: latest
…
-
I am trying to train FetchPickAndPlace with DDPG+HER, following https://arxiv.org/pdf/1802.09464.pdf; however, regardless of how long I train, the agent fails to learn anything. I saw that #198 mentioned tha…
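A frequent failure mode in DDPG+HER setups is the relabeling step itself, so it can be worth checking it in isolation. Below is a minimal sketch of HER's "final" strategy: replay a failed episode as if the goal had been the goal actually achieved at the end, turning uninformative -1 rewards into 0 rewards near the end of the episode. The transition layout and names are illustrative, not any particular repo's replay buffer.

```python
import numpy as np

def sparse_reward(achieved, goal, threshold=0.05):
    """Fetch-style sparse reward: 0 within threshold, -1 otherwise."""
    return -(np.linalg.norm(achieved - goal, axis=-1) > threshold).astype(np.float32)

def her_relabel_final(episode):
    """Relabel an episode with the 'final' strategy.

    episode: list of dicts, each with 'achieved_goal' and 'desired_goal'.
    Returns copies whose desired_goal is the last achieved goal, with the
    reward recomputed against that substituted goal.
    """
    final_goal = episode[-1]["achieved_goal"]
    relabeled = []
    for t in episode:
        t2 = dict(t)  # shallow copy; do not mutate the stored transition
        t2["desired_goal"] = final_goal
        t2["reward"] = sparse_reward(t["achieved_goal"], final_goal)
        relabeled.append(t2)
    return relabeled
```

A useful sanity check: after relabeling, the last transition of every episode must have reward 0, since its achieved goal equals the substituted goal by construction.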
-
Firstly, thanks for the great open-source environments! I was having lots of issues with the MuJoCo licence, so these are very useful.
I noticed that the state spaces of these pmg tasks contain less…