-
Hi,
(1)
I am trying to reduce the required samples per epoch to make each epoch go faster. It seems that garage has a default TotalEnvSteps (to let each worker finish 128 periods?). I thought I …
-
Hello, I tried one of the most basic OpenAI Gym examples, trying to create the FetchReach-v1 env like this:
```
import gym
env = gym.make("FetchReach-v1")
```
However, the code didn't work and gave th…
-
The action space for `FetchReach-v1` is `Box(4,)`. When I print the actions I get:
`[ 0.11746755 -0.56623757 -0.05967733 0.36495462]`
According to the HER paper, the first 3 dimensions specify…
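A minimal sketch of how such a 4-d action could be interpreted, assuming (as the Fetch environments do) that the first 3 entries are a Cartesian displacement of the end effector and the 4th drives the gripper, all bounded to [-1, 1] by the Box space. The helper name is hypothetical:

```python
def split_fetch_action(action):
    """Clip a raw 4-d action to the assumed [-1, 1] Box bounds and split it."""
    clipped = [max(-1.0, min(1.0, a)) for a in action]
    dxyz = clipped[:3]    # end-effector displacement (x, y, z)
    gripper = clipped[3]  # gripper open/close command
    return dxyz, gripper

dxyz, gripper = split_fetch_action(
    [0.11746755, -0.56623757, -0.05967733, 0.36495462])
print(dxyz, gripper)
```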
-
We really appreciate the effort you put into creating the "gym" interface of the dVRkit. However, we cannot run the simple DDPG+HER algorithm with your simulated environment. Since you have built on to…
-
Hi,
I was trying to run examples/tf/[her_ddpg_fetchreach.py](https://github.com/rlworkgroup/garage/blob/master/examples/tf/her_ddpg_fetchreach.py) but got much worse performance. I attached the r…
-
FetchReach-v1 has the following characteristics:
```
Action Space: Box(4,)
Observation Space: Dict(achieved_goal:Box(3,), desired_goal:Box(3,), observation:Box(10,))
```
I'm printing out the …
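A sketch of how such a Dict observation is commonly consumed: goal-conditioned agents (e.g. DDPG+HER) typically build a flat input vector by concatenating `observation` with `desired_goal`, keeping `achieved_goal` aside for reward recomputation when goals are relabeled. The helper and dummy values are illustrative, not garage's or baselines' actual API:

```python
def flatten_goal_obs(obs_dict):
    """Concatenate observation (10,) and desired_goal (3,) into a (13,) vector."""
    return list(obs_dict["observation"]) + list(obs_dict["desired_goal"])

obs = {
    "observation": [0.0] * 10,        # robot/gripper state, shape (10,)
    "achieved_goal": [0.1, 0.2, 0.3],  # kept separately for HER relabeling
    "desired_goal": [0.4, 0.5, 0.6],
}
flat = flatten_goal_obs(obs)
print(len(flat))  # 13
```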
-
Hi there,
Do you know what this mpirun error is about?
When I run: `mpirun -np 8 python -m baselines.run --alg=her --env=FetchReach-v1 --num_timesteps=2e7`
Error: node list format not recognized. Tr…
-
I am training HER on FetchReach for 10k iterations:
```
python3 -m baselines.run --alg=her --env=FetchReach-v1 --num_timesteps=10000 --save_path=~/models/testRum_HER_reach_10k --log_path=~/logs/HER/…
```
-
In OpenAI Gym `reward` is defined as:
> reward (float): amount of reward achieved by the previous action. The
> scale varies between environments, but the goal is always to increase
> your total…
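For contrast with the generic Gym definition quoted above, the Fetch robotics tasks use a sparse reward: -1 while the achieved goal is farther than a distance threshold from the desired goal, 0 once within it. A hedged stdlib-only sketch, assuming the 0.05 m default threshold I believe gym's robotics envs use:

```python
import math

def sparse_reward(achieved_goal, desired_goal, threshold=0.05):
    """Sparse Fetch-style reward: 0 when within `threshold` of the goal, else -1."""
    d = math.dist(achieved_goal, desired_goal)  # Euclidean distance, Python 3.8+
    return 0.0 if d <= threshold else -1.0

print(sparse_reward([0.0, 0.0, 0.0], [1.0, 0.0, 0.0]))   # -1.0
print(sparse_reward([0.0, 0.0, 0.0], [0.01, 0.0, 0.0]))  # 0.0
```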
-
How do I set the hyper-parameters to automatically read from a text file as seen in [rl-baselines-zoo](https://github.com/araffin/rl-baselines-zoo/blob/master/hyperparams/her.yml)?
Currently I am g…
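rl-baselines-zoo loads its hyperparameters from YAML (via PyYAML); a stdlib-only sketch of the same idea, parsing a simplified `key: value` text format instead. The file contents and keys below are illustrative, not the zoo's actual schema:

```python
import ast
import io

def load_hyperparams(fp):
    """Parse simple 'key: value' lines into a dict, ignoring comments."""
    params = {}
    for line in fp:
        line = line.split("#", 1)[0].strip()  # drop comments and blank lines
        if not line:
            continue
        key, _, value = line.partition(":")
        try:
            params[key.strip()] = ast.literal_eval(value.strip())
        except (ValueError, SyntaxError):
            params[key.strip()] = value.strip()  # fall back to raw string
    return params

text = """\
n_timesteps: 20000  # total env steps
batch_size: 256
policy: 'MlpPolicy'
"""
print(load_hyperparams(io.StringIO(text)))
```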