jsalfity-hplabs opened 5 years ago
I think the environments can change if you add `RoboschoolForwardWalkerMujocoXML.__init__(self, self.model_xml, 'torso', action_dim=6, obs_dim=26, power=0.9)` in `randomize_env(self)`. I could be wrong, because I'm using a different version of roboschool.
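To illustrate the pattern being suggested: `randomize_env` regenerates the model XML with new parameters, but the simulator only picks them up if the walker is re-initialized from that XML. Here is a minimal toy sketch of that idea; the class names and attributes below are hypothetical stand-ins, not the actual roboschool API:

```python
import re

# Toy stand-in for the re-initialization pattern suggested above.
class ForwardWalker:
    def __init__(self, model_xml):
        # In the real library, __init__ loads the model XML into the
        # simulator; parameters baked into the XML (e.g. density) only
        # take effect at this point.
        self.model_xml = model_xml
        self.loaded_density = self._parse_density(model_xml)

    @staticmethod
    def _parse_density(xml):
        # Crude extraction of density="..." for illustration only.
        m = re.search(r'density="([\d.]+)"', xml)
        return float(m.group(1)) if m else None


class RandomizedWalker(ForwardWalker):
    def randomize_env(self, new_density):
        # Regenerate the XML with the new density...
        self.model_xml = f'<body density="{new_density}"/>'
        # ...and re-run __init__ so the simulator actually reloads it.
        # Without this call, self.loaded_density keeps its old value.
        ForwardWalker.__init__(self, self.model_xml)


env = RandomizedWalker('<body density="1000.0"/>')
env.randomize_env(1000000.0)
print(env.loaded_density)  # 1000000.0
```

The point is only that without the re-call to `__init__`, the regenerated XML is never handed back to the engine, which would explain randomized parameters silently having no effect.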
Great work! We are trying to replicate your experiments.
Description of our setup: Ubuntu 16.04, with rl-generalization and Docker installed using the instructions in the README.
We came across some behavior that seems incorrect. We wanted to see the performance of HalfCheetah when varying only the density, so we ran
`python -m examples.run_experiments examples/test_density.yml /tmp/output`
with the following yml file, and with `SunblazeHalfCheetahRandomExtreme` modified only by setting the density to 1000000 in `mujoco.py`, as below:

Looking at the json output of `run_experiments`, the testing rewards of the `SunblazeHalfCheetah` model on both `SunblazeHalfCheetah` and `SunblazeHalfCheetahRandomExtreme` (with density manually set to 1000000) are nearly the same. The last 2 rewards of both testing environments are below:

How can we confirm the density is changing? It doesn't seem logical that the MuJoCo HalfCheetah simulation should be able to move at all given a density of 1000000, nor that it should have testing rewards similar to the nominal environment.
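One way to check whether the density actually reaches the simulator, assuming the randomized value is written into the MuJoCo model XML that the environment generates (the XML fragment below is a hypothetical example, not taken from the repo), is to parse that XML and list every `density` attribute:

```python
import xml.etree.ElementTree as ET

def densities_in_model_xml(xml_string):
    """Collect every density attribute in a MuJoCo model XML string.

    Generic sanity check: dump the XML your randomized environment hands
    to the engine and confirm the densities differ from the nominal model.
    """
    root = ET.fromstring(xml_string)
    found = {}
    for elem in root.iter():
        if 'density' in elem.attrib:
            # Key by the element's name attribute when present, else its tag.
            found[elem.get('name', elem.tag)] = float(elem.attrib['density'])
    return found

# Hypothetical fragment of a randomized HalfCheetah model:
sample = """
<mujoco>
  <worldbody>
    <body name="torso">
      <geom name="torso_geom" density="1000000" size="0.046"/>
    </body>
  </worldbody>
</mujoco>
"""
print(densities_in_model_xml(sample))  # {'torso_geom': 1000000.0}
```

If the dumped XML still shows the default density, the randomization is never making it into the model, which would explain the near-identical rewards.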