I am getting observation values greater than 1 when running train_rllib_agent.py (using the master branch of this repo). Here is the log:
2020-03-26 18:21:35,305 INFO resource_spec.py:212 -- Starting Ray with 3.27 GiB memory available for workers and up to 1.64 GiB for objects. You can adjust these settings with ray.init(memory=<bytes>, object_store_memory=<bytes>).
2020-03-26 18:21:36,115 INFO services.py:1078 -- View the Ray dashboard at localhost:8265
2020-03-26 18:21:36,904 INFO trainer.py:420 -- Tip: set 'eager': true or the --eager flag to enable TensorFlow eager execution
2020-03-26 18:21:37,054 INFO trainer.py:580 -- Current log_level is WARN. For more information, set 'log_level': 'INFO' / 'DEBUG' or use the -v and -vv flags.
/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/gym/logger.py:30: UserWarning: WARN: Box bound precision lowered by casting to float32
warnings.warn(colorize('%s: %s'%('WARN', msg % args), 'yellow'))
INFO:root:Specified torcs_server_port 60934 is not available. Searching for alternative...
INFO:root:torcs_server_port has been reassigned to 44366
INFO:root:-------------------------CURRENT TRACK:forza------------------------
UDP Timeout set to 10000000 10E-6 seconds.
Laptime limit disabled!
Noisy Sensors!
Waiting for request on port 44366
/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/utils/from_config.py:134: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
obj = yaml.load(type_)
(pid=17992) /home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/gym/logger.py:30: UserWarning: WARN: Box bound precision lowered by casting to float32
(pid=17992) warnings.warn(colorize('%s: %s'%('WARN', msg % args), 'yellow'))
(pid=17992) /home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/utils/from_config.py:134: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
(pid=17992) obj = yaml.load(type_)
Traceback (most recent call last):
File "train_rllib_agent.py", line 29, in <module>
result = trainer.train()
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/agents/trainer.py", line 494, in train
raise e
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/agents/trainer.py", line 483, in train
result = Trainable.train(self)
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/tune/trainable.py", line 254, in train
result = self._train()
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/agents/trainer_template.py", line 133, in _train
fetches = self.optimizer.step()
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/optimizers/multi_gpu_optimizer.py", line 137, in step
self.num_envs_per_worker, self.train_batch_size)
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/optimizers/rollout.py", line 25, in collect_samples
next_sample = ray_get_and_free(fut_sample)
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/utils/memory.py", line 29, in ray_get_and_free
result = ray.get(object_ids)
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/worker.py", line 1504, in get
raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(ValueError): ray::RolloutWorker.sample() (pid=17992, ip=192.168.0.52)
File "python/ray/_raylet.pyx", line 452, in ray._raylet.execute_task
File "python/ray/_raylet.pyx", line 430, in ray._raylet.execute_task.function_executor
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/evaluation/rollout_worker.py", line 488, in sample
batches = [self.input_reader.next()]
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/evaluation/sampler.py", line 52, in next
batches = [self.get_data()]
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/evaluation/sampler.py", line 95, in get_data
item = next(self.rollout_provider)
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/evaluation/sampler.py", line 315, in _env_runner
soft_horizon, no_done_at_end)
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/evaluation/sampler.py", line 404, in _process_observations
policy_id).transform(raw_obs)
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/models/preprocessors.py", line 162, in transform
self.check_shape(observation)
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/models/preprocessors.py", line 61, in check_shape
self._obs_space, observation)
ValueError: ('Observation outside expected value range', Box(60,), array([ 5.56550782e-08, 2.02869512e-02, 5.94304986e-02, 7.98150003e-02,
1.37273997e-01, 2.09871992e-01, 3.48023504e-01, 6.04140043e-01,
7.67295003e-01, 8.23809981e-01, 1.00119007e+00, 8.62675011e-01,
8.87974977e-01, 8.75970006e-01, 8.09940040e-01, 4.89742011e-01,
3.08934003e-01, 1.72173008e-01, 1.03631496e-01, 4.32050526e-02,
3.33332986e-01, 0.00000000e+00, 0.00000000e+00, -2.22880002e-04,
1.00978994e+00, 9.86819983e-01, 9.75160003e-01, 9.67469990e-01,
1.01222503e+00, 9.89215016e-01, 9.94629979e-01, 1.01731002e+00,
9.55635011e-01, 1.00125504e+00, 1.00168002e+00, 9.95970011e-01,
9.77665007e-01, 1.01389503e+00, 9.82594967e-01, 9.79219973e-01,
1.04600501e+00, 9.87020016e-01, 1.00498998e+00, 1.00105000e+00,
9.85194981e-01, 1.01610005e+00, 1.01473498e+00, 9.86200035e-01,
9.56799984e-01, 9.75569963e-01, 9.90504980e-01, 9.76555049e-01,
9.68820035e-01, 1.00439501e+00, 1.00737500e+00, 1.00959003e+00,
1.03243494e+00, 9.96204972e-01, 1.01459002e+00, 1.02346492e+00]))
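For reference, I believe the check that fails here is equivalent to testing each observation component against the declared Box bounds. The bounds below are my assumption (a 60-dimensional Box over [-1, 1]); the actual limits come from the environment's observation space, but a single noisy sensor value like 1.046 is enough to fail it:

```python
import numpy as np

# Sketch of the bounds check RLlib's preprocessor performs (assumed
# observation bounds of [-1, 1]; the real bounds live in the env's spec).
low, high = -1.0, 1.0
obs = np.array([0.5] * 59 + [1.046], dtype=np.float32)  # one noisy value above high

within_bounds = bool(np.all((obs >= low) & (obs <= high)))
print(within_bounds)  # False -> "Observation outside expected value range"
```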
2020-03-26 18:38:33,066 INFO resource_spec.py:212 -- Starting Ray with 2.69 GiB memory available for workers and up to 1.35 GiB for objects. You can adjust these settings with ray.init(memory=<bytes>, object_store_memory=<bytes>).
2020-03-26 18:38:33,488 INFO services.py:1078 -- View the Ray dashboard at localhost:8265
2020-03-26 18:38:33,794 INFO trainer.py:420 -- Tip: set 'eager': true or the --eager flag to enable TensorFlow eager execution
2020-03-26 18:38:33,874 INFO trainer.py:580 -- Current log_level is WARN. For more information, set 'log_level': 'INFO' / 'DEBUG' or use the -v and -vv flags.
/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/gym/logger.py:30: UserWarning: WARN: Box bound precision lowered by casting to float32
warnings.warn(colorize('%s: %s'%('WARN', msg % args), 'yellow'))
INFO:root:Specified torcs_server_port 60934 is not available. Searching for alternative...
INFO:root:torcs_server_port has been reassigned to 36193
INFO:root:-------------------------CURRENT TRACK:forza------------------------
UDP Timeout set to 10000000 10E-6 seconds.
Laptime limit disabled!
Noisy Sensors!
Waiting for request on port 36193
/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/utils/from_config.py:134: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
obj = yaml.load(type_)
(pid=19494) /home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/gym/logger.py:30: UserWarning: WARN: Box bound precision lowered by casting to float32
(pid=19494) warnings.warn(colorize('%s: %s'%('WARN', msg % args), 'yellow'))
(pid=19494) /home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/utils/from_config.py:134: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
(pid=19494) obj = yaml.load(type_)
Traceback (most recent call last):
File "train_rllib_agent.py", line 29, in <module>
result = trainer.train()
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/agents/trainer.py", line 494, in train
raise e
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/agents/trainer.py", line 483, in train
result = Trainable.train(self)
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/tune/trainable.py", line 254, in train
result = self._train()
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/agents/trainer_template.py", line 133, in _train
fetches = self.optimizer.step()
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/optimizers/multi_gpu_optimizer.py", line 137, in step
self.num_envs_per_worker, self.train_batch_size)
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/optimizers/rollout.py", line 25, in collect_samples
next_sample = ray_get_and_free(fut_sample)
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/utils/memory.py", line 29, in ray_get_and_free
result = ray.get(object_ids)
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/worker.py", line 1504, in get
raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(NameError): ray::RolloutWorker.sample() (pid=19494, ip=192.168.0.52)
File "python/ray/_raylet.pyx", line 452, in ray._raylet.execute_task
File "python/ray/_raylet.pyx", line 430, in ray._raylet.execute_task.function_executor
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/evaluation/rollout_worker.py", line 488, in sample
batches = [self.input_reader.next()]
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/evaluation/sampler.py", line 52, in next
batches = [self.get_data()]
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/evaluation/sampler.py", line 95, in get_data
item = next(self.rollout_provider)
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/evaluation/sampler.py", line 301, in _env_runner
base_env.poll()
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/env/base_env.py", line 308, in poll
self.new_obs = self.vector_env.vector_reset()
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/env/vector_env.py", line 96, in vector_reset
return [e.reset() for e in self.envs]
File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/env/vector_env.py", line 96, in <listcomp>
return [e.reset() for e in self.envs]
File "/home/saivinay/Documents/TORCS/MADRaS/MADRaS/envs/gym_madras.py", line 244, in reset
s_t = self.observation_manager.get_obs(self.ob, self._config)
File "/home/saivinay/Documents/TORCS/MADRaS/MADRaS/utils/observation_manager.py", line 20, in get_obs
full_obs = self.normalize_obs(full_obs, game_config)
File "/home/saivinay/Documents/TORCS/MADRaS/MADRaS/utils/observation_manager.py", line 31, in normalize_obs
key, key, key, key))
File "<string>", line 1, in <module>
NameError: name 'angle' is not defined
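The `File "<string>", line 1` frame suggests (this is my guess at the mechanism) that normalize_obs builds an expression string from each observation key and evaluates it, and the name "angle" is not defined in the scope where that string is evaluated. A minimal sketch of how that failure mode arises:

```python
# Hypothetical reproduction: evaluating generated source text fails with
# NameError when the key name is not a variable in the eval scope.
key = "angle"
expr = "{}/3.1416".format(key)  # produces the source text "angle/3.1416"
try:
    eval(expr)  # "angle" is not a defined name here
except NameError as e:
    print(e)  # name 'angle' is not defined
```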
When I set normalize = True in sim_options.yml, I get the NameError shown above instead.
Can someone help with this problem?