openai / universe-starter-agent

A starter agent that can solve a number of universe environments.
MIT License

AttributeError: 'VectorizeFilter' object has no attribute 'filter_n' #130

Closed wonchul-kim closed 6 years ago

wonchul-kim commented 6 years ago

I've downloaded the most recent universe and upgraded it with: pip install --upgrade universe

However, this error still comes up:

[2017-10-23 13:43:20,102] Writing logs to file: /tmp/universe-4101.log
2017-10-23 13:43:20.105568: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-10-23 13:43:20.105595: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-10-23 13:43:20.105605: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-10-23 13:43:20.105611: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-10-23 13:43:20.105616: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-10-23 13:43:20.109914: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job ps -> {0 -> 127.0.0.1:12222}
2017-10-23 13:43:20.109959: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job worker -> {0 -> localhost:12223, 1 -> 127.0.0.1:12224}
2017-10-23 13:43:20.110892: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:316] Started server with target: grpc://localhost:12223
[2017-10-23 13:43:20,111] Making new env: PongDeterministic-v4
[2017-10-23 13:43:21,207] Trainable vars:
[2017-10-23 13:43:21,207] global/l1/W:0 (3, 3, 1, 32)
[2017-10-23 13:43:21,209] global/l1/b:0 (1, 1, 1, 32)
[2017-10-23 13:43:21,209] global/l2/W:0 (3, 3, 32, 32)
[2017-10-23 13:43:21,209] global/l2/b:0 (1, 1, 1, 32)
[2017-10-23 13:43:21,209] global/l3/W:0 (3, 3, 32, 32)
[2017-10-23 13:43:21,209] global/l3/b:0 (1, 1, 1, 32)
[2017-10-23 13:43:21,209] global/l4/W:0 (3, 3, 32, 32)
[2017-10-23 13:43:21,209] global/l4/b:0 (1, 1, 1, 32)
[2017-10-23 13:43:21,209] global/rnn/basic_lstm_cell/kernel:0 (544, 1024)
[2017-10-23 13:43:21,209] global/rnn/basic_lstm_cell/bias:0 (1024,)
[2017-10-23 13:43:21,209] global/action/w:0 (256, 6)
[2017-10-23 13:43:21,209] global/action/b:0 (6,)
[2017-10-23 13:43:21,209] global/value/w:0 (256, 1)
[2017-10-23 13:43:21,210] global/value/b:0 (1,)
[2017-10-23 13:43:21,210] local/l1/W:0 (3, 3, 1, 32)
[2017-10-23 13:43:21,210] local/l1/b:0 (1, 1, 1, 32)
[2017-10-23 13:43:21,210] local/l2/W:0 (3, 3, 32, 32)
[2017-10-23 13:43:21,210] local/l2/b:0 (1, 1, 1, 32)
[2017-10-23 13:43:21,210] local/l3/W:0 (3, 3, 32, 32)
[2017-10-23 13:43:21,210] local/l3/b:0 (1, 1, 1, 32)
[2017-10-23 13:43:21,210] local/l4/W:0 (3, 3, 32, 32)
[2017-10-23 13:43:21,210] local/l4/b:0 (1, 1, 1, 32)
[2017-10-23 13:43:21,210] local/rnn/basic_lstm_cell/kernel:0 (544, 1024)
[2017-10-23 13:43:21,210] local/rnn/basic_lstm_cell/bias:0 (1024,)
[2017-10-23 13:43:21,211] local/action/w:0 (256, 6)
[2017-10-23 13:43:21,211] local/action/b:0 (6,)
[2017-10-23 13:43:21,211] local/value/w:0 (256, 1)
[2017-10-23 13:43:21,211] local/value/b:0 (1,)
[2017-10-23 13:43:21,211] Events directory: /tmp/pong/train_0
[2017-10-23 13:43:21,338] Starting session. If this hangs, we're mostly likely waiting to connect to the parameter server. One common cause is that the parameter server DNS name isn't resolving yet, or is misspecified.
2017-10-23 13:43:21.380463: I tensorflow/core/distributed_runtime/master_session.cc:999] Start master session 29dafb83b925e4f3 with config: intra_op_parallelism_threads: 1 device_filters: "/job:ps" device_filters: "/job:worker/task:0/cpu:0" inter_op_parallelism_threads: 2

[2017-10-23 13:43:21,750] Starting training at step=0
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/home/icsl/Downloads/universe-starter-agent-master/a3c.py", line 93, in run
    self._run()
  File "/home/icsl/Downloads/universe-starter-agent-master/a3c.py", line 102, in _run
    self.queue.put(next(rollout_provider), timeout=600.0)
  File "/home/icsl/Downloads/universe-starter-agent-master/a3c.py", line 112, in env_runner
    last_state = env.reset()
  File "/home/icsl/gym/gym/core.py", line 104, in reset
    return self._reset()
  File "/usr/local/lib/python2.7/dist-packages/universe/wrappers/vectorize.py", line 46, in _reset
    observation_n = self.env.reset()
  File "/home/icsl/gym/gym/core.py", line 104, in reset
    return self._reset()
  File "/usr/local/lib/python2.7/dist-packages/universe/vectorized/vectorize_filter.py", line 30, in _reset
    observation_n = [filter._after_reset(observation) for filter, observation in zip(self.filter_n, observation_n)]
AttributeError: 'VectorizeFilter' object has no attribute 'filter_n'
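For context, this traceback pattern usually means the wrapper's reset() ran before the attribute in question was ever created. In universe's vectorized wrappers, per-env state such as filter_n is set up during env.configure(), a step that newer gym releases no longer invoke, so a gym/universe version mismatch is a likely suspect. The sketch below is illustrative only (ExampleVectorizeFilter and its methods are hypothetical names, not the actual universe source); it just reproduces the same failure mode:

class ExampleVectorizeFilter(object):
    """Hypothetical wrapper mirroring the failure in the traceback above."""

    def __init__(self, env, filter_factory, n=1):
        self.env = env
        self.filter_factory = filter_factory
        self.n = n
        # Note: self.filter_n is deliberately NOT assigned here.

    def configure(self, **kwargs):
        # The per-env filters only exist after configure() has been called.
        self.filter_n = [self.filter_factory() for _ in range(self.n)]

    def reset(self):
        observation_n = self.env.reset()
        # If configure() was never called, the next line raises:
        # AttributeError: 'ExampleVectorizeFilter' object has no attribute 'filter_n'
        return [f._after_reset(ob) for f, ob in zip(self.filter_n, observation_n)]

If that is indeed the cause here, the usual workaround reported for this agent is to pin gym to an older release that still supports the configure() step (for example gym==0.9.5) rather than upgrading universe alone.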