spragunr / deep_q_rl

Theano-based implementation of Deep Q-learning
BSD 3-Clause "New" or "Revised" License
1.08k stars 348 forks

ValueError: total size of new array must be unchanged #8

Closed ghost closed 9 years ago

ghost commented 9 years ago

Hi Nathan,

I'm trying to get the minimal_actions branch working with a fresh pull. I've done a fresh install of ALE with the patch as well.

It's weird: now neither master nor the minimal_actions branch is working. Maybe you could take a look if you have a few minutes spare?

I'm guessing it's a problem with the ALE patch, since everything was fine until I rebuilt ALE fresh with it.

I get:

```
/usr/bin/python2.7 /home/ajay/PythonProjects/deep_q_rl-minimal_actions/deep_q_rl/ale_run.py
RL-Glue Version 3.04, Build 909
A.L.E: Arcade Learning Environment (version 0.4)
[Powered by Stella]
Use -help for help screen.
Warning: couldn't load settings file: ./stellarc
Game console created:
  ROM file:  /home/ajay/ale_0_4/roms/pong.bin
  Cart Name: Video Olympics (1978) (Atari)
  Cart MD5:  60e0ea3cbe0913d39803477945e9e5ec
  Display Format:  AUTO-DETECT ==> NTSC
  ROM Size:        2048
  Bankswitch Type: AUTO-DETECT ==> 2K
```

```
Running ROM file...
Random Seed: Time
Game will be controlled through RL-Glue.
RL-Glue Python Experiment Codec Version: 2.02 (Build 738)
Connecting to 127.0.0.1 on port 4096...
Initializing ALE RL-Glue ...
Using gpu device 0: GeForce GTX 570
RL-Glue Python Agent Codec Version: 2.02 (Build 738)
Connecting to 127.0.0.1 on port 4096...
Agent Codec Connected
(32, 4, 80, 80)
(4, 80, 80, 32)
(16, 19.0, 19.0, 32)
(32, 9.0, 9.0, 32)
(32, 32, 9.0, 9.0)
(32, 256)
(32, 18)
/home/ajay/bin/Theano-master/theano/gof/cmodule.py:289: RuntimeWarning: numpy.ndarray size changed, may indicate binary incompatibility
  rval = __import__(module_name, {}, {}, [module_name])
OPENING _01-28-04-27_0p0001_0p9/results.csv
training epoch: 1 steps_left: 50000
Traceback (most recent call last):
  File "./rl_glue_ale_agent.py", line 430, in <module>
    main()
  File "./rl_glue_ale_agent.py", line 426, in main
    AgentLoader.loadAgent(NeuralAgent())
  File "/usr/local/lib/python2.7/dist-packages/rlglue/agent/AgentLoader.py", line 58, in loadAgent
    client.runAgentEventLoop()
  File "/usr/local/lib/python2.7/dist-packages/rlglue/agent/ClientAgent.py", line 144, in runAgentEventLoop
    switch[agentState](self)
  File "/usr/local/lib/python2.7/dist-packages/rlglue/agent/ClientAgent.py", line 138, in <lambda>
    Network.kAgentStart: lambda self: self.onAgentStart(),
  File "/usr/local/lib/python2.7/dist-packages/rlglue/agent/ClientAgent.py", line 51, in onAgentStart
    action = self.agent.agent_start(observation)
  File "./rl_glue_ale_agent.py", line 245, in agent_start
    self.last_img = np.array(self._resize_observation(observation.intArray))
  File "./rl_glue_ale_agent.py", line 263, in _resize_observation
    img = observation.reshape(IMG_WIDTH, IMG_HEIGHT)
ValueError: total size of new array must be unchanged
Segmentation fault (core dumped)
training epoch: 1 steps_left: 49995
training epoch: 1 steps_left: 49993
training epoch: 1 steps_left: 49991
```
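For context, that ValueError is NumPy's reshape invariant: the total number of elements must be unchanged. If the patched ALE sends an observation whose `intArray` is not exactly `IMG_WIDTH * IMG_HEIGHT` elements long (for example because the patch appends extra data, or the build is mismatched), `reshape` fails exactly like this. A minimal sketch, with illustrative dimensions rather than the agent's actual ones:

```python
import numpy as np

# Illustrative values; the real IMG_WIDTH/IMG_HEIGHT live in rl_glue_ale_agent.py
IMG_WIDTH, IMG_HEIGHT = 160, 210

# An observation of the expected size reshapes fine: 160 * 210 == 33600 elements.
ok = np.zeros(IMG_WIDTH * IMG_HEIGHT, dtype=np.uint8)
img = ok.reshape(IMG_WIDTH, IMG_HEIGHT)
print(img.shape)  # (160, 210)

# An observation with extra trailing entries cannot be reshaped to the same grid.
padded = np.zeros(IMG_WIDTH * IMG_HEIGHT + 18, dtype=np.uint8)
try:
    padded.reshape(IMG_WIDTH, IMG_HEIGHT)
except ValueError as e:
    print("reshape failed:", e)
```

So a mismatch between the observation size the patched ALE reports and the size the agent expects would produce this traceback.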

ghost commented 9 years ago

YES, IT'S WORKING NOW!!!

With the output layer showing 6 neurons - great stuff :))) Will let you know how it goes :)

I tried it on another machine, so I probably made a mistake rebuilding ALE the first time.

```
(32, 9.0, 9.0, 32)
(32, 32, 9.0, 9.0)
(32, 256)
(32, 6)
/home/ajay/bin/Theano-master/theano/gof/cmodule.py:289: RuntimeWarning: numpy.ndarray size changed, may indicate binary incompatibility
  rval = __import__(module_name, {}, {}, [module_name])
OPENING _01-28-04-53_0p0001_0p9/results.csv
training epoch: 1 steps_left: 50000
Simulated at a rate of 51.6674622406/s
Average loss: 0.112534131997
training epoch: 1 steps_left: 49175
```
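The `(32, 6)` at the end is `(batch_size, num_actions)`: with the minimal-actions patch, Pong exposes only its 6 usable actions rather than the full 18-action set, so the Q-network's final layer shrinks accordingly. A plain-NumPy sketch of how the output shape tracks the action count (function name and dimensions are illustrative, not the project's actual code):

```python
import numpy as np

def q_output_shape(batch_size, hidden_units, num_actions, rng=np.random):
    """Shape of a Q-network's final fully connected layer: one Q-value per action."""
    hidden = rng.randn(batch_size, hidden_units)  # e.g. the (32, 256) activations above
    W = rng.randn(hidden_units, num_actions)      # output-layer weights
    b = np.zeros(num_actions)                     # output-layer biases
    q_values = np.dot(hidden, W) + b
    return q_values.shape

print(q_output_shape(32, 256, 18))  # full action set  -> (32, 18)
print(q_output_shape(32, 256, 6))   # Pong's minimal set -> (32, 6)
```

Restricting the network to the minimal action set means the agent never wastes capacity estimating Q-values for actions Pong ignores.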