nadavbh12 / Retro-Learning-Environment

The Retro Learning Environment (RLE) -- a learning framework for AI

deep_q_rl error #10

Closed tohnperfect closed 7 years ago

tohnperfect commented 7 years ago

After I ran the following command,

$ python run_nips.py --rom mortal_kombat.sfc --core snes

I encountered this error,

Using gpu device 0: GeForce GTX 780 (CNMeM is disabled, cuDNN 5105)
/home/localadmin/anaconda/lib/python2.7/site-packages/theano/sandbox/cuda/__init__.py:600: UserWarning: Your cuDNN version is more recent than the one Theano officially supports. If you see any problems, try updating Theano or downgrading cuDNN to version 5.
  warnings.warn(warn)
Warning: couldn't load settings file:
R.L.E: Retro Learning Environment (version 1.0.0)
[Based upon the Arcade Learning Environment (A.L.E)]
[Powered by LibRetro]
Use -help for help screen.
[inf] Frontend supports RGB565 - will use that instead of XRGB1555.
Sound buffer size: 128000 (32000 samples)
Core loaded
[inf] No ROM file header found.
Map_LoROMMap
PPU.RenderSub = 0
PPU.FullClipping = 1
Settings.Transparency = 1
Settings.SpeedhackGameID = 0
PPU.SFXSpeedupHack = 0
coldata_update_screen = 1
[inf] "MORTAL KOMBAT" [checksum ok] LoROM, 16Mbits, ROM, NTSC, SRAM:0Kbits, ID:__, CRC32:DEF42945
Running ROM file...
Random seed is 65
INFO:root:OPENING mortal_kombat_02-08-17-21_0p0002_0p_95/results.csv
INFO:root:training epoch: 1 steps_left: 50000
Traceback (most recent call last):
  File "run_nips.py", line 62, in <module>
    launcher.launch(sys.argv[1:], Defaults, __doc__)
  File "/media/localadmin/DATA/git/deep_q_rl/deep_q_rl/launcher.py", line 306, in launch
    experiment.run()
  File "/media/localadmin/DATA/git/deep_q_rl/deep_q_rl/ale_experiment.py", line 55, in run
    self.run_epoch(epoch, self.epoch_length)
  File "/media/localadmin/DATA/git/deep_q_rl/deep_q_rl/ale_experiment.py", line 86, in run_epoch
    _, num_steps = self.run_episode(steps_left, testing)
  File "/media/localadmin/DATA/git/deep_q_rl/deep_q_rl/ale_experiment.py", line 169, in run_episode
    action_a = self.agent.step(reward, self.get_observation())
  File "/media/localadmin/DATA/git/deep_q_rl/deep_q_rl/ale_agent.py", line 198, in step
    np.clip(reward, -1, 1))
  File "/media/localadmin/DATA/git/deep_q_rl/deep_q_rl/ale_agent.py", line 212, in _choose_action
    data_set.add_sample(self.last_img, self.last_action, reward, False)
  File "/media/localadmin/DATA/git/deep_q_rl/deep_q_rl/ale_data_set.py", line 57, in add_sample
    self.imgs[self.top] = img
ValueError: could not broadcast input array from shape (10,84) into shape (84,84)

Sorry for posting a deep_q_rl issue here, but I could not create an issue on that repository.

makman7 commented 7 years ago
  1. Try installing v4 of cuDNN.
  2. In run_nips.py, switch RESIZE_METHOD = 'crop' to RESIZE_METHOD = 'scale'. That will solve the broadcasting issue (see the sketch below). Good luck!
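
For context on why 'crop' breaks here: the crop path was written for Atari's tall 210x160 frames, not the SNES's wide 224x256 ones. Below is a rough sketch of the two resize modes, modeled on upstream deep_q_rl's ale_experiment.py; the CROP_OFFSET constant and exact rounding are assumptions taken from that upstream code, so check your copy.

    import cv2

    CROP_OFFSET = 8  # assumed from upstream deep_q_rl; skips the Atari score bar

    def resize_crop(frame, resized_width=84, resized_height=84):
        # 'crop': scale so the width becomes resized_width (keeping the
        # aspect ratio), then slice a resized_height-tall window out of
        # the bottom of the frame.
        height, width = frame.shape[:2]
        resize_height = int(round(float(height) * resized_width / width))
        resized = cv2.resize(frame, (resized_width, resize_height),
                             interpolation=cv2.INTER_LINEAR)
        crop_y_cutoff = resize_height - CROP_OFFSET - resized_height
        return resized[crop_y_cutoff:crop_y_cutoff + resized_height, :]

    def resize_scale(frame, resized_width=84, resized_height=84):
        # 'scale': ignore the aspect ratio and map any input resolution
        # straight to resized_width x resized_height.
        return cv2.resize(frame, (resized_width, resized_height),
                          interpolation=cv2.INTER_LINEAR)

    # Atari, 210x160: resize_height = 110, cutoff = 18  -> 84 rows, fine.
    # SNES,  224x256: resize_height = 74,  cutoff = -18 -> the negative
    # index makes the slice resized[-18:66], which is only 10 rows: the
    # (10, 84) array in the traceback above. 'scale' sidesteps this.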
tohnperfect commented 7 years ago

Thank you @Noor59007. Should I do both, or just one of them? I'd rather not downgrade cuDNN and/or the CUDA toolkit.

sehar146 commented 7 years ago

The best solution is to downgrade to v4 of cuDNN while keeping the latest CUDA. If you don't want that, you would have to port the repository's code to the latest Theano, which is difficult (you can upgrade cuDNN again whenever you need). The crop problem can be solved quickly with the second suggestion. In both cases, the goal is to produce the 84x84 image the network expects.
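
If you are unsure which cuDNN your Theano install actually picked up, a quick check against the legacy theano.sandbox.cuda backend (the one the log above is running on) looks roughly like this; treat it as a sketch for that old API, since newer Theano versions moved these functions elsewhere:

    # Assumes the legacy theano.sandbox.cuda backend shown in the log above.
    import theano.sandbox.cuda.dnn as dnn

    if dnn.dnn_available():
        print(dnn.version())  # e.g. 5105 for cuDNN 5.1, 4007 for v4
    else:
        print(dnn.dnn_available.msg)  # why cuDNN could not be loaded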


tohnperfect commented 7 years ago

Thank you very much @sehar146

tohnperfect commented 7 years ago

I moved on to testing the RLE library with the TensorFlow implementation of the DQN algorithm, this repo.