DorianKodelja / DeepMind-Atari-Deep-Q-Learner-2Player


Crash in ale::StellaEnvironment::emulate() #2

Open watts4speed opened 8 years ago

watts4speed commented 8 years ago

I built a couple of times, removing all sources and starting again to make sure I didn't mess anything up, and when I run I get the crash below. I looked at it in gdb and the output is below. I've run ALE and the Torch version of DQN and they've worked, so this seems unique to this implementation. Any ideas what might be going on?

ze=512,valid_size=500,target_q=10000,clip_delta=1,min_reward=-1,max_reward=1 -steps 50000000 -eval_freq 250000 -eval_steps 125000 -prog_freq 5000 -save_freq 125000 -save_versions 125000 -actrep 4 -gpu -1 -random_starts 30 -pool_frms type=\"max\",size=2 -seed 1 -threads 4
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7fffe89f6700 (LWP 3680)]
[New Thread 0x7fffe3fff700 (LWP 3681)]
Torch Threads: 4
Using CPU code only. GPU device id: -1
Torch Seed: 1

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffe3fff700 (LWP 3681)]
0x00007fffdbc7bb5a in ale::StellaEnvironment::emulate(ale::Action, ale::Action, unsigned long) ()
   from /home/pmerrill/dev/DeepMind-Atari-Deep-Q-Learner-2Player/torch/lib/libxitari.so
(gdb) q

tambetm commented 8 years ago

Are you running the 1-player training code (run_gpu)? We haven't updated it to account for the 2-player modifications in Xitari, and that might be the cause. Currently only the 2-player mode (run_gpu2) is supported.
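For reference, launching the 2-player trainer looks roughly like the line below (a hedged sketch: the game name is only an example, and run_gpu2 may expect different or additional arguments, so check the script in your checkout):

./run_gpu2 pong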

Also, development continues in the following repository: https://github.com/NeuroCSUT/DeepMind-Atari-Deep-Q-Learner-2Player/

watts4speed commented 8 years ago

Thanks for responding. I tried DeepMind-Atari-Deep-Q-Learner-2Player and it crashed in ALE; I reported the problem on GitHub.
