makman7 closed this issue 7 years ago
The python example is of a random agent. To run DQN on RLE, check out this repository.
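For reference, the random-agent loop in that python example has roughly this shape. This is a minimal sketch, not the repository's actual code: the method names getMinimalActionSet, act, game_over and reset_game mirror ALE's python interface, which RLE follows closely, and the environment construction itself is omitted.

```python
import random

def run_random_agent(env, episodes=10):
    """Play `episodes` games, picking a uniformly random legal action each
    step. No learning happens anywhere in this loop -- it only measures the
    reward a random policy collects."""
    actions = env.getMinimalActionSet()
    episode_rewards = []
    for _ in range(episodes):
        total_reward = 0
        while not env.game_over():
            # act() applies one action and returns the reward for that step
            total_reward += env.act(random.choice(actions))
        env.reset_game()
        episode_rewards.append(total_reward)
    return episode_rewards
```

Running this loop for more episodes will not make the agent improve, since there is no network or update step involved.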
I wonder whether this RLE environment was used to produce Table 3 in the paper "Playing SNES with RLE", especially for Mortal Kombat. If we run the above-mentioned repository on RLE, should we get the same results?
The deep_q_rl repository should be able to reproduce the results for the first column (DQN). The results in our paper were achieved by running a Torch-based agent which supports DQN, DDQN and D-DQN. However, there are many implementations of DQN around on GitHub. It should be easy to modify them to use RLE rather than ALE, as their interfaces are nearly identical.
Notice that the results for Mortal Kombat were achieved using random initialization at the beginning of each level.
You can set this by calling SetBool("MK_random_position", True).
Hi @nadavbh12,
I am working on this repository, but I ran into an issue while running the command pip install --user .
Here is the screenshot:
Please guide me on where I am going wrong. Thanks.
Try removing the CMakeCache.txt file from the project's main directory and re-running the command. I think this should resolve the issue.
Thanks, I fixed that issue, but when I next ran the command ./run_nips.py --rom mortal_kombat.sfc --core snes
I got an error related to the core. Here is my rom directory with the rom file.
And here is my core directory
And here is the output:
full output is in this file:
log.txt
Hi, I have solved the above issue. However, there is another problem now. Attached is the screenshot.
Hey @Noor59007, it seems that your first issue was due to a Python installation change in a commit that I'm working on. I've edited the dependency script in deep_q_rl so that it fetches RLE's previous release, where the older setup.py script is present.
Regarding the new issue, I was unable to reproduce it. Did you modify the files in deep_q_rl? Could you check the dimensions of the screen_buffer as returned from the interface in ale_experiment.py line 120?
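A quick way to run that dimension check is a one-off debug print. This is only a sketch: the real screen_buffer comes from the interface call in ale_experiment.py, and both the variable name and the height/width ordering of the shape are assumptions; here the buffer is faked with a zero array at the SNES resolution.

```python
import numpy as np

# Stand-in for the array returned by the interface in ale_experiment.py
# (variable name and height/width ordering are assumptions).
screen_buffer = np.zeros((224, 256), dtype=np.uint8)

height, width = screen_buffer.shape
print("screen_buffer is %dx%d" % (width, height))
assert screen_buffer.size == 224 * 256, "unexpected number of pixels"
```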
Yeah, there were some path issues with the core file, so we modified it somewhat. I found these dimensions: [256, 224]. However, the error still persists.
84x84 is the cropped image size. It seems that for some reason your image is only 10x84. Try following the image object with a debugger and see where its size changes. Since I wasn't able to reproduce this and your input image size is as expected, a wild guess would be to check that your numpy installation is up to date and working correctly.
If that doesn't work, try running the original Atari version of deep_q_rl so we can be sure the problem is with my fork rather than your setup.
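For context, the 84x84 figure comes from the standard DQN preprocessing, which downscales each frame before feeding it to the network. Below is a minimal sketch of such a downscale, assuming a 224x256 grayscale SNES frame and using plain-numpy nearest-neighbour sampling; deep_q_rl's actual resize code may use a different method and interpolation.

```python
import numpy as np

def to_84x84(frame):
    """Downscale a 2-D grayscale frame to 84x84 via nearest-neighbour
    sampling: pick 84 evenly spaced rows, then 84 evenly spaced columns."""
    h, w = frame.shape
    rows = np.arange(84) * h // 84
    cols = np.arange(84) * w // 84
    return frame[rows][:, cols]

frame = np.random.randint(0, 256, size=(224, 256), dtype=np.uint8)
small = to_84x84(frame)
print(small.shape)  # (84, 84) -- a (10, 84) result would point at a bad input frame
```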
I ran the environment through the python interface from doc/example, like so: $ python python_example.py path_to_rom path_to_core. I modified the code to set the episodes to 2000, and training ran for one day, but the agent is not learning. I searched the code but couldn't find the DQN module. Please kindly help.