awjuliani / DeepRL-Agents

A set of Deep Reinforcement Learning Agents implemented in Tensorflow.
MIT License

Why not support Python 3? #8

Open wwxFromTju opened 7 years ago

wwxFromTju commented 7 years ago

Hi, I use Python 3, and when I run your code I find I have to change several things, like / to //, and print 'xxx' to print('xxx').

Why not import __future__ and make sure the code supports both 2.7 and 3?
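
A minimal sketch of the compatibility header being suggested; the division and print examples below are illustrative, not taken from the repository:

```python
# Illustrative only: a __future__ header that makes the same source behave
# the same under Python 2.7 and Python 3.
from __future__ import division, print_function

print(7 / 2)   # true division: 3.5 on both 2.7 (with the import) and 3.x
print(7 // 2)  # floor division: 3 on both, so existing // usage is unchanged
print('xxx')   # print is now a function, so print('xxx') works on both
```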

Also, in GridWorld, when the env takes an action, why does it always return done as False?

awjuliani commented 7 years ago

Hi, adding Python 3 support is something I would definitely like to do when I have time. Unfortunately, when I began the series I wrote it purely in Python 2. If you have converted any of the notebooks to Python 3, please consider opening a pull request, as I would be happy to integrate them into the notebooks.

In Gridworld this functionality exists in case you want to add a block that causes the episode to end early, for example ending the episode once the agent finds a goal.
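
A hedged sketch of that idea, not the repository's actual gridworld.py; the function and variable names below are hypothetical:

```python
def step_sketch(agent_pos, goal_positions, rewards):
    """Hypothetical step logic: the episode ends early once a goal is reached."""
    reward = rewards.get(agent_pos, 0.0)  # reward for the new position
    done = agent_pos in goal_positions    # True terminates the episode early
    return reward, done

# Example: reaching (2, 3) yields reward 1.0 and ends the episode.
print(step_sketch((2, 3), {(2, 3)}, {(2, 3): 1.0}))  # (1.0, True)
print(step_sketch((0, 0), {(2, 3)}, {(2, 3): 1.0}))  # (0.0, False)
```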

wwxFromTju commented 7 years ago

Hi, I changed some files to support Python 3. I got errors when installing vizdoom, so I did not change that one. Also, you use targetOps = updateTargetGraph(trainables,tau) in Deep-Recurrent-Q-Network, but in helper.py you only define updateTargetGraph(tfVars), so I don't know how to change this file.
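
For reference, a sketch of a two-argument updateTargetGraph consistent with that call site. It assumes, as in the other notebooks in this series, that the first half of tfVars are the primary-network variables and the second half are the matching target-network variables in the same order; the body below is a reconstruction, not the file's exact contents:

```python
import tensorflow as tf

def updateTargetGraph(tfVars, tau):
    """Build ops that softly move the target network toward the primary one:
    target <- tau * primary + (1 - tau) * target."""
    total_vars = len(tfVars)
    op_holder = []
    # Assumption: first half of tfVars = primary network, second half = target.
    for idx, var in enumerate(tfVars[0:total_vars // 2]):
        target_var = tfVars[idx + total_vars // 2]
        op_holder.append(target_var.assign(
            tau * var.value() + (1 - tau) * target_var.value()))
    return op_holder

def updateTarget(op_holder, sess):
    """Run the prepared update ops in the given session."""
    for op in op_holder:
        sess.run(op)
```

With that signature, targetOps = updateTargetGraph(trainables, tau) would match the call in the Deep-Recurrent-Q-Network notebook.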

awjuliani commented 7 years ago

I will go through the A3C notebook myself to ensure it is Python 3 compatible. Thanks again for making the changes that you did.

GoingMyWay commented 7 years ago

@wwxFromTju, if you have successfully installed vizdoom under Python 3, I think the code is compatible with Python 3 once you change print to print() in helper.py. I can run the code under Python 3.x. If you are in China, make sure you can download freedoom-0.10.1.zip despite the GFW.