google-deepmind / pysc2

StarCraft II Learning Environment
Apache License 2.0
7.99k stars 1.15k forks

Tutorials #64

Open skjb opened 7 years ago

skjb commented 7 years ago

Not sure if you are interested, but I have written a tutorial for building a basic agent:

https://medium.com/@skjb/building-a-basic-pysc2-agent-b109cde1477c
https://medium.com/@skjb/building-a-smart-pysc2-agent-cdc269cb095d
https://medium.com/@skjb/add-smart-attacking-to-your-pysc2-agent-17fd5caad578
https://medium.com/@skjb/build-a-sparse-reward-pysc2-agent-a44e94ba5255

zsluedem commented 7 years ago

nice

SoyGema commented 7 years ago

The explanation is clear and focused. Congrats! I am having problems executing simple_agent.py. Where should it be placed? I tried at $path>pysc2>agents and also at $path>pysc2>bin.

This is the error it gives me. Thanks for the time dedicated to solving this issue.

python3 -m pysc2.bin.agent --map Simple64 --agent simple_agent.SimpleAgent --agent_race T

Traceback (most recent call last):
  File "/usr/local/Cellar/python3/3.6.2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/local/Cellar/python3/3.6.2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.6/site-packages/pysc2/bin/agent.py", line 112, in <module>
    app.run(main)
  File "/usr/local/lib/python3.6/site-packages/pysc2/lib/app.py", line 57, in run
    really_start(main or sys.modules["__main__"].main)
  File "/usr/local/lib/python3.6/site-packages/pysc2/lib/app.py", line 51, in really_start
    sys.exit(main(argv))
  File "/usr/local/lib/python3.6/site-packages/pysc2/bin/agent.py", line 90, in main
    agent_cls = getattr(importlib.import_module(agent_module), agent_name)
  File "/usr/local/Cellar/python3/3.6.2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 978, in _gcd_import
  File "<frozen importlib._bootstrap>", line 961, in _find_and_load
  File "<frozen importlib._bootstrap>", line 948, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'simple_agent'

skjb commented 7 years ago

@SoyGema I am running this on Windows with PySC2 installed via pip. I was able to get it working by running the command in the same directory as the simple_agent.py file.

Since you are on Linux things may be different with regard to paths or permissions.
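The reason running from the agent's directory works is that Python prepends the startup directory to sys.path, so a module file sitting there becomes importable by name. A stdlib-only sketch (my_agent.py is a hypothetical stand-in for the tutorial's agent file):

```python
import os
import subprocess
import sys
import tempfile

# Create a hypothetical agent module in a temporary directory.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "my_agent.py"), "w") as f:
    f.write("WORKS = True\n")

# Launched from the agent's directory: the import succeeds (exit code 0).
ok = subprocess.run([sys.executable, "-c", "import my_agent"],
                    cwd=tmpdir, capture_output=True).returncode

# Launched from any other directory: the same ModuleNotFoundError as above.
fail = subprocess.run([sys.executable, "-c", "import my_agent"],
                      cwd=tempfile.mkdtemp(), capture_output=True).returncode

print(ok, fail)  # 0 1
```

The same rule applies to `python3 -m pysc2.bin.agent ...`: the directory you launch from is what ends up on the module search path.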

OriolVinyals commented 6 years ago

Hi,

Since you seem to be doing pretty cool stuff with the PySC2, I'd recommend checking out and applying to come work with us at Blizzcon in LA (Nov 3/4). Spots limited, and some travel funds available!

Info: http://us.battle.net/sc2/en/blog/21048078/announcing-the-starcraft-ii-ai-workshop-10-4-2017

GL&HF, Oriol

skjb commented 6 years ago

Updated with a new article that uses reinforcement learning to build on the previous simple agent:

https://medium.com/@skjb/building-a-smart-pysc2-agent-cdc269cb095d

yeungegs commented 6 years ago

@skjb awesome work on the tutorial for Terran! I am working on a similar tutorial for protoss and will definitely link here when finished.

pekaalto commented 6 years ago

Here https://github.com/pekaalto/sc2atari is an example of how to crudely simplify the environment and then "solve" the easiest mini-games with reinforcement learning. (Not exactly a tutorial, but maybe this is the best topic to dump this in.)

greentfrapp commented 6 years ago

I've adapted Arthur Juliani's A3C code for the DefeatRoaches minigame and managed to get comparable results with limited hardware, based on the Atari-net architecture (Section 4.3) from DeepMind's paper.

I managed to run the script on my laptop (no GPU) with 4 threads (dual-core) for the A3C algorithm, completing about 2 to 3 episodes per second.

After 50 million steps on DefeatRoaches, the agent achieves a max-score of 338 and avg-score of 65. This seems somewhat comparable to DeepMind's results - their Atari-net agent has max-score 351 and avg-score 101 after 600M steps on DefeatRoaches.

Script here: https://github.com/greentfrapp/pysc2-RLagents/blob/master/Agents/PySC2_A3C_AtariNet.py

The latest version of the script works on all the minigames! Will be working on implementing the FullyConv and FullyConvLSTM architectures as well!

Hope this helps anyone who's looking to work on this!

skjb commented 6 years ago

Updated to include my latest tutorial on how to add reinforcement learning to the attack component of an agent: https://medium.com/@skjb/add-smart-attacking-to-your-pysc2-agent-17fd5caad578

zmcddn commented 6 years ago

@skjb I had the same problem as @SoyGema. I am using Anaconda with Python 3.6.2 on Win 10. I think the "ModuleNotFoundError: No module named 'simple_agent'" occurred because of the getattr call on line 90 of the agent.py file that comes with the pysc2 package. My error message looks like:

Traceback (most recent call last):
  File "D:\Anaconda3\envs\tensorflow\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "D:\Anaconda3\envs\tensorflow\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "D:\Anaconda3\envs\tensorflow\lib\site-packages\pysc2\bin\agent.py", line 112, in <module>
    app.run(main)
  File "D:\Anaconda3\envs\tensorflow\lib\site-packages\absl\app.py", line 274, in run
    _run_main(main, argv)
  File "D:\Anaconda3\envs\tensorflow\lib\site-packages\absl\app.py", line 238, in _run_main
    sys.exit(main(argv))
  File "D:\Anaconda3\envs\tensorflow\lib\site-packages\pysc2\bin\agent.py", line 90, in main
    agent_cls = getattr(importlib.import_module(agent_module), agent_name)
  File "D:\Anaconda3\envs\tensorflow\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'simple_agent'

I was trying to hard-code the absolute path inside the agent.py file to get around this error, but later figured that I should probably keep agent.py unmodified. Please help! Any suggestion is fine. Thanks a lot.

zmcddn commented 6 years ago

Hi @skjb @SoyGema, I have figured it out. It is a simple path problem, not related to the Anaconda environment. As you mentioned, the command should be run from wherever the agent file is located. To be specific, if you put the agent file under D:\, then you should run the command from the D:\ directory. In my case, I ran it as follows:

(tensorflow) D:\Anaconda3\envs\tensorflow\Lib\site-packages\pysc2\bin>python -m pysc2.bin.agent --map Simple64 --agent attack_agent.AttackAgent --agent_race T --max_agent_steps 0 --norender

This simple mistake was due to my lack of experience running Python from the command line. Thank you @skjb for your detailed tutorial.

skjb commented 6 years ago

Updated to add my latest tutorial that covers how to create an agent that uses sparse rewards and Q Learning: https://medium.com/@skjb/build-a-sparse-reward-pysc2-agent-a44e94ba5255
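For readers who want the gist of the Q-learning used in these tutorials without following the links: below is a minimal, stdlib-only sketch of a tabular Q-learning agent. The states and actions are hypothetical toys (the tutorial's own implementation differs, e.g. it stores its table in a pandas DataFrame):

```python
import random
from collections import defaultdict

alpha, gamma = 0.1, 0.9          # learning rate and discount factor
actions = ["attack", "build", "no_op"]   # hypothetical action set

# Q-table mapping state -> {action: value}, zero-initialized on first access.
q_table = defaultdict(lambda: {a: 0.0 for a in actions})

def choose_action(state, epsilon=0.1):
    # Epsilon-greedy: explore with probability epsilon, otherwise exploit.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(q_table[state], key=q_table[state].get)

def learn(state, action, reward, next_state):
    # Standard Q-learning update:
    # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(q_table[next_state].values())
    q_table[state][action] += alpha * (
        reward + gamma * best_next - q_table[state][action]
    )

learn("s0", "attack", 1.0, "s1")
print(q_table["s0"]["attack"])           # 0.1
print(choose_action("s0", epsilon=0.0))  # attack
```

In a sparse-reward setup the agent only calls learn with a nonzero reward at the end of an episode (win/loss), which is what the linked article addresses.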

SoyGema commented 6 years ago

Ey Steven, this is Gema from the workshop! I am trying to implement Q-learning in a Sentry-based agent for HallucinIce (q_learning_agent.py).

It throws the following error:

Mac-Gema:projects gema$ python3 -m pysc2.bin.agent --agent q_learning_agent.SmartAgent --map HallucinIce

Traceback (most recent call last):
  File "/usr/local/Cellar/python3/3.6.2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/local/Cellar/python3/3.6.2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.6/site-packages/pysc2/bin/agent.py", line 112, in <module>
    app.run(main)
  File "/usr/local/lib/python3.6/site-packages/absl/app.py", line 272, in run
    _run_main(main, argv)
  File "/usr/local/lib/python3.6/site-packages/absl/app.py", line 237, in _run_main
    sys.exit(main(argv))
  File "/usr/local/lib/python3.6/site-packages/pysc2/bin/agent.py", line 90, in main
    agent_cls = getattr(importlib.import_module(agent_module), agent_name)
  File "/usr/local/Cellar/python3/3.6.2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 978, in _gcd_import
  File "<frozen importlib._bootstrap>", line 961, in _find_and_load
  File "<frozen importlib._bootstrap>", line 948, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'q_learning_agent'

As I read it, the module is not found.

I do have pysc2 installed in my projects folder, and the agent is at pysc2>agents>q_learning_agent.py. Do you know why it does not find the module?

Thanks for helping me get out of this!

skjb commented 6 years ago

@SoyGema You do not need to put the agent inside the pysc2 directory. Simply place it in any directory on your Python path, then run the command from that same directory.
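The runner resolves the --agent flag by importing the part before the last dot as a module and fetching the class with getattr (the quoted tracebacks show this happening on line 90 of pysc2/bin/agent.py). A stdlib-only sketch of that resolution, using a hypothetical stand-in agent file; putting the agent's directory on sys.path (or PYTHONPATH) lets the import succeed from anywhere:

```python
import importlib
import os
import sys
import tempfile

# Hypothetical stand-in for q_learning_agent.py.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "q_learning_agent.py"), "w") as f:
    f.write("class SmartAgent:\n    pass\n")

# Split "module.ClassName" the way the runner effectively does.
agent_spec = "q_learning_agent.SmartAgent"
agent_module, agent_name = agent_spec.rsplit(".", 1)

# With the agent's directory on sys.path, the import works regardless
# of the current working directory; without it, this raises the
# ModuleNotFoundError seen in the tracebacks above.
sys.path.insert(0, tmpdir)
agent_cls = getattr(importlib.import_module(agent_module), agent_name)
print(agent_cls.__name__)  # SmartAgent
```

So an equivalent fix to cd-ing into the agent's directory is exporting PYTHONPATH to include it before running the pysc2.bin.agent command.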

backnotprop commented 6 years ago

@skjb Sorry if this comes off as naive or if you address it elsewhere, but are there any deep-learning aspects to this (training agents)? If so, what type of hardware should we look at to best run the training? Asking because if I set up a PC, I want to make sure I don't over- or under-do it. I look forward to your tutorials when time permits, but just want to make sure I'm all set first (hardware/cloud setup).

BurningSpy commented 5 years ago

Still useful a year later. I'm trying to figure out how to do the whole Q-learning thing in 2.0, and it works fine, but I'm having problems getting my workers back to mining minerals after they build something. How can I solve that?