Demos of LLM-SMAC tasks
LLM-SMAC depends on the full StarCraft II game and works with the burnysc2/python-sc2 package.
Follow Blizzard's documentation to
get the Linux version. By default, LLM-SMAC expects the game to live in
~/StarCraftII/
. You can override this path by setting the SC2PATH
environment variable or creating your own run_config.
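If you script the override, the resolution order described above can be sketched as follows. This is an illustration of the fallback logic, not LLM-SMAC's actual lookup code:

```python
import os
from pathlib import Path

# SC2PATH, if set, overrides the default ~/StarCraftII/ location.
default_install = Path.home() / "StarCraftII"
sc2_path = Path(os.environ.get("SC2PATH", default_install))

if not sc2_path.is_dir():
    print(f"Warning: no StarCraft II install found at {sc2_path}")
```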
Install the game as normal from Battle.net. Even the
Starter Edition will work.
If you used the default install location LLM-SMAC should find the latest binary.
If you changed the install location, you might need to set the SC2PATH
environment variable with the correct location.
Note that StarCraft II on Linux only goes up to version 4.10, while the latest version on Windows/macOS is 5.0.13. Results may therefore not be consistent across operating systems.
Download the LLM-SMAC code from this GitHub page.
Create a conda environment and install the dependencies with pip:
$ conda create --name YOUR_ENV_NAME python==3.10
$ conda activate YOUR_ENV_NAME
$ pip install -r requirements.txt
Fill in your API keys in the file 'configs\llm_api_config.py':
class LLMAPIConfig:
    # LLM configuration
    MODELS = {
        "deepseek-chat": LLMModelConfig(
            api_key="Your API key here.",
            base_url="Base URL here."
        ),
        "deepseek-coder": LLMModelConfig(
            api_key="Your API key here.",
            base_url="Base URL here."
        ),
        "gpt-4": LLMModelConfig(
            api_key="Your API key here.",
            base_url="Base URL here."
        ),
        "Qwen2.5-72B-Instruct": LLMModelConfig(
            api_key="Your API key here.",
            base_url="Base URL here."
        ),
        "claude-3-5-sonnet-20240620": LLMModelConfig(
            api_key="Your API key here.",
            base_url="Base URL here."
        ),
    }
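LLMModelConfig itself is not shown in the snippet above; a minimal sketch of what such a config holder might look like (field names assumed from the usage above, not LLM-SMAC's actual definition):

```python
from dataclasses import dataclass

@dataclass
class LLMModelConfig:
    """Connection settings for one LLM endpoint (assumed fields)."""
    api_key: str
    base_url: str

# Entries mirror the config file: one LLMModelConfig per model name.
MODELS = {
    "deepseek-chat": LLMModelConfig(
        api_key="Your API key here.",
        base_url="Base URL here.",
    ),
}
```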
You can switch from the DeepSeek models to other models by supplying the corresponding API keys in 'configs\llm_api_config.py' and selecting them in TASK_MODELS:
    TASK_MODELS = {
        "planner": "gpt-4",
        "coder": "gpt-4",
        "summarizer": "claude-3-5-sonnet-20240620"
    }
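The mapping above suggests that each pipeline role (planner, coder, summarizer) resolves to one entry in MODELS. A hypothetical lookup helper illustrating that routing (not part of LLM-SMAC's API; the dict is copied here so the sketch is self-contained):

```python
# Copied from the config above for self-containment.
TASK_MODELS = {
    "planner": "gpt-4",
    "coder": "gpt-4",
    "summarizer": "claude-3-5-sonnet-20240620",
}

def model_for(task: str) -> str:
    """Resolve a pipeline role to its configured model name."""
    try:
        return TASK_MODELS[task]
    except KeyError:
        raise ValueError(f"No model configured for task {task!r}") from None
```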
# After updating the config, run:
$ python main.py
After running the command, a log file will be written to the root directory; the logging behavior can be adjusted in 'LLM\call_llm_api\call_llm.py'.
The generated scripts are also placed in the root directory, named 'res-Victory_Times-Test_Times-Planning_Round-Coding_Round'. When new experiments are run, the previously generated Python scripts will be REMOVED, so make sure to back up your results first.
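Since reruns delete previous scripts, you may want to archive them before starting a new experiment. A simple backup sketch, assuming the 'res-*' naming convention above and a timestamped destination folder of our own choosing:

```python
import shutil
from datetime import datetime
from pathlib import Path

def backup_results(root: Path = Path(".")) -> Path:
    """Copy generated 'res-*' script files into a timestamped backup folder."""
    dest = root / f"backup-{datetime.now():%Y%m%d-%H%M%S}"
    dest.mkdir(exist_ok=True)
    for script in root.glob("res-*"):
        if script.is_file():
            shutil.copy2(script, dest / script.name)
    return dest
```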