microsoft / WindowsAgentArena

![Banner](img/banner.png) [![Website](https://img.shields.io/badge/Website-red)](https://microsoft.github.io/WindowsAgentArena) [![arXiv](https://img.shields.io/badge/Paper-green)](https://arxiv.org/abs/2409.08264) [![License](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![PRs](https://img.shields.io/badge/AI-Podcast-blue.svg?logo=data:image/svg%2bxml;base64,PHN2ZyBmaWxsPSIjZmZmZmZmIiB2aWV3Qm94PSIwIDAgMjQgMjQiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+PGcgaWQ9IlNWR1JlcG9fYmdDYXJyaWVyIiBzdHJva2Utd2lkdGg9IjAiPjwvZz48ZyBpZD0iU1ZHUmVwb190cmFjZXJDYXJyaWVyIiBzdHJva2UtbGluZWNhcD0icm91bmQiIHN0cm9rZS1saW5lam9pbj0icm91bmQiPjwvZz48ZyBpZD0iU1ZHUmVwb19pY29uQ2FycmllciI+PHBhdGggZD0iTTEzLDRWMjBhMSwxLDAsMCwxLTIsMFY0YTEsMSwwLDAsMSwyLDBaTTgsNUExLDEsMCwwLDAsNyw2VjE4YTEsMSwwLDAsMCwyLDBWNkExLDEsMCwwLDAsOCw1Wk00LDdBMSwxLDAsMCwwLDMsOHY4YTEsMSwwLDAsMCwyLDBWOEExLDEsMCwwLDAsNCw3Wk0xNiw1YTEsMSwwLDAsMC0xLDFWMThhMSwxLDAsMCwwLDIsMFY2QTEsMSwwLDAsMCwxNiw1Wm00LDJhMSwxLDAsMCwwLTEsMXY4YTEsMSwwLDAsMCwyLDBWOEExLDEsMCwwLDAsMjAsN1oiPjwvcGF0aD48L2c+PC9zdmc+)](https://microsoft.github.io/WindowsAgentArena/static/files/waa_podcast.wav)

Windows Agent Arena (WAA) 🪟 is a scalable Windows AI agent platform for testing and benchmarking multi-modal, desktop AI agents. WAA provides researchers and developers with a reproducible and realistic Windows OS environment for AI research, where agentic AI workflows can be tested across a diverse range of tasks.

WAA supports deploying agents at scale on the Azure ML cloud infrastructure, allowing multiple agents to run in parallel and delivering benchmark results for hundreds of tasks in minutes rather than days.

📢 Updates

📚 Citation

Our technical report paper can be found here. If you find this environment useful, please consider citing our work:

@article{bonatti2024windows,
  author = {Bonatti, Rogerio and Zhao, Dan and Bonacci, Francesco and Dupont, Dillon and Abdali, Sara and Li, Yinheng and Wagle, Justin and Koishida, Kazuhito and Bucker, Arthur and Jang, Lawrence and Hui, Zack},
  title = {Windows Agent Arena: Evaluating Multi-Modal OS Agents at Scale},
  institution = {Microsoft},
  year = {2024},
  month = {September},
}

☝️ Prerequisites


Clone the repository and install dependencies:

git clone https://github.com/microsoft/WindowsAgentArena.git
cd WindowsAgentArena
# Install the required dependencies in your python environment
# conda activate winarena
pip install -r requirements.txt

💻 Local deployment (WSL or Linux)

1. Configuration file

Create a new config.json at the root of the project with the necessary keys. Use OPENAI_API_KEY if you are using an OpenAI endpoint, or AZURE_API_KEY and AZURE_ENDPOINT if you are using an Azure endpoint. Note that JSON does not allow comments or trailing commas, so keep only the keys you need:

{
    "OPENAI_API_KEY": "<OPENAI_API_KEY>",
    "AZURE_API_KEY": "<AZURE_API_KEY>",
    "AZURE_ENDPOINT": "https://yourendpoint.openai.azure.com/"
}
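Agent code typically reads these keys at runtime to decide which endpoint to call. As a rough illustration (the helper name and selection logic below are assumptions, not part of the repo), a loader might look like this:

```python
import json
from pathlib import Path

def load_model_config(path: str = "config.json") -> dict:
    """Load the WAA key file and decide which endpoint to use.
    Illustrative helper; the repo's own config handling may differ."""
    cfg = json.loads(Path(path).read_text())
    if cfg.get("AZURE_API_KEY") and cfg.get("AZURE_ENDPOINT"):
        return {"provider": "azure",
                "api_key": cfg["AZURE_API_KEY"],
                "endpoint": cfg["AZURE_ENDPOINT"]}
    if cfg.get("OPENAI_API_KEY"):
        return {"provider": "openai", "api_key": cfg["OPENAI_API_KEY"]}
    raise KeyError("config.json must define OPENAI_API_KEY, "
                   "or AZURE_API_KEY plus AZURE_ENDPOINT")
```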

2. Prepare the Windows Arena Docker image

To use the default docker image from Docker Hub:

docker pull windowsarena/winarena:latest

(Optional) 2.1 Build the Windows Arena Docker image locally

To build your own image from scratch:

cd scripts
./build-container-image.sh

For a list of parameters that can be changed when building the Docker images:

./build-container-image.sh --help

3. Prepare the Windows 11 image

3.1 Download Windows 11 Evaluation .iso file:

  1. Visit the Microsoft Evaluation Center, accept the Terms of Service, and download a Windows 11 Enterprise Evaluation ISO (90-day trial, English, United States) [~6 GB]
  2. After downloading, rename the file to setup.iso and copy it to the directory WindowsAgentArena/src/win-arena-container/vm/image

3.2 Automatic Setup of the Windows 11 golden image:

Before running the arena, you need to prepare a new WAA snapshot (also referred to as the WAA golden image). This ~30GB snapshot is a fully functional Windows 11 VM with all the programs needed to run the benchmark. The VM additionally hosts a Python server that receives and executes agent commands. To learn more about the components at play, see our local and cloud components diagrams.

To prepare the golden snapshot, run once:

cd ./scripts
./run-local.sh --prepare-image true

Customizing resource allocation for the local run

By default, the run-local.sh script attempts to create a QEMU VM with 8 GB of RAM and 8 CPU cores. If your system has limited resources, you can override these defaults by specifying the desired RAM and CPU allocation:

./run-local.sh --prepare-image true --ram-size 4G --cpu-cores 4

Support for KVM acceleration

If your system does not support KVM acceleration, you can disable it by specifying the --use-kvm false flag:

./run-local.sh --use-kvm false

Note that running the benchmark locally without KVM acceleration is not recommended due to performance issues. In that case, we recommend preparing the golden image locally and then running the benchmark on Azure.

Monitoring the image preparation

You can check the VM install screen by accessing http://localhost:8006 in your browser (unless you have provided an alternate --browser-port parameter). The preparation process is fully automated and will take around 20 minutes.
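If you script around the preparation step, you may want to wait until the viewer port answers before opening the browser. A small polling helper (illustrative, not part of the repo) could look like:

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 60.0) -> bool:
    """Poll until a TCP port (e.g. the VM viewer on 8006) accepts connections."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful TCP connect means the viewer is up.
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(1)
    return False
```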

Please do not interfere with the VM while it is being prepared. It will automatically shut down when the provisioning process is complete.

(Screenshots: unattended Windows setup and provisioning screens.)

At the end, you should expect the Docker container named winarena to terminate gracefully, as shown in the logs below.

(Screenshot: logs from a successful preparation run.)


You will find the 30GB WAA golden image in WindowsAgentArena/src/win-arena-container/vm/storage, consisting of the following files:

(Screenshot: contents of the storage folder after a successful run.)


Additional Notes

4. Deploying the agent in the arena

4.1 Running the base benchmark

The entire setup runs inside a Docker container. The entry point for the agent is the src/win-arena-container/run.py script (copied to /client/run.py in the container). The Windows OS runs as a VM process inside the container, and the client and VM communicate over HTTP GET/POST requests. To run the entire setup at once:

cd scripts
./run-local.sh --start-client true

On your host, open your browser and go to http://localhost:8006 to see the Windows VM with the agent running.
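Because the client-to-VM protocol is plain HTTP, a command round-trip can be sketched with the standard library alone. The /execute route and the command payload below are hypothetical stand-ins; the actual server API is defined inside the container:

```python
import json
import urllib.request

def send_command(host: str, port: int, command: dict) -> dict:
    """POST a JSON agent command to a VM-side server and return its JSON reply.
    The /execute route and payload shape are illustrative assumptions."""
    req = urllib.request.Request(
        f"http://{host}:{port}/execute",
        data=json.dumps(command).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```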

For a list of parameters that can be changed:

./run-local.sh --help

At the end of the run you can display the results using the command:

cd src/win-arena-container/client
python show_results.py --result_dir <path_to_results_folder>
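If you want to post-process results yourself, the kind of aggregation show_results.py performs can be sketched as follows. The per-task result.json layout and its "success" field are assumptions about the output format, not the repo's documented schema:

```python
import json
from pathlib import Path

def success_rate(result_dir: str) -> float:
    """Average per-task success flags found in result.json files under result_dir.
    Illustrative sketch; the real show_results.py may use a different layout."""
    scores = [1.0 if json.loads(p.read_text()).get("success") else 0.0
              for p in Path(result_dir).rglob("result.json")]
    return sum(scores) / len(scores) if scores else 0.0
```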

The table below compares the combinations of hyperparameters used by the Navi agent in our study. They can be overridden by passing --som-origin <som_origin> --a11y-backend <a11y_backend> to the run-local.sh script:

| Hyperparameter | Possible Values | Description | Recommended Complementary Value |
|----------------|-----------------|-------------|---------------------------------|
| `som_origin` | `oss`, `a11y`, `mixed-oss` | Determines how the Set-of-Mark (SoM) is achieved. | `win32` for `oss`; `uia` for `a11y`, `mixed-oss` |
| — `oss` | | Uses webparse, groundingdino, and OCR (TesseractOCR) pipelines. | `win32` (faster performance) |
| — `a11y` | | Relies on accessibility tree extraction for SoM. | `uia` (more reliable but slower) |
| — `mixed-oss` | | If set to any "mixed" option, the agent partially relies on the accessibility tree for SoM entities. | `uia` (more reliable but slower) |
| `a11y_backend` | `win32`, `uia` | Dictates how the accessibility tree should be extracted. | `win32` for `oss`; `uia` for `a11y` and mixed types |
| — `win32` | | Faster but less reliable accessibility tree extraction. | Use with `oss` or non-"mixed" types |
| — `uia` | | Slower but more reliable accessibility tree extraction. | Use with `a11y`, `mixed-oss` |
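The pairings above can be captured in a small helper that fills in the recommended backend for a chosen SoM origin (an illustrative convenience, not a repo API):

```python
def build_run_command(som_origin="oss", a11y_backend=None):
    """Compose a run-local.sh invocation, defaulting --a11y-backend to the
    recommended pairing for the chosen --som-origin. Illustrative helper."""
    recommended = {"oss": "win32", "a11y": "uia", "mixed-oss": "uia"}
    backend = a11y_backend or recommended[som_origin]
    return ["./run-local.sh", "--start-client", "true",
            "--som-origin", som_origin, "--a11y-backend", backend]
```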

4.2 Local development tips

At first glance, it might seem challenging to develop and debug code running inside the Docker container. However, we provide a few tips to make this process easier; check the Development-Tips Doc for more details.

🌐 Azure Deployment -> Parallelizing the benchmark

We offer a seamless way to run the Windows Agent Arena on Azure ML Compute VMs. This option significantly reduces the time needed to test your agent on all benchmark tasks, from hours or days to minutes.

1. Set up the Azure resource group:

(Screenshots: creating the Azure ML resource, the ML portal, the notebook environment, and the quota page.)

2. Uploading Windows 11 and Docker images to Azure

3. Environment configurations and deployment

Make sure you have installed the python requirements in your conda environment

conda activate winarena

pip install -r requirements.txt

From your activated conda environment:

cd scripts
python run_azure.py --experiments_json "experiments.json"


For any unfinished experiments in `experiments.json`, the script will:
1. Create `<num_workers>` Azure Compute Instance VMs.
2. Run one ML Training Job named `<exp_name>` per VM.
3. Dispose of the VMs once the jobs are completed.

The logs from the run will be saved in an `agent_outputs` folder in the same blob container where you uploaded the Windows 11 image. You can download the `agent_outputs` folder to your local machine and run the `show_azure.py` script to see the results from every experiment as a markdown table.

```bash
cd scripts
python show_azure.py --json_config "experiments.json" --result_dir <path_to_downloaded_agent_outputs_folder>
```

🤖 BYOA: Bring Your Own Agent

Want to test your own agents in Windows Agent Arena? You can use our default agent as a template and create your own folder under src/win-arena-container/client/mm_agents. You just need to make sure that your agent.py file features predict() and reset() functions. For more information on agent development check out the BYOA Doc.
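A minimal skeleton satisfying that contract might look like the following; the observation format and action strings are illustrative assumptions, so consult the default agent and the BYOA Doc for the real interfaces:

```python
class EchoAgent:
    """Minimal BYOA skeleton following the documented predict()/reset() contract.
    Observation shape and action strings below are illustrative placeholders."""

    def __init__(self):
        self.history = []

    def reset(self):
        # Clear per-task state so each benchmark task starts fresh.
        self.history = []

    def predict(self, instruction, obs):
        # Decide on the next action(s) from the task instruction and the
        # current observation (e.g. a screenshot or accessibility tree).
        self.history.append(instruction)
        response = f"step {len(self.history)}: working on '{instruction}'"
        actions = ["computer.wait()"]  # placeholder action string
        return response, actions
```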

👩‍💻 Open-source contributions

We welcome contributions to the Windows Agent Arena project. In particular, we welcome:

If you are interested in contributing, please check out our Task Development Guidelines.

❓ FAQ

What are approximate running times and costs for the benchmark?

| Component | Cost | Time |
|-----------|------|------|
| Azure Standard_D8_v3 VM | ~$8 ($0.38/h × 40 VMs × 0.5 h) | |
| GPT-4V | $100 | ~35 min with 40 VMs |
| GPT-4o | $100 | ~35 min with 40 VMs |
| GPT-4o-mini | $15 | ~30 min with 40 VMs |
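The VM estimate above is simple arithmetic over hourly rate, fleet size, and run duration:

```python
def vm_cost(rate_per_hour=0.38, num_vms=40, hours=0.5):
    """Compute fleet cost: $0.38/h x 40 VMs x 0.5 h = $7.6, i.e. roughly $8."""
    return rate_per_hour * num_vms * hours
```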

👏 Acknowledgements

🤝 Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

🛡️ Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.