
MLOS


MLOS is a project to enable autotuning for systems. See https://microsoft.github.io/MLOS for documentation.

Overview

MLOS currently focuses on an offline tuning approach, though we intend to add online tuning in the future.

To accomplish this, the general flow involves an optimization loop: an optimizer suggests a configuration, that configuration is applied to the target system, a benchmark is run against it, and the measured results are fed back to the optimizer to inform the next suggestion, repeating until the exploration budget is exhausted.

[Figure: the MLOS optimization loop. Source: LlamaTune, VLDB 2022]
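
For illustration, here is a minimal sketch of that loop in plain Python. All of the names below (ToyOptimizer, run_benchmark, the cache_size_mb knob) are hypothetical placeholders rather than MLOS APIs; see the mlos_core notebook linked under Usage Examples for the real interfaces.

    # A minimal sketch of the loop above. All names here are hypothetical
    # placeholders (not mlos_core / mlos_bench APIs); they only show the shape of the flow.
    import random

    class ToyOptimizer:
        """Toy random-search optimizer over a single cache-size knob."""
        def __init__(self):
            self.history = []

        def suggest(self):
            return {"cache_size_mb": random.randint(64, 4096)}

        def register(self, config, score):
            self.history.append((config, score))

    def run_benchmark(config):
        """Stand-in for deploying the config and benchmarking the target system."""
        return abs(config["cache_size_mb"] - 1024) / 1024.0  # pretend latency score

    optimizer = ToyOptimizer()
    for _ in range(20):                    # exploration budget
        config = optimizer.suggest()       # 1. get a suggested configuration
        score = run_benchmark(config)      # 2. apply it and run the workload
        optimizer.register(config, score)  # 3. feed the result back to the optimizer

    print("best:", min(optimizer.history, key=lambda item: item[1]))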

For a brief overview of some of the features and capabilities of MLOS, please see the following video:

[Video: MLOS demo]

Organization

To do this, this repo provides several Python modules, which can be used independently or in combination: mlos_core (a wrapper around existing optimization libraries), mlos_bench (a framework for running benchmarking experiments against target systems), and mlos_viz (utilities for visualizing experiment results).

To meet these design requirements, we intend to reuse as much as possible from existing OSS libraries and layer policies and optimizations specifically geared towards autotuning systems on top of them.

By providing wrappers, we also aim to make it easier to experiment with swapping the underlying optimizer components as new techniques become available or prove a better match for certain systems.
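
As a rough illustration of that layering (again with hypothetical names, not the actual mlos_core interfaces), the tuning loop only ever talks to a thin wrapper, so the backend behind it can be swapped without touching the rest of the experiment:

    # Hypothetical sketch of the wrapper/factory idea; the class and enum names are
    # illustrative only and do not correspond to mlos_core APIs.
    import random
    from enum import Enum

    class Backend(Enum):
        RANDOM = "random"   # stand-in for pluggable backends (e.g. SMAC, FLAML)
        OTHER = "other"

    class RandomSearch:
        def suggest(self):
            return {"cache_size_mb": random.randint(64, 4096)}
        def register(self, config, score):
            pass  # random search ignores feedback

    def make_optimizer(backend: Backend):
        # New backends can be slotted in here without changing the tuning loop.
        if backend is Backend.RANDOM:
            return RandomSearch()
        raise NotImplementedError(f"backend {backend.value} not wired up in this sketch")

    opt = make_optimizer(Backend.RANDOM)
    print(opt.suggest())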

Contributing

See CONTRIBUTING.md for details on the development environment and how to contribute.

Getting Started

The development environment for MLOS uses conda and devcontainers to ease dependency management, though not all of these dependencies are required for deployment.

For instructions on setting up the development environment, please try one of the following options:

Conda activation

  1. Create the mlos Conda environment.

    conda env create -f conda-envs/mlos.yml

    See the conda-envs/ directory for additional conda environment files, including those used for Windows (e.g. mlos-windows.yml).

    or

    # This will also ensure the environment is up to date using "conda env update -f conda-envs/mlos.yml"
    make conda-env

    Note: the latter expects a *nix environment.

  2. Initialize the shell environment.

    conda activate mlos

Usage Examples

mlos-core

For an example of using the mlos_core optimizer APIs, run the BayesianOptimization.ipynb notebook.

mlos-bench

For an example of using the mlos_bench tool to run an experiment, see the mlos_bench Quickstart README.

Here's a quick summary:

# generate a config file with Azure credentials for mlos_bench to use
./scripts/generate-azure-credentials-config > global_config_azure.jsonc

# run a simple experiment
mlos_bench --config ./mlos_bench/mlos_bench/config/cli/azure-redis-1shot.jsonc

mlos-viz

For a simple example of using the mlos_viz module to visualize the results of an experiment, see the sqlite-autotuning repository, especially the mlos_demo_sqlite_teachers.ipynb notebook.
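
As a rough sketch of what that looks like (based on the demo notebook; the storage config path and experiment id below are placeholders, and the exact mlos_bench.storage / mlos_viz signatures may differ between releases):

    # Rough sketch; placeholder paths/ids, and APIs may differ between releases.
    from mlos_bench.storage import from_config
    import mlos_viz

    # Load the results database produced by an earlier mlos_bench run.
    storage = from_config(config_file="storage/sqlite.jsonc")  # placeholder config path
    exp_data = storage.experiments["my-experiment-id"]         # placeholder experiment id

    # Plot optimizer progress and per-config results for that experiment.
    mlos_viz.plot(exp_data)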

Installation

The MLOS modules (mlos-core, mlos-bench, mlos-viz) are published to PyPI when new releases are tagged.

To install the latest release, simply run:

# this will install just the optimizer component with SMAC support:
pip install -U "mlos-core[smac]"

# this will install just the optimizer component with flaml support:
pip install -U "mlos-core[flaml]"

# this will install just the optimizer component with smac and flaml support:
pip install -U "mlos-core[smac,flaml]"

# this will install both the flaml optimizer and the experiment runner with azure support:
pip install -U "mlos-bench[flaml,azure]"

# this will install both the smac optimizer and the experiment runner with ssh support:
pip install -U "mlos-bench[smac,ssh]"

# this will install the postgres storage backend for mlos-bench
# and mlos-viz for visualizing results:
pip install -U "mlos-bench[postgres]" mlos-viz
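
After installing, one quick way to confirm which module versions were picked up (plain Python, no MLOS-specific APIs assumed):

    # Print the installed MLOS package versions (requires Python 3.8+).
    from importlib.metadata import PackageNotFoundError, version

    for pkg in ("mlos-core", "mlos-bench", "mlos-viz"):
        try:
            print(pkg, version(pkg))
        except PackageNotFoundError:
            print(pkg, "is not installed")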

Details on using a local version from git are available in CONTRIBUTING.md.

See Also

Examples

External example repositories, such as the sqlite-autotuning demo mentioned above, can be used as starting points for new autotuning projects outside of the main MLOS repository if you want to keep your tuning experiment configs separate from the MLOS codebase.

Alternatively, we accept PRs to add new examples to the main MLOS repository! See mlos_bench/config and CONTRIBUTING.md for more details.

Publications