
MLflow Export Import

The MLflow Export Import package provides tools to copy MLflow objects (runs, experiments, or registered models) from one MLflow tracking server (or Databricks workspace) to another. Using the MLflow REST API, the tools export MLflow objects to an intermediate directory and then import them into the target tracking server.

For more details:

Last updated: 2024-05-10

High Level Architecture

Overview

Why use MLflow Export Import?

MLflow Export Import scenarios

| Source tracking server | Destination tracking server | Note |
|---|---|---|
| Open source | Open source | common |
| Open source | Databricks | less common |
| Databricks | Databricks | common |
| Databricks | Open source | rare |

MLflow Objects

These are the MLflow objects and their attributes that can be exported.

| Object | REST | Python | SQL |
|---|---|---|---|
| Run | link | link | link |
| Experiment | link | link | link |
| Registered Model | link | link | link |
| Registered Model Version | link | link | link |

MLflow Export Import provides rudimentary lineage tracking for imported MLflow objects by optionally saving the original MLflow object attributes in the target environment. See README_governance.

Tools Overview

There are two dimensions to the MLflow Export Import tools: the execution mode (single or bulk) and the execution context (Python scripts or Databricks notebooks).

Single and Bulk Tools

The two execution modes are single and bulk.

Databricks notebooks simply invoke the corresponding Python classes.

Copy tools simply invoke the appropriate export and import on a temporary directory.

Copy Tools

See README_copy on how to copy model versions or runs.
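A minimal sketch of invoking the copy tools from the command line (the console-script names below are assumptions; check setup.py for the actual entry points):

copy-model-version --help
copy-run --help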

Limitations

Due to MLflow and Databricks API constraints, there are some limitations to the export/import process. See README_limitations.md.

Quick Start

Setup

pip install git+https://github.com/mlflow/mlflow-export-import/#egg=mlflow-export-import

Export experiment

export MLFLOW_TRACKING_URI=http://localhost:5000

export-experiment \
  --experiment sklearn-wine \
  --output-dir /tmp/export

Import experiment

export MLFLOW_TRACKING_URI=http://localhost:5001

import-experiment \
  --experiment-name sklearn-wine \
  --input-dir /tmp/export
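Because the export is written as plain files to the intermediate directory, you can inspect it before importing, for example:

find /tmp/export -type f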

Setup

Supports Python 3.8 and above.

Local setup

First create a virtual environment.

python -m venv mlflow-export-import
source mlflow-export-import/bin/activate

There are several different ways to install the package.

1. Install from GitHub

Recommended.

pip install git+https://github.com/mlflow/mlflow-export-import/#egg=mlflow-export-import

2. Install from a specific commit

pip install git+https://github.com/mlflow/mlflow-export-import@a334f8003a3c9c3b9cd0173827be692a39355fd8

3. Install from a GitHub clone

git clone https://github.com/mlflow/mlflow-export-import
cd mlflow-export-import
pip install -e .

4. Install from PyPI

Legacy. Due to the quick turnaround time for bug and feature fixes, this is deprecated.
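Whichever install method you use, a quick sanity check is to invoke one of the console scripts:

export-experiment --help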

Databricks notebook setup

Make sure your cluster has the latest MLflow and Databricks Runtime ML version installed.

There are two different ways to install the mlflow-export-import package in a Databricks notebook.

1. Install package in notebook

See documentation: Install notebook-scoped libraries with %pip.

The section above has other pip install alternatives you can use.

%pip install mlflow-export-import
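Since the PyPI package is deprecated, you can instead point %pip at GitHub, mirroring the install command from the section above:

%pip install git+https://github.com/mlflow/mlflow-export-import/#egg=mlflow-export-import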

2. Install package as a wheel on cluster

Build the wheel artifact, upload it to DBFS and then install it on your cluster.

git clone https://github.com/mlflow/mlflow-export-import
cd mlflow-export-import
python setup.py bdist_wheel
databricks fs cp dist/mlflow_export_import-1.0.0-py3-none-any.whl {MY_DBFS_PATH}
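Then install the uploaded wheel on the cluster, either as a cluster library through the Databricks UI or notebook-scoped with %pip. A sketch, assuming the wheel was copied to a DBFS path of your choosing:

%pip install /dbfs/path/to/mlflow_export_import-1.0.0-py3-none-any.whl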

Laptop to Databricks usage

There are several ways to run the tools from your laptop against a Databricks workspace.

  1. With ~/.databrickscfg and no profile specified. The host and token are picked up from the DEFAULT profile.
    export MLFLOW_TRACKING_URI=databricks
  2. With a profile specified in ~/.databrickscfg (see the example config after this list).
    export MLFLOW_TRACKING_URI=databricks://MY_PROFILE
  3. To override ~/.databrickscfg values, or if you have no ~/.databrickscfg file.
    export MLFLOW_TRACKING_URI=databricks
    export DATABRICKS_HOST=https://myshard.cloud.databricks.com
    export DATABRICKS_TOKEN=MY_TOKEN
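For reference, a minimal ~/.databrickscfg with a DEFAULT profile and one named profile looks like this (hosts and tokens are placeholders):

[DEFAULT]
host = https://myshard.cloud.databricks.com
token = MY_TOKEN

[MY_PROFILE]
host = https://my-other-shard.cloud.databricks.com
token = MY_OTHER_TOKEN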

See the Databricks documentation page Access the MLflow tracking server from outside Databricks - AWS or Azure.

Running mlflow-export-import tools

The main tool scripts can be executed either as standard Python scripts or as console scripts.

Python console scripts are provided as a convenience. For a list of scripts see setup.py.

For example:

export-experiment --help

or:

python -u -m mlflow_export_import.experiment.export_experiment --help

Logging

Standard Python logging is used. A simple default logging config is provided. By default all output is sent to stdout using the console handler. There is an option to use a file handler to send output to a file.

Several environment variables can be set to customize your logging experience.

Default logging config:

Custom logging config file:

Examples:

export MLFLOW_EXPORT_IMPORT_LOG_CONFIG_FILE=/dbfs/mlflow_export_import/conf/log_config.yaml
export MLFLOW_EXPORT_IMPORT_LOG_OUTPUT_FILE=/dbfs/mlflow_export_import/logs/export_models.log
export MLFLOW_EXPORT_IMPORT_LOG_FORMAT="%(asctime)s-%(levelname)s - %(message)s"
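For example, to send an export run's output to a log file (assuming the log output file variable is honored by the tool, as described above):

export MLFLOW_EXPORT_IMPORT_LOG_OUTPUT_FILE=/tmp/export_experiment.log
export-experiment --experiment sklearn-wine --output-dir /tmp/export
tail /tmp/export_experiment.log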

Multithreading:

If you use the use-threads option on exports, you can use the threadName format option:

export MLFLOW_EXPORT_IMPORT_LOG_FORMAT="%(threadName)s-%(levelname)s-%(message)s"

Note that multithreading is experimental. Logging is currently not fully satisfactory as output from different threads is interleaved.

Other

Testing

There are two types of tests: open source and Databricks tests. See tests/README for details.

README files