The MLflow Export Import package provides tools to copy MLflow objects (runs, experiments, or registered models) from one MLflow tracking server (or Databricks workspace) to another. Using the MLflow REST API, the tools export MLflow objects to an intermediate directory and then import them into the target tracking server.
Last updated: 2024-05-10
Source tracking server | Destination tracking server | Note |
---|---|---|
Open source | Open source | common |
Open source | Databricks | less common |
Databricks | Databricks | common |
Databricks | Open source | rare |
These are the MLflow objects and their attributes that can be exported.
Object | REST | Python | SQL |
---|---|---|---|
Run | link | link | link |
Experiment | link | link | link |
Registered Model | link | link | link |
Registered Model Version | link | link | link |
MLflow Export Import provides rudimentary lineage tracking for imported MLflow objects by optionally saving the original MLflow object attributes in the imported target environment. See README_governance.
There are two dimensions to the MLflow Export Import tools:
Single and Bulk Tools
The two execution modes are:
Single tools. Copy a single MLflow object between tracking servers. These tools allow you to specify a different destination object name. For example, if you want to clone the experiment `/Mary/Experiments/Iris` under a new name, you can specify the target experiment name as `/John/Experiments/Iris`.
Bulk tools. High-level tools to copy an entire tracking server or a collection of MLflow objects. There is no option to change destination object names. Full object referential integrity is maintained (e.g. an imported registered model version will point to the imported run that it refers to).
Databricks notebooks simply invoke the corresponding Python classes.
Copy tools simply invoke the appropriate export and import on a temporary directory.
See README_copy on how to copy model versions or runs.
Due to MLflow and Databricks API constraints, there are some limitations to the export/import process. See README_limitations.md.
```
pip install git+https://github.com/mlflow/mlflow-export-import/#egg=mlflow-export-import
```
```
export MLFLOW_TRACKING_URI=http://localhost:5000
export-experiment \
  --experiment sklearn-wine \
  --output-dir /tmp/export
```

```
export MLFLOW_TRACKING_URI=http://localhost:5001
import-experiment \
  --experiment-name sklearn-wine \
  --input-dir /tmp/export
```
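Between the two commands, the output directory holds the exported objects as JSON metadata (plus run artifacts). A minimal sketch of that round trip, assuming a simple `experiment.json` layout (the file name and fields here are illustrative assumptions, not the documented export format):

```python
import json
import pathlib
import tempfile

# Hypothetical intermediate-directory layout: the export side writes JSON
# metadata, and the import side reads it back on the target server.
export_dir = pathlib.Path(tempfile.mkdtemp())
(export_dir / "experiment.json").write_text(json.dumps({
    "experiment": {"name": "sklearn-wine", "lifecycle_stage": "active"},
    "runs": ["run-id-1", "run-id-2"],
}))

# "Import" side: read the serialized metadata back.
meta = json.loads((export_dir / "experiment.json").read_text())
print(meta["experiment"]["name"])
```

Because the handoff is a plain directory, it can live on a shared filesystem (e.g. DBFS) visible to both the source and target environments.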
Supports Python 3.8 and above.
First create a virtual environment.
```
python -m venv mlflow-export-import
source mlflow-export-import/bin/activate
```
There are several different ways to install the package.
Recommended.

```
pip install git+https://github.com/mlflow/mlflow-export-import/#egg=mlflow-export-import
```

To pin a specific commit:

```
pip install git+https://github.com/mlflow/mlflow-export-import@a334f8003a3c9c3b9cd0173827be692a39355fd8
```
```
git clone https://github.com/mlflow/mlflow-export-import
cd mlflow-export-import
pip install -e .
```
Legacy. Due to the quick turnaround time for bug and feature fixes, this is deprecated.
Make sure your cluster has the latest MLflow and Databricks Runtime ML version installed.
There are two different ways to install the mlflow-export-import package in a Databricks notebook.
See documentation: Install notebook-scoped libraries with %pip.
The section above has other pip install alternatives you can use.
```
%pip install mlflow-export-import
```
Build the wheel artifact, upload it to DBFS and then install it on your cluster.
```
git clone https://github.com/mlflow/mlflow-export-import
cd mlflow-export-import
python setup.py bdist_wheel
databricks fs cp dist/mlflow_export_import-1.0.0-py3-none-any.whl {MY_DBFS_PATH}
```
There are several ways to run the tools from your laptop against a Databricks workspace.
```
export MLFLOW_TRACKING_URI=databricks
```

```
export MLFLOW_TRACKING_URI=databricks://MY_PROFILE
```

```
export MLFLOW_TRACKING_URI=databricks
export DATABRICKS_HOST=https://myshard.cloud.databricks.com
export DATABRICKS_TOKEN=MY_TOKEN
```
See the Databricks documentation page Access the MLflow tracking server from outside Databricks (AWS or Azure).
The main tool scripts can be executed either as a standard Python script or console script.
Python console scripts are provided as a convenience. For a list of scripts see setup.py.
For example:
```
export-experiment --help
```
or:
```
python -u -m mlflow_export_import.experiment.export_experiment --help
```
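Each console script is a thin wrapper over the corresponding module's main function. A rough sketch of such an entry point, assuming an argparse-based CLI (the flag names match the usage above, but the wiring is illustrative, not the package's actual code):

```python
import argparse

# Illustrative entry point mirroring the export-experiment flags shown
# earlier; the real console scripts are declared in setup.py.
def build_parser():
    parser = argparse.ArgumentParser(prog="export-experiment")
    parser.add_argument("--experiment", required=True,
                        help="Experiment name or ID to export")
    parser.add_argument("--output-dir", required=True,
                        help="Directory to write the exported experiment to")
    return parser

# Simulate a command-line invocation.
args = build_parser().parse_args(
    ["--experiment", "sklearn-wine", "--output-dir", "/tmp/export"]
)
print(args.experiment, args.output_dir)
```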
Standard Python logging is used. A simple default logging config is provided. By default all output is sent to stdout using the console handler. There is an option to use a file handler to send output to a file.
Several environment variables can be set to customize your logging experience.
Default logging config:
Custom logging config file:
Examples:
```
export MLFLOW_EXPORT_IMPORT_LOG_CONFIG_FILE=/dbfs/mlflow_export_import/conf/log_config.yaml
export MLFLOW_EXPORT_IMPORT_LOG_OUTPUT_FILE=/dbfs/mlflow_export_import/logs/export_models.log
export MLFLOW_EXPORT_IMPORT_LOG_FORMAT="%(asctime)s-%(levelname)s - %(message)s"
```
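As a sketch of what that format string produces, here is the same format applied with the standard `logging` module (the environment variable itself is read by the package; this example only reproduces the formatting):

```python
import io
import logging

# The same format string as MLFLOW_EXPORT_IMPORT_LOG_FORMAT above,
# applied via a standard logging Formatter.
fmt = "%(asctime)s-%(levelname)s - %(message)s"

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter(fmt))

logger = logging.getLogger("format_demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("Exporting experiment sklearn-wine")
print(stream.getvalue(), end="")
```

Each line comes out as a timestamp, the level, and the message, e.g. `2024-05-10 12:00:00,000-INFO - Exporting experiment sklearn-wine`.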
If you use the `use-threads` option on exports, you can use the `threadName` format option:

```
export MLFLOW_EXPORT_IMPORT_LOG_FORMAT="%(threadName)s-%(levelname)s-%(message)s"
```
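A small sketch of why `threadName` helps: with threaded exports, each log line is tagged with the thread that produced it, so interleaved output stays attributable (standard-library demo, not the package's internal code):

```python
import io
import logging
import threading

# %(threadName)s prefixes each record with the emitting thread's name.
fmt = "%(threadName)s-%(levelname)s-%(message)s"

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter(fmt))

logger = logging.getLogger("thread_demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def export_one(name):
    # Stand-in for a per-object export task.
    logger.info("exporting %s", name)

threads = [
    threading.Thread(target=export_one, args=(n,), name=f"worker-{i}")
    for i, n in enumerate(["exp-a", "exp-b"])
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(stream.getvalue(), end="")
```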
Note that multithreading is experimental. Logging is currently not fully satisfactory, as output from different threads is interleaved.
There are two types of tests: open source and Databricks. See tests/README for details.