fastapi-mlflow

Deploy mlflow models as JSON APIs using FastAPI with minimal new code.

Installation

pip install fastapi-mlflow

For running the app in production, you will also need an ASGI server, such as Uvicorn or Hypercorn.
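For example, to add Uvicorn to your environment (Hypercorn works just as well; this is only one option):

pip install uvicorn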

Install on Apple Silicon (ARM / M1)

If you experience problems installing on a newer-generation Apple silicon device, applying this solution from Stack Overflow before retrying the install has been found to help:

brew install openblas gfortran
export OPENBLAS="$(brew --prefix openblas)"
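
With those variables set, retry the installation in the same shell:

pip install fastapi-mlflow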

License

Copyright © 2022-23 Auto Trader Group plc.

Apache-2.0

Examples

Simple

Create

Create a file main.py containing:

from fastapi_mlflow.applications import build_app
from mlflow.pyfunc import load_model

model = load_model("/Users/me/path/to/local/model")
app = build_app(model)
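
Note that mlflow.pyfunc.load_model also accepts MLflow model URIs, so a model logged to a run or registered in the model registry can be served in exactly the same way (the run ID and model name below are hypothetical placeholders):

model = load_model("runs:/<run_id>/model")   # a model logged under an MLflow run
model = load_model("models:/my-model/1")     # version 1 of a registered model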

Run

Run the server with:

uvicorn main:app

Check

Open your browser at http://127.0.0.1:8000/docs

You should see the automatically generated docs for your model, and be able to test it out using the Try it out button in the UI.
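
You can also exercise the endpoint from the command line. This is a rough sketch, assuming the prediction route is mounted at /predictions and that your model's signature declares two numeric inputs named a and b; check the generated docs for the exact path and request schema, which are derived from your model:

curl -X POST http://127.0.0.1:8000/predictions \
  -H "Content-Type: application/json" \
  -d '{"data": [{"a": 1.0, "b": 2.0}]}'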

Serve multiple models

It should be possible to host multiple models (assuming that they have compatible dependencies...) by leveraging FastAPI's Sub Applications:

from fastapi import FastAPI
from fastapi_mlflow.applications import build_app
from mlflow.pyfunc import load_model

app = FastAPI()

model1 = load_model("/Users/me/path/to/local/model1")
model1_app = build_app(model1)
app.mount("/model1", model1_app)

model2 = load_model("/Users/me/path/to/local/model2")
model2_app = build_app(model2)
app.mount("/model2", model2_app)

Run and Check as above.
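
Each mounted sub-application also serves its own interactive docs, so with the mount paths above you should be able to browse both:

http://127.0.0.1:8000/model1/docs
http://127.0.0.1:8000/model2/docs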

Custom routing

If you want more control over where and how the prediction endpoint is mounted in your API, you can build the predictor function directly and use it as you need:

from inspect import signature

from fastapi import FastAPI
from fastapi_mlflow.predictors import build_predictor
from mlflow.pyfunc import load_model

model = load_model("/Users/me/path/to/local/model")
predictor = build_predictor(model)
app = FastAPI()
app.add_api_route(
    "/classify",
    predictor,
    response_model=signature(predictor).return_annotation,
    methods=["POST"],
)

Run and Check as above.
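
The response_model argument above works because build_predictor returns a function whose return annotation is the response model generated from your model's output signature, which is why inspect.signature can recover it. If your application is organised around routers, the same predictor can be attached via an APIRouter instead; a minimal sketch reusing the predictor built above (the /models prefix is hypothetical):

from fastapi import APIRouter, FastAPI

# Group model endpoints under a common, hypothetical prefix
router = APIRouter(prefix="/models")
router.add_api_route(
    "/classify",
    predictor,
    response_model=signature(predictor).return_annotation,
    methods=["POST"],
)

app = FastAPI()
app.include_router(router)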