YeonwooSung / MLOps

Miscellaneous codes and writings for MLOps
GNU General Public License v3.0

build(deps): bump the pip group group in /model-vcs/mlflow/sklearn_mlflow with 1 update #78

Closed: dependabot[bot] closed this PR 8 months ago

dependabot[bot] commented 8 months ago

Bumps the pip group group in /model-vcs/mlflow/sklearn_mlflow with 1 update: mlflow.

Updates mlflow from 2.9.2 to 2.10.0
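In the pinned environment this corresponds to a one-line change to the requirements file (the exact filename and pin style are assumed here; the repository may pin differently):

```diff
 # model-vcs/mlflow/sklearn_mlflow/requirements.txt (assumed path)
-mlflow==2.9.2
+mlflow==2.10.0
```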

Release notes

Sourced from mlflow's releases.

MLflow 2.10.0

In MLflow 2.10, we're introducing a number of significant new features that pave the way for enhanced Deep Learning support now and in the future, broaden support for GenAI applications, and bring some quality-of-life improvements to the MLflow Deployments Server (formerly the AI Gateway).

New MLflow Website

We have a new home. The new site landing page is fresh, modern, and contains more content than ever. We're adding new content and blogs all of the time.

Model Signature Supports Objects and Arrays (#9936, @​serena-ruan)

Objects and Arrays are now available as configurable input and output schema elements. These new types are particularly useful for GenAI-focused flavors that can have complex input and output types. See the new Signature and Input Example documentation to learn more about how to use these new signature types.

LangChain Autologging (#10801, @​serena-ruan)

LangChain has autologging support now! When you invoke a chain with autologging enabled, MLflow will automatically log most chain implementations, recording and storing your configured LLM application for you. See the new LangChain documentation to learn more about how to use this feature.

Prompt Templating for Transformers Models (#10791, @​daniellok-db)

The MLflow transformers flavor now supports prompt templates. You can now specify an application-specific set of instructions that is combined with each input request, making it simple to supply a consistent system prompt to your GenAI pipeline. Check out the updated guide to transformers to learn more and see examples!
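Conceptually, a prompt template is a plain string with a single `{prompt}` placeholder that MLflow substitutes into before the text reaches the pipeline. The following is a minimal stand-alone sketch of that substitution, not MLflow's actual implementation; the template text and `render` helper are hypothetical:

```python
# Hypothetical template mirroring the prompt_template convention:
# a plain string containing a single {prompt} placeholder.
PROMPT_TEMPLATE = (
    "Answer the question as a concise MLOps assistant.\n"
    "Question: {prompt}\n"
    "Answer:"
)

def render(user_input: str, template: str = PROMPT_TEMPLATE) -> str:
    """Substitute the raw user input into the application template."""
    return template.format(prompt=user_input)

print(render("What does autologging do?"))
```

With the flavor's built-in support, this wrapping happens inside `pyfunc` predict, so callers send only the raw user input.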

MLflow Deployments Server Enhancement (#10765, @​gabrielfu; #10779, @​TomeHirata)

The MLflow Deployments Server now supports two frequently requested features: (1) streaming responses for OpenAI endpoints, so you can configure an endpoint to return real-time responses for Chat and Completions instead of waiting for the entire text to be generated, and (2) per-endpoint rate limits, which help control cost overruns when using SaaS models.
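A sketch of what a rate-limited OpenAI chat endpoint might look like in a Deployments Server config file; the endpoint name, model name, and limit values are placeholders, and the exact schema should be checked against the MLflow deployments documentation:

```yaml
endpoints:
  - name: chat
    endpoint_type: llm/v1/chat
    model:
      provider: openai
      name: gpt-3.5-turbo
      config:
        openai_api_key: $OPENAI_API_KEY
    # Per-endpoint rate limit: at most 20 calls per minute.
    limit:
      renewal_period: minute
      calls: 20
```

Streaming typically needs no server-side configuration change: clients opt in per request by setting `"stream": true` in the payload, matching the OpenAI convention.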

Further Document Improvements

Continued the push for enhanced documentation, guides, tutorials, and examples by expanding on core MLflow functionality (Deployments, Signatures, and Model Dependency management), as well as entirely new pages for GenAI flavors. Check them out today!

Other Features:

  • [Models] Enhance the MLflow Models predict API to serve as a pre-logging validator of environment compatibility. (#10759, @​B-Step62)
  • [Models] Add support for Image Classification pipelines within the transformers flavor (#10538, @​KonakanchiSwathi)
  • [Models] Add support for retrieving and storing license files for transformers models (#10871, @​BenWilson2)
  • [Models] Add support for model serialization in the Visual NLP format for JohnSnowLabs flavor (#10603, @​C-K-Loan)
  • [Models] Automatically convert OpenAI input messages to LangChain chat messages for pyfunc predict (#10758, @​dbczumar)
  • [Tracking] Enhance async logging functionality by ensuring flush is called on Futures objects (#10715, @​chenmoneygithub)
  • [Tracking] Add support for a non-interactive mode for the login() API (#10623, @​henxing)
  • [Scoring] Allow MLflow model serving to support direct dict inputs with the messages key (#10742, @​daniellok-db, @​B-Step62)
  • [Deployments] Add streaming support to the MLflow Deployments Server for OpenAI streaming return compatible routes (#10765, @​gabrielfu)
  • [Deployments] Add support for directly interfacing with OpenAI via the MLflow Deployments server (#10473, @​prithvikannan)
  • [UI] Introduce a number of new features for the MLflow UI (#10864, @​daniellok-db)
  • [Server-infra] Add an environment variable that can disallow HTTP redirects (#10655, @​daniellok-db)
  • [Artifacts] Add support for Multipart Upload for Azure Blob Storage (#10531, @​gabrielfu)
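Regarding the [Scoring] change above, a served chat model can now be queried with a plain dict carrying the `messages` key, rather than requiring the wrapped `dataframe_split`/`inputs` payload shapes; for example, a request body could be as simple as:

```json
{
  "messages": [
    {"role": "user", "content": "Summarize this run."}
  ]
}
```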

Bug fixes

  • [Models] Add deduplication logic for pip requirements and extras handling for MLflow models (#10778, @​BenWilson2)
  • [Models] Add support for paddle 2.6.0 release (#10757, @​WeichenXu123)
  • [Tracking] Fix an issue with an incorrect retry default timeout for urllib3 1.x (#10839, @​BenWilson2)
  • [Recipes] Fix an issue with MLflow Recipes card display format (#10893, @​WeichenXu123)
  • [Java] Fix an issue with metadata collection when using Streaming Sources on certain versions of Spark where Delta is the source (#10729, @​daniellok-db)

... (truncated)

Changelog

Sourced from mlflow's changelog.

2.10.0 (2024-01-26)

MLflow 2.10.0 includes several major features and improvements

In MLflow 2.10, we're introducing a number of significant new features that pave the way for enhanced Deep Learning support now and in the future, broaden support for GenAI applications, and bring some quality-of-life improvements to the MLflow Deployments Server (formerly the AI Gateway).

Our biggest features this release are:

  • We have a new home. The new site landing page is fresh, modern, and contains more content than ever. We're adding new content and blogs all of the time.

  • Objects and Arrays are now available as configurable input and output schema elements. These new types are particularly useful for GenAI-focused flavors that can have complex input and output types. See the new Signature and Input Example documentation to learn more about how to use these new signature types.

  • LangChain has autologging support now! When you invoke a chain with autologging enabled, MLflow will automatically log most chain implementations, recording and storing your configured LLM application for you. See the new LangChain documentation to learn more about how to use this feature.

  • The MLflow transformers flavor now supports prompt templates. You can now specify an application-specific set of instructions that is combined with each input request, making it simple to supply a consistent system prompt to your GenAI pipeline. Check out the updated guide to transformers to learn more and see examples!

  • The MLflow Deployments Server now supports two frequently requested features: (1) streaming responses for OpenAI endpoints, so you can configure an endpoint to return real-time responses for Chat and Completions instead of waiting for the entire text to be generated, and (2) per-endpoint rate limits, which help control cost overruns when using SaaS models.

  • Continued the push for enhanced documentation, guides, tutorials, and examples by expanding on core MLflow functionality (Deployments, Signatures, and Model Dependency management), as well as entirely new pages for GenAI flavors. Check them out today!

Features:

  • [Models] Introduce Objects and Arrays support for model signatures (#9936, @​serena-ruan)
  • [Models] Support saving prompt templates for transformers (#10791, @​daniellok-db)
  • [Models] Enhance the MLflow Models predict API to serve as a pre-logging validator of environment compatibility. (#10759, @​B-Step62)
  • [Models] Add support for Image Classification pipelines within the transformers flavor (#10538, @​KonakanchiSwathi)
  • [Models] Add support for retrieving and storing license files for transformers models (#10871, @​BenWilson2)
  • [Models] Add support for model serialization in the Visual NLP format for JohnSnowLabs flavor (#10603, @​C-K-Loan)
  • [Models] Automatically convert OpenAI input messages to LangChain chat messages for pyfunc predict (#10758, @​dbczumar)
  • [Tracking] Add support for Langchain autologging (#10801, @​serena-ruan)
  • [Tracking] Enhance async logging functionality by ensuring flush is called on Futures objects (#10715, @​chenmoneygithub)
  • [Tracking] Add support for a non-interactive mode for the login() API (#10623, @​henxing)
  • [Scoring] Allow MLflow model serving to support direct dict inputs with the messages key (#10742, @​daniellok-db, @​B-Step62)
  • [Deployments] Add streaming support to the MLflow Deployments Server for OpenAI streaming return compatible routes (#10765, @​gabrielfu)
  • [Deployments] Add the ability to set rate limits on configured endpoints within the MLflow deployments server API (#10779, @​TomeHirata)
  • [Deployments] Add support for directly interfacing with OpenAI via the MLflow Deployments server (#10473, @​prithvikannan)
  • [UI] Introduce a number of new features for the MLflow UI (#10864, @​daniellok-db)
  • [Server-infra] Add an environment variable that can disallow HTTP redirects (#10655, @​daniellok-db)
  • [Artifacts] Add support for Multipart Upload for Azure Blob Storage (#10531, @​gabrielfu)

Bug fixes:

  • [Models] Add deduplication logic for pip requirements and extras handling for MLflow models (#10778, @​BenWilson2)
  • [Models] Add support for paddle 2.6.0 release (#10757, @​WeichenXu123)
  • [Tracking] Fix an issue with an incorrect retry default timeout for urllib3 1.x (#10839, @​BenWilson2)
  • [Recipes] Fix an issue with MLflow Recipes card display format (#10893, @​WeichenXu123)
  • [Java] Fix an issue with metadata collection when using Streaming Sources on certain versions of Spark where Delta is the source (#10729, @​daniellok-db)
  • [Scoring] Fix an issue where SageMaker tags were not propagating correctly (#9310, @​clarkh-ncino)
  • [Windows / Databricks] Fix an issue with executing Databricks run commands from within a Window environment (#10811, @​wolpl)
  • [Models / Databricks] Disable mlflowdbfs mounts for JohnSnowLabs flavor due to flakiness (#9872, @​C-K-Loan)

... (truncated)

Commits


Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
  • `@dependabot ignore <dependency name> major version` will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself)
  • `@dependabot ignore <dependency name> minor version` will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself)
  • `@dependabot ignore <dependency name>` will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself)
  • `@dependabot unignore <dependency name>` will remove all of the ignore conditions of the specified dependency
  • `@dependabot unignore <dependency name> <ignore condition>` will remove the specified ignore condition of the specified dependency

You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/YeonwooSung/MLOps/network/alerts).
dependabot[bot] commented 8 months ago

Superseded by #79.