Kubeflow is a machine learning (ML) toolkit that is dedicated to making deployments of ML workflows on Kubernetes simple, portable, and scalable.
Kubeflow pipelines are reusable end-to-end ML workflows built using the Kubeflow Pipelines SDK.
The Kubeflow Pipelines service has three main goals: end-to-end orchestration of ML pipelines; easy experimentation, so you can try many ideas and manage your trials; and easy re-use of components and pipelines, so you can assemble end-to-end solutions without rebuilding each time.
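To make this concrete, here is a minimal sketch of a pipeline written with the Kubeflow Pipelines (KFP) v2 Python SDK; the component, pipeline, and output file names are illustrative, not taken from any existing project.

```python
from kfp import compiler, dsl

# A lightweight Python component; KFP packages it as a containerized step.
@dsl.component
def say_hello(name: str) -> str:
    message = f"Hello, {name}!"
    print(message)
    return message

# A pipeline wires components together into a reusable workflow.
@dsl.pipeline(name="hello-pipeline")
def hello_pipeline(recipient: str = "world"):
    say_hello(name=recipient)

# Compile to a pipeline package that can be uploaded or submitted to a KFP instance.
compiler.Compiler().compile(hello_pipeline, package_path="hello_pipeline.yaml")
```

Compiling produces a pipeline package (YAML) that can be uploaded through the UI or submitted with the SDK client.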
Kubeflow Pipelines can be installed as part of the Kubeflow Platform. Alternatively, you can deploy Kubeflow Pipelines as a standalone service.
The Docker container runtime is deprecated on Kubernetes 1.20+. As of Kubeflow Pipelines 1.8, Kubeflow Pipelines uses the Emissary executor by default. The Emissary executor is container-runtime agnostic, meaning you can run Kubeflow Pipelines on a Kubernetes cluster with any container runtime.
Get started with your first pipeline, and read more in the Kubeflow Pipelines overview.
See the various ways you can use the Kubeflow Pipelines SDK.
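For instance, a common workflow is to submit a compiled pipeline package to a running Kubeflow Pipelines instance with the SDK client; in this sketch the host URL, package file, and argument values are assumptions that depend on your deployment.

```python
from kfp.client import Client

# The endpoint depends on how Kubeflow Pipelines is exposed in your cluster
# (port-forwarding, ingress, etc.); this URL is only an example.
client = Client(host="http://localhost:8080")

# Submit the compiled package and pass values for the pipeline parameters.
run = client.create_run_from_pipeline_package(
    "hello_pipeline.yaml",
    arguments={"recipient": "Kubeflow"},
)
print(f"Started run {run.run_id}")
```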
See the Kubeflow Pipelines API doc for the API specification.
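The Python client wraps this REST API, so most API operations are also reachable from Python. The following sketch lists pipelines, assuming your endpoint is reachable at the example URL.

```python
from kfp.client import Client

# Host is an example; use the address of your own Kubeflow Pipelines endpoint.
client = Client(host="http://localhost:8080")

# The client wraps the REST API, so operations such as listing pipelines,
# experiments, and runs are available as Python methods.
response = client.list_pipelines(page_size=10)
for pipeline in response.pipelines or []:
    print(pipeline.display_name)
```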
Consult the Python SDK reference docs when writing pipelines using the Python SDK.
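As a small example of the constructs the reference documents, this sketch passes a dataset artifact between two components; the component names and the pandas dependency are illustrative.

```python
from kfp import dsl
from kfp.dsl import Dataset, Input, Output

# Writes a small CSV file to the output artifact's path.
@dsl.component(packages_to_install=["pandas"])
def make_dataset(out_data: Output[Dataset]):
    import pandas as pd
    pd.DataFrame({"x": [1, 2, 3]}).to_csv(out_data.path, index=False)

# Reads the artifact produced upstream and returns the row count.
@dsl.component(packages_to_install=["pandas"])
def count_rows(in_data: Input[Dataset]) -> int:
    import pandas as pd
    return len(pd.read_csv(in_data.path))

@dsl.pipeline(name="artifact-passing")
def artifact_pipeline():
    data_task = make_dataset()
    count_rows(in_data=data_task.outputs["out_data"])
```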
Before you start contributing to Kubeflow Pipelines, read the guidelines in How to Contribute. To learn how to build and deploy Kubeflow Pipelines from source code, read the developer guide.
The community meeting takes place every other Wednesday, 10-11 AM (PST). See the Calendar Invite or Join Meeting Directly.
Kubeflow Pipelines uses Argo Workflows by default under the hood to orchestrate Kubernetes resources. The Argo community has been very supportive, and we are very grateful. A Tekton backend is also available; to use it, refer to the Kubeflow Pipelines with Tekton repository.