cncf / sandbox

Applications for Sandbox go here! ⏳📦🧪
Apache License 2.0

[Sandbox] Kubernetes AI Toolchain Operator (KAITO) #106

Open sdesai345 opened 3 months ago

sdesai345 commented 3 months ago

Application contact emails

sachidesai@microsoft.com, guofei@microsoft.com, ishaansehgal@microsoft.com, jpalma@microsoft.com, qike@microsoft.com

Project Summary

KAITO automates the deployment of AI models and associated infrastructure provisioning on a Kubernetes cluster

Project Description

The Kubernetes AI toolchain operator (KAITO) is a cloud-native Kubernetes operator that automates the deployment of language models in a cluster across available CPU and GPU resources. For inference and fine-tuning scenarios, KAITO selects optimally sized infrastructure for the model and offers users the flexibility to switch to other available resource types. KAITO makes it easy to split inference for a range of preset models across multiple lower-GPU-count VMs, significantly reducing maintenance costs and overall inference service setup time.
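For illustration, a user-facing workspace manifest along these lines drives the flow described above: the user names a preset model and a GPU instance type, and the operator provisions the nodes and creates the inference workload. The API group, version, field names, and the specific preset and VM size below are taken from the project's published examples and may differ in the current API; treat this as a sketch, not the authoritative schema.

```yaml
# Hypothetical sketch of a KAITO workspace resource.
# Field names and values are assumptions based on project examples.
apiVersion: kaito.sh/v1alpha1
kind: Workspace
metadata:
  name: workspace-falcon-7b
resource:
  # GPU VM size the workspace controller asks the gpu-provisioner for
  instanceType: "Standard_NC12s_v3"
  labelSelector:
    matchLabels:
      apps: falcon-7b
inference:
  # Preset configuration selects a supported model and its runtime defaults
  preset:
    name: "falcon-7b"
```

Applying such a manifest is the single user action; node provisioning and workload creation are reconciled by the controllers.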

Org repo URL (provide if all repos under the org are in scope of the application)

N/A

Project repo URL in scope of application

https://github.com/Azure/kaito

Additional repos in scope of the application

No response

Website URL

https://github.com/Azure/kaito

Roadmap

https://github.com/orgs/Azure/projects/669

Roadmap context

No response

Contributing Guide

https://github.com/Azure/kaito/blob/main/docs/contributing/readme.md

Code of Conduct (CoC)

https://github.com/Azure/kaito/tree/main?tab=coc-ov-file

Adopters

No response

Contributing or Sponsoring Org

Microsoft Azure

Maintainers file

https://github.com/Azure/kaito/blob/main/CODEOWNERS

IP Policy

Trademark and accounts

Why CNCF?

The CNCF can provide KAITO with the ability to grow as a project and community of contributors. Across the CNAI and related working groups, members can extend KAITO to support more large language models for inference, improve fine-tuning capabilities, connect to a wider range of GPU infrastructure, and more. The CNCF encourages and cultivates a strong overlap between focus areas and working groups, particularly regarding scheduling, networking, and infrastructure management. Enhancements in cloud-native technologies can benefit AI/ML workloads. Given the interconnected nature of the CNCF, several working groups can collaborate and grow KAITO to match the pace of AI growth in today's world.

Benefit to the Landscape

This project seeks to bridge the gap between AI application development and cloud-native technologies. KAITO serves as a tool to onboard and streamline containerized AI/ML workloads for cloud-native users; it is built upon open-source CNCF projects and extensible to pair with future CNCF projects. As interest in ML inference and fine-tuning grows exponentially with the frequent release of high-performance open-source models, KAITO will help the CNCF community keep up, regardless of expertise in container orchestration or AI.

Cloud Native 'Fit'

KAITO fits into Automation and Configuration (Provisioning) of the Cloud Native Landscape, as a Kubernetes operator that automates the deployment of containerized LLMs. KAITO has two main open-source components: a workspace controller, which triggers node auto-provisioning and uses model preset configurations to create the inference workload, and a gpu-provisioner controller, which adds GPU nodes to a cluster from a given cloud provider.

Cloud Native 'Integration'

A major component of KAITO, the gpu-provisioner controller, is built upon the Machine custom resource definition (CRD) from the Karpenter project; it interacts with the workspace controller component to trigger the auto-provisioning of GPU nodes in a Kubernetes cluster.
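As a rough sketch of that integration point, the provisioner is assumed to act on Karpenter `Machine` objects shaped roughly as below. The exact schema depends on the Karpenter API version in use (the `v1alpha5` group shown here has since evolved), so the fields and values are illustrative assumptions only.

```yaml
# Hypothetical Karpenter Machine object the gpu-provisioner might reconcile.
# API version and fields are assumptions; Karpenter's Machine API has changed
# across releases.
apiVersion: karpenter.sh/v1alpha5
kind: Machine
metadata:
  name: gpu-node-claim
spec:
  requirements:
    # Constrain the provisioned node to the GPU VM size the workspace requested
    - key: node.kubernetes.io/instance-type
      operator: In
      values: ["Standard_NC12s_v3"]
```

The workspace controller expresses the desired GPU capacity, and the gpu-provisioner fulfills it by creating the corresponding node in the target cloud provider.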

Cloud Native Overlap

KAITO overlaps with Kubernetes, as the project is a Kubernetes operator following the established Kubernetes custom resource definition (CRD) and controller design pattern.

Similar projects

N/A

Landscape

No, this project is not yet listed in the CNCF landscape.

Business Product or Service to Project separation

Azure Kubernetes Service (AKS) has developed a managed add-on based on the KAITO project for AKS customers. This add-on, called the AI toolchain operator add-on, automatically provisions Azure-managed GPU nodes when deploying AI workloads on AKS clusters. The Azure-managed AI toolchain operator add-on will follow a separate release cadence and remain compatible with AKS features, while the KAITO open-source project will be developed in collaboration with the AKS team and members of the upstream Kubernetes community, to remain extensible across cloud providers and empower developers to leverage various GPU types for AI workloads.

Project presentations

We recently presented KAITO and our roadmap to TAG App Delivery on June 12 and TAG Runtime on June 20. We plan to present to WG Artificial Intelligence on June 27 as well.

Project champions

@lachie83

Additional information

No response

raravena80 commented 3 months ago

TAG-Runtime

TheFoxAtWork commented 2 weeks ago

@srust @raravena80 @miao0miao — does the TAG have a recommendation? @lianmakesthings @thschue @roberthstrand — does the TAG have a recommendation?

TheFoxAtWork commented 2 weeks ago

Questions for the project

sdesai345 commented 2 weeks ago

Please follow up with any further questions, thank you @TheFoxAtWork

raravena80 commented 1 week ago

TAG-Runtime's review/assessment

cathyhongzhang commented 5 days ago

This project customizes the deployment of AI models on a Kubernetes cluster. It provides one way to specify AI model GPU resource requirements and fine-tuning parameters. I would expect more alternative ways down the road.

jberkus commented 1 day ago

TAG Contributor strategy has reviewed this project and found the following:

This review is for the TOC’s information only. Sandbox projects are not required to have full governance or contributor documentation.