### **✨ The Open-Source Serverless GPU Container Runtime ✨**

[Documentation](https://docs.beta9.beam.cloud)

---

[English](https://github.com/beam-cloud/beta9/blob/master/README.md) | [简体中文](https://github.com/beam-cloud/beta9/blob/master/docs/zh/zh_cn/README.md) | [繁體中文](https://github.com/beam-cloud/beta9/blob/master/docs/zh/zh_cw/README.md) | [Türkçe](https://github.com/beam-cloud/beta9/blob/master/docs/tr/README.md) | [हिंदी](https://github.com/beam-cloud/beta9/blob/master/docs/in/README.md) | [Português (Brasil)](https://github.com/beam-cloud/beta9/blob/master/docs/pt/README.md) | [Italiano](https://github.com/beam-cloud/beta9/blob/master/docs/it/README.md) | [Español](https://github.com/beam-cloud/beta9/blob/master/docs/es/README.md) | [한국어](https://github.com/beam-cloud/beta9/blob/master/docs/kr/README.md) | [日本語](https://github.com/beam-cloud/beta9/blob/master/docs/jp/README.md)

---

# Beta9

Beta9 makes it easy for developers to run serverless functions on cloud GPUs.

Features:

We use beta9 internally at Beam to run AI applications for users at scale.

## Use-Cases

### Serverless Inference Endpoints

#### Decorate Any Python Function

```python
from beta9 import Image, endpoint

@endpoint(
    cpu=1,
    memory="16Gi",
    gpu="T4",
    image=Image(
        python_packages=[
            "vllm==0.4.1",
        ],  # These dependencies will be installed in your remote container
    ),
)
def predict():
    from vllm import LLM

    prompts = ["The future of AI is"]
    llm = LLM(model="facebook/opt-125m")
    output = llm.generate(prompts)[0]

    return {"prediction": output.outputs[0].text}
```

#### Deploy It to the Cloud

```
$ beta9 deploy app.py:predict --name llm-inference

=> Building image
=> Using cached image
=> Deploying endpoint
=> Deployed 🎉
=> Invocation details

curl -X POST 'https://app.beam.cloud/endpoint/llm-inference/v1' \
  -H 'Authorization: Bearer [YOUR_AUTH_TOKEN]' \
  -d '{}'
```
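For reference, here is a minimal sketch of the same invocation from Python using the `requests` library (not part of the example above). The URL and token placeholder mirror the curl command, and the empty JSON body matches the zero-argument `predict()` function:

```python
# A minimal sketch of invoking the deployed endpoint from Python.
# The URL and token placeholder mirror the curl example above.
import requests

response = requests.post(
    "https://app.beam.cloud/endpoint/llm-inference/v1",
    headers={"Authorization": "Bearer [YOUR_AUTH_TOKEN]"},
    json={},  # predict() takes no arguments, so the payload is empty
)
print(response.json())  # e.g. {"prediction": "..."}
```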

### Fan-Out Workloads to Hundreds of Containers

```python
from beta9 import function

# This decorator allows you to parallelize this function
# across multiple remote containers
@function(cpu=1, memory=128)
def square(i: int):
    return i**2

def main():
    numbers = list(range(100))
    squared = []

    # Run a remote container for every item in the list
    for result in square.map(numbers):
        squared.append(result)

if __name__ == "__main__":
    main()
```

### Enqueue Async Jobs

```python
from beta9 import task_queue, Image

@task_queue(
    cpu=1.0,
    memory=128,
    gpu="T4",
    image=Image(python_packages=["torch"]),
    keep_warm_seconds=1000,
)
def multiply(x):
    result = x * 2
    return {"result": result}

# Manually insert task into the queue
multiply.put(x=10)
```
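For illustration, a producer script could enqueue a batch of work items by reusing the `multiply.put()` call shown above in a loop (the input values here are arbitrary):

```python
# A minimal sketch: enqueue several tasks onto the multiply task queue
# defined above. The input values are arbitrary.
for x in range(10):
    multiply.put(x=x)
```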

## How It Works

Beta9 is designed for launching remote serverless containers quickly. There are a few things that make this possible:


## Get Started

### Beam Cloud (Recommended)

The fastest and most reliable way to get started is by signing up for our managed service, Beam Cloud. Your first 10 hours of usage are free, and afterwards you pay based on usage.

### Open-Source Deploy (Advanced)

You can run Beta9 locally or in an existing Kubernetes cluster using our Helm chart.

### Local Development

#### Setting Up the Server

k3d is used for local development. You'll need Docker and Make to get started.

To use our fully automated setup, run the setup make target.

> [!NOTE]
> This will overwrite some of the tools you may already have installed. Review setup.sh to learn more.

```bash
make setup
```

#### Setting Up the SDK

The SDK is written in Python. You'll need Python 3.8 or higher. Use the setup-sdk make target to get started.

> [!NOTE]
> This will install the Poetry package manager.

```bash
make setup-sdk
```

#### Using the SDK

After you've set up the server and SDK, check out the SDK readme here.
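Once everything is running, a quick smoke test can help confirm the SDK can reach your local server. This is an illustrative sketch only (the file name, function name, and resource values are made up); it simply reuses the `@function` decorator and `.map()` call shown in the fan-out example above:

```python
# hello.py - an illustrative smoke test for a local Beta9 setup.
# Reuses the @function decorator and .map() shown earlier; the names
# and resource values here are arbitrary examples.
from beta9 import function

@function(cpu=1, memory=128)
def hello(name: str) -> str:
    return f"Hello from a remote container, {name}!"

if __name__ == "__main__":
    # Each input item is processed in a remote container
    for greeting in hello.map(["world"]):
        print(greeting)
```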

## Contributing

We welcome contributions, big or small! These are the most helpful things for us:

## Community & Support

If you need support, you can reach out through any of these channels:

## Thanks to Our Contributors