
TabbyAPI

Python 3.10, 3.11, and 3.12 · License: AGPL v3 · Discord Server

Developer-facing API documentation

Support on Ko-Fi

[!IMPORTANT]

In addition to the README, please read the Wiki page for information about getting started!

[!NOTE]

Need help? Join the Discord Server and get the Tabby role. Please be nice when asking questions.

A FastAPI-based application for generating text with an LLM (large language model) using the ExLlamaV2 backend.
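For reference, below is a minimal sketch of querying the OpenAI-compatible endpoint from Python. The base URL, port (5000), API key, and model name are placeholder assumptions for a typical local setup; check your own configuration and the Wiki for the actual values.

```python
# A minimal sketch of querying TabbyAPI through its OpenAI-compatible API.
# The base URL, API key, and model name below are placeholder assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:5000/v1",  # assumed local TabbyAPI address
    api_key="YOUR_API_KEY",               # the API key from your TabbyAPI config
)

completion = client.chat.completions.create(
    model="local-model",  # placeholder; TabbyAPI serves the currently loaded model
    messages=[{"role": "user", "content": "Hello! Briefly introduce yourself."}],
    max_tokens=64,
)

print(completion.choices[0].message.content)
```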

Disclaimer

This project is marked rolling release: expect bugs and breaking changes down the line, and be aware that you may occasionally need to reinstall dependencies.

TabbyAPI is a hobby project intended for a small number of users. It is not meant to run on production servers; for those workloads, please look at other backends built to support them.

Getting Started

[!IMPORTANT]

This README is not for getting started. Please read the Wiki.

Read the Wiki for more information. It contains user-facing documentation for installation, configuration, sampling, API usage, and much more.

Supported Model Types

TabbyAPI uses ExLlamaV2 as a powerful and fast backend for model inference and loading. The following model types are therefore supported:

- Exl2 (highly recommended)
- GPTQ
- FP16 (using ExLlamaV2's loader)

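As an illustration of runtime model management, the sketch below loads a model through the admin API. The `/v1/model/load` route, `x-admin-key` header, and payload field are assumptions based on the developer-facing API documentation; verify them there before relying on this.

```python
# A hedged sketch of loading a model at runtime via TabbyAPI's admin API.
# Endpoint, header, and payload names are assumptions; confirm them against
# the developer-facing API documentation linked above.
import requests

response = requests.post(
    "http://localhost:5000/v1/model/load",      # assumed endpoint
    headers={"x-admin-key": "YOUR_ADMIN_KEY"},  # assumed admin-key header
    json={"name": "MyModel-exl2-4.0bpw"},       # folder name under your model directory
    timeout=600,  # model loads can take a while
)
response.raise_for_status()
```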
In addition, TabbyAPI supports parallel batching via paged attention on NVIDIA Ampere GPUs and newer.
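Because requests can be batched together server-side, sending prompts concurrently is often faster than sending them one at a time. Below is a minimal sketch using the async OpenAI client; the connection details and model name are the same placeholder assumptions as in the earlier example.

```python
# A minimal sketch of concurrent requests, which TabbyAPI can serve in a
# single paged-attention batch on supported GPUs. Connection details are
# placeholder assumptions, as in the earlier example.
import asyncio

from openai import AsyncOpenAI

client = AsyncOpenAI(
    base_url="http://localhost:5000/v1",  # assumed local TabbyAPI address
    api_key="YOUR_API_KEY",
)

async def ask(prompt: str) -> str:
    resp = await client.chat.completions.create(
        model="local-model",  # placeholder; the loaded model is used
        messages=[{"role": "user", "content": prompt}],
        max_tokens=128,
    )
    return resp.choices[0].message.content

async def main() -> None:
    prompts = [
        "Summarize paged attention in one sentence.",
        "What is EXL2 quantization?",
        "Name one use of FastAPI.",
    ]
    # asyncio.gather issues all requests at once; the server batches them.
    answers = await asyncio.gather(*(ask(p) for p in prompts))
    for prompt, answer in zip(prompts, answers):
        print(f"{prompt}\n-> {answer}\n")

asyncio.run(main())
```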

Alternative Loaders/Backends

If you want to use a different model type or quantization method than the ones listed above, here are some alternative backends with their own APIs:

Contributing

Use the template when creating issues or pull requests; otherwise, the developers may not look at your post.

If you have issues with the project:

If you have a Pull Request:

Developers and Permissions

Creators/Developers: