> [!IMPORTANT]
> In addition to the README, please read the Wiki page for information about getting started!
> [!NOTE]
> Need help? Join the Discord Server and get the `Tabby` role. Please be nice when asking questions.
A FastAPI-based application that allows for generating text with an LLM (large language model) using the ExllamaV2 backend.
TabbyAPI is also the official API backend server for ExllamaV2.
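Since TabbyAPI exposes an OpenAI-compatible HTTP API, a generation request is a plain JSON POST. Below is a minimal sketch using only the Python standard library; the host, port (`localhost:5000`), and authorization scheme are assumptions here, so check your own config and the Wiki for the actual values.

```python
# A minimal sketch of calling TabbyAPI's OpenAI-compatible completions
# endpoint. The host, port, and API key below are assumptions -- check
# your own config and the Wiki for the real values.
import json
import urllib.request

API_URL = "http://localhost:5000/v1/completions"  # assumed default host/port
API_KEY = "your-api-key-here"                     # assumed; set per your config

payload = {
    "prompt": "Once upon a time",
    "max_tokens": 64,
    "temperature": 0.8,
}

request = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
)

with urllib.request.urlopen(request) as response:
    result = json.loads(response.read())

# OpenAI-style responses put the generated text under choices[0]["text"]
print(result["choices"][0]["text"])
```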
This project is marked as a rolling release, so there may be bugs and breaking changes down the line. Please be aware that you might need to reinstall dependencies from time to time.
TabbyAPI is a hobby project made for a small number of users. It is not meant to run on production servers; for those workloads, please look at other solutions that support them.
> [!IMPORTANT]
> This README does not contain setup instructions. Please read the Wiki.
Read the Wiki for more information. It contains user-facing documentation for installation, configuration, sampling, API usage, and much more. If something is missing from it, please PR it in!
TabbyAPI uses ExllamaV2 as a powerful and fast backend for model inference, loading, and more. Therefore, the following model types are supported:
- Exl2 (Highly recommended)
- GPTQ
- FP16 (using ExllamaV2's loader)
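Whichever format a model is in, it is loaded through the same server API rather than per-format code paths. The sketch below is a hypothetical illustration: the `/v1/model/load` route, its payload fields, and the `x-admin-key` header are assumptions, so verify them against the Wiki before relying on them.

```python
# Hypothetical sketch of loading a model over the admin API. The route,
# payload fields, and admin-key header are assumptions -- verify them
# against the Wiki before relying on this.
import json
import urllib.request

request = urllib.request.Request(
    "http://localhost:5000/v1/model/load",  # assumed route
    data=json.dumps({"name": "my-exl2-model"}).encode("utf-8"),  # assumed: folder name under your models directory
    headers={
        "Content-Type": "application/json",
        "x-admin-key": "your-admin-key-here",  # assumed header name
    },
)

with urllib.request.urlopen(request) as response:
    # Print the raw response body; the server may stream progress updates
    print(response.read().decode())
```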
In addition, TabbyAPI supports parallel batching using paged attention for Nvidia Ampere GPUs and higher.
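Batching happens server-side: when several requests are in flight at once, TabbyAPI can generate for all of them in a single paged-attention batch. A minimal client-side sketch, reusing the assumed endpoint and API key from the example above:

```python
# Paged-attention batching is server-side: firing several requests at
# once lets TabbyAPI process them together on supported GPUs.
# URL and key are assumptions, as in the earlier example.
import json
import urllib.request
from concurrent.futures import ThreadPoolExecutor

API_URL = "http://localhost:5000/v1/completions"  # assumed default
API_KEY = "your-api-key-here"                     # assumed

def complete(prompt: str) -> str:
    request = urllib.request.Request(
        API_URL,
        data=json.dumps({"prompt": prompt, "max_tokens": 32}).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["choices"][0]["text"]

prompts = ["The capital of France is", "2 + 2 equals", "Water boils at"]

# Plain threads suffice: each worker just blocks on network I/O while
# the server batches the in-flight generations on the GPU.
with ThreadPoolExecutor(max_workers=len(prompts)) as pool:
    for prompt, text in zip(prompts, pool.map(complete, prompts)):
        print(f"{prompt!r} -> {text!r}")
```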
Use the template when creating issues or pull requests; otherwise, the developers may not look at your post.
If you have issues with the project:

- Describe the issue in detail.
- If you have a feature request, please indicate it as such.

If you have a Pull Request:

- Describe the change in detail: what you are changing and why.
TabbyAPI would not exist without the work of other contributors and FOSS projects:
Creators/Developers: