fastLLaMa is an experimental high-performance framework designed to tackle the challenges associated with deploying large language models (LLMs) in production environments.
It offers a user-friendly Python interface to a C++ library, llama.cpp, enabling developers to create custom workflows, implement adaptable logging, and seamlessly switch contexts between sessions. This framework is geared towards enhancing the efficiency of operating LLMs at scale, with ongoing development focused on introducing features such as optimized cold boot times, Int4 support for NVIDIA GPUs, model artifact management, and multiple programming language support.
```
 ___ __ _ _ __ __
| | '___ ___ _| |_ | | | | ___ | \ \ ___
| |-<_> |<_-< | | | |_ | |_ <_> || |<_> |
|_| <___|/__/ |_| |___||___|<___||_|_|_|<___|
.+*+-.
-%#--
:=***%*++=.
:+=+**####%+
++=+*%#
.*+++==-
::--:. .**++=::
#%##*++=...... =*+==-::
.@@@*@%*==-==-==---:::::------::==*+==--::
%@@@@+--====+===---=---==+=======+++----:
.%@@*++*##***+===-=====++++++*++*+====++.
:@@%*##%@@%#*%#+==++++++=++***==-=+==+=-
%@%%%%%@%#+=*%*##%%%@###**++++==--==++
#@%%@%@@##**%@@@%#%%%%**++*++=====-=*-
-@@@@@@@%*#%@@@@@@@%%%%#+*%#++++++=*+.
+@@@@@%%*-#@@@@@@@@@@@%%@%**#*#+=-.
#%%###%: ..+#%@@@@%%@@@@%#+-
:***#*- ... *@@@%*+:
=***= -@%##**.
:#*++ -@#-:*=.
=##- .%*..##
+*- *: +-
:+- :+ =.
=-. *+ =-
:-:- =-- :::
```

Requirements:

- POSIX `aio_read` / `io_uring`
- CMake
  - For Linux: `sudo apt-get -y install cmake`
  - For OS X: `brew install cmake`
  - For Windows: download the `cmake-*.exe` installer from the download page and run it.
- GCC 11 or greater
- C++17 or later
- Python 3.x
To install fastLLaMa through pip, run:

`pip install git+https://github.com/PotatoSpudowski/fastLLaMa.git@main`

To import fastLLaMa, run:

`from fastllama import Model`
MODEL_PATH = "./models/7B/ggml-model-q4_0.bin"

model = Model(
    path=MODEL_PATH,    # path to the ggml model file
    num_threads=8,      # number of threads to use
    n_ctx=512,          # context size of the model
    last_n_size=64,     # size of the last n tokens (used for repetition penalty) (Optional)
    seed=0,             # seed for the random number generator (Optional)
    n_batch=128,        # batch size (Optional)
    use_mmap=False,     # use mmap to load the model (Optional)
)
prompt = """Transcript of a dialog, where the User interacts with an Assistant named Bob. Bob is helpful, kind, honest, good at writing, and never fails to answer the User's requests immediately and with precision.
User: Hello, Bob.
Bob: Hello. How may I help you today?
User: Please tell me the largest city in Europe.
Bob: Sure. The largest city in Europe is Moscow, the capital of Russia.
User: """
res = model.ingest(prompt, is_system_prompt=True)  # ingest the prompt into the model's context
def stream_token(x: str) -> None:
    """
    This function is called by the library to stream tokens
    """
    print(x, end='', flush=True)
res = model.generate(
    num_tokens=100,
    top_p=0.95,                  # top-p sampling (Optional)
    temp=0.8,                    # temperature (Optional)
    repeat_penalty=1.0,          # repetition penalty (Optional)
    streaming_fn=stream_token,   # streaming function
    stop_words=["User:", "\n"],  # stop generation when any of these strings is encountered (Optional)
)
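Together, `ingest` and `generate` are enough to drive a simple interactive chat loop. The sketch below reuses the `model`, `stream_token`, and system prompt defined above; the `User:`/`Bob:` turn markers are conventions of this particular prompt rather than anything required by the API.

```python
# Minimal chat loop built from the ingest/generate calls shown above.
# Assumes `model`, `stream_token`, and the system prompt have already been set up.
while True:
    user_input = input("\nUser: ")
    if user_input.strip().lower() in ("exit", "quit"):
        break

    # Feed the user's turn into the model's context.
    model.ingest(f"User: {user_input}\nBob: ")

    # Stream Bob's reply, stopping at the next turn marker.
    print("Bob: ", end="", flush=True)
    model.generate(
        num_tokens=256,
        top_p=0.95,
        temp=0.8,
        repeat_penalty=1.0,
        streaming_fn=stream_token,
        stop_words=["User:"],
    )
```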
To load the model weights in parallel, pass `load_parallel=True` when constructing the model:

model = Model(
    path=MODEL_PATH,    # path to the ggml model file
    num_threads=8,      # number of threads to use
    n_ctx=512,          # context size of the model
    last_n_size=64,     # size of the last n tokens (used for repetition penalty) (Optional)
    seed=0,             # seed for the random number generator (Optional)
    n_batch=128,        # batch size (Optional)
    load_parallel=True, # load the model weights in parallel
)
To cache the current session, use the `save_state` method:
res = model.save_state("./models/fast_llama.bin")
To load a saved session, use the `load_state` method:
res = model.load_state("./models/fast_llama.bin")
To reset the session, use the `reset` method:
model.reset()
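A common pattern is to ingest a long system prompt once, cache the resulting state, and restore it later instead of paying the ingestion cost again. A rough sketch using only the calls shown above (the cache path is arbitrary):

```python
SESSION_PATH = "./models/fast_llama.bin"  # arbitrary location for the cached state

# First run: ingest the system prompt once and cache the resulting state.
model.ingest(prompt, is_system_prompt=True)
model.save_state(SESSION_PATH)

# Later: restore the cached context instead of re-ingesting the prompt.
model.reset()
model.load_state(SESSION_PATH)
model.generate(num_tokens=100, streaming_fn=stream_token)
```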
To attach a LoRA adapter at runtime, use the `attach_lora` method:
LORA_ADAPTER_PATH = "./models/ALPACA-7B-ADAPTER/ggml-adapter-model.bin"
model.attach_lora(LORA_ADAPTER_PATH)
Note: It is a good idea to reset the state of the model after attaching a LoRA Adapter.
To detach the LoRA adapter at runtime, use the `detach_lora` method:
model.detach_lora()
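Put together, switching an adapter on and off during a session might look like the sketch below (the prompt is a placeholder; as noted above, resetting after attaching is a good idea):

```python
LORA_ADAPTER_PATH = "./models/ALPACA-7B-ADAPTER/ggml-adapter-model.bin"

# Attach the adapter, reset the cached state, and generate with it.
model.attach_lora(LORA_ADAPTER_PATH)
model.reset()
model.ingest("Write a short note about llamas.\n")
model.generate(num_tokens=64, streaming_fn=stream_token)

# Detach the adapter and reset again to return to the base model's behaviour.
model.detach_lora()
model.reset()
```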
To calculate perplexity, use the `perplexity` method:
with open("test.txt", "r") as f:
    data = f.read(8000)

total_perplexity = model.perplexity(data)
print(f"Total Perplexity: {total_perplexity:.4f}")
To get the embeddings of the model, use the `get_embeddings` method:
embeddings = model.get_embeddings()
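One illustrative use is comparing two pieces of text, assuming `get_embeddings()` returns a flat list of floats for the most recently ingested text (the exact shape and any required initialization options may differ; check examples/python/ for the supported usage):

```python
import math

def cosine_similarity(a, b):
    """Plain-Python cosine similarity between two equal-length float lists."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Assumption: each ingest/get_embeddings pair yields an embedding for that text.
model.ingest("The quick brown fox jumps over the lazy dog.")
emb_a = model.get_embeddings()

model.ingest("A fast auburn fox leaps over a sleepy hound.")
emb_b = model.get_embeddings()

print(f"cosine similarity: {cosine_similarity(emb_a, emb_b):.4f}")
```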
To get the logits of the model, use the `get_logits` method:
logits = model.get_logits()
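For example, assuming `get_logits()` returns the raw (unnormalized) scores over the vocabulary for the last evaluated token, a softmax turns them into next-token probabilities:

```python
import math

logits = model.get_logits()

# Numerically stable softmax over the vocabulary scores.
max_logit = max(logits)
exps = [math.exp(x - max_logit) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Inspect the most likely next-token id and its probability mass.
best = max(range(len(probs)), key=probs.__getitem__)
print(f"vocab size: {len(probs)}, argmax id: {best}, p: {probs[best]:.4f}")
```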
fastLLaMa also supports custom logging: subclass `Logger` and override its log methods.

from fastllama import Logger

class MyLogger(Logger):
    def __init__(self):
        super().__init__()
        self.file = open("logs.log", "w")

    def log_info(self, func_name: str, message: str) -> None:
        # Modify this to do whatever you want when you see info logs
        print(f"[Info]: Func('{func_name}') {message}", flush=True, end='', file=self.file)

    def log_err(self, func_name: str, message: str) -> None:
        # Modify this to do whatever you want when you see error logs
        print(f"[Error]: Func('{func_name}') {message}", flush=True, end='', file=self.file)

    def log_warn(self, func_name: str, message: str) -> None:
        # Modify this to do whatever you want when you see warning logs
        print(f"[Warn]: Func('{func_name}') {message}", flush=True, end='', file=self.file)
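To plug the custom logger in, pass an instance of it when constructing the model. The `logger` keyword below is an assumption about the constructor signature; check examples/python/ for the exact parameter name.

```python
# Assumption: Model accepts a `logger` keyword argument; verify against
# the examples in examples/python/ before relying on this.
model = Model(
    path=MODEL_PATH,
    num_threads=8,
    n_ctx=512,
    logger=MyLogger(),  # hypothetical keyword, see note above
)
```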
For more complete examples, check the examples/python/ folder.
# obtain the original LLaMA model weights and place them in ./models
ls ./models
65B 30B 13B 7B tokenizer_checklist.chk tokenizer.model
# convert the 7B model to ggml FP16 format
# python [PythonFile] [ModelPath] [Floattype] [Vocab Only] [SplitType]
python3 scripts/convert-pth-to-ggml.py models/7B/ 1 0
# quantize the model to 4-bits
./build/src/quantize models/7B/ggml-model-f16.bin models/7B/ggml-model-q4_0.bin 2
# run the inference
# Run the scripts from the root dir of the project for now!
python ./examples/python/example.py
# Before running this command
# You need to provide the HF model paths here
python ./scripts/export-from-huggingface.py
# Alternatively you can just download the ggml models from huggingface directly and run them!
python3 ./scripts/convert-pth-to-ggml.py models/ALPACA-LORA-7B 1 0
./build/src/quantize models/ALPACA-LORA-7B/ggml-model-f16.bin models/ALPACA-LORA-7B/alpaca-lora-q4_0.bin 2
python ./examples/python/example-alpaca.py
# Download LoRA adapters and place them inside the models folder
# https://huggingface.co/tloen/alpaca-lora-7b
python scripts/convert-lora-to-ggml.py models/ALPACA-7B-ADAPTER/ -t fp32
# Change -t to fp16 to use fp16 weights
# In order to use LoRA adapters without caching, pass the --no-cache flag
# - Only supported for fp32 adapter weights
python examples/python/example-lora-adapter.py
# Make sure to set paths correctly for the base model and adapter inside the example
# Commands:
# load_lora: Attaches the adapter to the base model
# unload_lora: Detaches the adapter (detaching for fp16 is yet to be added!)
# reset: Resets the model state
To run the WebSocket server and the WebUI, follow the instructions on the respective branches.
As the models are currently fully loaded into memory, you will need adequate disk space to save them and sufficient RAM to load them. At the moment, memory and disk requirements are the same.
| model size | original size | quantized size (4-bit) |
|---|---|---|
| 7B | 13 GB | 3.9 GB |
| 13B | 24 GB | 7.8 GB |
| 30B | 60 GB | 19.5 GB |
| 65B | 120 GB | 38.5 GB |
Info: Inference may require extra memory at run time (depending on the hyperparameters used during model initialization).