Calculate how much GPU memory you need & how many tokens/second you can get for any LLM & GPU/CPU.
Also gives a breakdown of where the memory goes for training/inference, with quantization (GGML/bitsandbytes/QLoRA) & inference frameworks (vLLM/llama.cpp/HF) supported.
Link: https://rahulschand.github.io/gpu_poor/
For memory, the output is the total VRAM & a breakdown of where it goes. It looks like the following:
{
"Total": 4000,
"KV Cache": 1000,
"Model Size": 2000,
"Activation Memory": 500,
"Grad & Optimizer memory": 0,
"cuda + other overhead": 500
}
For tokens/second, the additional info looks like the following:
{
"Token per second": 50,
"ms per token": 20,
"Prompt process time (s)": 5 s,
"memory or compute bound?": Memory,
}
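For a rough sense of where the "Token per second" number comes from when generation is memory-bound, the usual back-of-the-envelope estimate is that every generated token has to stream all the model weights (plus the KV cache) from VRAM. Below is a minimal Python sketch of that heuristic, not the site's actual JavaScript; the bandwidth and size figures in the example are illustrative assumptions:

```python
def tokens_per_second_memory_bound(model_bytes, kv_cache_bytes, mem_bandwidth_bytes_per_s):
    """Rough decode-speed upper bound when generation is memory-bandwidth bound.

    Every new token requires reading the full weights + KV cache from VRAM,
    so time per token ~= bytes read / memory bandwidth.
    """
    bytes_per_token = model_bytes + kv_cache_bytes
    seconds_per_token = bytes_per_token / mem_bandwidth_bytes_per_s
    return 1.0 / seconds_per_token

# Example (assumed numbers): 7b model in Q4 (~3.5 GB), 0.5 GB KV cache,
# RTX 4090 with roughly 1000 GB/s memory bandwidth.
print(tokens_per_second_memory_bound(3.5e9, 0.5e9, 1000e9))  # ~250 tok/s upper bound
```

Real throughput is lower than this bound because of kernel launch, attention, and framework overheads, which is part of what the calculator tries to account for.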
For training, the output is the time for each iteration (forward + backward), in ms:
{
"ms per iteration (forward + backward)": 100,
"memory or compute bound?": Memory,
}
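For intuition on the compute-bound side of a training step, a common rule of thumb (not necessarily what the site uses internally) is that a forward + backward pass costs roughly 6 FLOPs per parameter per token. A minimal sketch, with the peak-FLOPs and utilization numbers being illustrative assumptions:

```python
def ms_per_training_iteration(num_params, batch_size, seq_len,
                              peak_flops, utilization=0.4):
    """Rough compute-bound estimate of one forward + backward pass.

    Uses the ~6 * params * tokens FLOPs rule of thumb for training,
    scaled by an assumed achievable fraction of the GPU's peak FLOPs.
    """
    total_flops = 6 * num_params * batch_size * seq_len
    seconds = total_flops / (peak_flops * utilization)
    return seconds * 1000

# Example (assumed numbers): llama-2-7b, batch 1, seq 1000,
# RTX 4090 fp16 peak ~165 TFLOPs, ~40% utilization.
print(ms_per_training_iteration(7e9, 1, 1000, 165e12))  # ~640 ms
```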
I made this to check if you can run a particular LLM on your GPU. It's useful for the reasons below.
Finding which LLMs your GPU can handle isn't as easy as looking at the model size, because during inference the KV cache takes a substantial amount of extra memory. For example, with a sequence length of 1000, llama-2-7b takes 1GB of extra memory (using huggingface LlamaForCausalLM; with exLlama & vLLM this is 500MB). And during training, KV cache, activations & quantization overhead take a lot of memory. For example, llama-7b with bnb int8 quant is ~7.5GB in size, but it isn't possible to finetune it using LoRA on data with a 1000-token context length even with an RTX 4090's 24GB. That means an additional ~16GB goes into quant overheads, activations & grad memory.
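To make the KV-cache example above concrete, here is a small Python sketch of the arithmetic (the factor-of-2 difference for the HuggingFace path is taken from the description in this README; llama-2-7b's hidden size of 4096 and 32 layers are its published config values):

```python
def kv_cache_bytes(seq_len, hidden_size, num_layers,
                   bytes_per_elem=2, hf_overhead=False):
    """Memory for key + value vectors: 2 (K and V) * seq * hidden per layer, in fp16.

    Per the notes in this README, the huggingface LlamaForCausalLM path ends up
    using roughly twice this, while exLlama/vLLM stay close to the raw size.
    """
    per_layer = 2 * seq_len * hidden_size * bytes_per_elem
    total = per_layer * num_layers
    return total * 2 if hf_overhead else total

# llama-2-7b: hidden_size=4096, 32 layers, sequence length 1000
print(kv_cache_bytes(1000, 4096, 32) / 1e9)                    # ~0.52 GB (exLlama/vLLM)
print(kv_cache_bytes(1000, 4096, 32, hf_overhead=True) / 1e9)  # ~1.05 GB (huggingface)
```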
The results can vary depending on your model, input data, cuda version & the quant you are using, and it is impossible to predict exact values. I have tried to take these into account & make sure the results are within 500MB. In the table below I cross-check the 3b, 7b & 13b model memory estimates given by the website against what I get on my RTX 4090 & 2060 GPUs. All values are within 500MB.
Total memory = model size + KV cache + activation memory + optimizer/grad memory + cuda etc. overhead

1. Model size = the size of your `.bin` file (divide it by 2 if Q8 quant & by 4 if Q4 quant).
2. KV cache = memory taken by the key & value (KV) vectors. Size = (2 x sequence length x hidden size) per layer. For huggingface this is (2 x 2 x sequence length x hidden size) per layer. In training the whole sequence is processed at once, therefore KV cache memory = 0.
3. Activation memory = in the forward pass every operation's output has to be stored for doing `.backward()`. For example, if you do output = Q * input where Q = (dim, dim) and input = (batch, seq, dim), then the output of shape (batch, seq, dim) has to be stored (in fp16). This consumes the most memory in LoRA/QLoRA. In LLMs there are many such intermediate steps (after Q, K, V, after attention, after the norm, after FFN1, FFN2, FFN3, after the skip connection, ...). Around 15 intermediate representations are saved per layer.
4. Optimizer/grad memory = memory taken by the `.grad` tensors & the tensors associated with the optimizer (running averages etc.).
Sometimes the answers might be very wrong, in which case please open an issue here & I will try to fix it.