exo-explore / exo

Run your own AI cluster at home with everyday devices 📱💻 🖥️⌚
GNU General Public License v3.0

[BOUNTY - $300] Add support for quantized models with tinygrad #148

Open · AlexCheema opened this issue 1 month ago

AlexCheema commented 1 month ago
barsuna commented 2 weeks ago

(I'm not in any way positioned to implement the quantization support, but wanted to share some notes with those planning to work on it)

Background: I thought the tinygrad example already had some quantization support, so how hard could it be to bring it over to exo? :) I copied over the int8 and nf4 code, updated the create_transformer functions, etc., and it does sort of work conceptually (tried on both llama3.1 8B and 70B).
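
For anyone who hasn't looked at that code: the general idea behind an int8 path is to store each linear layer's weights as int8 plus a per-row scale, and dequantize on the fly at matmul time. Below is a minimal, framework-agnostic sketch of that idea; the names and shapes are illustrative only, not exo's or the tinygrad example's actual code.

import numpy as np

def quantize_int8(w):
    # per-output-row symmetric quantization: one scale per row of a [out, in] matrix
    scale = np.abs(w).max(axis=1, keepdims=True).astype(np.float32) / 127.0
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale.astype(np.float16)

def int8_linear(x, q, scale):
    # y = x @ W^T, dequantizing the int8 weights on the fly
    return x @ (q.astype(np.float32) * scale.astype(np.float32)).T

# toy usage: a 4096x4096 fp16 weight (~32 MB) becomes ~16 MB of int8 plus ~8 KB of scales
w = np.random.randn(4096, 4096).astype(np.float16)
q, s = quantize_int8(w)
y = int8_linear(np.random.randn(1, 4096).astype(np.float32), q, s)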

But a few things need to be sorted out before this is usable (and user-friendly):

  • Currently, mapping partitions/shards is done at layer granularity, and it seems that exo does not take the actual size of each layer into account. For large models (e.g. 70B and larger) this leads to large rounding errors, and when the GPU memory available to the whole cluster is tight, some GPUs end up with free memory while others are loaded to the brim (or OOM). I temporarily worked around this by overriding memory discovery to manually specify how much memory is available where, but that is clearly not a solution. It should also account for the fact that quantization is not uniform: some layers shrink, others do not. (See the first sketch after this list.)
  • Related to the above: even when quantizing to 4 bits, tinygrad first loads the model as is (with 16-bit weights), and those 16-bit weights somehow stay in memory (I'm not sure they are used for anything, but they are not garbage-collected), so memory consumption on each host is very large: ~64GB of RAM for 24GB worth of GPU space. This does not change between default, int8, and nf4. A possible solution is to save the quantized model and load the already-quantized weights (this is what llama.cpp does, for example). (See the second sketch after this list.)
  • Indirectly related: if a host has more than one GPU, we need a discovery update so that more than one instance can run, including two or more on the same host. Many large GPU machines have 8 GPUs, so we may need as many as 8 instances per host.
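
To make the first point concrete, here is a rough sketch of what size-aware partitioning could look like: instead of splitting the layer range evenly, give each node a contiguous run of layers whose cumulative byte size roughly matches its share of the cluster's free memory. The per-layer sizes would come from the (possibly quantized) checkpoint; all names and numbers below are made up for illustration, this is not exo's partitioning code.

def partition_by_memory(layer_bytes, node_free_bytes):
    # split layers into contiguous shards whose total bytes are roughly
    # proportional to each node's free memory (greedy, front to back)
    total_model = sum(layer_bytes)
    total_free = sum(node_free_bytes)
    shards, start, cumulative_target = [], 0, 0.0
    for i, free in enumerate(node_free_bytes):
        cumulative_target += total_model * free / total_free
        end, filled = start, sum(layer_bytes[:start])
        last_node = i == len(node_free_bytes) - 1
        while end < len(layer_bytes) and (filled + layer_bytes[end] <= cumulative_target or last_node):
            filled += layer_bytes[end]
            end += 1
        shards.append((start, end))
        start = end
    return shards

# toy example: 8 layers of uneven size (GB) across 3 nodes with different free memory (GB)
print(partition_by_memory([4, 4, 6, 6, 6, 6, 2, 2], [16, 12, 8]))  # -> [(0, 3), (3, 5), (5, 8)]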
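
For the second point, the llama.cpp-style fix would be to quantize once, persist only the quantized tensors, and have each node load those directly so the fp16 weights never sit in RAM. A minimal sketch using safetensors; the file name and key layout are hypothetical.

import numpy as np
from safetensors.numpy import save_file, load_file

def save_quantized(weights, path):
    # quantize every 2-D weight to int8 + per-row fp16 scale and persist only that
    out = {}
    for name, w in weights.items():
        if w.ndim == 2:
            scale = np.abs(w).max(axis=1, keepdims=True).astype(np.float32) / 127.0
            out[name + ".q"] = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
            out[name + ".scale"] = scale.astype(np.float16)
        else:
            out[name] = w  # keep norms/embeddings unquantized for simplicity
    save_file(out, path)

def load_quantized(path):
    # nodes load the already-quantized tensors; nothing fp16-sized is materialized
    return load_file(path)

# hypothetical usage for one shard of the model:
# save_quantized(shard_weights, "llama-3.1-70b.shard0.int8.safetensors")
# tensors = load_quantized("llama-3.1-70b.shard0.int8.safetensors")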

For documentation purposes: to run llama3.1 70B at NF4, I used 3 hosts with 64GB of GPU RAM between them, and the model only just fits. It looks like NF4 skips many layers, so even the quantized 70B model is still quite large.

The resulting tokens/sec was very low: about 0.5 TPS, I think (but I also had to disable JIT on tinygrad, or some GPUs would throw errors, so perhaps the performance is not representative). As a reference point, I can get >7 tokens/sec if I put 3 of these GPUs into one machine and run llama.cpp; CPU inference on the same hardware is ~0.8 TPS. But again, these numbers are just for reference; a performance discussion is obviously premature at this point.

For the record, the command lines for each node:

JIT=0 DEBUG=0 ASSISTED_DISCOVERY=1 GPU_MEM_MB=28000 CUDA=1 python main.py --max-parallel-downloads 1 --disable-tui --node-id 1111 --quantize nf4 --node-port 10001 --discovery-timeout 3600

JIT=0 DEBUG=0 ASSISTED_DISCOVERY=1 CUDA_VISIBLE_DEVICES=0 CUDA=1 python main.py --max-parallel-downloads 1 --disable-tui --node-id 2222 --quantize nf4 --node-port 10002 --discovery-timeout 3600 --broadcast-port 5680

JIT=0 DEBUG=0 ASSISTED_DISCOVERY=1 CUDA_VISIBLE_DEVICES=1 CUDA=1 python main.py --max-parallel-downloads 1 --disable-tui --node-id 3333 --quantize nf4 --node-port 10003 --chatgpt-api-port 7999 --discovery-timeout 3600 --broadcast-port 5679 --listen-port 10003

JIT=0 DEBUG=0 ASSISTED_DISCOVERY=1 GPU_MEM_MB=12000 CUDA=1 python main.py --max-parallel-downloads 1 --disable-tui --node-id 4444 --quantize nf4 --node-port 10004 --discovery-timeout 3600 --broadcast-port 5681

(ASSISTED_DISCOVERY and GPU_MEM_MB are the modifications made for points 2 and 3 above.)
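
For anyone reproducing this: GPU_MEM_MB and ASSISTED_DISCOVERY are local patches, not upstream exo flags. The memory override can be as simple as letting an env var cap what a node advertises during discovery; a tiny hypothetical sketch, with made-up function and variable names:

import os

def effective_gpu_memory_bytes(detected_bytes):
    # let an operator override the memory this node advertises to the cluster;
    # GPU_MEM_MB is the (hypothetical, locally patched) env var used in the runs above
    override_mb = os.environ.get("GPU_MEM_MB")
    return int(override_mb) * 1024 * 1024 if override_mb else detected_bytes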

  _____  _____  
 / _ \ \/ / _ \ 
|  __/>  < (_) |
 \___/_/\_\___/ 

Detected system: Linux
Using inference engine: TinygradDynamicShardInferenceEngine with shard downloader: HFShardDownloader
Chat interface started:
 - http://127.0.0.1:8000
 - http://192.168.0.210:8000
ChatGPT API endpoint served at:
 - http://127.0.0.1:8000/v1/chat/completions
 - http://192.168.0.210:8000/v1/chat/completions
...
Removing download task for Shard(model_id='NousResearch/Meta-Llama-3.1-70B-Instruct', start_layer=0, end_layer=33, n_layers=80): True
ram used: 17.56 GB, freqs_cis: 100%|██████████| 1284/1284 [00:00<00:00, 62425.23it/s]
loaded weights in  22.22 ms, 0.00 GB loaded at 0.00 GB/s
Hello
...
Hello! How can I assist you today?<|eot_id|>
AlexCheema commented 1 week ago

Thanks @barsuna, this is super helpful for implementers.

I've added a $200 bounty as this seems like an important addition to exo. Also added to the bounties sheet: https://github.com/exo-explore/exo/issues/148

varshith15 commented 6 days ago

checking this, good chance to explore tinygrad :)