IST-DASLab / marlin

FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens.
Apache License 2.0
573 stars 45 forks

Marlin vs gguf #36

Open blap opened 2 weeks ago

blap commented 2 weeks ago

Is there an easy way to convert GGUF to Marlin and vice versa? Are there any comparisons? https://github.com/leafspark/AutoGGUF

blap commented 2 weeks ago

Here you can see details about the GGUF format and how to convert HF models to GGUF with llama.cpp: https://github.com/ggerganov/llama.cpp/tree/master/gguf-py
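For the HF-to-GGUF direction, a minimal sketch of the usual llama.cpp workflow, run from a llama.cpp checkout (script and binary names have changed across llama.cpp versions — older checkouts use `convert-hf-to-gguf.py` and `quantize` — so check your local tree; all paths below are placeholders):

```shell
# Install the converter's Python dependencies (from the llama.cpp checkout).
pip install -r requirements.txt

# Convert a Hugging Face model directory to a GGUF file, keeping FP16 weights.
python convert_hf_to_gguf.py /path/to/hf-model \
    --outfile model-f16.gguf \
    --outtype f16

# Quantize the FP16 GGUF down to 4-bit (binary name in recent builds).
./llama-quantize model-f16.gguf model-q4_k_m.gguf Q4_K_M
```

Note the formats are not interchangeable at the file level: Marlin consumes repacked GPTQ-style INT4 weights for its GPU kernel, while GGUF carries llama.cpp's own quantization blocks, so "converting" between them in practice means re-quantizing from the original HF checkpoint for each target.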