qwopqwop200 / GPTQ-for-LLaMa

4 bits quantization of LLaMA using GPTQ
Apache License 2.0

SqueezeLLM support? #264

Open nikshepsvn opened 1 year ago

nikshepsvn commented 1 year ago

https://github.com/SqueezeAILab/SqueezeLLM

Will GPTQ-for-LLaMa support this quantization method?
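For context, a core idea in SqueezeLLM is dense-and-sparse decomposition: a small fraction of outlier weights is kept in full precision as a sparse matrix, and only the dense remainder is quantized to low bit-width. The sketch below is purely illustrative, not code from SqueezeLLM or GPTQ-for-LLaMa; the outlier fraction and the uniform rounding scheme are hypothetical placeholders (SqueezeLLM itself uses non-uniform, sensitivity-based quantization).

```python
import numpy as np

def dense_sparse_split(W, outlier_frac=0.005):
    """Split W into a full-precision sparse outlier part and a dense remainder.

    outlier_frac is a hypothetical knob: the fraction of largest-magnitude
    weights to keep unquantized.
    """
    cutoff = np.quantile(np.abs(W), 1.0 - outlier_frac)
    sparse = np.where(np.abs(W) > cutoff, W, 0.0)  # outliers, kept in fp
    dense = W - sparse                             # remainder, to be quantized
    return dense, sparse

def quantize_4bit(W):
    """Naive symmetric uniform 4-bit quantize/dequantize for illustration."""
    scale = np.abs(W).max() / 7.0                  # map range onto [-7, 7]
    q = np.clip(np.round(W / scale), -8, 7)        # 4-bit integer grid
    return q * scale                               # dequantized approximation

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
dense, sparse = dense_sparse_split(W)
W_hat = quantize_4bit(dense) + sparse              # reconstructed weights
max_err = np.abs(W - W_hat).max()
```

Because the outliers are stored exactly, the reconstruction error is bounded by the rounding error of the dense part alone, which is the reason the decomposition helps at low bit-widths.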