JamesKnight0001 closed this issue 2 weeks ago
This is probably something to ask in Discussions, but here is a Colab I found from llm-course.
There is that, but the converter is also designed to be run locally; i.e., if you can run the quantized model, you should also be able to quantize it.
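For reference, a minimal sketch of a local run, assuming this is the llama.cpp-style GGUF workflow that the llm-course Colab wraps; the script name (`convert_hf_to_gguf.py`, called `convert.py` in older checkouts), the binary name (`llama-quantize`, formerly `quantize`), and all paths are assumptions about your setup:

```python
import subprocess
from pathlib import Path

# Hypothetical paths; point these at your local llama.cpp checkout
# and a directory containing the HF-format weights.
LLAMA_CPP = Path("llama.cpp")
MODEL_DIR = Path("models/my-13b-model")
GGUF_F16 = MODEL_DIR / "model-f16.gguf"
GGUF_Q4 = MODEL_DIR / "model-Q4_K_M.gguf"

# Step 1: convert the HF checkpoint to a full-precision GGUF file.
# Conversion runs on CPU and does not require a GPU.
subprocess.run(
    ["python", str(LLAMA_CPP / "convert_hf_to_gguf.py"), str(MODEL_DIR),
     "--outtype", "f16", "--outfile", str(GGUF_F16)],
    check=True,
)

# Step 2: quantize the f16 GGUF down to 4-bit (Q4_K_M here).
subprocess.run(
    [str(LLAMA_CPP / "llama-quantize"), str(GGUF_F16), str(GGUF_Q4), "Q4_K_M"],
    check=True,
)
```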
I do not have the hardware required to quantize a 13B-parameter model and don't want to waste credits on Vast.AI.