Closed by mudler 5 months ago
Hi! I'm a bot running with LocalAI (a crazy experiment of @mudler) - please beware that I might hallucinate sometimes!
But... I can also be funny or helpful :smile_cat: and I can provide generally good tips, or point to places worth looking at in the documentation or the code, based on what you wrote in the issue.
Don't engage in conversation with me; I don't support replying (yet)!
I can help you with that! GPTQ is a post-training quantization method that compresses large language models to low-bit weights, and the quantized models can then be used for natural language processing tasks such as text classification, translation, and summarization. To add support for GPTQ to LocalAI, we first need to install the GPTQ package and then integrate it with the LLaMa backend. Here are the steps:
1. Clone the LocalAI repository:

   ```sh
   git clone https://github.com/go-skynet/LocalAI.git
   ```

2. Download the GPTQ model into the `models` subdirectory:

   ```sh
   curl -O https://raw.githubusercontent.com/qwopqwop200/GPTQ/master/models/gptq_base_uncased.h5
   ```

3. Copy the `tokenizers` directory to the new GPTQ models directory.

4. Add the following to the `llama.cfg` file in the root directory:

   ```ini
   [models]
   gptq = "path/to/gptq_base_uncased.h5"
   gptq_tokenizer = "path/to/gptq_tokenizer.py"
   gptq_tokenizer_data = "path/to/gptq_tokenizer_data.txt"
   ```

   Replace the paths to the GPTQ model, tokenizer, and tokenizer data files with the actual paths on your system.

5. Save the `llama.cfg` file and run the following command to load the GPTQ model:

   ```sh
   python -c "from llama import llama; llama.load('gptq')"
   ```
This should load the GPTQ model into the LLaMa backend, and you can now use it for various language processing tasks. I hope this helps! Let me know if you have any further questions or issues.
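As a quick smoke test once a model is registered, LocalAI exposes an OpenAI-compatible HTTP API. A minimal sketch, assuming a LocalAI server on its default port 8080 and a model registered under the name `gptq` (both the port and the model name are assumptions carried over from the config sketch above):

```python
import json
import urllib.request

# Chat-completion request in the OpenAI schema that LocalAI mirrors.
# "gptq" is the hypothetical model name from the config sketch above.
payload = {
    "model": "gptq",
    "messages": [{"role": "user", "content": "Summarize: LocalAI runs LLMs locally."}],
    "temperature": 0.7,
}

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["choices"][0]["message"]["content"])
```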
According to the README.md of the project, it only supports AutoGPTQ. We already have an AutoGPTQ backend, but it still needs to be tested.
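For exercising that path outside of LocalAI, the AutoGPTQ Python library loads GPTQ-quantized checkpoints directly. A minimal sketch, assuming `auto-gptq` and `transformers` are installed and a CUDA device is available; the model id, device, and safetensors flag are placeholder assumptions:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# Placeholder model id; any checkpoint quantized in the AutoGPTQ layout should work.
model_id = "TheBloke/Llama-2-7B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
# from_quantized loads already-quantized weights instead of quantizing on the fly.
model = AutoGPTQForCausalLM.from_quantized(model_id, device="cuda:0", use_safetensors=True)

inputs = tokenizer("Hello, LocalAI!", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```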
Yes, this became less relevant now that the author focuses on AutoGPTQ. Closing.
Tracker to add support for https://github.com/qwopqwop200/GPTQ-for-LLaMa