PromtEngineer / localGPT

Chat with your documents on your local device using GPT models. No data leaves your device, and it is 100% private.
Apache License 2.0

Mistral not supported #778

Open testercell opened 3 months ago

testercell commented 3 months ago

I'm trying to use the following as the model id and base name:

MODEL_ID = "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ"
MODEL_BASENAME = "wizardLM-7B-GPTQ-4bit.compat.no-act-order.safetensors"

But when running run_localgpt.py I get the following error:

\miniconda3\Lib\site-packages\auto_gptq\modeling_utils.py", line 147, in check_and_get_model_type
    raise TypeError(f"{config.model_type} isn't supported yet.")
TypeError: mistral isn't supported yet.

Any help is super appreciated!!
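For context on the traceback above: auto-gptq validates the model architecture (`config.model_type`) against a list of architectures it knows how to quantize, and older releases predate Mistral, so upgrading the package (`pip install --upgrade auto-gptq`) may resolve this. Below is a minimal sketch of that kind of check; the supported set here is illustrative, not the library's actual list.

```python
# Simplified sketch of an architecture check like auto-gptq's
# check_and_get_model_type. The set below is a hypothetical example.
SUPPORTED_MODEL_TYPES = {"llama", "gptj", "gpt_neox", "opt", "bloom"}

def check_and_get_model_type(model_type: str) -> str:
    # Reject architectures the library has no quantized-model class for,
    # reproducing the TypeError seen in the traceback.
    if model_type not in SUPPORTED_MODEL_TYPES:
        raise TypeError(f"{model_type} isn't supported yet.")
    return model_type
```

Once an auto-gptq release adds an architecture to its real supported list, the same model id loads without this error.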

FinlandBreakfast commented 3 months ago

What is your OS? I set the following

MODEL_ID = "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ"
MODEL_BASENAME = "model.safetensors"

and got this

logging.INFO("GPTQ models will NOT work on Mac devices. Please choose a different model.")
TypeError: 'int' object is not callable
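This second TypeError is unrelated to Mistral: `logging.INFO` is an integer level constant (20), not a function, so "calling" it fails before the Mac warning is even printed. The call in the code was presumably meant to be the lowercase `logging.info(...)` function. A quick demonstration:

```python
import logging

# logging.INFO is the integer level constant 20, so calling it raises
# the exact TypeError shown in the comment above.
print(logging.INFO)  # 20
try:
    logging.INFO("GPTQ models will NOT work on Mac devices.")
except TypeError as exc:
    print(exc)  # 'int' object is not callable

# The lowercase function is presumably what was intended:
logging.info("GPTQ models will NOT work on Mac devices. Please choose a different model.")
```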

Bhavya031 commented 2 months ago

If you are using CUDA, use a GPTQ model; if you are on a Mac, use a GGUF model instead: https://youtu.be/ASpageg8nPw?t=74
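Following that advice, a Mac setup would point localGPT's model constants at a GGUF repo instead of a GPTQ one. The repo and file names below follow TheBloke's usual naming and are assumptions, not verified against this issue; check the model card on Hugging Face for the exact quantization filename you want.

```python
# Hypothetical constants for a Mac/GGUF setup (names are assumptions;
# verify them on the Hugging Face model card before use).
MODEL_ID = "TheBloke/Mistral-7B-Instruct-v0.1-GGUF"
MODEL_BASENAME = "mistral-7b-instruct-v0.1.Q4_K_M.gguf"
```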