d4rkc0de opened this issue 1 year ago
Change the model to an HF model, as detailed in `main()` of `run_localGPT.py`:
Comment out the GPTQ model:

```python
# model_id = "TheBloke/WizardLM-7B-uncensored-GPTQ"
# model_basename = "WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors"
# llm = load_model(device_type, model_id=model_id, model_basename=model_basename)
```

and uncomment the HF model:

```python
model_id = "TheBloke/Wizard-Vicuna-7B-Uncensored-HF"
llm = load_model(device_type, model_id=model_id)
```
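For clarity, the switch above can be sketched as a single toggle. This is a hypothetical helper, not code from the repo; `load_model` here is a stub standing in for the real function in `run_localGPT.py`, and `build_llm` is an invented name:

```python
# Stub standing in for run_localGPT.py's load_model, which actually
# loads the checkpoint; here it just records what would be loaded.
def load_model(device_type, model_id, model_basename=None):
    return {"device": device_type, "model_id": model_id, "basename": model_basename}

def build_llm(device_type, use_gptq=False):
    """Hypothetical toggle between the two model configurations above."""
    if use_gptq:
        # Quantized GPTQ checkpoint: needs both model_id and model_basename.
        model_id = "TheBloke/WizardLM-7B-uncensored-GPTQ"
        model_basename = "WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors"
        return load_model(device_type, model_id=model_id, model_basename=model_basename)
    # Full-precision HF checkpoint: model_id alone is enough.
    model_id = "TheBloke/Wizard-Vicuna-7B-Uncensored-HF"
    return load_model(device_type, model_id=model_id)

llm = build_llm("cpu")  # HF model on CPU, no basename needed
```

The point of the toggle is that the GPTQ path requires a `model_basename` (the `.safetensors` file inside the repo), while the plain HF path does not.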
- System: Ubuntu 20.04
- CPU: 12th Gen Intel i7-1260P (16)
- GPU: Intel Device 46a6
I got this error when running this command:
```shell
python run_localGPT.py --device_type cpu
```
I think this error is due to the new commits. I tried setting `device="cpu"` in `model = AutoGPTQForCausalLM.from_quantized(...)`, but then I got a different error.