ViperX7 / Alpaca-Turbo

Web UI to run alpaca model locally
GNU Affero General Public License v3.0
877 stars 92 forks

Error Loading model #63

Open tkalevra opened 1 year ago

tkalevra commented 1 year ago

CONTEXT: I'm running ZorinOS (an Ubuntu spinoff, but what isn't these days...). Installation was successful and the web UI is responsive on 127.0.0.1:7887.

I've downloaded https://huggingface.co/lmsys/vicuna-13b-delta-v1.1/blob/main/pytorch_model-00001-of-00003.bin and copied the .bin to the appropriate folder; it shows up in the GUI under "Load model".

When I click Submit, I receive an error in the terminal:

alpaca_1  | [
alpaca_1  |     '/main',
alpaca_1  |     '-i',
alpaca_1  |     '--seed',
alpaca_1  |     '888777',
alpaca_1  |     '-ins',
alpaca_1  |     '-t',
alpaca_1  |     '4',
alpaca_1  |     '-b',
alpaca_1  |     '256',
alpaca_1  |     '--top_k',
alpaca_1  |     '200',
alpaca_1  |     '--top_p',
alpaca_1  |     '0.99',
alpaca_1  |     '--repeat_last_n',
alpaca_1  |     '512',
alpaca_1  |     '--repeat_penalty',
alpaca_1  |     '1',
alpaca_1  |     '--temp',
alpaca_1  |     '0.7',
alpaca_1  |     '--n_predict',
alpaca_1  |     '1000',
alpaca_1  |     '-m',
alpaca_1  |     'models/pytorch_model-00001-of-00003.bin',
alpaca_1  |     '--interactive-first'
alpaca_1  | ]
alpaca_1  | ERRoRERRoRERRoRERRoRERRoRERRoRERRoRERRoRERRoRERRoRERRoRERRoRERRoRERRoRERRoRERRoRERRoRERRoRERRoRERRoR

Any feedback or insights would be greatly appreciated, or an alternate model for me to try.

ViperX7 commented 1 year ago

I think you might be running an incorrect/unsupported model format.

Try a Vicuna model that is quantized to run with llama.cpp. One that works:

https://huggingface.co/TheBloke/vicuna-13B-1.1-GPTQ-4bit-128g-GGML
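A quick way to tell whether a downloaded .bin is a llama.cpp-compatible GGML file or a raw PyTorch checkpoint is to inspect its first four bytes: PyTorch's zip-based checkpoints start with the zip magic, while GGML-family files start with a format-specific magic number. The helper name `classify_model` and the exact magic values below are my assumptions based on the GGML/GGUF formats as I understand them, not anything from this repo; a minimal sketch:

```python
def classify_model(path):
    """Guess a model file's format from its leading magic bytes (assumed values)."""
    with open(path, "rb") as f:
        magic = f.read(4)
    if magic == b"PK\x03\x04":
        # Zip archive: modern PyTorch checkpoint, not loadable by llama.cpp
        return "pytorch"
    if magic in (b"lmgg", b"fmgg", b"tjgg"):
        # 0x67676d6c / 0x67676d66 / 0x67676a74 little-endian:
        # GGML / GGMF / GGJT, the llama.cpp-era formats
        return "ggml"
    if magic == b"GGUF":
        # Newer llama.cpp format that superseded GGML
        return "gguf"
    return "unknown"
```

Running this against the pytorch_model-*.bin shards from the Vicuna delta repo should report "pytorch", which would confirm the format mismatch. (The `file` command gives a similar hint: it reports PyTorch checkpoints as "Zip archive data".)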

espressoelf commented 1 year ago

I don't know if it generally works, but you downloaded only part 1 of 3 and also omitted the config files. The model card of that model also states:

NOTE: This "delta model" cannot be used directly. Users have to apply it on top of the original LLaMA weights to get actual Vicuna weights. See https://github.com/lm-sys/FastChat#vicuna-weights for instructions.

I'm using Pi3141/gpt4-x-alpaca-native-13B-ggml.

ViperX7 commented 1 year ago

Hey @espressoelf, I see you're helping a lot of people; just wanted to say thanks.