lxe / simple-llm-finetuner

Simple UI for LLM Model Finetuning
MIT License

Attempting to use 13B in the simple tuner - #28

Open Atlas3DSS opened 1 year ago

Atlas3DSS commented 1 year ago

I updated main.py with decapoda-research/llama-13b-hf in all the spots that had 7B. It downloaded the sharded parts all right, but now I'm getting this config issue. Any advice would be appreciated.

File "/home/orwell/miniconda3/envs/llama-finetuner/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread return await future File "/home/orwell/miniconda3/envs/llama-finetuner/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run result = context.run(func, args) File "/home/orwell/miniconda3/envs/llama-finetuner/lib/python3.10/site-packages/gradio/helpers.py", line 587, in tracked_fn response = fn(args) File "/home/orwell/simple-llama-finetuner/main.py", line 82, in generate_text load_peft_model(peft_model) File "/home/orwell/simple-llama-finetuner/main.py", line 35, in load_peft_model model = peft.PeftModel.from_pretrained( File "/home/orwell/miniconda3/envs/llama-finetuner/lib/python3.10/site-packages/peft/peft_model.py", line 135, in from_pretrained config = PEFT_TYPE_TO_CONFIG_MAPPING[PeftConfig.from_pretrained(model_id).peft_type].from_pretrained(model_id) File "/home/orwell/miniconda3/envs/llama-finetuner/lib/python3.10/site-packages/peft/utils/config.py", line 101, in from_pretrained raise ValueError(f"Can't find config.json at '{pretrained_model_name_or_path}'") ValueError: Can't find config.json at ''

[screenshot] The config file appears in the cache the same as it does for 7B. I'm assuming I'm missing something, just not sure what.

Thank you again

mouchourider commented 1 year ago

You need to change the requirements.txt file to this:

datasets
loralib
sentencepiece
git+https://github.com/zphang/transformers@c3dc391
accelerate
bitsandbytes
git+https://github.com/huggingface/peft.git
gradio
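After editing, re-install the dependencies in the llama-finetuner environment (for example with pip install -r requirements.txt) so the pinned transformers and peft revisions are actually picked up.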

And you need to change these three functions as follows in main.py:

def load_base_model():
    global model
    print('Loading base model...')
    model = transformers.LLaMAForCausalLM.from_pretrained(
        'decapoda-research/llama-13b-hf',
        load_in_8bit=True,
        torch_dtype=torch.float16,
        device_map={'': 'cuda'}
    )

def load_tokenizer():
    global tokenizer
    print('Loading tokenizer...')
    tokenizer = transformers.LLaMATokenizer.from_pretrained(
        'decapoda-research/llama-13b-hf',
    )

def load_peft_model(model_name):
    global model
    print('Loading peft model ' + model_name + '...')
    model = peft.PeftModel.from_pretrained(
        model,
        model_name,
        torch_dtype=torch.float16,
        device_map={'': 0}
    )

It should work.
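For anyone following along, here is roughly how those pieces fit together at inference time. This is a simplified sketch rather than the repo's exact main.py, and it assumes the three functions above are defined in the same file; the adapter name and prompt are placeholders:

import torch
import transformers
import peft

model = None
tokenizer = None

# ... the three functions above go here ...

load_base_model()
load_tokenizer()
load_peft_model('my-13b-lora')  # placeholder: a previously trained LoRA directory

inputs = tokenizer('Write a haiku about fine-tuning.', return_tensors='pt').to('cuda')
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))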

Atlas3DSS commented 1 year ago

[screenshot] We are working, although I must say it turns out it was a UI issue, I think. In the Inference tab [screenshot], the LoRA model field must have NOTHING in it, not "none", which is what I had in mine, else it cannot find the config file. Turns out that was the issue. I clicked the little X and then it started working. No idea why on that one. I think it might be because up to this point I only have a 7B LoRA, so when that field says "none" it is looking in the 7B folder, not 13B?
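That matches the traceback above: an empty or bogus value in the LoRA field gets passed straight to peft.PeftModel.from_pretrained, which then fails with "Can't find config.json at ''". A minimal sketch of a guard that main.py could apply before loading; this is a suggestion rather than the repo's actual code, and maybe_load_peft_model is a hypothetical helper:

def maybe_load_peft_model(peft_model_name):
    # The dropdown can hand back an empty string or the literal text 'None'
    # when no LoRA is selected; passing either on to
    # peft.PeftModel.from_pretrained is what raises
    # ValueError: Can't find config.json at ''.
    if not peft_model_name or peft_model_name.strip().lower() == 'none':
        print('No LoRA selected; using the base model only.')
        return
    load_peft_model(peft_model_name)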

In either case it is now working - I thank you for your help.