lxe / simple-llm-finetuner

Simple UI for LLM Model Finetuning
MIT License

Error: Adapter lora/decapoda-research_llama-{ADAPTER_NAME} not found. #44

Closed by 64-bit 1 year ago

64-bit commented 1 year ago

I have found a resolution and root cause for this issue. I am documenting the reproduction steps here to keep the PR more organized.

Minimal Reproduction Steps

  1. Create at least two LoRA adapters for a model ('Initial Model').
  2. On the Inference tab, select one of the LoRAs ('Initial LoRA').
  3. Switch the model to one of the other models ('Alternative Model').
  4. Switch the model back to 'Initial Model'.
  5. Switch the LoRA to the second LoRA that was created.
  6. Switch the LoRA back to 'Initial LoRA'.

This error will be displayed: "Adapter lora/decapoda-research_llama-7b-hf_PYTHON-2 not found."

Callstack:

Traceback (most recent call last):
  File "/home/jon/miniconda3/envs/simple-llm-finetuner/lib/python3.10/site-packages/gradio/routes.py", line 393, in run_predict
    output = await app.get_blocks().process_api(
  File "/home/jon/miniconda3/envs/simple-llm-finetuner/lib/python3.10/site-packages/gradio/blocks.py", line 1108, in process_api
    result = await self.call_function(
  File "/home/jon/miniconda3/envs/simple-llm-finetuner/lib/python3.10/site-packages/gradio/blocks.py", line 915, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/home/jon/miniconda3/envs/simple-llm-finetuner/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/home/jon/miniconda3/envs/simple-llm-finetuner/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/home/jon/miniconda3/envs/simple-llm-finetuner/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "/home/jon/miniconda3/envs/simple-llm-finetuner/lib/python3.10/site-packages/gradio/helpers.py", line 588, in tracked_fn
    response = fn(*args)
  File "/mnt/c/Users/Jon/repos/simple-llm-finetuner/app.py", line 180, in load_lora
    self.trainer.load_lora(f'{LORA_DIR}/{lora_name}')
  File "/mnt/c/Users/Jon/repos/simple-llm-finetuner/trainer.py", line 68, in load_lora
    self.model.set_adapter(lora_name)
  File "/home/jon/miniconda3/envs/simple-llm-finetuner/lib/python3.10/site-packages/peft/peft_model.py", line 404, in set_adapter
    raise ValueError(f"Adapter {adapter_name} not found.")
ValueError: Adapter lora/decapoda-research_llama-7b-hf_PYTHON-2 not found.
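The traceback ends in peft's `set_adapter`, which raises when the requested adapter was never loaded into the *current* model instance. One way this class of bug can arise (a toy sketch with hypothetical names, not the project's actual code): the trainer caches which LoRAs it has already loaded, but reloading the base model discards the model's adapters without clearing that cache, so a later `set_adapter` call targets an adapter the fresh model has never seen.

```python
class FakePeftModel:
    """Mimics peft.PeftModel: an adapter must be loaded before set_adapter."""
    def __init__(self):
        self.peft_config = {}  # adapter_name -> config
        self.active_adapter = None

    def load_adapter(self, path, adapter_name):
        self.peft_config[adapter_name] = {"path": path}

    def set_adapter(self, adapter_name):
        if adapter_name not in self.peft_config:
            # Same error raised in peft/peft_model.py
            raise ValueError(f"Adapter {adapter_name} not found.")
        self.active_adapter = adapter_name


class Trainer:
    """Hypothetical trainer that caches which LoRAs were already loaded."""
    def __init__(self):
        self.model = FakePeftModel()
        self.loaded_loras = set()  # cache of adapter names already loaded

    def load_model(self, model_name):
        # Reloading the base model discards its adapters...
        self.model = FakePeftModel()
        # ...so the cache must be cleared too; otherwise load_lora() would
        # skip load_adapter() and set_adapter() would raise the error above.
        self.loaded_loras.clear()

    def load_lora(self, lora_name):
        if lora_name not in self.loaded_loras:
            self.model.load_adapter(f"lora/{lora_name}", lora_name)
            self.loaded_loras.add(lora_name)
        self.model.set_adapter(lora_name)


trainer = Trainer()
trainer.load_lora("initial-lora")   # step 2: select a LoRA
trainer.load_model("alternative")   # steps 3-4: switch models
trainer.load_lora("initial-lora")   # step 6: works only because the cache was cleared
```

If `load_model` did not call `self.loaded_loras.clear()`, the final `load_lora` would skip `load_adapter` and `set_adapter` would raise the same `ValueError` shown in the traceback.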