I have found the root cause and a resolution for this issue; I am documenting the reproduction steps here to keep the PR organized.
Minimum Reproduction Steps
1. Create at least two LoRA adapters for a model, 'Initial Model'.
2. On the Inference tab, select one of the LoRAs, 'Initial LoRA'.
3. Switch the model to a different model, 'Alternative Model'.
4. Switch the model back to 'Initial Model'.
5. Switch the LoRA to the second LoRA that was created.
6. Switch the LoRA back to 'Initial LoRA'.
This error will be displayed: "Adapter lora/decapoda-research_llama-7b-hf_PYTHON-2 not found."
Callstack:
Traceback (most recent call last):
File "/home/jon/miniconda3/envs/simple-llm-finetuner/lib/python3.10/site-packages/gradio/routes.py", line 393, in run_predict
output = await app.get_blocks().process_api(
File "/home/jon/miniconda3/envs/simple-llm-finetuner/lib/python3.10/site-packages/gradio/blocks.py", line 1108, in process_api
result = await self.call_function(
File "/home/jon/miniconda3/envs/simple-llm-finetuner/lib/python3.10/site-packages/gradio/blocks.py", line 915, in call_function
prediction = await anyio.to_thread.run_sync(
File "/home/jon/miniconda3/envs/simple-llm-finetuner/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/home/jon/miniconda3/envs/simple-llm-finetuner/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "/home/jon/miniconda3/envs/simple-llm-finetuner/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
result = context.run(func, *args)
File "/home/jon/miniconda3/envs/simple-llm-finetuner/lib/python3.10/site-packages/gradio/helpers.py", line 588, in tracked_fn
response = fn(*args)
File "/mnt/c/Users/Jon/repos/simple-llm-finetuner/app.py", line 180, in load_lora
self.trainer.load_lora(f'{LORA_DIR}/{lora_name}')
File "/mnt/c/Users/Jon/repos/simple-llm-finetuner/trainer.py", line 68, in load_lora
self.model.set_adapter(lora_name)
File "/home/jon/miniconda3/envs/simple-llm-finetuner/lib/python3.10/site-packages/peft/peft_model.py", line 404, in set_adapter
raise ValueError(f"Adapter {adapter_name} not found.")
ValueError: Adapter lora/decapoda-research_llama-7b-hf_PYTHON-2 not found.
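For context, here is a minimal sketch (the adapter paths are hypothetical, and this is not the project's code or the fix in this PR) of how PEFT tracks adapters: `set_adapter()` only accepts names already present in `model.peft_config`, so any flow that recreates the model without re-registering a previously loaded LoRA raises exactly the ValueError above.

```python
# Illustrative sketch only; adapter paths below are hypothetical.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")

# Loading an adapter registers it in model.peft_config under adapter_name.
model = PeftModel.from_pretrained(base, "lora/adapter-1", adapter_name="lora/adapter-1")
model.load_adapter("lora/adapter-2", adapter_name="lora/adapter-2")

model.set_adapter("lora/adapter-2")  # OK: name is registered
model.set_adapter("lora/adapter-1")  # OK: switching back works too

# If the base model is reloaded (as happens when switching models in the UI),
# the registered names are lost, and set_adapter("lora/adapter-1") would then
# raise: ValueError: Adapter lora/adapter-1 not found.
```

Under that assumption, one defensive shape for `trainer.load_lora` would be to re-register the adapter before activating it (again a sketch, not necessarily the committed fix):

```python
def load_lora(self, lora_name):
    # lora_name arrives as a path-derived name (see the app.py frame above).
    # Re-register the adapter if PEFT does not know this name yet, then
    # activate it; set_adapter() can no longer raise "not found".
    if lora_name not in self.model.peft_config:
        self.model.load_adapter(lora_name, adapter_name=lora_name)
    self.model.set_adapter(lora_name)
```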