Is there an existing issue for this?
[X] I have searched the existing issues
Describe the bug
When I launch the server and specify a LoRA to load on startup, it doesn't get loaded. Previously, I would see a message in the console confirming the LoRA was loaded when I ran:
!python server.py --share --load-in-8bit --lora-dir loras --lora $dirname
But now I don't see that message, and my model behaves as if no LoRA is loaded.
Even worse, when I go into the Model settings to load the LoRA manually, the app reports that it was loaded successfully, yet the model still behaves as if no LoRA is applied.
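As a diagnostic (this is my own hypothetical check, not part of the webui), one way to verify whether LoRA weights are actually attached is to scan the loaded model's state-dict keys for the `lora_A`/`lora_B` naming convention that PEFT uses when it injects adapters:

```python
def lora_keys(state_dict):
    """Return parameter names that look like PEFT LoRA adapter weights.

    PEFT stores the injected low-rank matrices under names containing
    'lora_A' / 'lora_B', so an empty result suggests no LoRA is attached.
    """
    return sorted(k for k in state_dict if "lora_" in k)


# Illustrative key names only (in the PEFT naming style for an OPT model);
# in practice you would pass model.state_dict() from the loaded model.
sd = {
    "model.decoder.layers.0.self_attn.q_proj.weight": 0,
    "model.decoder.layers.0.self_attn.q_proj.lora_A.weight": 0,
    "model.decoder.layers.0.self_attn.q_proj.lora_B.weight": 0,
}
print(lora_keys(sd))
# -> ['model.decoder.layers.0.self_attn.q_proj.lora_A.weight',
#     'model.decoder.layers.0.self_attn.q_proj.lora_B.weight']
```

If this returns an empty list after the UI reports a successful load, the adapter was never actually injected into the model.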
Reproduction
Here is a simple Colab notebook for a project I'm currently working on. The latest version of text-generation-webui no longer loads LoRAs on startup, and it cannot load any even when I force it to through the Model settings:
https://github.com/ragnarkar/reddit_imitator/blob/main/Reddit_imitator.ipynb
The following code doesn't work:
But the following code works fine, because it checks out a previous commit that was still working:
When you run the second code block in Colab, notice the extra line in the output shortly before your Gradio link:
Applying the following LoRAs to facebook_opt-2.7b: ragnarkar_subreddit_chatgpt_opt-2.7b_lora
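For reference, the workaround setup cell is roughly of this shape (a sketch only: the repository URL is the upstream oobabooga repo, and the commit hash below is a placeholder, not the actual known-good commit from my notebook):

```shell
# Hypothetical Colab setup cell pinning to an older, working commit.
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
git checkout <known-good-commit>   # placeholder: substitute the last commit where LoRA loading worked
```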
Screenshot
No response
Logs
System Info