-
Dear Repository Maintainers,
I hope this message finds you well. I am writing to express my appreciation for your work on FU-LoRA, as it presents a significant contribution to the community. Howeve…
-
![image](https://github.com/user-attachments/assets/99f43ccd-2d6a-40bf-8717-918d55bb9043)
After the update, the LoRA patching process stops halfway through
-
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
A merger that merges FLUX models and LoRAs…
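For reference, a minimal sketch of what such a merge could look like, folding one LoRA pair into a base weight as W' = W + alpha * (up @ down). The function name, the `alpha` scaling, and the NumPy toy shapes are assumptions for illustration, not an existing API; real FLUX checkpoints store many such pairs keyed by layer name in safetensors files.

```python
import numpy as np

def merge_lora_weight(base: np.ndarray, lora_down: np.ndarray,
                      lora_up: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Hypothetical helper: fold one LoRA pair into a base weight.

    Computes W' = W + alpha * (up @ down), where `down` has shape
    (rank, in_features) and `up` has shape (out_features, rank).
    """
    return base + alpha * (lora_up @ lora_down)

# Toy example: a 4x4 weight with a rank-2 LoRA update.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
down = rng.standard_normal((2, 4))   # rank x in_features
up = rng.standard_normal((4, 2))     # out_features x rank
merged = merge_lora_weight(W, down, up, alpha=0.5)
```

A full merger would repeat this per layer and write the result back out as a single checkpoint, so inference no longer pays a per-step patching cost.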
-
### Your current environment
The output of `python collect_env.py`
```text
Collecting environment information...
PyTorch version: 2.5.0+cu124
Is debug build: False
CUDA used to build PyTorch…
-
I am interested in utilizing your work for a project. The intended use includes training on a custom dataset and then using the model to infer masks. I couldn't find any direct information rega…
-
Problem:
- When loading a model whose chat template does not include `add_generation_prompt`, a runtime error is raised rather than just a warning. This means - even if one does not want to have a g…
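As a sketch of the desired behavior (the function name and template check below are assumptions, not the project's actual code), the renderer could downgrade the error to a warning when the template lacks generation-prompt support and simply continue without it:

```python
import warnings

def apply_chat_template(template: str, add_generation_prompt: bool = False) -> bool:
    """Hypothetical check: return whether a generation prompt will be added.

    Instead of raising when the template has no add_generation_prompt
    handling, emit a warning and continue without the prompt.
    """
    supported = "add_generation_prompt" in template
    if add_generation_prompt and not supported:
        warnings.warn("Chat template ignores add_generation_prompt; "
                      "continuing without a generation prompt.")
        return False
    return add_generation_prompt and supported

# A template with no generation-prompt branch only triggers a warning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    added = apply_chat_template("{{ messages }}", add_generation_prompt=True)
```

This keeps the hard error for genuinely malformed templates while letting users who never requested a generation prompt load the model unimpeded.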
-
First of all, many thanks for doing this! This is the only repo I'm aware of which allows doing Flux Lora training on a 16GB GPU.
I appreciate this is new and the lack of information is unavoidable. …
-
### Actual Behavior
Using LoRAs with Flux is very slow.
This is independent of the LoRA size.
But performance is good when using LoRAs if one of these conditions is met:
- XLabs loras are u…
-
Not sure when this happened, but it seems like Forge is unable to tell whether my LoRAs are SD1 or SDXL, and to filter them appropriately on the Lora tab.
If I go to Edit Metadata on a LORA in the web…