-
Since commit 13d6f8ed900b0857e9872e67befb02f7ed54da35, LoRAs don't seem to work with the flux1-dev-Q8-0.gguf checkpoint.
Generating without a LoRA works fine, but adding one causes the error in the…
-
# Question about LoRA file identification and usage
## Context
I've successfully trained a LoRA model using this repository. However, I'm unsure which file to use for inference and how to pr…
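For reference, a minimal inference sketch assuming the repo saves PEFT-style adapters (an adapter directory containing adapter_config.json and adapter_model.safetensors); the base model ID and output path below are placeholders, not this repo's actual layout:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"  # placeholder base model
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
tok = AutoTokenizer.from_pretrained(base_id)

# Inference loads the adapter directory (adapter_config.json +
# adapter_model.safetensors), not the optimizer/scheduler state
# saved alongside training checkpoints.
model = PeftModel.from_pretrained(base, "path/to/lora_output_dir")
model = model.merge_and_unload()  # optional: fold the LoRA into the base weights

inputs = tok("Hello,", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=20)
print(tok.decode(out[0], skip_special_tokens=True))
```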
-
### Feature request
Is it possible to combine multiple LoRA adapters like you might do to combine multiple styles with Stable Diffusion?
### Motivation
I think we could get a higher-quality model out…
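For comparison, this is a sketch of the multi-adapter blending that diffusers already supports for Stable Diffusion (it requires peft); the adapter repos, names, and weights below are illustrative:
```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load two style adapters under distinct names (repos/paths are placeholders).
pipe.load_lora_weights("user/watercolor-lora", adapter_name="watercolor")
pipe.load_lora_weights("user/pixel-art-lora", adapter_name="pixel")

# Blend both styles; each weight scales that adapter's contribution.
pipe.set_adapters(["watercolor", "pixel"], adapter_weights=[0.8, 0.5])

image = pipe("a castle on a hill, mixed style").images[0]
image.save("blended.png")
```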
-
I tried fine-tuning the llama-2-7b model using LoRA on an RTX 3090 with 24 GB, where memory usage was only about 17 GB. However, when I used the same configuration on an A100 with 80 GB, the memory us…
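One way to make the two runs comparable is to log peak CUDA memory after a training step, since PyTorch's caching allocator can inflate apparent usage; this helper is a suggestion, not part of any particular training script:
```python
import torch

def log_cuda_memory(tag: str) -> None:
    # Peak memory actually allocated by tensors vs. peak reserved by the
    # caching allocator; a large gap points to allocator caching, not model size.
    alloc = torch.cuda.max_memory_allocated() / 2**30
    reserved = torch.cuda.max_memory_reserved() / 2**30
    print(f"[{tag}] peak allocated: {alloc:.2f} GiB, peak reserved: {reserved:.2f} GiB")

# e.g. call once after the first optimizer step on each machine:
# log_cuda_memory("after step 1")
```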
-
### 🚀 The feature, motivation and pitch
The speculative decoding framework allows the target model to have LoRAs; however, the work to set up batch expansion has not yet been done. We can implement …
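For context, this is roughly how a LoRA request attaches to the target model through vLLM's public API today; the model name and adapter path are placeholders, and this sketch does not itself enable speculative decoding, whose configuration flags vary across vLLM versions:
```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# enable_lora turns on multi-LoRA support for the target model.
llm = LLM(model="meta-llama/Llama-2-7b-hf", enable_lora=True)

outputs = llm.generate(
    ["Write a haiku about the sea."],
    SamplingParams(max_tokens=64),
    # LoRARequest(name, integer id, local adapter path) -- all placeholders.
    lora_request=LoRARequest("sea-style", 1, "/path/to/lora_adapter"),
)
print(outputs[0].outputs[0].text)
```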
-
Hi.
I'm training a LoRA for Flux.
This configuration works fine on 2 GPUs (RTX 4090), but not with more GPUs (there are 8 GPUs in the machine).
It works on any pair of GPUs, but not with more than 2.
(run…
-
Everything was working swell earlier, but now I'm having issues with my LoRAs when using Forge but not Swarm. When I don't use any LoRAs, the picture is clear. When I use even one, the picture is fuzzy e…
-
I am experimenting with your main.py for Flux with LoRAs, from the Windows command line, using your example command.
`python main.py --prompt "A cute corgi lives in a house made out of sushi, anime" --lora…
-
### Expected Behavior
Not 10 GB of VRAM eaten by the LoRA.
### Actual Behavior
I have Flux fp8 schnell on a 3090. I load two rank-64 LoRAs onto the model, but it uses all VRAM until it starts offload…
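As a back-of-the-envelope check: a rank-r LoRA on a d_out × d_in linear layer adds roughly r·(d_in + d_out) parameters per adapted layer. The layer count and hidden size below are assumptions for illustration, not exact Flux shapes:
```python
# All numbers below are illustrative assumptions, not exact Flux shapes.
rank = 64
adapted_layers = 300      # hypothetical count of adapted linear layers
d_in = d_out = 3072       # hypothetical hidden size

params = adapted_layers * rank * (d_in + d_out)
print(f"{params / 1e6:.0f}M params -> {params * 2 / 2**30:.2f} GiB at fp16 per LoRA")
# ~118M params -> ~0.22 GiB, so two such LoRAs are far from explaining ~10 GB.
```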
-
Hi, thanks for your awesome work. I am very interested in it, but I'm facing problems reproducing the GSM8K result.
I kept the GitHub code unchanged, ran the original shell script, and got:
```…