Open EntroSanity opened 1 month ago
I've actually been thinking that I implemented the qkv LoRAs a little incorrectly, which I do plan to fix, though it requires a bit of an overhaul since I have to keep track of exactly which section of the fused qkv weight to apply the unfused LoRA weights to. Also, are you using the current version? There was a pretty significant improvement to the LoRA implementation recently (about two weeks ago).
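To illustrate the bookkeeping described above, here is a toy sketch of applying an unfused LoRA update to just one slice of a fused qkv weight. This is not the project's actual code: the names, shapes, and the assumption that q, k, v are stacked along the output dimension are all hypothetical.

```python
import numpy as np

# Toy dimensions; real models are much larger.
d = 4  # hidden size
r = 2  # LoRA rank

# Assume the fused weight stacks q, k, v along the output dimension.
w_qkv = np.zeros((3 * d, d))
sections = {"q": slice(0, d), "k": slice(d, 2 * d), "v": slice(2 * d, 3 * d)}

# Unfused LoRA factors trained for the q projection only.
lora_a = np.ones((r, d))  # down-projection (r x d)
lora_b = np.ones((d, r))  # up-projection (d x r)
scale = 1.5

# The low-rank update must land on the q rows of the fused weight,
# not on the whole matrix -- this is the slice tracking mentioned above.
w_qkv[sections["q"]] += scale * (lora_b @ lora_a)
```

The point of the sketch is only that each unfused LoRA needs to know which row range of the fused weight it belongs to.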
I initially believed that the LoRA scale was limited to a range of 0 to 1, but I noticed that within this range, it didn't seem to affect the generated image. However, when I tried using a scale larger than 1, it successfully applied the LoRA effect to the image.
lora_data = {
    "path": "path-to-my-lora",
    "action": "load",
    "scale": 1.5,  # adjust this value as needed
    "name": None,  # include this even if it's None
}
I was testing the pipeline's capability to apply a Flux LoRA effect, but I couldn't achieve the desired effect in my generated images. I experimented with two different methods. Both methods successfully displayed a "LoRA successfully loaded" message; however, the generated images did not reflect the applied LoRA effect. Could I get some assistance in case I'm missing any crucial steps?
Load the LoRA via the /lora endpoint:
import requests

lora_data = {
    "path": "path-to-my-lora",
    "action": "load",
    "scale": 1.0,  # adjust this value as needed
    "name": None,  # include this even if it's None
}
response = requests.post("http://localhost:8088/lora", json=lora_data)
print(response.json())
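If requests isn't installed, the same call can be made with only the standard library. The endpoint, port, and payload here are taken from the snippet above; whether that server is running and accepts this payload depends on your setup.

```python
import json
import urllib.request

lora_data = {
    "path": "path-to-my-lora",
    "action": "load",
    "scale": 1.0,   # adjust this value as needed
    "name": None,   # include this even if it's None
}

# Build the POST request without sending it yet.
payload = json.dumps(lora_data).encode("utf-8")
req = urllib.request.Request(
    "http://localhost:8088/lora",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

if __name__ == "__main__":
    # Sending the request requires the server from this thread to be running.
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read()))
```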