Anashel-RPG / anashel-utils

Set of Utilities I Have Coded to Help Me Train RPGv6 on Flux1
MIT License

Weight Percentage has no effect. #1

Open pondloso opened 1 week ago

pondloso commented 1 week ago

I don't know why, but Weight Percentage doesn't make any difference in the image output for me. My LoRAs merge together nicely, but I want to adjust the result more with Weight Percentage.

Anashel-RPG commented 1 week ago

The adaptive merge focuses on dynamically combining models, adjusting weights automatically based on the characteristics of the LoRAs being merged. While the manual weight you specify is still considered, its impact might not always be as noticeable as you’d expect.

This is particularly true when there’s little conflict between the models or when the tensor properties naturally guide the merging process. So, you might see that changing the weight percentage doesn’t significantly alter the output image. The goal of adaptive merging is to find the best blend on the fly, which can sometimes make manual weight adjustments less obvious.
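To make that concrete, here is a minimal sketch of one way an adaptive strategy can behave (illustrative only, not the exact code in this repo; `adaptive_blend` is a hypothetical helper): when two matching tensors already agree, the blend drifts toward a plain average and the manual weight barely shows.

```python
# Illustrative sketch of adaptive blending; not the repo's actual code.
import torch

def adaptive_blend(t_a: torch.Tensor, t_b: torch.Tensor, manual_w: float) -> torch.Tensor:
    """Blend two matching LoRA tensors, letting similarity dampen the manual weight."""
    # Cosine similarity as a rough "conflict" measure: 1.0 means no conflict.
    sim = torch.nn.functional.cosine_similarity(
        t_a.flatten(), t_b.flatten(), dim=0
    ).clamp(0.0, 1.0)
    # The more the tensors agree, the closer the effective weight sits to a
    # neutral 0.5, so the manual weight has less visible effect.
    effective_w = sim * 0.5 + (1.0 - sim) * manual_w
    return effective_w * t_a + (1.0 - effective_w) * t_b
```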

If you prefer having direct control over weight percentages without the adaptive adjustments, I've added a standard merge option (you can git pull the update now). This lets you manually adjust the weights and get a predictable effect on the merged output. If adaptive merging doesn't quite meet your needs, switching to the standard merge will give you more control over how weights are applied.
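For comparison, the standard merge is essentially a fixed linear interpolation of every matching tensor, so the weight you pass always has a direct effect. A rough sketch (the file paths and shared-key assumption are just for illustration):

```python
# Sketch of a plain, non-adaptive merge over safetensors files.
from safetensors.torch import load_file, save_file

def standard_merge(path_a: str, path_b: str, out_path: str, weight: float) -> None:
    a = load_file(path_a)
    b = load_file(path_b)
    merged = {}
    for key in a.keys() & b.keys():  # only tensors present in both models
        if a[key].shape == b[key].shape:
            # Fixed linear interpolation: the manual weight applies as-is.
            merged[key] = weight * a[key] + (1.0 - weight) * b[key]
    save_file(merged, out_path)

# e.g. standard_merge("lora_a.safetensors", "lora_b.safetensors",
#                     "merged.safetensors", weight=0.6)
```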

pondloso commented 1 week ago

Thank you for your update. But when I test the new standard merge option, it still gives the same output regardless of the weight adjustments.

Anashel-RPG commented 1 week ago

The merging process is not just about creating a blend like mixing colors. Breaking it down into key elements can help you better understand what I'm trying to do with this AI Toolkit. Here's what happens during merging and why it can offer more than simply using the models independently with live weights during generation.

1. Elements Involved in Merging LoRA Models

When you merge two LoRA models, you are essentially combining their internal components, mainly the tensors that hold the learned patterns of each model.

When merging, these tensors are combined, adjusted, or balanced between the two models. That's where the merging strategy comes in; the sketch below shows the pieces it operates on.
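As a rough picture of those components (the `lora_down`/`lora_up` key convention and the dimensions are assumptions for illustration; trainers vary):

```python
# Each LoRA module stores a low-rank pair of tensors whose product is the
# weight delta applied on top of the base model.
import torch

rank, d_out, d_in = 16, 3072, 3072   # illustrative Flux-like dimensions
lora_down = torch.randn(rank, d_in)  # projects activations into rank space
lora_up = torch.randn(d_out, rank)   # projects back out to model space
alpha = 16.0                         # scaling hyperparameter from training

# The learned pattern this module contributes to the base weight:
delta_w = (alpha / rank) * (lora_up @ lora_down)  # shape (d_out, d_in)
```

Merging has to decide, per module, how to combine these pairs from the two models.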

2. What I Aim to Achieve with Merging

3. Why Merging Is Different from Using Live Weights

When I use two LoRAs independently during image generation, I’m essentially telling the generation engine to apply each model’s influence at specific strengths (e.g., 60% from Model A, 40% from Model B). This works like applying filters on top of each other, with each adding its touch independently.

Merging fuses the models at the parameter level, meaning the patterns, textures, and styles are combined deeply. It’s like taking the DNA of both models and creating a new, integrated layer that expresses the traits of both models in a way that live weighting can’t replicate.

Example of Interaction:
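As a small numerical illustration (random shapes and values, not actual model weights): live weighting scales each model's full delta independently, while merging the low-rank factors first creates cross terms that live weighting never produces.

```python
import torch

torch.manual_seed(0)
rank, dim = 4, 8
down_a, up_a = torch.randn(rank, dim), torch.randn(dim, rank)
down_b, up_b = torch.randn(rank, dim), torch.randn(dim, rank)
w = 0.6

# Live weighting: each model's delta is scaled on its own, then summed.
live = w * (up_a @ down_a) + (1 - w) * (up_b @ down_b)

# Factor-level merge: fuse the factors first, then take the product.
down_m = w * down_a + (1 - w) * down_b
up_m = w * up_a + (1 - w) * up_b
merged = up_m @ down_m

# The cross terms (up_a with down_b, up_b with down_a) exist only in the
# merged version, so the two results diverge.
print((live - merged).abs().max())  # clearly nonzero
```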

4. When Merging and Live Weighting Yield Similar Results

5. Unique Benefits of Merging

I hope that helps! :)

I am about to publish a very, very experimental and even more advanced merge strategy this weekend, along with an example of the data output grid.

AfterHAL commented 1 hour ago

@Anashel-RPG . Thanks for the explanations.