pondloso opened 1 week ago
The adaptive merge focuses on dynamically combining models, adjusting weights automatically based on the characteristics of the LoRAs being merged. While the manual weight you specify is still considered, its impact might not always be as noticeable as you’d expect.
This is particularly true when there’s little conflict between the models or when the tensor properties naturally guide the merging process. So, you might see that changing the weight percentage doesn’t significantly alter the output image. The goal of adaptive merging is to find the best blend on the fly, which can sometimes make manual weight adjustments less obvious.
If you prefer having direct control over weight percentages without the adaptive adjustments, I've added a standard merge option. (I've just pushed it, so you can git pull the update.) This lets you manually adjust the weights and get a predictable effect on the merged output. If adaptive merging doesn't quite meet your needs, switching to the standard merge will give you more control over how weights are applied.
Thank you for the update. But I tested the new standard merge option and it still produces the same output when adjusting weights.
The merging process is not just about creating a blend like mixing colors. Breaking it down into key elements can help better explain what I'm trying to do with this AI Toolkit. Here's what happens during merging and why it can offer more than simply using the models independently with live weights during generation.
When you merge two LoRA models, you are essentially combining their internal components, mainly the tensors that hold the learned patterns of each model.
Tensors (Layers): Each tensor is a multidimensional array of numbers that stores the learned weights of the LoRA model. They represent the "knowledge" the model has learned from its training data, such as textures, colors, styles, and structural features.
Weights of Layers: In each LoRA model, layers have weights that determine how much influence that layer has on the overall output. These weights are what make each model unique in how it applies its patterns during generation.
When merging, these weights are combined, adjusted, or balanced between the two models. That’s where the merging strategy comes in:
Adaptive Merge: Combines weights based on the data inside each tensor, allowing for a dynamic and context-aware blend.
Fixed Weights Merge: Uses manually set percentages to control how much of Model A and Model B influences the final merged model.
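To make the two strategies concrete, here is a minimal sketch of how they could combine a pair of matching tensors. The function names and the magnitude-based heuristic are my own illustration of the idea, not the toolkit's actual code:

```python
import numpy as np

def fixed_merge(tensor_a, tensor_b, weight_a=0.6):
    """Fixed weights merge: a straight weighted average using
    a manually chosen percentage for Model A."""
    return weight_a * tensor_a + (1.0 - weight_a) * tensor_b

def adaptive_merge(tensor_a, tensor_b):
    """Adaptive merge (illustrative): weight each tensor by its own
    magnitude, so the blend is driven by the data inside the tensors
    rather than a user-supplied percentage."""
    norm_a = np.linalg.norm(tensor_a)
    norm_b = np.linalg.norm(tensor_b)
    total = norm_a + norm_b
    if total == 0:
        return np.zeros_like(tensor_a)
    return (norm_a / total) * tensor_a + (norm_b / total) * tensor_b
```

Note how in the adaptive case the manual percentage never enters the computation, which is exactly why nudging the weight slider can have little visible effect.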
Unified Model: Merging combines the models into a single LoRA file, meaning all the learned patterns, weights, and features are baked into one set of tensors. This is more than just a mix; it’s about creating a unified entity where the features of each model are blended at a foundational level.
Integrated Features: I want the features of each model to interact and enhance each other in ways that don’t happen when using them separately. It’s like merging the DNA of both models to create a new organism that can express traits from both parents in a deeply integrated way.
When I use two LoRAs independently during image generation, I’m essentially telling the generation engine to apply each model’s influence at specific strengths (e.g., 60% from Model A, 40% from Model B). This works like applying filters on top of each other, with each adding its touch independently.
Merging fuses the models at the parameter level, meaning the patterns, textures, and styles are combined deeply. It’s like taking the DNA of both models and creating a new, integrated layer that expresses the traits of both models in a way that live weighting can’t replicate.
Example of Interaction:
Live Interaction: If Model A adds texture and Model B adds color, using them live means they affect the image independently.
Merged Integration: Merging these models allows the texture and color to blend more cohesively, creating effects where the texture influences how the color is applied, resulting in a look that feels more unified.
Similar or Redundant Features: If both LoRA models have similar learned features or overlapping patterns, merging them doesn’t create much new synergy. In some cases, merging behaves like a weighted average, which is what happens when using live weights to combine models during generation.
Linear Combinations: Basic merging strategies can sometimes act like straightforward blends. If the merged model doesn’t deeply integrate features, the output might be nearly identical to what I get by using the LoRAs independently with similar weights.
Minimal Feature Interaction: If the models’ effects are independent (e.g., one adjusts texture, the other adjusts lighting), merging and live weighting often produce similar outcomes because the features don’t deeply influence each other at the core level.
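The "linear combination" point can be made concrete: a LoRA applies a low-rank delta on top of the base weights, so a plain fixed-weight merge of the deltas is mathematically identical to applying both LoRAs live at those strengths. A toy demonstration with stand-in matrices (not the toolkit's code):

```python
import numpy as np

# A LoRA adds a low-rank delta to the base weight:
#   W_effective = W_base + scale * (B @ A)
# If merging is just a weighted sum of deltas, the result matches
# applying the two LoRAs live at those same strengths.
rng = np.random.default_rng(0)
W_base = rng.normal(size=(8, 8))
delta_a = rng.normal(size=(8, 8))  # stand-in for Model A's B @ A
delta_b = rng.normal(size=(8, 8))  # stand-in for Model B's B @ A

# Live weighting: apply each LoRA independently at 60% / 40%.
live = W_base + 0.6 * delta_a + 0.4 * delta_b

# Fixed-weight merge first, then apply the single merged LoRA at 100%.
merged_delta = 0.6 * delta_a + 0.4 * delta_b
merged = W_base + merged_delta

print(np.allclose(live, merged))  # the two paths agree exactly
```

This is why a purely linear merge of non-interacting features produces outputs indistinguishable from live weighting; any real difference has to come from a non-linear strategy.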
Deep Feature Fusion: Merging allows layers to interact at a fundamental level, creating new, cohesive effects. For example, merging can produce a detailed, realistic portrait overlaid with soft watercolor brush strokes—a hybrid effect that can’t be achieved by simply adjusting live weights.
More than Stacking: While live weighting is like stacking ingredients on a burger, merging transforms those ingredients into a new, unified layer—like blending them into a sauce, creating something more cohesive and uniquely expressive.
I hope that helps! :)
I am about to publish a very experimental, even more advanced merge strategy this weekend, along with an example of the data output grid.
@Anashel-RPG Thanks for the explanations.
I don't know why, but the Weight Percentage doesn't make any difference in the image output for me. My LoRAs merge together nicely, but I want to adjust them more with the Weight Percentage.