Luke2642 opened 1 year ago
I love the idea!
I actually already shamelessly stole the ModelMerger from A1111 - I use the add-difference
interpolation method to create Inpainting checkpoints at runtime as needed, but I implemented the weighted-sum
method as well, which in theory does what you describe (though without the ability to specify per-block weights yet, there's no reason it can't be modified to support that). I didn't get around to building any kind of user interface around it before alpha, but this seems like a very doable beta feature.
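For anyone following along, this is roughly what those two interpolation methods boil down to - a minimal sketch over plain PyTorch state dicts; the function names are illustrative, not the actual ModelMerger API:

```python
# Minimal sketch of the two merge methods mentioned above, assuming plain
# PyTorch state dicts (checkpoint key -> tensor). Function names are
# illustrative, not the real ModelMerger interface.
import torch


def weighted_sum(theta_a: dict, theta_b: dict, alpha: float) -> dict:
    """Blend two checkpoints: result = (1 - alpha) * A + alpha * B."""
    return {
        k: (1.0 - alpha) * theta_a[k] + alpha * theta_b[k]
        for k in theta_a
        if k in theta_b
    }


def add_difference(theta_a: dict, theta_b: dict, theta_c: dict, alpha: float) -> dict:
    """Add the difference (B - C) onto A, e.g. grafting the inpainting delta
    onto an arbitrary base model: result = A + alpha * (B - C)."""
    return {
        k: theta_a[k] + alpha * (theta_b[k] - theta_c[k])
        for k in theta_a
        if k in theta_b and k in theta_c
    }
```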
Excellent, glad you like it!
A similar idea: right now it's easy to generate an image, load a new model, wait ~6 seconds, img2img it to change style, then switch back, rinse and repeat. But if you could hold two models in memory and support a "generate an image using model X for the first N steps, then switch to model Y for the remaining M steps" workflow, I think many people would love it! Mixture of experts, but for SD models?
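Just to illustrate the mechanism (not a drop-in feature for this app's internals), here's a rough sketch of the step-handoff idea using diffusers as an example backend; the model IDs and the switch step are placeholders:

```python
# Rough sketch: denoise with UNet X for the first N steps, then hand off to
# UNet Y for the rest. Both UNets stay resident, so there is no reload cost.
# Model IDs and SWITCH_AT are placeholder values; both models must share the
# same architecture (e.g. two SD 1.5 finetunes).
import torch
from diffusers import StableDiffusionPipeline, UNet2DConditionModel

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
unet_x = pipe.unet
unet_y = UNet2DConditionModel.from_pretrained(
    "some/other-sd15-model", subfolder="unet", torch_dtype=torch.float16
).to("cuda")

SWITCH_AT = 12  # hand off after this many steps (placeholder)


def switch_model(pipeline, step, timestep, callback_kwargs):
    # Swap the denoiser once the hand-off step is reached; scheduler state
    # and latents carry over untouched.
    if step == SWITCH_AT:
        pipeline.unet = unet_y
    return callback_kwargs


image = pipe(
    "a portrait, dramatic lighting",
    num_inference_steps=30,
    callback_on_step_end=switch_model,
).images[0]
pipe.unet = unet_x  # restore model X for the next generation
```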
If you were to also support merge block weighting with add-difference, you'd need to hold three models in memory? RAM > VRAM, but eek!
Amazing app, and a great interface, really enjoying it!
I'm sure everyone has a few favourite extensions that they miss from other Stable Diffusion webuis... for me it's block merging models in real time, in memory:
https://github.com/ashen-sensored/sd-webui-runtime-block-merge
If you're not familiar with merge block weights, this is the extension it was built upon (which includes screenshots):
https://github.com/bbc-mc/sdweb-merge-block-weighted-gui
Using the runtime version makes the two-model workflow super fast: just move a slider, click generate, and you get instant results, with no need to save and load a merged model each iteration. The most popular photorealism models seem to be converging, losing the incredible diversity of other models. I just wanted to highlight this extension and ask whether you think it's compatible with your vision!
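For context, block merging is just a weighted sum where the interpolation weight depends on which UNet block a parameter belongs to. A rough sketch, assuming SD 1.x LDM-format state dict keys; the prefix mapping is simplified and the per-block alphas are illustrative, not the extension's exact parameterisation (which exposes 25 weights: IN00-IN11, M00, OUT00-OUT11):

```python
# Sketch of a per-block weighted-sum merge over SD 1.x checkpoint state dicts.
# Keys like "model.diffusion_model.input_blocks.N." follow the LDM naming;
# the alphas passed in are illustrative placeholders.
import re


def block_alpha(key, in_alphas, mid_alpha, out_alphas, base_alpha):
    """Pick the interpolation weight for a parameter based on its UNet block."""
    m = re.match(r"model\.diffusion_model\.input_blocks\.(\d+)\.", key)
    if m:
        return in_alphas[int(m.group(1))]
    if key.startswith("model.diffusion_model.middle_block."):
        return mid_alpha
    m = re.match(r"model\.diffusion_model\.output_blocks\.(\d+)\.", key)
    if m:
        return out_alphas[int(m.group(1))]
    return base_alpha  # text encoder, VAE, time embeddings, etc.


def block_weighted_merge(theta_a, theta_b, in_alphas, mid_alpha, out_alphas,
                         base_alpha=0.5):
    merged = {}
    for k, v in theta_a.items():
        if k in theta_b:
            a = block_alpha(k, in_alphas, mid_alpha, out_alphas, base_alpha)
            merged[k] = (1.0 - a) * v + a * theta_b[k]
        else:
            merged[k] = v
    return merged
```

The runtime extension does this in memory against the already-loaded models, which is why moving a slider and regenerating is nearly instant compared with saving and reloading a merged checkpoint.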