miasik opened this issue 8 months ago
Did you try using MBW mode? Internally, the Auto merger helper uses MBW weights (a simple alpha weight will not currently work).
Could you please post a screenshot of the settings to try?
> Did you try using MBW mode? Internally, the Auto merger helper uses MBW weights (a simple alpha weight will not currently work).
Yes, I did. If it had resolved the issue, I wouldn't have reported it here :-) I think the Auto helper should simply ignore simple mode, because using it there is almost useless.
Non-MBW mode is fixed now. Thank you for reporting!
I'm not sure why, but I have the same issue. I ran MBW for 7 hours and it generated 2200 images, but it just repeated the same 10 pics, sadge.
I ran "A × (1 - α0) + B × α0 + dare_weights(diff C) × α1" with the normal calcmode, advanced MBW mode enabled, and all blocks ("ALL") set to 0.5, using the pattern search and simulated annealing optimizers.
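For readers unfamiliar with the expression above, here is a minimal plain-Python sketch of what it computes per block. The `dare` function is a hypothetical stand-in for the extension's `dare_weights` step (DARE randomly drops elements of a weight difference and rescales the survivors); the real extension operates on PyTorch tensors, not lists.

```python
import random

def dare(diff, drop_rate=0.5, rng=None):
    """Illustrative DARE step: Drop each element of the weight delta with
    probability drop_rate And REscale the survivors by 1/(1 - drop_rate),
    so the delta's expected value is preserved.
    Hypothetical stand-in for the extension's dare_weights(diff C)."""
    rng = rng or random.Random(0)
    keep = 1.0 - drop_rate
    return [0.0 if rng.random() < drop_rate else x / keep for x in diff]

def merge_block(a, b, c_diff, alpha0, alpha1):
    """A * (1 - α0) + B * α0 + dare_weights(diff C) * α1, element-wise,
    for one block's flattened weights."""
    d = dare(c_diff)
    return [ai * (1 - alpha0) + bi * alpha0 + di * alpha1
            for ai, bi, di in zip(a, b, d)]
```

With alpha1 = 0 (or a zero diff for model C) this reduces to the plain A/B interpolation discussed earlier in the thread.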
MBW mode for 7 hours? Wow, I didn't run it that long.

Not all of the optimizers have been tested yet: some won't work at all, while others work as expected. The image scoring method is also an important factor. In any case, I suspect there is a bug preventing it from working correctly.

I recommend partial optimizing, for example:
- select a specific model: optimize only model "A".
- select specific blocks: select only "OUT00, OUT01, OUT03,..."

The default setting tries to vary all possible blocks and models, which consumes too much time/GPU power without much visible improvement.

Thank you for reporting!

Also, two days ago the following fix was applied: https://github.com/wkpark/sd-webui-model-mixer/commit/d0710bd35a1fbc3beff9497ae21b06c25849e048

Before that fix, the merge process did not work at all, so please check your version before trying the auto merger.
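To make the size difference concrete, here is a tiny Python sketch of the search-space dimensions involved. The block names follow the standard SD1.x MBW layout used by the extension; the 101-step weight granularity (0.00 to 1.00 in 0.01 steps) is just an assumption for illustration.

```python
# Each selected (model, block) pair is one dimension the optimizer must
# explore. Standard SD1.x MBW layout: BASE, IN00-IN08, M00, OUT00-OUT08.
blocks = (["BASE"]
          + [f"IN{i:02d}" for i in range(9)]
          + ["M00"]
          + [f"OUT{i:02d}" for i in range(9)])

# Default: vary every block of both extra models -> 40 dimensions.
full_space = {f"model_{m}.{b}": 101 for m in "bc" for b in blocks}

# Partial optimizing as suggested: only model B, only three OUT blocks.
partial_space = {f"model_b.{b}": 101 for b in ("OUT00", "OUT01", "OUT03")}

print(len(full_space), len(partial_space))  # 40 vs 3 dimensions
```

Cutting the search from 40 dimensions to 3 is what makes a run finish in a reasonable time with a visible effect per iteration.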
Yeah, I've only tried Bayesian, simulated annealing, and pattern search so far. I also did a run without specifying alpha values for any of the weights, in case it was locking them in somehow.
Another thing to note: I usually get a result that says "Best iteration: 0".
This was the 7-hour run: I set "search time (min)" to 200 and "search iteration" to 200, and let it run while I slept.
Results: 'hyper_score'
Best score: 0.5472393854351669
Best parameter set:
'model_b.BASE' : 0.58
'model_b.IN00' : 0.41
'model_b.IN01' : 0.42
'model_b.IN02' : 0.18
'model_b.IN03' : 0.54
'model_b.IN04' : 0.44
'model_b.IN05' : 0.1
'model_b.IN06' : 0.31
'model_b.IN07' : 0.23
'model_b.IN08' : 0.4
'model_b.M00' : 0.54
'model_b.OUT00' : 0.28
'model_b.OUT01' : 0.14
'model_b.OUT02' : 0.21
'model_b.OUT03' : 0.13
'model_b.OUT04' : 0.61
'model_b.OUT05' : 0.38
'model_b.OUT06' : 0.55
'model_b.OUT07' : 0.49
'model_b.OUT08' : 0.57
'model_c.BASE' : 0.53
'model_c.IN00' : 0.53
'model_c.IN01' : 0.23
'model_c.IN02' : 0.24
'model_c.IN03' : 0.21
'model_c.IN04' : 0.55
'model_c.IN05' : 0.58
'model_c.IN06' : 0.17
'model_c.IN07' : 0.3
'model_c.IN08' : 0.46
'model_c.M00' : 0.41
'model_c.OUT00' : 0.59
'model_c.OUT01' : 0.55
'model_c.OUT02' : 0.37
'model_c.OUT03' : 0.59
'model_c.OUT04' : 0.19
'model_c.OUT05' : 0.34
'model_c.OUT06' : 0.16
'model_c.OUT07' : 0.14
'model_c.OUT08' : 0.39
Best iteration: 0
Random seed: 876348280
Evaluation time : 26219.17984700203 sec [100.0 %]
Optimization time : 0.14667487144470215 sec [0.0 %]
Iteration time : 26219.326521873474 sec [120.27 sec/iter]
I'll try the latest update; I forgot to pull it. I was on https://github.com/wkpark/sd-webui-model-mixer/commit/bb3808bc469dda0a94c08dabd455428442b7b7d3
Anyway, it's a great extension, and I really like the auto merger implementation. It's still the least explored part of merging, even though it has so much potential.
Commit bb3808b contained the regression bug that prevented the merge from working properly. (I guess that's why the auto merge failed.)
I've been trying to understand how AutoMergerHelper works, but I noticed that it produces the same pictures with a 0.5 merge ratio instead of variable ratios. It looks like the default merging function overrides the helper and applies a plain 0.5 merge after each iteration. If I observed correctly, it renders the first image in the folder with the current variable ratio and all the other images with the fixed 0.5 ratio.
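This hypothesis would also explain the "Best iteration: 0" in the log above: if every candidate renders the same image, every candidate gets the same score, and a strictly-greater-than best-tracker never moves off iteration 0. A minimal sketch of that effect (a generic best-tracking loop, not the extension's actual optimizer):

```python
def run_search(score_fn, candidates):
    """Generic best-tracking loop, as most optimizers keep it internally."""
    best_score, best_iter = float("-inf"), None
    for i, params in enumerate(candidates):
        s = score_fn(params)
        if s > best_score:  # only a strictly better score moves "best"
            best_score, best_iter = s, i
    return best_score, best_iter

# If the merge ratio is silently ignored, every candidate scores the same,
# so the reported best iteration stays at 0.
constant = run_search(lambda p: 0.547, [{"alpha": x / 10} for x in range(10)])
varying = run_search(lambda p: p["alpha"], [{"alpha": x / 10} for x in range(10)])
print(constant, varying)  # best iteration stays 0 vs. moves to 9
```

A run that reports "Best iteration: 0" together with near-identical output images is therefore a strong hint that the candidate weights never actually reached the merge.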