Closed: TheOnlyHolyMoly closed this issue 1 year ago
as per the conversation in https://github.com/kohya-ss/sd-scripts/issues/397, i'm ok with @KohakuBlueleaf creating an additional cmd_opts flag (for example --lyco-patch-lora). i can control that programmatically as needed, and that way there would be no need for per-repo specific work here.
If this were done it should have full feature parity too, e.g. LoRA hashes are now calculated and read. https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/39ec4f06ffb2c26e1298b2c5d80874dc3fd693ac
this feature would be optional to start with, so it doesn't have to have full feature parity, only functional feature parity. but yes, it would be a nice-to-have. definitely not mandatory for an optional feature.
so basically, it is up to @KohakuBlueleaf to provide the enhanced version with the cmd_opts flag.
@vladmandic I have tried something on this feature
and it looks like if you want to get
yes, thats fine with me. in general, if one handler can handle everything there is no need for separate folders.
@vladmandic I have pushed a commit for this feature
Just tried it; it's not working at all. The lora is "loaded" and no errors are thrown, but the image is exactly the same, and the lora doesn't modify anything.
first, it's loading from lyco_dir, not lora_dir - i'm ok with that, just worth noting.
second, selecting a lora from extra networks does some weird tag-to-filename matching; never looked into that, will dive in. anyhow, specifying the exact filename in the prompt manually works, but selecting a lora from extra networks is buggy.
third - and this is a big one - results are veeeery different. i've tried some locons/lycos which do miss details when using the lora interpreter (which is obvious, since it doesn't know what to do with the extra layers), but the big issue is with some simple loras.
here's an example - above is using lyco and below is using lora interpreter:
as it is, the differences are just too big for it to be a possible replacement for built-in lora
@vladmandic ? It is loading from lora dir
Oh ok, I know why: I removed the extra UI. But I cannot remove the extra UI from built-in lora, so it will be a problem if I add it back (there will be 2 extra network UIs for the same dir, and both will generate
where it's loading from and which extra network pages are shown is more cosmetic - worth fixing if results are similar to the original, otherwise ppl will complain too much (and right now results are very different).
@vladmandic
I pushed another version, which also patches the extra network ui, and I just made an "exact same" image with it without changing any compute part.
BTW, if you are comparing between lycoris and the old lora implementation, this kind of difference is quite normal
oh wait which locon/lora are you using for comparing?
for example, https://civitai.com/models/7102/blatap3 (i like to use that one in my tests as its visually easily identifiable and visible if its working or not)
@vladmandic I think I found the reason. A1111 uses F.conv2d to rebuild the conv2d weight from 2 matrices... I can use the exact same operator, but it's just ugly
That's where the differences come from (I already checked: they will not give the same result, due to numerical error)
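The numerical-error point above can be demonstrated without any webui code: two mathematically equivalent ways of applying a low-rank (LoRA-style) update sum the floating-point terms in different orders, so the results can disagree in the last bits. A minimal sketch with plain Python floats and made-up values:

```python
# Minimal sketch (hypothetical values): applying a low-rank delta as
# (up @ down) @ x  versus  up @ (down @ x)  is algebraically identical,
# but float addition is not associative, so the two routes can differ
# slightly - the same kind of numerical error as between the F.conv2d
# rebuild and a direct matrix product.

def matmul(a, b):
    """Naive matrix multiply for lists of lists."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

# hypothetical rank-2 LoRA factors and an input column vector
up   = [[0.1234567, 0.7654321], [0.3141592, 0.2718281], [0.5772156, 0.6931471]]
down = [[0.1111111, 0.9999999, 0.3333333], [0.7777777, 0.1428571, 0.9090909]]
x    = [[0.123456789], [0.987654321], [0.555555555]]

# route 1: rebuild the full weight first, then apply it
y1 = matmul(matmul(up, down), x)
# route 2: apply the factors one after another
y2 = matmul(up, matmul(down, x))

# both routes are mathematically the same, but rounding can differ
max_diff = max(abs(a[0] - b[0]) for a, b in zip(y1, y2))
print(max_diff)  # a tiny value (on the order of 1e-16) or exactly 0.0
```

Negligible for a single layer, but across hundreds of layers and diffusion steps such last-bit discrepancies can accumulate into visible pixel differences.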
I have few ideas how to handle that, but need to run now, let's catch up tomorrow.
@vladmandic You can check the lora dir part and the extra network ui
I think it could be worth thinking about having two methods available (like the legacy type with the "ugly" conv2d and the improved type with the better conv2d)... and then calling the legacy form from the prompt via lora: and the new form via nlora:, for example? That way we'd keep the reproducibility of images whilst still enabling better tech. I would appreciate the creative options from such switching.
@vladmandic @TheOnlyHolyMoly I did some more tests on it... I find that I cannot reproduce the "different output". I have gotten different output, but it was caused by the order, not by a different implementation.
@vladmandic Here is my test result. "both lora" means both using lycoris; "lora + lyco" means built-in lora + lycoris. The different implementation didn't actually affect the result here.
@vladmandic I also tried the model you posted, which also produces the exact same image from built-in lora and lycoris.
Interesting... reminds me of the discrepancy that I had in my other environments... maybe it's something in the settings that makes the difference here between same/different?
When I want to test this too, do I need to git-pull this manually from within the lycoris extension folder? The extensions tab does not show me available updates.
If you have the same git commit in the extensions tab as in this repo's main branch, I think you can just try it.
okay, so i ran a test lora: vs. lyco: with and without patch enabled.
So, according to my test results (double-checked, as I could not believe it in the end), lora and lyco deliver the same result with this lora file. However, when the lyco-patch is enabled in settings, they both give a different result (again the same as each other). The deviation is there but small; I made an overlay with luminance matching, so the grey areas are where the two images deviate. I have no practical explanation for why the Lycoris extension would behave differently with the switch on or off... @vladmandic @KohakuBlueleaf
For Reproduction >> I shared only the prompt containing the lora: call to make it easier for viewing
Prompt: RAW, best quality, masterpiece, novelAI, trending on 500px full body photo of a Australian
SD Lycoris Extension Commit 21e9ea0f // Sun Jun04 2023 13:49
Using VENV: E:\VLADautomatic\venv
22:53:15-139802 INFO Running extension preloading
22:53:15-353797 INFO Starting SD.Next
22:53:15-356798 INFO Python 3.10.6 on Windows
22:53:15-614830 INFO Version: 9726b4d2 Sun Jun 4 13:21:52 2023 -0400
22:53:16-384820 INFO Setting environment tuning
22:53:16-390799 INFO nVidia CUDA toolkit detected
22:53:29-326883 INFO Torch 2.0.0+cu118
22:53:29-416859 INFO Torch backend: nVidia CUDA 11.8 cuDNN 8700
22:53:29-421852 INFO Torch detected GPU: NVIDIA GeForce RTX 4090 VRAM 24563 Arch (8, 9) Cores 128
22:53:29-424852 INFO Verifying requirements
22:53:29-463852 INFO Installing packages
22:53:29-677851 INFO No changes detected: Quick launch active
22:53:29-679856 INFO Running extension preloading
22:53:29-681852 INFO Server arguments: []
Please forgive me for the checkpoint, I felt it had the best capacity to capture the essence of the girl for this purpose.
BTW What's your extension list like?
Extension Current version
LDSR
ScuNET
a1111-sd-webui-lycoris c1e676b4
clip-interrogator-ext 9e6bbd9b
multidiffusion-upscaler-for-automatic1111 70b3c5ea
prompt-bracket-checker
sd-dynamic-thresholding f02cacfc
sd-extension-aesthetic-scorer b60c3c82
sd-extension-steps-animation 13e5b455
sd-extension-system-info 064c856a
sd-webui-controlnet e78d486c
sd-webui-model-converter f6e0fa53
seed_travel 4bc8b2f1
stable-diffusion-webui-images-browser 59547c84
stable-diffusion-webui-rembg 657ae9f5
this may be caused by some dtype things; I'm checking
ok, i thought about how to simplify testing of this, so i actually implemented a totally different feature - extra networks can now be selected in the xyz grid :)
and i tested using ~20 loras (now it's easy). lora and lyco are nearly identical (within the margin of error of nondeterministic cross-optimization). lyco-as-lora is pretty much always slightly different: the big picture is the same, but details actually change. examples:

ignore the differences in magmui, that is expected as it's a locon model, so it doesn't get correctly interpreted as lora. but look for example at dreamshaper: the picture frame in the background moved. and those subtle differences are in every single example.
and off-topic - is this necessary? :) a one-line warning, fine, but separators plus 4 lines?
=================================
Triggered lyco-patch-lora, will take lora_dir and <lora> format.
lyco_dir and <lyco> format is disabled
This patch may affect other lora extension
(if they don't support the lycoris extension or just use lora/lyco to determine which extension is working).
=================================
I'm pretty sure a lot of ppl just ignore all the one-line warnings. I can remove it if you want.
yes please :) and see #48 - error and warning log messages are much clearer to see so imo no need for separator lines in general.
@vladmandic I have found a tiny bug for sd2.x lyco models, but I still cannot reproduce or find anything in my code that could cause this kind of difference on SD1.x models.
@vladmandic if I read your summary right, you have confirmed my impression that the moment the "patch-mode" is turned on things start to deviate, right?
correct. both lora and lyco produce identical results on their own, but once lyco-patch-lora is active, there is a change in details. it's clearly working, but there is a visible change.
If it were only the lora interpretation changing upon patch activation, it would surprise me far less than LYCO: also changing its lora interpretation behavior upon patch activation... could it be something about the lora extension getting disabled?
what would happen - just as a testing approach - if we registered the namespace "lora2:" instead of lora and left the lora extension activated? could we then trace it down better?
BTW, will you guys disable built-in lora in the extensions tab when testing this?
you mean did we disable it for this test, or will we disable it completely in the future? for this test, i left it as-is - i plan one more test tomorrow with built-in lora removed. for the future, my goal is to disable built-in lora and rely on lyco as the default.
@vladmandic yeah, I found that built-in lora has something left over even if I remove it from the extra network dictionary. For me this didn't affect anything, but I don't know if it actually affects something or not.
i've just spent 2h looking at this and i'm even more confused. and i've turned on deterministic mode, so there is no variation due to other factors. here's a grid
differences are best seen in the vertical signs (larger blue sign and smaller b&w sign) to the left of the subject. what is somewhat expected is that...

what's confusing is: lyco-patch-lora and just lyco. any different if built-in lora is disabled? it shouldn't be. logically, any difference between those two must be down to the lyco extension and the implementation of lyco-patch-lora.

fyi, geninfo is: beautiful woman in a city wearing a flower dress, high detailed, wide shot <lora:dreamshaper:1.0> Negative prompt: easynegative, badprompts Steps: 30, Sampler: DPM++ 2M SDE, CFG scale: 6, Seed: 3910856728, Size: 512x512, Model hash: 7d3bdbad51, Model: a-zovya-photoreal-v10, Clip skip: 1, Version: b8f432a, Parser: Full parser
maybe the lyco extension is getting hijacked as a whole by SD when setting itself as the primary lora controller... that could explain why "lyco:" acts differently upon patching. I think there is testing value in registering the lora-lyco patch as lora2:; then we'd have "lora:" controlled by lora-legacy, and "lyco:" and "lora2:" by the lyco extension. I guess that way we could rule out that the patch mode as such has a flaw?
I don't know what you are talking about. All the "ExtraNetwork" classes are standalone from the SD model, and only SD is hijacked by ExtraNetwork.
And what you are talking about is totally meaningless.
Since the goal of this enhancement is: use
No matter if you use lora2 or whatever, if you are not using
Basically, I'm trying to move lora to lora-legacy (or old lora) when before_ui in lycoris is triggered, and see if this can solve the problem.
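The re-keying idea can be sketched with a plain dict standing in for webui's extra-network registry. The dict, handler names, and the patch_lora helper below are hypothetical stand-ins for illustration, not the extension's actual code:

```python
# Illustrative sketch only: a plain dict stands in for the webui
# extra-network registry, and the handler names are hypothetical.
# The idea: move the built-in handler aside under "lora_legacy" and
# register the lycoris handler under "lora", so <lora:...> in prompts
# routes to lycoris while the old handler stays reachable.

registry = {
    "lora": "builtin_lora_handler",  # hypothetical built-in handler
    "hypernet": "hypernet_handler",
}

def patch_lora(registry, lyco_handler):
    """Re-key the built-in lora handler and take over the 'lora' name."""
    if "lora" in registry:
        registry["lora_legacy"] = registry.pop("lora")
    registry["lora"] = lyco_handler
    return registry

patch_lora(registry, "lycoris_handler")
print(registry["lora"])         # lycoris_handler
print(registry["lora_legacy"])  # builtin_lora_handler
```

The point of doing this in a before_ui callback is timing: the swap has to happen after the built-in extension has registered itself but before the UI builds its extra-network pages.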
@vladmandic I think the problem here is: when I use my locon model to test, it gives me a totally identical result with patched lora / lora / lyco. So basically I have no way to check if my code resolves anything, since in my environment there is nothing nondeterministic (I'm even using xformers, but there is nothing different between all the images, not even 1 pixel).
normal lora / normal lyco
lyco-patched lora (removed lora from extra_network_registry, didn't disable/remove it)
move lora to lora_legacy, register lyco as lora
I just got totally identical results with all 5 different setups.
I will push the final one to lyco-patch-lora branch, so you guys can test it.
BTW, I used the "difference" blending mode to test the images; basically I just get totally black, which means all these images have no differences between each other. And here is the model I used to test: https://civitai.com/models/14878/loconlora-yog-sothoth-depersonalization
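The "difference" blending check described above is just a per-pixel absolute difference: an all-black result means the images are identical. With real images one would typically use Pillow's ImageChops.difference; plain lists of RGB tuples keep this sketch self-contained (all pixel values here are made up):

```python
# A dependency-free stand-in for the "difference" blending test:
# subtract two images channel by channel and check for all-zero output.

def difference_blend(img_a, img_b):
    """Per-channel absolute difference of two equally sized pixel lists."""
    return [tuple(abs(a - b) for a, b in zip(pa, pb))
            for pa, pb in zip(img_a, img_b)]

def is_black(img):
    """True when every channel of every pixel is 0 (images identical)."""
    return all(c == 0 for px in img for c in px)

# two tiny "images" as flat lists of RGB pixels (hypothetical data)
img1 = [(10, 20, 30), (200, 100, 50), (0, 0, 0)]
img2 = [(10, 20, 30), (200, 100, 50), (0, 0, 0)]
img3 = [(10, 21, 30), (200, 100, 50), (0, 0, 0)]

print(is_black(difference_blend(img1, img2)))  # True  -> no difference at all
print(is_black(difference_blend(img1, img3)))  # False -> at least one pixel differs
```

Note that single-unit channel differences are invisible to the eye but still show up as non-black pixels here, which is why this check is stricter than visual comparison.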
Preface: Built-in lora extension is currently incapable of handling many new lora formats. Lycoris extension is capable of handling such formats very well.
Goal: Enable the a1111-sd-webui-lycoris extension to act as the default lora and lycoris handler for webui.
Specs: See below (edited as per Vlad's comment below).