Natans8 opened this issue 3 months ago
Regional Prompter (Attention mode) works in reforge still.
On my new install, Regional Prompter fails to load the module ldm.modules at all and never gets added to the GUI. On the other side of things, I checked and I don't have the LDM Python module installed. I tried to install it, but I'm wondering whether there's a version-requirements mix-up between Forge and Regional Prompter.
Collecting ldm
  Using cached ldm-0.1.3.tar.gz (6.1 kB)
  Preparing metadata (setup.py): started
  Preparing metadata (setup.py): finished with status 'error'
  error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [12 lines of output]
Traceback (most recent call last):
File "
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
Error: StabilityMatrix.Core.Exceptions.ProcessException: Process python failed with exit-code 1.
at StabilityMatrix.Core.Processes.ProcessRunner.WaitForExitConditionAsync(Process process, Int32 expectedExitCode, CancellationToken cancelToken)
at StabilityMatrix.Core.Models.PackageModification.PipStep.ExecuteAsync(IProgress`1 progress)
at StabilityMatrix.Core.Models.PackageModification.PipStep.ExecuteAsync(IProgress`1 progress)
at StabilityMatrix.Core.Models.PackageModification.PackageModificationRunner.ExecuteSteps(IReadOnlyList`1 steps)
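For what it's worth, the `ldm` distribution on PyPI (the `ldm-0.1.3.tar.gz` above) appears to be unrelated to Stable Diffusion's `ldm` package, which ships inside the Stability repo rather than on PyPI. A quick way to check what `import ldm` would actually resolve to (a hypothetical diagnostic helper, not part of any project):

```python
import importlib.util

def module_origin(name):
    """Return the file that `import name` would load from, or None if
    the module is not importable at all. Useful for telling the PyPI
    'ldm' package apart from the repo's bundled copy."""
    try:
        spec = importlib.util.find_spec(name)
    except (ImportError, ValueError):
        return None
    return getattr(spec, "origin", None) if spec else None

# e.g. module_origin("ldm") should point inside the webui tree,
# not into site-packages, if the bundled copy is the one being found.
```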
Can confirm, I get the same error
File "E:\AI Art Stuff\forge\webui\extensions\sd-webui-regional-prompter\scripts\attention.py", line 3, in
Since an update a few days ago, other extensions have been throwing the same kind of error. Are there plans to support the Gradio 4 Forge? I especially liked mask mode.
yeah also broken for me after updating forge, hope it works soon
Try copying the 'ldm' folder in "..\stable-diffusion-webui-forge\repositories\stable-diffusion-stability-ai" to the root "..\stable-diffusion-webui-forge"; that could solve this problem. I used this method to run Regional Prompter on the latest Stable Diffusion Forge successfully.
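On Windows, where `cp -r` isn't available, the copy suggested above can be done with a small cross-platform script (the `repositories/` layout is taken from the comment above; `copy_ldm_to_root` is a hypothetical helper name, adjust paths to your install):

```python
import os
import shutil

def copy_ldm_to_root(forge_root):
    """Copy the bundled ldm package from the Stability repo into the
    webui root so `import ldm` can find it. Returns the destination
    path, or None if there was nothing to copy (or it already exists)."""
    src = os.path.join(forge_root, "repositories",
                       "stable-diffusion-stability-ai", "ldm")
    dst = os.path.join(forge_root, "ldm")
    if os.path.isdir(src) and not os.path.exists(dst):
        shutil.copytree(src, dst)
        return dst
    return None
```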
but when using an XL model, this error happened:

*** Error running postprocess: F:\AIDraw\stable-diffusion-webui-forge\extensions\sd-webui-regional-prompter\scripts\rp.py
Traceback (most recent call last):
  File "F:\AIDraw\stable-diffusion-webui-forge\modules\scripts.py", line 900, in postprocess
    script.postprocess(p, processed, script_args)
  File "F:\AIDraw\stable-diffusion-webui-forge\extensions\sd-webui-regional-prompter\scripts\rp.py", line 600, in postprocess
    unloader(self, p)
  File "F:\AIDraw\stable-diffusion-webui-forge\extensions\sd-webui-regional-prompter\scripts\rp.py", line 621, in unloader
    unloadlorafowards(p)
  File "F:\AIDraw\stable-diffusion-webui-forge\extensions\sd-webui-regional-prompter\scripts\latent.py", line 573, in unloadlorafowards
    emb_db = sd_hijack.model_hijack.embedding_db
AttributeError: 'StableDiffusionModelHijack' object has no attribute 'embedding_db'
With 1,1 / 0.2 / Horizontal:

*** Error running process: F:\AIDraw\stable-diffusion-webui-forge\extensions\sd-webui-regional-prompter\scripts\rp.py
Traceback (most recent call last):
  File "F:\AIDraw\stable-diffusion-webui-forge\modules\scripts.py", line 844, in process
    script.process(p, script_args)
  File "F:\AIDraw\stable-diffusion-webui-forge\extensions\sd-webui-regional-prompter\scripts\rp.py", line 502, in process
    self.handle = hook_forwards(self, p.sd_model.model.diffusion_model)
AttributeError: 'StableDiffusionXL' object has no attribute 'model'
Just use the older version of ForgeUI, it works
This is not the correct way to fix it, as it will probably break non-Forge setups (I didn't test that), but it is the way I found to get it working again.
diff --git a/scripts/latent.py b/scripts/latent.py
index 7d0dcc3..de60abb 100644
--- a/scripts/latent.py
+++ b/scripts/latent.py
@@ -570,7 +570,8 @@ def unloadlorafowards(p):
except:
pass
- emb_db = sd_hijack.model_hijack.embedding_db
+ from modules import ui_extra_networks_textual_inversion
+ emb_db = ui_extra_networks_textual_inversion.embedding_db
import lora
for net in lora.loaded_loras:
if hasattr(net,"bundle_embeddings"):
diff --git a/scripts/rp.py b/scripts/rp.py
index 93b4466..34272f2 100644
--- a/scripts/rp.py
+++ b/scripts/rp.py
@@ -499,14 +499,14 @@ class Script(modules.scripts.Script):
##### calcmode
if "Att" in calcmode:
- self.handle = hook_forwards(self, p.sd_model.model.diffusion_model)
+ self.handle = hook_forwards(self, p.sd_model.forge_objects.unet.model.diffusion_model)
if hasattr(shared.opts,"batch_cond_uncond"):
shared.opts.batch_cond_uncond = orig_batch_cond_uncond
else:
shared.batch_cond_uncond = orig_batch_cond_uncond
unloadlorafowards(p)
else:
- self.handle = hook_forwards(self, p.sd_model.model.diffusion_model,remove = True)
+ self.handle = hook_forwards(self, p.sd_model.forge_objects.unet.model.diffusion_model,remove = True)
setuploras(self)
# SBM It is vital to use local activation because callback registration is permanent,
# and there are multiple script instances (txt2img / img2img).
@@ -514,7 +514,7 @@ class Script(modules.scripts.Script):
elif "Pro" in self.mode: #Prompt mode use both calcmode
self.ex = "Ex" in self.mode
if not usebase : bratios = "0"
- self.handle = hook_forwards(self, p.sd_model.model.diffusion_model)
+ self.handle = hook_forwards(self, p.sd_model.forge_objects.unet.model.diffusion_model)
denoiserdealer(self)
neighbor(self,p) #detect other extention
@@ -608,7 +608,7 @@ class Script(modules.scripts.Script):
def unloader(self,p):
if hasattr(self,"handle"):
#print("unloaded")
- hook_forwards(self, p.sd_model.model.diffusion_model, remove=True)
+ hook_forwards(self, p.sd_model.forge_objects.unet.model.diffusion_model, remove=True)
del self.handle
self.__init__()
@@ -711,7 +711,7 @@ def tokendealer(self, p):
padd = 0
- tokenizer = shared.sd_model.conditioner.embedders[0].tokenize_line if self.isxl else shared.sd_model.cond_stage_model.tokenize_line
+ tokenizer = shared.sd_model.conditioner.embedders[0].tokenize_line if self.isxl else shared.sd_model.text_processing_engine_l.tokenize_line
for pp in ppl:
tokens, tokensnum = tokenizer(pp)
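The repeated `p.sd_model.model.diffusion_model` → `p.sd_model.forge_objects.unet.model.diffusion_model` replacements in the diff above could also be centralized in one helper that tries the Forge layout first and falls back to the A1111 one. A sketch, assuming those attribute paths are as shown in the diff (`get_diffusion_model` is a hypothetical name):

```python
def get_diffusion_model(sd_model):
    """Resolve the UNet for hook_forwards on both backends:
    Forge exposes it under forge_objects, A1111 under sd_model.model."""
    forge = getattr(sd_model, "forge_objects", None)
    if forge is not None:
        return forge.unet.model.diffusion_model
    return sd_model.model.diffusion_model
```

With this in place, each call site in rp.py would read `hook_forwards(self, get_diffusion_model(p.sd_model))` and a single definition would cover both backends.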
This is awesome! Can you please explain what I do with this, as a non-code-savvy person?
Thank you <3
@vitrvvivs Neither `git apply fix.diff` nor `patch -i fix.diff` works for me. Both say the patch does not apply.
Edit: I should also say I made the changes manually, and it still didn't work. Forge gave similar error messages.
@BrianAllred Shoot, I think those are whitespace errors. The repo uses CRLF and has trailing spaces all over the place, while your editor probably uses LF, and GitHub stripped all the trailing spaces from the comments. Both of those cause the patch to fail to match.
git config core.whitespace cr-at-eol
git apply --whitespace=fix fix.diff
I should mention I also used @chen079 's suggestion
stable-diffusion-webui$ cp -r repositories/stable-diffusion-stability-ai/ldm ./
I'm absolutely disgusted with the Forge update, which has tons of issues, including breaking Regional Prompter. Is there a temporary alternative or a similar extension?
Going back to A1111 and its slowness unfortunately seems like the only solution...
You could just use an older version of Forge, iirc it's explained in the README.
I heard https://github.com/Panchovix/stable-diffusion-webui-reForge is a good solution for quickly swapping versions.
Thank you for your answers! I don't know why I didn't think of it, but it now seems obvious to use the previous July version. I'm relieved, I’ll be able to resume my work.
(edited) Funny thing: the prompting is all weird now, but this one works: '1girl in garden, cowboy shot ADDCOMM green hair twintail ADDROW BREAK blue blouse ADDROW BREAK red skirt'
But I can't switch it over to column mode, and if I remove the BREAKs, everything falls apart.
The problem is that ldm is not available in Forge when you install it. I solved it by installing ldm manually in the Forge environment:
https://github.com/lllyasviel/stable-diffusion-webui-forge/issues/1407#issuecomment-2336399504
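An alternative to installing anything: point `sys.path` at the repo copy that Forge already ships. This is a sketch under the assumption that the Stability repo still lives under `repositories/`; `add_repo_ldm_to_path` is a hypothetical helper name:

```python
import os
import sys

def add_repo_ldm_to_path(forge_root):
    """If the Stability repo ships an 'ldm' package, prepend the repo to
    sys.path so `import ldm` resolves without installing the (unrelated)
    PyPI 'ldm' distribution. Returns True if the path was added."""
    repo = os.path.join(forge_root, "repositories",
                        "stable-diffusion-stability-ai")
    if os.path.isdir(os.path.join(repo, "ldm")) and repo not in sys.path:
        sys.path.insert(0, repo)
        return True
    return False
```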
Edit: Forge has altered the StableDiffusionModelHijack class, which in A1111 has an embedding_db attribute that does not exist in Forge. Regional Prompter uses this to loop through the LoRAs, and Forge has completely disabled the StableDiffusionModelHijack class, boldly replacing all its methods with pass. So it seems there is currently no chance to get it running at all.
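To illustrate (simplified, hypothetical classes, not Forge's actual source): when every method of the hijack class is stubbed out and the attribute is never created, a direct access raises the `AttributeError` seen in the traceback above, while a guarded lookup degrades gracefully:

```python
class HijackA1111:
    """Sketch of the A1111 shape: the attribute exists."""
    def __init__(self):
        self.embedding_db = {}

class HijackForge:
    """Sketch of the Forge shape: methods are no-ops, no embedding_db."""
    def hijack(self, model):
        pass

forge_hijack = HijackForge()
# forge_hijack.embedding_db would raise AttributeError, as in the log;
# a guarded lookup returns a fallback value instead:
emb_db = getattr(forge_hijack, "embedding_db", None)
```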
Got the same error; just commenting to promote the solution to this. It would be awesome to have the possibility to use it with Flux, but we'll see.
Couldn't make any of the solutions mentioned before work (didn't try going back a version, but I honestly prefer having Flux over Regional Prompter for now, as I can use it in A1111, I guess).
*** Error loading script: attention.py
Traceback (most recent call last):
File "D:\Users\USER\Documents\AI\webui_forge_cu121_torch231\webui\modules\scripts.py", line 525, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "D:\Users\USER\Documents\AI\webui_forge_cu121_torch231\webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "
*** Error loading script: latent.py
Traceback (most recent call last):
File "D:\Users\USER\Documents\AI\webui_forge_cu121_torch231\webui\modules\scripts.py", line 525, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "D:\Users\USER\Documents\AI\webui_forge_cu121_torch231\webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "
*** Error loading script: rp.py
Traceback (most recent call last):
File "D:\Users\USER\Documents\AI\webui_forge_cu121_torch231\webui\modules\scripts.py", line 525, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "D:\Users\USER\Documents\AI\webui_forge_cu121_torch231\webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "
I can confirm it works (and installing it lets the main Regional Prompter show up, though not work), but only with SDXL (and presumably SD1.5). I couldn't get it to do any regional functionality with a Flux model, unfortunately.
Can also confirm this is working on Forge with PonyXL models / LoRAs. Nice work.
Does the masking feature work too? I'm still getting an error
No it doesn't work. It has an error (pt) as written elsewhere. The fork is useless with the latest forge - which is a shame.
Is this recent? Because I got it working a while back, but haven't tried it for a few days.
Checked and it works. It randomly breaks for me after messing with prompting too much, but overall at least column mode with common positive and negative works.
In the recent WebUI Forge updates, Regional Prompter is unable to even load, since there were many fundamental changes to both backend and frontend, including the switch to Gradio 4. Currently, Regional Prompter is the last plugin that holds me back from switching completely to Forge, since Forge seems overall better optimized and supports features like Flux that A1111 does not.
It would be great to adapt, if not to Flux, then at least to keep the existing SD1.5/SDXL capabilities in Forge. Thank you in advance!
Below is the log of WebUI Forge when trying to start with Regional Prompter.