Closed: lllyasviel closed this issue 1 year ago.
not sure, but perhaps you can try
I need help :( I just tried using Shuffle and got this error:
"ValueError: not enough values to unpack (expected 2, got 1)"
update and restart everything completely including your terminal
You mean update A1111 as well?
make sure you see this, and then close your terminal and start again
Mine says '33608671' and I just updated it earlier
Ah, it's the one you shared now :)
Goddammit, I'm still getting that error :(
"ValueError: not enough values to unpack (expected 2, got 1)"
please share the full log
Error completing request
Arguments: ('task(pyzzk7tkwj763u1)', 'A highly detailed portrait of guanyu, masterpiece, absurdres, highres, featured on ArtStation', 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality', [], 20, 0, False, False, 1, 1, 7.5, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', <scripts.external_code.ControlNetUnit object at 0x00000169014997B0>, <scripts.external_code.ControlNetUnit object at 0x0000016901275810>, 0.9, 5, '0.0001', False, 'None', '', 0.1, False, True, -1.0, True, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', True, False, False, 1, '', 0, '', 0, '', True, False, False, False, 0, None, False, None, False, 50, '{inspiration}', None) {}
Traceback (most recent call last):
File "C:\ai\stable-diffusion-webui\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "C:\ai\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "C:\ai\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
processed = process_images(p)
File "C:\ai\stable-diffusion-webui\modules\processing.py", line 486, in process_images
res = process_images_inner(p)
File "C:\ai\stable-diffusion-webui\modules\processing.py", line 635, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "C:\ai\stable-diffusion-webui\modules\processing.py", line 835, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "C:\ai\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 351, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "C:\ai\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 227, in launch_sampling
return func()
File "C:\ai\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 351, in
I cannot reproduce your problem. you may try to completely remove controlnet and install again.
Is it because I'm not using the most recent version of A1111? I'm using an older commit.
I'm going to say that's very likely why. I'm on the latest commit and can't reproduce this either.
I'll update my A1111 and see if that fixes it. Hopefully it does.
does this problem happen only with shuffle, or with all models?
I've only tried Shuffle. I haven't used the others yet, but I just updated A1111.
Nope, still getting the error
are you using special gpu flags in a1111?
Yes, --precision full --no-half --lowvram
let me try
yes it is broken with --lowvram I reproduced it. working on it now
Awesome, thank you <3 I'm glad we figured out what triggered it.
unfortunately --lowvram does not work with shuffle or guess mode. it is impossible to fix. added an error log
Does --medvram work with it?
update and restart everything completely including your terminal
BTW, instead of having everybody completely restart the webui, consider adding this at the top of controlnet.py:
import importlib
from scripts import ..., hook, ...
importlib.reload(hook)
Make sure to put the reload before any import from hook, such as ControlParams and UnetHook.
Basically we need to importlib.reload() every python module that we import and that is expected to change, so that the webui reloads them when we reload the gradio interface.
Edit: done in #787
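To make the ordering concrete, here is a minimal sketch of the full pattern (assuming the extension's scripts.hook module; this only makes sense inside the webui process):

import importlib

from scripts import hook
importlib.reload(hook)  # re-execute scripts/hook.py so edited code takes effect
from scripts.hook import ControlParams, UnetHook  # must come after the reload to bind the fresh definitions

If the from scripts.hook import ... line ran before the reload, the old ControlParams and UnetHook objects would stay bound even after the gradio interface is reloaded.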
Now I'm getting this error:
RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR
You can try to repro this exception using the following code snippet. If that doesn't trigger the error, please include your original repro script when reporting this issue.

import torch
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.allow_tf32 = True
data = torch.randn([1, 3, 512, 512], dtype=torch.float, device='cuda', requires_grad=True)
net = torch.nn.Conv2d(3, 16, kernel_size=[3, 3], padding=[1, 1], stride=[1, 1], dilation=[1, 1], groups=1)
net = net.cuda().float()
out = net(data)
out.backward(torch.randn_like(out))
torch.cuda.synchronize()

ConvolutionParams
  memory_format = Contiguous
  data_type = CUDNN_DATA_FLOAT
  padding = [1, 1, 0]
  stride = [1, 1, 0]
  dilation = [1, 1, 0]
  groups = 1
  deterministic = false
  allow_tf32 = true
input: TensorDescriptor 0000018A17ACC8A0
  type = CUDNN_DATA_FLOAT
  nbDims = 4
  dimA = 1, 3, 512, 512,
  strideA = 786432, 262144, 512, 1,
output: TensorDescriptor 0000018A17ACCD00
  type = CUDNN_DATA_FLOAT
  nbDims = 4
  dimA = 1, 16, 512, 512,
  strideA = 4194304, 262144, 512, 1,
weight: FilterDescriptor 0000018A13BF49D0
  type = CUDNN_DATA_FLOAT
  tensor_format = CUDNN_TENSOR_NCHW
  nbDims = 4
  dimA = 16, 3, 3, 3,
Pointer addresses:
  input: 000000075CC00000
  output: 000000084F800000
  weight: 00000007097FC800
Oh, also, my GPU is only 6GB. Will that affect it? That's why I had --lowvram on :( I have a 1660Ti.
@bropines If I recall, there was a separate project that converted manga screentones to be flat colors, though I may be confusing it with something else. If I come across it again I'll share it here. I think for your particular example though it's having trouble due to the middle grey values, those values aren't going to matter whether they're inverted or not. Screentones don't seem to affect it that much in my testing unless they're middle grey like your example.
Edit: Not the one I was thinking of but might be useful. https://github.com/natethegreate/Screentone-Remover
lllyasviel might be able to chime in more on this. Ultimately someone could just take b/w and color manga and make a cnet, but procuring a dataset for that sounds like a task of its own.
So, Clip_vision doesn't work for me now :( It worked when I had the commit I was using before, but now I'm getting this error:
Error running process: C:\ai\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py
Traceback (most recent call last):
File "C:\ai\stable-diffusion-webui\modules\scripts.py", line 417, in process
script.process(p, *script_args)
File "C:\ai\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 776, in process
fake_detected_map = np.ndarray((detected_map.shape[0] * 4, detected_map.shape[1]), dtype="uint8", buffer=detected_map.numpy(force=True).tobytes())
TypeError: Tensor.numpy() takes no keyword arguments
Really frustrating that updating made it so I can't use Clip_vision anymore either :( I don't even know how I would go back to previous versions that actually worked.
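For what it's worth, Tensor.numpy(force=True) only exists in newer PyTorch (the force keyword was added around 1.13), which would explain this TypeError on an older install. A minimal sketch of an equivalent that also works on older torch (the tensor here is a hypothetical stand-in, not the extension's real detected_map):

import numpy as np
import torch

detected_map = torch.rand(64, 64)  # stand-in for the real detected map

# old-torch equivalent of detected_map.numpy(force=True): detach and move to CPU first
arr = detected_map.detach().cpu().numpy()
fake_detected_map = np.ndarray(
    (detected_map.shape[0] * 4, detected_map.shape[1]),  # *4 rows because each float32 element is 4 bytes
    dtype="uint8",
    buffer=arr.tobytes(),
)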
i am taking a look into it now
For anybody running into unexpected errors with controlnet 1.1, you can roll back to the commit prior to the update:
git reset --hard 0f549888fd49aea48a4a5049f75c2e87ad3affad
And then you can always update controlnet normally when the main branch is more stable.
IIUC this is not a perfect solution. We are not squashing commits when merging into main, so different commits from different branches end up interleaved, which makes it hard to revert cleanly to an older version for some of the commits. I tested the commit above and it seems to work with clip_vision, but I don't know if anything else breaks.
How do you do this? How do you do that git reset thing?
1. In File Explorer, open the stable-diffusion-webui\extensions\sd-webui-controlnet folder.
2. Type cmd in the address bar in place of the full path and press Enter to open a terminal there.
2.b. You can also go to "File" > "Open Windows PowerShell" in Windows 10 File Explorer.
3. Run the git reset command above.
That should do it.
can anyone actually reproduce the problem with clip_vision? it seems to work well on my side
I can try to reproduce. What model takes clip_vision as input?
Ah I found it. Let me grab a T2I adapter on huggingface.
t2i adapter style. and let me try more a1111 flags
@MadaraxUchiha88 What webui flags are you using? Maybe should create an issue to not clutter this thread too much.
I tried creating an issue but it was a lot of questions :( As for flags, I'm using --precision full --no-half --lowvram
a1111 "--lowvram" uses a special input shape. usually the input is [2, 4, 64, 64] but "--lowvram" use two [1, 4, 64, 64] and controlnet do not know which [1, 4, 64, 64] is uncond. it is more difficult to handle
and actually i do not even know the shape flow of previous clip vision
@Mikubill we need you!
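For illustration, a minimal sketch of why the split batches break things (shapes from the comment above; this is a toy example, not the extension's actual code):

import torch

x = torch.randn(2, 4, 64, 64)          # normal: cond and uncond travel in one batch
a, b = x.chunk(2)                      # fine: two [1, 4, 64, 64] halves

x_lowvram = torch.randn(1, 4, 64, 64)  # --lowvram: each half arrives on its own
a, b = x_lowvram.chunk(2)              # ValueError: not enough values to unpack (expected 2, got 1)

That last line fails with exactly the error reported at the top of this thread.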
Can a 6GB device work with --medvram? If not, at which step does it give you errors?
a1111 "--lowvram" uses a special input shape. usually the input is [2, 4, 64, 64] but "--lowvram" use two [1, 4, 64, 64] and controlnet do not know which [1, 4, 64, 64] is uncond. it is more difficult to handle
I use a complex hack trick to determine the actual cond and uncond. Here, at line 191: https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111/blob/main/tile_methods/abstractdiffusion.py
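As a hypothetical sketch of the general idea only (the function name and caching scheme below are assumptions, not the code from the linked repo):

import torch

def is_uncond(context: torch.Tensor, cached_uncond: torch.Tensor) -> bool:
    # compare the incoming text embedding against a cached copy of the
    # unconditional embedding; a match means this [1, ...] pass is the uncond half
    return context.shape == cached_uncond.shape and torch.equal(context, cached_uncond)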
but i tried with a 6GB machine with --lowvram and everything is ok, no error
Python 3.10.8 | packaged by conda-forge | (main, Nov 24 2022, 14:07:00) [MSC v.1916 64 bit (AMD64)]
Commit hash: 22bcc7be428c94e9408f589966c2040187245d81
Installing requirements for Web UI
Launching Web UI with arguments: --precision full --no-half --lowvram
No module 'xformers'. Proceeding without it.
Loading weights [cc6cb27103] from D:\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.ckpt
Creating model from config: D:\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying cross attention optimization (Doggettx).
Textual inversion embeddings loaded(0):
Model loaded in 1.8s (load weights from disk: 1.1s, create model: 0.3s, apply weights to model: 0.4s).
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
Startup time: 9.4s (import torch: 3.0s, import gradio: 0.7s, import ldm: 0.4s, other imports: 0.9s, load scripts: 1.6s, load SD checkpoint: 2.1s, create ui: 0.6s, gradio launch: 0.1s).
Loading model: t2iadapter_style_sd14v1 [202e85cc]
Loaded state_dict from [D:\stable-diffusion-webui\extensions\sd-webui-controlnet\models\t2iadapter_style_sd14v1.pth]
ControlNet model t2iadapter_style_sd14v1 [202e85cc] loaded.
Loading preprocessor: clip_vision
5%|▌ | 1/20 [00:03<01:11, 3.75s/it]
Total progress: 0%| | 0/20 [00:00<?, ?it/s]
Total progress: 10%|█ | 2/20 [00:02<00:25, 1.41s/it]
20%|██ | 4/20 [00:12<00:46, 2.92s/it]
Total progress: 20%|██ | 4/20 [00:08<00:36, 2.28s/it]
Total progress: 25%|██▌ | 5/20 [00:11<00:36, 2.46s/it]
I cannot detect any problem?
@MadaraxUchiha88 can you share the full log of clip vision fail?
It's this one:
Error running process: C:\ai\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py
Traceback (most recent call last):
File "C:\ai\stable-diffusion-webui\modules\scripts.py", line 417, in process
script.process(p, *script_args)
File "C:\ai\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 776, in process
fake_detected_map = np.ndarray((detected_map.shape[0] * 4, detected_map.shape[1]), dtype="uint8", buffer=detected_map.numpy(force=True).tobytes())
TypeError: Tensor.numpy() takes no keyword arguments
let me take a look
We will use this repo to track some discussions for updating to ControlNet 1.1.