Open dhwz opened 11 months ago
Added casts to the actual VAE dtype; it should work now.
@wcde Sadly, no, it's still throwing the same error. AFAIK I had already tried to fix it the same way you did. I was just running out of ideas about what could be wrong. I'm getting this error:
*** Error running postprocess_image: /home/dragon/stable-diffusion-webui/extensions/custom-hires-fix-for-automatic1111/scripts/custom_hires_fix.py
Traceback (most recent call last):
File "/home/dragon/stable-diffusion-webui/modules/scripts.py", line 575, in postprocess_image
script.postprocess_image(p, pp, *script_args)
File "/home/dragon/stable-diffusion-webui/extensions/custom-hires-fix-for-automatic1111/scripts/custom_hires_fix.py", line 182, in postprocess_image
x = self.filter(x)
File "/home/dragon/stable-diffusion-webui/extensions/custom-hires-fix-for-automatic1111/scripts/custom_hires_fix.py", line 300, in filter
encoded_sample = shared.sd_model.encode_first_stage(decoded_sample.unsqueeze(0).to(devices.dtype_vae))
File "/home/dragon/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/dragon/stable-diffusion-webui/repositories/generative-models/sgm/models/diffusion.py", line 127, in encode_first_stage
z = self.first_stage_model.encode(x)
File "/home/dragon/stable-diffusion-webui/modules/lowvram.py", line 50, in first_stage_model_encode_wrap
return first_stage_model_encode(x)
File "/home/dragon/stable-diffusion-webui/repositories/generative-models/sgm/models/autoencoder.py", line 321, in encode
return super().encode(x).sample()
File "/home/dragon/stable-diffusion-webui/repositories/generative-models/sgm/models/autoencoder.py", line 309, in encode
moments = self.quant_conv(h)
File "/home/dragon/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/dragon/stable-diffusion-webui/extensions-builtin/Lora/networks.py", line 376, in network_Conv2d_forward
return torch.nn.Conv2d_forward_before_network(self, input)
File "/home/dragon/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/home/dragon/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (float) and bias type (c10::Half) should be the same
@wcde Just a note: I found out it doesn't always happen; it depends on the upscale resolution. E.g. 1024 > 1536 works, but 1024 > 2048 gives the above error. It's definitely a conversion error for encoded_sample; I tried to fix it, but I couldn't find the correct way to solve it.
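For reference, the mismatch and one possible fix can be sketched like this. It uses a stand-in `Conv2d` for the VAE's `quant_conv`, with the dtypes swapped (float32 weights, half input) so it runs on any machine; the `shared.sd_model` lines in the comments are an untested guess based on the traceback, not a confirmed patch:

```python
import torch

# Minimal reproduction of the dtype mismatch: F.conv2d refuses mixed
# dtypes between input and weights/bias, in either direction.
conv = torch.nn.Conv2d(4, 8, kernel_size=1)  # float32 weights and bias
sample = torch.randn(1, 4, 16, 16).half()    # half input -> mismatch

try:
    conv(sample)
except RuntimeError as err:
    print("dtype mismatch:", err)

# Possible fix: cast the sample to whatever dtype the VAE actually holds,
# instead of trusting devices.dtype_vae, which might disagree with the
# loaded weights under --no-half-vae or lowvram offloading. In the
# extension this would be roughly (untested, names from the traceback):
#   vae_dtype = next(shared.sd_model.first_stage_model.parameters()).dtype
#   encoded_sample = shared.sd_model.encode_first_stage(
#       decoded_sample.unsqueeze(0).to(vae_dtype))
vae_dtype = next(conv.parameters()).dtype
out = conv(sample.to(vae_dtype))
print(out.dtype)  # torch.float32
```

Querying the module's own parameter dtype avoids hard-coding half vs. float, which may explain why the error only appears at some resolutions (different code paths casting the sample differently).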