AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: cannot use on my old pc #9349

Closed: zarigata closed this issue 1 year ago

zarigata commented 1 year ago

Is there an existing issue for this?

What happened?

I was trying to run it on my server, which has 64 GB of RAM, but every time it gives me RuntimeError: "log_vml_cpu" not implemented for 'Half'.

Steps to reproduce the problem

Run the web UI with ./webui.sh

What should have happened?

It should run.

Commit where the problem happens

commit: 22bcc7be

What platforms do you use to access the UI?

Windows, Linux

What browsers do you use to access the UI?

Google Chrome, Brave

Command Line Arguments

# Commandline arguments for webui.py, for example: export COMMANDLINE_ARGS="--medvram --opt-split-attention"
export COMMANDLINE_ARGS="--skip-torch-cuda-test --share"

List of extensions

None

Console logs

Creating model from config: /home/carlos/stable-diffusion-webui/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying cross attention optimization (InvokeAI).
Textual inversion embeddings loaded(0):
Model loaded in 27.3s (load weights from disk: 0.7s, create model: 3.3s, apply weights to model: 4.3s, apply half(): 18.8s).
Running on local URL:  http://127.0.0.1:7860
Running on public URL: https://fcdcf4a07b5b537dde.gradio.live

This share link expires in 72 hours. For free permanent hosting and GPU upgrades (NEW!), check out Spaces: https://huggingface.co/spaces
Startup time: 43.4s (import torch: 2.4s, import gradio: 2.5s, import ldm: 1.2s, other imports: 1.3s, setup codeformer: 0.2s, load scripts: 1.1s, load SD checkpoint: 27.4s, create ui: 0.7s, gradio launch: 6.6s).
Error completing request
Arguments: ('task(8czgebxlzcrpg1q)', 'delorean', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0) {}
Traceback (most recent call last):
  File "/home/carlos/stable-diffusion-webui/modules/call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "/home/carlos/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/home/carlos/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "/home/carlos/stable-diffusion-webui/modules/processing.py", line 503, in process_images
    res = process_images_inner(p)
  File "/home/carlos/stable-diffusion-webui/modules/processing.py", line 642, in process_images_inner
    uc = get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, p.steps, cached_uc)
  File "/home/carlos/stable-diffusion-webui/modules/processing.py", line 587, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps)
  File "/home/carlos/stable-diffusion-webui/modules/prompt_parser.py", line 140, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "/home/carlos/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 669, in get_learned_conditioning
    c = self.cond_stage_model(c)
  File "/home/carlos/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/carlos/stable-diffusion-webui/modules/sd_hijack_clip.py", line 229, in forward
    z = self.process_tokens(tokens, multipliers)
  File "/home/carlos/stable-diffusion-webui/modules/sd_hijack_clip.py", line 254, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "/home/carlos/stable-diffusion-webui/modules/sd_hijack_clip.py", line 302, in encode_with_transformers
    outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
  File "/home/carlos/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/carlos/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 811, in forward
    return self.text_model(
  File "/home/carlos/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/carlos/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 721, in forward
    encoder_outputs = self.encoder(
  File "/home/carlos/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/carlos/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 650, in forward
    layer_outputs = encoder_layer(
  File "/home/carlos/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/carlos/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 378, in forward
    hidden_states = self.layer_norm1(hidden_states)
  File "/home/carlos/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/carlos/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/normalization.py", line 190, in forward
    return F.layer_norm(
  File "/home/carlos/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/functional.py", line 2515, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

Error completing request
Arguments: ('task(vf104p0c4s8vku0)', 'delorean', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0) {}
Traceback (most recent call last):
  File "/home/carlos/stable-diffusion-webui/modules/call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "/home/carlos/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/home/carlos/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "/home/carlos/stable-diffusion-webui/modules/processing.py", line 503, in process_images
    res = process_images_inner(p)
  File "/home/carlos/stable-diffusion-webui/modules/processing.py", line 642, in process_images_inner
    uc = get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, p.steps, cached_uc)
  File "/home/carlos/stable-diffusion-webui/modules/processing.py", line 587, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps)
  File "/home/carlos/stable-diffusion-webui/modules/prompt_parser.py", line 140, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "/home/carlos/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 669, in get_learned_conditioning
    c = self.cond_stage_model(c)
  File "/home/carlos/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/carlos/stable-diffusion-webui/modules/sd_hijack_clip.py", line 229, in forward
    z = self.process_tokens(tokens, multipliers)
  File "/home/carlos/stable-diffusion-webui/modules/sd_hijack_clip.py", line 254, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "/home/carlos/stable-diffusion-webui/modules/sd_hijack_clip.py", line 302, in encode_with_transformers
    outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
  File "/home/carlos/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/carlos/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 811, in forward
    return self.text_model(
  File "/home/carlos/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/carlos/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 721, in forward
    encoder_outputs = self.encoder(
  File "/home/carlos/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/carlos/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 650, in forward
    layer_outputs = encoder_layer(
  File "/home/carlos/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/carlos/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 378, in forward
    hidden_states = self.layer_norm1(hidden_states)
  File "/home/carlos/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/carlos/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/normalization.py", line 190, in forward
    return F.layer_norm(
  File "/home/carlos/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/functional.py", line 2515, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

Error completing request
Arguments: ('task(7mk2hqp002y5va1)', 0, '', '', [], <PIL.Image.Image image mode=RGBA size=736x736 at 0x7FA3BFB52C50>, None, None, None, None, None, None, 20, 0, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.75, -1.0, -1.0, 0, 0, 0, False, 512, 512, 0, 0, 32, 0, '', '', '', [], 0, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', 0, '', 0, '', True, False, False, False, 0) {}
Traceback (most recent call last):
  File "/home/carlos/stable-diffusion-webui/modules/call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "/home/carlos/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/home/carlos/stable-diffusion-webui/modules/img2img.py", line 172, in img2img
    processed = process_images(p)
  File "/home/carlos/stable-diffusion-webui/modules/processing.py", line 503, in process_images
    res = process_images_inner(p)
  File "/home/carlos/stable-diffusion-webui/modules/processing.py", line 594, in process_images_inner
    p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
  File "/home/carlos/stable-diffusion-webui/modules/processing.py", line 971, in init
    self.sampler = sd_samplers.create_sampler(self.sampler_name, self.sd_model)
  File "/home/carlos/stable-diffusion-webui/modules/sd_samplers.py", line 25, in create_sampler
    sampler = config.constructor(model)
  File "/home/carlos/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 34, in <lambda>
    sd_samplers_common.SamplerData(label, lambda model, funcname=funcname: KDiffusionSampler(funcname, model), aliases, options)
  File "/home/carlos/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 203, in __init__
    self.model_wrap = denoiser(sd_model, quantize=shared.opts.enable_quantization)
  File "/home/carlos/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 135, in __init__
    super().__init__(model, model.alphas_cumprod, quantize=quantize)
  File "/home/carlos/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 92, in __init__
    super().__init__(((1 - alphas_cumprod) / alphas_cumprod) ** 0.5, quantize)
  File "/home/carlos/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 48, in __init__
    self.register_buffer('log_sigmas', sigmas.log())
RuntimeError: "log_vml_cpu" not implemented for 'Half'

Error completing request
Arguments: ('task(v4ew9gceokapb4e)', 0, '', '', [], <PIL.Image.Image image mode=RGBA size=736x736 at 0x7FA3BFB51D50>, None, None, None, None, None, None, 20, 17, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.75, -1.0, -1.0, 0, 0, 0, False, 512, 512, 0, 0, 32, 0, '', '', '', [], 0, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', 0, '', 0, '', True, False, False, False, 0) {}
Traceback (most recent call last):
  File "/home/carlos/stable-diffusion-webui/modules/call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "/home/carlos/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/home/carlos/stable-diffusion-webui/modules/img2img.py", line 172, in img2img
    processed = process_images(p)
  File "/home/carlos/stable-diffusion-webui/modules/processing.py", line 503, in process_images
    res = process_images_inner(p)
  File "/home/carlos/stable-diffusion-webui/modules/processing.py", line 594, in process_images_inner
    p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
  File "/home/carlos/stable-diffusion-webui/modules/processing.py", line 1056, in init
    self.init_latent = self.sd_model.get_first_stage_encoding(self.sd_model.encode_first_stage(image))
  File "/home/carlos/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "/home/carlos/stable-diffusion-webui/modules/sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "/home/carlos/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/carlos/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 830, in encode_first_stage
    return self.first_stage_model.encode(x)
  File "/home/carlos/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/autoencoder.py", line 83, in encode
    h = self.encoder(x)
  File "/home/carlos/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/carlos/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/model.py", line 523, in forward
    hs = [self.conv_in(x)]
  File "/home/carlos/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/carlos/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 319, in lora_Conv2d_forward
    return torch.nn.Conv2d_forward_before_lora(self, input)
  File "/home/carlos/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/home/carlos/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (float) and bias type (c10::Half) should be the same

Error completing request
Arguments: ('task(7srv0g2jzaegzws)', 0, '', '', [], <PIL.Image.Image image mode=RGBA size=736x736 at 0x7FA3BF1B2FE0>, None, None, None, None, None, None, 20, 1, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.75, -1.0, -1.0, 0, 0, 0, False, 512, 512, 0, 0, 32, 0, '', '', '', [], 0, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', 0, '', 0, '', True, False, False, False, 0) {}
Traceback (most recent call last):
  File "/home/carlos/stable-diffusion-webui/modules/call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "/home/carlos/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/home/carlos/stable-diffusion-webui/modules/img2img.py", line 172, in img2img
    processed = process_images(p)
  File "/home/carlos/stable-diffusion-webui/modules/processing.py", line 503, in process_images
    res = process_images_inner(p)
  File "/home/carlos/stable-diffusion-webui/modules/processing.py", line 594, in process_images_inner
    p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
  File "/home/carlos/stable-diffusion-webui/modules/processing.py", line 971, in init
    self.sampler = sd_samplers.create_sampler(self.sampler_name, self.sd_model)
  File "/home/carlos/stable-diffusion-webui/modules/sd_samplers.py", line 25, in create_sampler
    sampler = config.constructor(model)
  File "/home/carlos/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 34, in <lambda>
    sd_samplers_common.SamplerData(label, lambda model, funcname=funcname: KDiffusionSampler(funcname, model), aliases, options)
  File "/home/carlos/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 203, in __init__
    self.model_wrap = denoiser(sd_model, quantize=shared.opts.enable_quantization)
  File "/home/carlos/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 135, in __init__
    super().__init__(model, model.alphas_cumprod, quantize=quantize)
  File "/home/carlos/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 92, in __init__
    super().__init__(((1 - alphas_cumprod) / alphas_cumprod) ** 0.5, quantize)
  File "/home/carlos/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 48, in __init__
    self.register_buffer('log_sigmas', sigmas.log())
RuntimeError: "log_vml_cpu" not implemented for 'Half'

Error completing request
Arguments: ('task(u3v3oqtmbo5n9hd)', 0, '', '', [], <PIL.Image.Image image mode=RGBA size=736x736 at 0x7FA3BF1DAAD0>, None, None, None, None, None, None, 20, 0, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.75, -1.0, -1.0, 0, 0, 0, False, 512, 512, 0, 0, 32, 0, '', '', '', [], 0, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', 0, '', 0, '', True, False, False, False, 0) {}
Traceback (most recent call last):
  File "/home/carlos/stable-diffusion-webui/modules/call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "/home/carlos/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/home/carlos/stable-diffusion-webui/modules/img2img.py", line 172, in img2img
    processed = process_images(p)
  File "/home/carlos/stable-diffusion-webui/modules/processing.py", line 503, in process_images
    res = process_images_inner(p)
  File "/home/carlos/stable-diffusion-webui/modules/processing.py", line 594, in process_images_inner
    p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
  File "/home/carlos/stable-diffusion-webui/modules/processing.py", line 971, in init
    self.sampler = sd_samplers.create_sampler(self.sampler_name, self.sd_model)
  File "/home/carlos/stable-diffusion-webui/modules/sd_samplers.py", line 25, in create_sampler
    sampler = config.constructor(model)
  File "/home/carlos/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 34, in <lambda>
    sd_samplers_common.SamplerData(label, lambda model, funcname=funcname: KDiffusionSampler(funcname, model), aliases, options)
  File "/home/carlos/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 203, in __init__
    self.model_wrap = denoiser(sd_model, quantize=shared.opts.enable_quantization)
  File "/home/carlos/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 135, in __init__
    super().__init__(model, model.alphas_cumprod, quantize=quantize)
  File "/home/carlos/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 92, in __init__
    super().__init__(((1 - alphas_cumprod) / alphas_cumprod) ** 0.5, quantize)
  File "/home/carlos/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 48, in __init__
    self.register_buffer('log_sigmas', sigmas.log())
RuntimeError: "log_vml_cpu" not implemented for 'Half'

Additional information

It is a server with a xenon CPU.

Lithium0408 commented 1 year ago

Hello, your email has been received. I am unable to reply to it personally right now; I will confirm and reply to you later.

pangbo13 commented 1 year ago

Maybe try --precision full --no-half?
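If it helps, here is a minimal sketch of how those flags could be combined with the arguments already in use, assuming they are set via COMMANDLINE_ARGS before launching as in the issue above. --precision full and --no-half keep every tensor in float32, which the CPU code path needs, since this PyTorch build has no float16 kernels for ops such as log and layer_norm:

# Force full-precision (float32) inference on the CPU.
export COMMANDLINE_ARGS="--skip-torch-cuda-test --precision full --no-half --share"
./webui.sh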

levicki commented 1 year ago

It is a server with a xenon CPU.

There is no such thing as a xenon CPU.

If you meant Intel Xeon, then there are literally hundreds of Xeon variants, so saying "it's a Xeon CPU" doesn't really say anything useful.

Please run the following command:

cat /proc/cpuinfo

And paste the output here as a code block.
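If the full output is too long to paste, a shorter sketch that extracts just the CPU model line (standard grep; prints the first match only):

grep -m1 'model name' /proc/cpuinfo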

Rayregula commented 1 year ago

I am running SD on a pair of Xeon X5650s with the --no-half flag. (This is a system without CUDA.)

Edit: I don't remember the error I got before disabling half, but I can check if it's needed.
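For context, the failure in the OP's logs is reproducible outside the web UI with a one-liner against the same PyTorch build. A sketch, assuming it is run from the stable-diffusion-webui directory so the venv's interpreter is used; newer PyTorch CPU builds implement more float16 kernels and may not reproduce it:

# Create a float16 CPU tensor and call log() on it, as the k-diffusion wrapper
# does for log_sigmas; on this build the CPU log kernel is float32-only.
./venv/bin/python -c 'import torch; torch.ones(4, dtype=torch.float16).log()'
# -> RuntimeError: "log_vml_cpu" not implemented for 'Half'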

levicki commented 1 year ago

@Rayregula Frankly, I don't understand what you expect to accomplish by trying to run this cutting-edge software on a 13-year-old CPU.

It doesn't even support the AVX2 instruction set extension, and most (if not all) 64-bit software nowadays (at least on Windows; I'm not sure about Linux) is compiled with AVX2 enabled.

Even if that isn't a problem and you somehow manage to get it to run, the resulting performance will probably be abysmal.
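For reference, whether a Linux box advertises AVX2 can be checked with a one-liner (a sketch using standard grep against /proc/cpuinfo):

grep -q avx2 /proc/cpuinfo && echo "AVX2 supported" || echo "AVX2 not supported"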

zarigata commented 1 year ago

@Rayregula It worked, kind of, and it helped: I can at least show that the web UI works in some form, and maybe demonstrate what @levicki said. It is for a company that wants to make images out of text so the editors have a base for what to draw. But my boss is the kind of person who asks why a server can't run a CUDA application, because he is a master of everything. I just needed confirmation that I wasn't doing anything wrong. Also, I have the same CPU as you, as far as I remember; I can't check right now because the servers are on standby to be upgraded, so they are offline.

levicki commented 1 year ago

But my boss is the kind of person who asks why a server can't run a CUDA application...

My condolences to anyone who has to deal with such people -- they are horribly ignorant and happy to stay that way because what they don't know can't be blamed on them.

Rayregula commented 1 year ago

@Rayregula Frankly, I don't understand what you expect to accomplish by trying to run this cutting-edge software on a 13-year-old CPU.

It doesn't even support the AVX2 instruction set extension, and most (if not all) 64-bit software nowadays (at least on Windows; I'm not sure about Linux) is compiled with AVX2 enabled.

Even if that isn't a problem and you somehow manage to get it to run, the resulting performance will probably be abysmal.

I don't understand your comment. I am not trying to get it running; I have been running it without issue for months. (This is not my thread; I was just confirming to the OP that it can run on an older CPU.)

I run it on this hardware because I don't have a GPU to use with it. My desktop does have a 1050 Ti (I can't afford to upgrade yet), but I hit that card hard enough and often enough, with everything from games to Unreal Engine and Blender rendering, that even if I could find a way to load the models into RAM instead of VRAM while still using CUDA, it's easier to just keep it on a different system than my "production" machine.

But yes, it is quite slow. (I am running on Linux, so AVX2 isn't an issue, if it would otherwise have been one.)

Rayregula commented 1 year ago

@Rayregula It worked, kind of, and it helped: I can at least show that the web UI works in some form, and maybe demonstrate what @levicki said. It is for a company that wants to make images out of text so the editors have a base for what to draw. But my boss is the kind of person who asks why a server can't run a CUDA application, because he is a master of everything. I just needed confirmation that I wasn't doing anything wrong. Also, I have the same CPU as you, as far as I remember; I can't check right now because the servers are on standby to be upgraded, so they are offline.

Glad to hear it!