Stability-AI / stablediffusion

High-Resolution Image Synthesis with Latent Diffusion Models
MIT License

Getting error on run #132

Open · Ccode-lang opened 1 year ago

Ccode-lang commented 1 year ago
Global seed set to 64
Loading model from v2-1_768-ema-pruned.ckpt
Global Step: 110000
No module 'xformers'. Proceeding without it.
LatentDiffusion: Running in v-prediction mode
DiffusionWrapper has 865.91 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Creating invisible watermark encoder (see https://github.com/ShieldMnt/invisible-watermark)...
Sampling:   0%|          | 0/1 [00:00<?, ?it/s]
Data shape for PLMS sampling is (1, 4, 64, 64)
Running PLMS Sampling with 50 timesteps
PLMS Sampler:   0%|                                                                             | 0/50 [00:00<?, ?it/s]
data:   0%|                                                                                      | 0/1 [00:02<?, ?it/s]
Sampling:   0%|                                                                                  | 0/1 [00:02<?, ?it/s]
Traceback (most recent call last):
  File "scripts/txt2img.py", line 289, in <module>
    main(opt)
  File "scripts/txt2img.py", line 248, in main
    samples, _ = sampler.sample(S=opt.steps,
  File "C:\Users\Cooper Lynn\.conda\envs\ldm2\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "c:\users\cooper lynn\stablediffusion\ldm\models\diffusion\plms.py", line 99, in sample
    samples, intermediates = self.plms_sampling(conditioning, size,
  File "C:\Users\Cooper Lynn\.conda\envs\ldm2\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "c:\users\cooper lynn\stablediffusion\ldm\models\diffusion\plms.py", line 156, in plms_sampling
    outs = self.p_sample_plms(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,
  File "C:\Users\Cooper Lynn\.conda\envs\ldm2\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "c:\users\cooper lynn\stablediffusion\ldm\models\diffusion\plms.py", line 226, in p_sample_plms
    e_t = get_model_output(x, t)
  File "c:\users\cooper lynn\stablediffusion\ldm\models\diffusion\plms.py", line 191, in get_model_output
    e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in).chunk(2)
  File "c:\users\cooper lynn\stablediffusion\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "C:\Users\Cooper Lynn\.conda\envs\ldm2\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "c:\users\cooper lynn\stablediffusion\ldm\models\diffusion\ddpm.py", line 1329, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "C:\Users\Cooper Lynn\.conda\envs\ldm2\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "c:\users\cooper lynn\stablediffusion\ldm\modules\diffusionmodules\openaimodel.py", line 776, in forward
    h = module(h, emb, context)
  File "C:\Users\Cooper Lynn\.conda\envs\ldm2\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "c:\users\cooper lynn\stablediffusion\ldm\modules\diffusionmodules\openaimodel.py", line 86, in forward
    x = layer(x)
  File "C:\Users\Cooper Lynn\.conda\envs\ldm2\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Cooper Lynn\.conda\envs\ldm2\lib\site-packages\torch\nn\modules\conv.py", line 457, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "C:\Users\Cooper Lynn\.conda\envs\ldm2\lib\site-packages\torch\nn\modules\conv.py", line 453, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.cuda.HalfTensor) and weight type (torch.cuda.FloatTensor) should be the same

Not really sure what is wrong here.
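
The last frame of the traceback is the actual problem: `F.conv2d` is handed a float16 input (`torch.cuda.HalfTensor`) while the conv weights are still float32 (`torch.cuda.FloatTensor`), i.e. the latents were cast to half precision but the UNet weights were not. Here is a minimal sketch of the mismatch and the two usual fixes, assuming a CUDA device is available; the layer shape below is illustrative, not the repo's actual model:

```python
import torch
import torch.nn as nn

# Illustrative stand-in for one UNet conv layer; weights load in float32.
conv = nn.Conv2d(4, 320, kernel_size=3, padding=1).cuda()
x = torch.randn(1, 4, 64, 64, device="cuda").half()  # latents cast to float16

try:
    conv(x)
except RuntimeError as e:
    # RuntimeError: Input type (torch.cuda.HalfTensor) and weight type
    # (torch.cuda.FloatTensor) should be the same
    print(e)

# Fix 1: cast the model to half precision so input and weights match.
out = conv.half()(x)

# Fix 2: keep float32 weights and let autocast manage the mixed precision.
conv = conv.float()
with torch.autocast("cuda"):
    out = conv(x.float())
```

If `scripts/txt2img.py` exposes a `--precision` flag with a `full` choice (as the CompVis-lineage scripts do), running with `--precision full` is the command-line equivalent of the second fix, but check the script's argparse to confirm.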

Ankit2527 commented 1 year ago

Hi @Ccode-lang did you manage to resolve the issue? I am also facing the same error.

Ccode-lang commented 1 year ago

I used transformers and then it was fixed.
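
Note that the startup log above printed "No module 'xformers'. Proceeding without it.", so "transformers" here may actually mean xformers. A quick way to check whether xformers imports in the ldm2 environment, sketched after the optional-import guard that log line suggests:

```python
# Run inside the ldm2 conda env to see which branch the attention code would take.
try:
    import xformers
    import xformers.ops  # the ops submodule is what attention code typically uses
    print(f"xformers {xformers.__version__} available")
except ImportError:
    print("No module 'xformers'. Proceeding without it.")
```

If the import fails, `pip install xformers` in the same environment should at least remove the fallback message; whether that alone clears the HalfTensor/FloatTensor mismatch is worth re-testing.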

Ccode-lang commented 1 year ago

I still need to check whether the error persists.

UTimeStrange commented 1 year ago

same issue