ForserX / StableDiffusionUI

Stable Diffusion UI: Diffusers (CUDA/ONNX)
https://discord.gg/HMG82cYNrA
GNU General Public License v3.0

How to fix this error #22

Closed. CHETHAN562 closed this issue 1 year ago.

CHETHAN562 commented 1 year ago

Host started...

RTL be like: 22621.10.0
Name - NVIDIA GeForce GTX 1650
DeviceID - VideoController1
AdapterRAM - 4293918720
AdapterDACType - Integrated RAMDAC
Monochrome - False
DriverVersion - 31.0.15.3179
VideoProcessor - NVIDIA GeForce GTX 1650
VideoArchitecture - 5
VideoMemoryType - 2

${Workspace}\repo\cuda.venv\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.

You have passed a non-standard module StableDiffusionSafetyChecker(
  (vision_model): CLIPVisionModel(
    (vision_model): CLIPVisionTransformer(
      (embeddings): CLIPVisionEmbeddings(
        (patch_embedding): Conv2d(3, 1024, kernel_size=(14, 14), stride=(14, 14), bias=False)
        (position_embedding): Embedding(257, 1024)
      )
      (pre_layrnorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
      (encoder): CLIPEncoder(
        (layers): ModuleList(
          (0-23): 24 x CLIPEncoderLayer(
            (self_attn): CLIPAttention(
              (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
              (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
              (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
              (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
            )
            (layer_norm1): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
            (mlp): CLIPMLP(
              (activation_fn): QuickGELUActivation()
              (fc1): Linear(in_features=1024, out_features=4096, bias=True)
              (fc2): Linear(in_features=4096, out_features=1024, bias=True)
            )
            (layer_norm2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          )
        )
      )
      (post_layernorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
    )
  )
  (visual_projection): Linear(in_features=1024, out_features=768, bias=False)
). We cannot verify whether it has the correct type

Current device: cuda
txt2img
Traceback (most recent call last):
  File "${Workspace}\repo\diffusion_scripts\sd_cuda_safe.py", line 24, in <module>
    pipe.to(PipeDevice.device, fptype)
  File "${Workspace}\repo\cuda.venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 670, in to
    module.to(torch_device, torch_dtype)
  File "${Workspace}\repo\cuda.venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "${Workspace}\repo\cuda.venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "${Workspace}\repo\cuda.venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "${Workspace}\repo\cuda.venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 3 more times]
  File "${Workspace}\repo\cuda.venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "${Workspace}\repo\cuda.venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)

You have passed a non-standard module StableDiffusionSafetyChecker(
  (vision_model): CLIPVisionModel(
    (vision_model): CLIPVisionTransformer(
      (embeddings): CLIPVisionEmbeddings(
        (patch_embedding): Conv2d(3, 1024, kernel_size=(14, 14), stride=(14, 14), bias=False)
        (position_embedding): Embedding(257, 1024)
      )
      (pre_layrnorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
      (encoder): CLIPEncoder(
        (layers): ModuleList(
          (0-23): 24 x CLIPEncoderLayer(
            (self_attn): CLIPAttention(
              (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
              (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
              (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
              (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
            )
            (layer_norm1): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
            (mlp): CLIPMLP(
              (activation_fn): QuickGELUActivation()
              (fc1): Linear(in_features=1024, out_features=4096, bias=True)
              (fc2): Linear(in_features=4096, out_features=1024, bias=True)
            )
            (layer_norm2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          )
        )
      )
      (post_layernorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
    )
  )
  (visual_projection): Linear(in_features=1024, out_features=768, bias=False)
). We cannot verify whether it has the correct type

${Workspace}\repo\cuda.venv\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.

Current device: cuda
txt2img
Traceback (most recent call last):
  File "${Workspace}\repo\diffusion_scripts\sd_cuda_safe.py", line 24, in <module>
    pipe.to(PipeDevice.device, fptype)
  File "${Workspace}\repo\cuda.venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 670, in to
    module.to(torch_device, torch_dtype)
  File "${Workspace}\repo\cuda.venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "${Workspace}\repo\cuda.venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "${Workspace}\repo\cuda.venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "${Workspace}\repo\cuda.venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 2 more times]
  File "${Workspace}\repo\cuda.venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "${Workspace}\repo\cuda.venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
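For readers following the trace: the failure happens inside pipe.to(PipeDevice.device, fptype), i.e. while PyTorch moves the pipeline's weights onto the GPU. The final exception line is clipped from the paste above, but on a 4 GB card like the GTX 1650 this is typically the step where a CUDA out-of-memory error surfaces. A minimal sketch of the equivalent diffusers call, assuming the standard diffusers API (the model id and variable names are illustrative, not the repo's actual code):

```python
# Sketch of the failing step, assuming the standard diffusers API.
# "runwayml/stable-diffusion-v1-5" is an illustrative model id, not
# necessarily the model the reporter used.
import torch
from diffusers import StableDiffusionPipeline

fptype = torch.float16  # float32 would roughly double the VRAM required
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=fptype,
)
pipe = pipe.to("cuda", fptype)  # the call that raises in the log above
```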

ForserX commented 1 year ago

@Borshig

ForserX commented 1 year ago

Do you use FP16 or FP32?
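Background for the question: FP16 roughly halves the VRAM the weights occupy, which matters on a 4 GB GTX 1650. Independent of the dtype choice, diffusers also exposes memory-saving switches. A sketch, assuming a reasonably recent diffusers build (and the accelerate package for the offload call), not this repo's actual settings:

```python
# Optional VRAM savers for small GPUs; all are standard diffusers
# pipeline methods, shown here as a sketch.
pipe.enable_attention_slicing()   # compute attention in smaller chunks
pipe.enable_vae_slicing()         # decode images through the VAE in slices
pipe.enable_model_cpu_offload()   # park idle submodules in system RAM (needs accelerate)
```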

Borshig commented 1 year ago

Can you describe the situation? What did you do?

CHETHAN562 commented 1 year ago

I think I used FP16, with the realisticvision 1.3 model and its default sampler. While converting the model to ONNX, it reported an inferencing key missing from the model. I used Python 3.10.9 and CUDA 12.
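For reference, the error described here occurred in the repo's own model-to-ONNX conversion, whose internals are not shown in this thread. As a generic alternative, Hugging Face optimum can export a Stable Diffusion checkpoint to ONNX; a sketch, where the hub id is a guess at the model named above:

```python
# Generic Stable Diffusion -> ONNX export via Hugging Face optimum.
# This is NOT this repo's converter; it is a sketch of one known-good path.
from optimum.onnxruntime import ORTStableDiffusionPipeline

pipe = ORTStableDiffusionPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V1.3",  # assumed hub id for "realisticvision 1.3"
    export=True,                       # convert the PyTorch weights to ONNX
)
pipe.save_pretrained("./realistic-vision-onnx")  # write the ONNX pipeline to disk
```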