csslc / CCSR

Official codes of CCSR: Improving the Stability of Diffusion Models for Content Consistent Super-Resolution
https://csslc.github.io/project-CCSR/

Preset picture test error #30

Open 12qew opened 2 months ago

12qew commented 2 months ago

I tested with the pictures under your Preset folder, and the program run produced the following errors. What is the reason? Thank you for your reply.

Seed set to 233
using device cuda
ControlLDM: Running in eps-prediction mode
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
DiffusionWrapper has 865.91 M params.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
/home/aaa/anaconda3/envs/ccsr/lib/python3.9/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
  warnings.warn(
/home/aaa/anaconda3/envs/ccsr/lib/python3.9/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing weights=VGG16_Weights.IMAGENET1K_V1. You can also use weights=VGG16_Weights.DEFAULT to get the most up-to-date weights.
  warnings.warn(msg)
loaded pretrained LPIPS loss from taming/modules/autoencoder/lpips/vgg.pth
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
timesteps used in spaced sampler: [0, 23, 45, 68, 91, 114, 136, 159, 182, 204, 227, 250, 272, 295, 318, 341, 363, 386, 409, 431, 454, 477, 499, 522, 545, 568, 590, 613, 636, 658, 681, 704, 727, 749, 772, 795, 817, 840, 863, 885, 908, 931, 954, 976, 999]
Spaced Sampler: 0%| | 0/45 [00:00<?, ?it/s]
WARNING:xformers:Blocksparse is not available: the current GPU does not expose Tensor cores
Traceback (most recent call last):
  File "/home/aaa/Diffusion/CCSR-main/inference_ccsr.py", line 213, in <module>
    main()
  File "/home/aaa/Diffusion/CCSR-main/inference_ccsr.py", line 193, in main
    preds = process(
  File "/home/aaa/anaconda3/envs/ccsr/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/aaa/Diffusion/CCSR-main/inference_ccsr.py", line 69, in process
    samples = sampler.sample_ccsr(
  File "/home/aaa/anaconda3/envs/ccsr/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/aaa/Diffusion/CCSR-main/model/q_sampler.py", line 1010, in sample_ccsr
    "c_crossattn": [self.model.get_learned_conditioning([positive_prompt] * b)]
  File "/home/aaa/Diffusion/CCSR-main/ldm/models/diffusion/ddpm_ccsr_stage2.py", line 812, in get_learned_conditioning
    c = self.cond_stage_model.encode(c)
  File "/home/aaa/Diffusion/CCSR-main/ldm/modules/encoders/modules.py", line 195, in encode
    return self(text)
  File "/home/aaa/anaconda3/envs/ccsr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/aaa/Diffusion/CCSR-main/ldm/modules/encoders/modules.py", line 172, in forward
    z = self.encode_with_transformer(tokens.to(next(self.model.parameters()).device))
  File "/home/aaa/Diffusion/CCSR-main/ldm/modules/encoders/modules.py", line 179, in encode_with_transformer
    x = self.text_transformer_forward(x, attn_mask=self.model.attn_mask)
  File "/home/aaa/Diffusion/CCSR-main/ldm/modules/encoders/modules.py", line 191, in text_transformer_forward
    x = r(x, attn_mask=attn_mask)
  File "/home/aaa/anaconda3/envs/ccsr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/aaa/anaconda3/envs/ccsr/lib/python3.9/site-packages/open_clip/transformer.py", line 263, in forward
    x = q_x + self.ls_1(self.attention(q_x=self.ln_1(q_x), k_x=k_x, v_x=v_x, attn_mask=attn_mask))
  File "/home/aaa/anaconda3/envs/ccsr/lib/python3.9/site-packages/open_clip/transformer.py", line 250, in attention
    return self.attn(
  File "/home/aaa/anaconda3/envs/ccsr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/aaa/anaconda3/envs/ccsr/lib/python3.9/site-packages/torch/nn/modules/activation.py", line 1158, in forward
    merged_mask, mask_type = self.merge_masks(attn_mask, key_padding_mask, query)
  File "/home/aaa/anaconda3/envs/ccsr/lib/python3.9/site-packages/torch/nn/modules/activation.py", line 1264, in merge_masks
    attn_mask_expanded = attn_mask.view(1, 1, seq_len, seq_len).expand(batch_size, self.num_heads, -1, -1)
RuntimeError: shape '[1, 1, 1, 1]' is invalid for input of size 5929
Spaced Sampler: 0%| | 0/45 [00:02<?, ?it/s]
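For context on the error: 5929 = 77 × 77, which matches OpenCLIP's 77-token causal text attention mask, while the view() call inside torch's MultiheadAttention.merge_masks is using a sequence length of 1. That usually points to a version mismatch between the installed torch and open_clip packages (how the text transformer's query/mask shapes are interpreted) rather than to the preset images themselves. A minimal sketch that reproduces only the failing reshape, assuming the standard 77-token CLIP mask and the seq_len value implied by the error message:

```python
import torch

# Hypothetical reproduction of the reshape that fails in merge_masks.
# Assumptions: the mask is OpenCLIP's 77x77 causal text mask (77 * 77 = 5929),
# and the attention module infers seq_len = 1 from the query in the failing run.
attn_mask = torch.full((77, 77), float("-inf")).triu(1)  # upper-triangular causal mask
seq_len = 1
try:
    attn_mask.view(1, 1, seq_len, seq_len)
except RuntimeError as e:
    print(e)  # shape '[1, 1, 1, 1]' is invalid for input of size 5929
```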

xiutian51 commented 1 month ago

I am also having the same issue.

xunfeng1980 commented 1 month ago

+1

SIM-TOO commented 3 weeks ago

I resolved the issue by upgrading to CUDA version 12.5:

conda install pytorch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 pytorch-cuda=12.1 -c pytorch -c nvidia
export CUDA_HOME=/usr/local/cuda-12.5
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
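A quick way to confirm the new environment is actually being picked up before rerunning inference; the commented values are only what the conda command above should produce, not guarantees:

```python
import torch

# Sanity check after reinstalling; expected values are assumptions based on
# the conda command above (PyTorch 2.3.1 built against CUDA 12.1).
print(torch.__version__)           # e.g. 2.3.1
print(torch.version.cuda)          # e.g. 12.1 (the toolkit this PyTorch build ships with)
print(torch.cuda.is_available())   # should be True if the driver supports it
```

Note that pytorch-cuda=12.1 installs a PyTorch build that bundles its own CUDA runtime; the CUDA_HOME/PATH exports pointing at 12.5 mainly matter when compiling extensions such as xformers against the local toolkit.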