lllyasviel / Fooocus

Focus on prompting and generating
GNU General Public License v3.0

RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. #184

Closed. ZeroCool22 closed this issue 1 year ago.

ZeroCool22 commented 1 year ago
C:\Users\ZeroCool22\Desktop\FS>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Already up-to-date
Update succeeded.
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 1.0.33
Inference Engine exists.
Inference Engine checkout finished.
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Total VRAM 11264 MB, total RAM 32680 MB
xformers version: 0.0.20
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce GTX 1080 Ti : cudaMallocAsync
Using xformers cross attention
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 2048 and using 10 heads.
[... identical "Setting up MemoryEfficientCrossAttention" lines (query dims 640 and 1280) repeated many times; trimmed for brevity ...]
model_type EPS
adm 2816
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
missing {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Base model loaded: sd_xl_base_1.0_0.9vae.safetensors
Setting up MemoryEfficientCrossAttention. Query dim is 768, context_dim is None and using 12 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 768, context_dim is 1280 and using 12 heads.
[... identical setup lines (query dims 768 and 1536) repeated many times; trimmed for brevity ...]
Exception in thread Thread-2 (worker):
Traceback (most recent call last):
  File "threading.py", line 1016, in _bootstrap_inner
  File "threading.py", line 953, in run
  File "C:\Users\ZeroCool22\Desktop\FS\Fooocus\modules\async_worker.py", line 14, in worker
    import modules.default_pipeline as pipeline
  File "C:\Users\ZeroCool22\Desktop\FS\Fooocus\modules\default_pipeline.py", line 102, in <module>
    refresh_refiner_model(modules.path.default_refiner_model_name)
  File "C:\Users\ZeroCool22\Desktop\FS\Fooocus\modules\default_pipeline.py", line 66, in refresh_refiner_model
    xl_refiner = core.load_model(filename)
  File "C:\Users\ZeroCool22\Desktop\FS\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\ZeroCool22\Desktop\FS\Fooocus\modules\core.py", line 41, in load_model
    unet, clip, vae, clip_vision = load_checkpoint_guess_config(ckpt_filename)
  File "C:\Users\ZeroCool22\Desktop\FS\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\sd.py", line 1200, in load_checkpoint_guess_config
    model = model_config.get_model(sd, "model.diffusion_model.", device=offload_device)
  File "C:\Users\ZeroCool22\Desktop\FS\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\supported_models.py", line 113, in get_model
    return model_base.SDXLRefiner(self, device=device)
  File "C:\Users\ZeroCool22\Desktop\FS\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\model_base.py", line 152, in __init__
    super().__init__(model_config, model_type, device=device)
  File "C:\Users\ZeroCool22\Desktop\FS\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\model_base.py", line 22, in __init__
    self.diffusion_model = UNetModel(**unet_config, device=device)
  File "C:\Users\ZeroCool22\Desktop\FS\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 492, in __init__
    ResBlock(
  File "C:\Users\ZeroCool22\Desktop\FS\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 174, in __init__
    conv_nd(dims, channels, self.out_channels, 3, padding=1, dtype=dtype, device=device),
  File "C:\Users\ZeroCool22\Desktop\FS\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\ldm\modules\diffusionmodules\util.py", line 236, in conv_nd
    return comfy.ops.Conv2d(*args, **kwargs)
  File "C:\Users\ZeroCool22\Desktop\FS\python_embeded\lib\site-packages\torch\nn\modules\conv.py", line 450, in __init__
    super().__init__(
  File "C:\Users\ZeroCool22\Desktop\FS\python_embeded\lib\site-packages\torch\nn\modules\conv.py", line 137, in __init__
    self.weight = Parameter(torch.empty(
RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 42467328 bytes.

PC SPECS:

Win 10, 5900X, 32 GB RAM, GTX 1080 Ti (11 GB VRAM)

How much memory does it want, lol...
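
For what it's worth, the failing call is just `Parameter(torch.empty(...))` on the CPU, so the limit being hit here is most likely the Windows commit limit (physical RAM plus pagefile) rather than VRAM. A minimal sketch to check this, assuming the `psutil` package is available alongside the bundled `torch` (`NEEDED_BYTES` is just the size from the traceback above):

```python
# Minimal sketch (assumes psutil is installed). torch.empty() on the CPU
# counts against the Windows commit limit (RAM + pagefile), not against VRAM.
import psutil
import torch

NEEDED_BYTES = 42_467_328  # size reported in the traceback above

vm = psutil.virtual_memory()
swap = psutil.swap_memory()
print(f"available RAM:      {vm.available / 2**20:,.0f} MB")
print(f"free swap/pagefile: {swap.free / 2**20:,.0f} MB")

try:
    # Same allocation path as Parameter(torch.empty(...)) in the traceback.
    buf = torch.empty(NEEDED_BYTES, dtype=torch.uint8)
    print("allocation succeeded")
except RuntimeError as exc:
    print("allocation failed:", exc)
```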

lllyasviel commented 1 year ago

can you try this? https://user-images.githubusercontent.com/19834515/260322660-2a06b130-fe9b-4504-94f1-2763be4476e9.png

ZeroCool22 commented 1 year ago

can you try this? https://user-images.githubusercontent.com/19834515/260322660-2a06b130-fe9b-4504-94f1-2763be4476e9.png

Same error:

C:\Users\ZeroCool22\Desktop\FS>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Fast-forward merge
Update succeeded.
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 1.0.35
Inference Engine exists.
Inference Engine checkout finished.
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Total VRAM 11264 MB, total RAM 32680 MB
xformers version: 0.0.20
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce GTX 1080 Ti : cudaMallocAsync
Using xformers cross attention
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 2048 and using 10 heads.
[... identical "Setting up MemoryEfficientCrossAttention" lines (query dims 640 and 1280) repeated many times; trimmed for brevity ...]
model_type EPS
adm 2816
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
missing {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Base model loaded: sd_xl_base_1.0_0.9vae.safetensors
Setting up MemoryEfficientCrossAttention. Query dim is 768, context_dim is None and using 12 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 768, context_dim is 1280 and using 12 heads.
[... identical setup lines (query dims 768 and 1536) repeated many times; trimmed for brevity ...]
model_type EPS
adm 2560
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Exception in thread Thread-2 (worker):
Traceback (most recent call last):
  File "threading.py", line 1016, in _bootstrap_inner
  File "threading.py", line 953, in run
  File "C:\Users\ZeroCool22\Desktop\FS\Fooocus\modules\async_worker.py", line 14, in worker
    import modules.default_pipeline as pipeline
  File "C:\Users\ZeroCool22\Desktop\FS\Fooocus\modules\default_pipeline.py", line 102, in <module>
    refresh_refiner_model(modules.path.default_refiner_model_name)
  File "C:\Users\ZeroCool22\Desktop\FS\Fooocus\modules\default_pipeline.py", line 66, in refresh_refiner_model
    xl_refiner = core.load_model(filename)
  File "C:\Users\ZeroCool22\Desktop\FS\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\ZeroCool22\Desktop\FS\Fooocus\modules\core.py", line 41, in load_model
    unet, clip, vae, clip_vision = load_checkpoint_guess_config(ckpt_filename)
  File "C:\Users\ZeroCool22\Desktop\FS\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\sd.py", line 1212, in load_checkpoint_guess_config
    clip = CLIP(clip_target, embedding_directory=embedding_directory)
  File "C:\Users\ZeroCool22\Desktop\FS\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\sd.py", line 518, in __init__
    self.cond_stage_model = clip(**(params))
  File "C:\Users\ZeroCool22\Desktop\FS\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\sdxl_clip.py", line 75, in __init__
    self.clip_g = SDXLClipG(device=device)
  File "C:\Users\ZeroCool22\Desktop\FS\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\sdxl_clip.py", line 12, in __init__
    super().__init__(device=device, freeze=freeze, layer=layer, layer_idx=layer_idx, textmodel_json_config=textmodel_json_config, textmodel_path=textmodel_path)
  File "C:\Users\ZeroCool22\Desktop\FS\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\sd1_clip.py", line 59, in __init__
    self.transformer = CLIPTextModel(config)
  File "C:\Users\ZeroCool22\Desktop\FS\python_embeded\lib\site-packages\transformers\models\clip\modeling_clip.py", line 782, in __init__
    self.text_model = CLIPTextTransformer(config)
  File "C:\Users\ZeroCool22\Desktop\FS\python_embeded\lib\site-packages\transformers\models\clip\modeling_clip.py", line 699, in __init__
    self.embeddings = CLIPTextEmbeddings(config)
  File "C:\Users\ZeroCool22\Desktop\FS\python_embeded\lib\site-packages\transformers\models\clip\modeling_clip.py", line 209, in __init__
    self.token_embedding = nn.Embedding(config.vocab_size, embed_dim)
  File "C:\Users\ZeroCool22\Desktop\FS\python_embeded\lib\site-packages\torch\nn\modules\sparse.py", line 142, in __init__
    self.weight = Parameter(torch.empty((num_embeddings, embedding_dim), **factory_kwargs),
RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 252968960 bytes.
hanetyb commented 1 year ago

I raised my memory to 24 GB but had the same issue. I manually set swap memory on the D: drive and that resolved it, please try. BTW, you should set the swap memory on the same drive where Fooocus is located.
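
If you change the pagefile this way, you can confirm the commit limit actually went up with a short check against the Win32 `GlobalMemoryStatusEx` call. A minimal, Windows-only sketch (struct fields per the documented `MEMORYSTATUSEX` layout):

```python
# Minimal Windows-only sketch: read the system commit limit (RAM + all
# pagefiles) via GlobalMemoryStatusEx. If ullAvailPageFile is smaller than
# the allocation in the traceback, the DefaultCPUAllocator error is expected.
import ctypes

class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [
        ("dwLength", ctypes.c_ulong),
        ("dwMemoryLoad", ctypes.c_ulong),
        ("ullTotalPhys", ctypes.c_ulonglong),
        ("ullAvailPhys", ctypes.c_ulonglong),
        ("ullTotalPageFile", ctypes.c_ulonglong),
        ("ullAvailPageFile", ctypes.c_ulonglong),
        ("ullTotalVirtual", ctypes.c_ulonglong),
        ("ullAvailVirtual", ctypes.c_ulonglong),
        ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
    ]

status = MEMORYSTATUSEX()
status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))
print(f"commit limit (RAM + pagefile): {status.ullTotalPageFile / 2**30:.1f} GiB")
print(f"commit still available:        {status.ullAvailPageFile / 2**30:.1f} GiB")
```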

lllyasviel commented 1 year ago

I raised my memory to 24 GB but had the same issue. I manually set swap memory on the D: drive and that resolved it, please try. BTW, you should set the swap memory on the same drive where Fooocus is located.

oh does that matter?

fillsok commented 1 year ago

can you try this? https://user-images.githubusercontent.com/19834515/260322660-2a06b130-fe9b-4504-94f1-2763be4476e9.png

I followed your settings and it still says CPU memory is insufficient. 16 GB RAM, 3060 Ti (8 GB VRAM).

model_type EPS
adm 2560
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Exception in thread Thread-2 (worker):
Traceback (most recent call last):
  File "threading.py", line 1016, in _bootstrap_inner
  File "threading.py", line 953, in run
  File "D:\Fooocus_win64_1-1-10\Fooocus\modules\async_worker.py", line 14, in worker
    import modules.default_pipeline as pipeline
  File "D:\Fooocus_win64_1-1-10\Fooocus\modules\default_pipeline.py", line 102, in <module>
    refresh_refiner_model(modules.path.default_refiner_model_name)
  File "D:\Fooocus_win64_1-1-10\Fooocus\modules\default_pipeline.py", line 66, in refresh_refiner_model
    xl_refiner = core.load_model(filename)
  File "D:\Fooocus_win64_1-1-10\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Fooocus_win64_1-1-10\Fooocus\modules\core.py", line 41, in load_model
    unet, clip, vae, clip_vision = load_checkpoint_guess_config(ckpt_filename)
  File "D:\Fooocus_win64_1-1-10\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\sd.py", line 1204, in load_checkpoint_guess_config
    vae = VAE()
  File "D:\Fooocus_win64_1-1-10\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\sd.py", line 585, in __init__
    self.first_stage_model = AutoencoderKL(ddconfig, {'target': 'torch.nn.Identity'}, 4, monitor="val/rec_loss")
  File "D:\Fooocus_win64_1-1-10\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\ldm\models\autoencoder.py", line 30, in __init__
    self.decoder = Decoder(**ddconfig)
  File "D:\Fooocus_win64_1-1-10\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\ldm\modules\diffusionmodules\model.py", line 693, in __init__
    up.upsample = Upsample(block_in, resamp_with_conv)
  File "D:\Fooocus_win64_1-1-10\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\ldm\modules\diffusionmodules\model.py", line 52, in __init__
    self.conv = comfy.ops.Conv2d(in_channels,
  File "D:\Fooocus_win64_1-1-10\python_embeded\lib\site-packages\torch\nn\modules\conv.py", line 450, in __init__
    super().__init__(
  File "D:\Fooocus_win64_1-1-10\python_embeded\lib\site-packages\torch\nn\modules\conv.py", line 137, in __init__
    self.weight = Parameter(torch.empty(
RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 9437184 bytes.

lllyasviel commented 1 year ago

how much free space do you have on your hard disk?

fillsok commented 1 year ago

how much free space do you have on your hard disk?

~500 GB

ZeroCool22 commented 1 year ago

how much free space do you have on your hard disk?

[Screenshot_5 and Screenshot_4: attached screenshots showing drive free space]

Fooocus is on the C: SSD.

lllyasviel commented 1 year ago

I will take a look at this later. This is unexpected; these steps should have worked.

mryuze commented 1 year ago

can you try this? https://user-images.githubusercontent.com/19834515/260322660-2a06b130-fe9b-4504-94f1-2763be4476e9.png

Same error:

C:\Users\ZeroCool22\Desktop\FS>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Fast-forward merge
Update succeeded.
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 1.0.35
Inference Engine exists.
Inference Engine checkout finished.
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Total VRAM 11264 MB, total RAM 32680 MB
xformers version: 0.0.20
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce GTX 1080 Ti : cudaMallocAsync
Using xformers cross attention
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 2048 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 2048 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 2048 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 2048 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 2048 and using 20 heads.
[... the pair of lines above repeats 60 times in total ...]
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 2048 and using 10 heads.
[... the pair of lines above repeats 6 times in total ...]
model_type EPS
adm 2816
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
missing {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Base model loaded: sd_xl_base_1.0_0.9vae.safetensors
Setting up MemoryEfficientCrossAttention. Query dim is 768, context_dim is None and using 12 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 768, context_dim is 1280 and using 12 heads.
[... the pair of lines above repeats 8 times in total ...]
Setting up MemoryEfficientCrossAttention. Query dim is 1536, context_dim is None and using 24 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1536, context_dim is 1280 and using 24 heads.
[... the pair of lines above repeats 24 times in total ...]
Setting up MemoryEfficientCrossAttention. Query dim is 768, context_dim is None and using 12 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 768, context_dim is 1280 and using 12 heads.
[... the pair of lines above repeats 12 times in total ...]
model_type EPS
adm 2560
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Exception in thread Thread-2 (worker):
Traceback (most recent call last):
  File "threading.py", line 1016, in _bootstrap_inner
  File "threading.py", line 953, in run
  File "C:\Users\ZeroCool22\Desktop\FS\Fooocus\modules\async_worker.py", line 14, in worker
    import modules.default_pipeline as pipeline
  File "C:\Users\ZeroCool22\Desktop\FS\Fooocus\modules\default_pipeline.py", line 102, in <module>
    refresh_refiner_model(modules.path.default_refiner_model_name)
  File "C:\Users\ZeroCool22\Desktop\FS\Fooocus\modules\default_pipeline.py", line 66, in refresh_refiner_model
    xl_refiner = core.load_model(filename)
  File "C:\Users\ZeroCool22\Desktop\FS\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\ZeroCool22\Desktop\FS\Fooocus\modules\core.py", line 41, in load_model
    unet, clip, vae, clip_vision = load_checkpoint_guess_config(ckpt_filename)
  File "C:\Users\ZeroCool22\Desktop\FS\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\sd.py", line 1212, in load_checkpoint_guess_config
    clip = CLIP(clip_target, embedding_directory=embedding_directory)
  File "C:\Users\ZeroCool22\Desktop\FS\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\sd.py", line 518, in __init__
    self.cond_stage_model = clip(**(params))
  File "C:\Users\ZeroCool22\Desktop\FS\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\sdxl_clip.py", line 75, in __init__
    self.clip_g = SDXLClipG(device=device)
  File "C:\Users\ZeroCool22\Desktop\FS\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\sdxl_clip.py", line 12, in __init__
    super().__init__(device=device, freeze=freeze, layer=layer, layer_idx=layer_idx, textmodel_json_config=textmodel_json_config, textmodel_path=textmodel_path)
  File "C:\Users\ZeroCool22\Desktop\FS\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\sd1_clip.py", line 59, in __init__
    self.transformer = CLIPTextModel(config)
  File "C:\Users\ZeroCool22\Desktop\FS\python_embeded\lib\site-packages\transformers\models\clip\modeling_clip.py", line 782, in __init__
    self.text_model = CLIPTextTransformer(config)
  File "C:\Users\ZeroCool22\Desktop\FS\python_embeded\lib\site-packages\transformers\models\clip\modeling_clip.py", line 699, in __init__
    self.embeddings = CLIPTextEmbeddings(config)
  File "C:\Users\ZeroCool22\Desktop\FS\python_embeded\lib\site-packages\transformers\models\clip\modeling_clip.py", line 209, in __init__
    self.token_embedding = nn.Embedding(config.vocab_size, embed_dim)
  File "C:\Users\ZeroCool22\Desktop\FS\python_embeded\lib\site-packages\torch\nn\modules\sparse.py", line 142, in __init__
    self.weight = Parameter(torch.empty((num_embeddings, embedding_dim), **factory_kwargs),
RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 252968960 bytes.

The key point is that this is not a fundamental solution to the problem: 32 GB of RAM is not enough on its own, so virtual memory is still needed as well. The memory appetite is really big.
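
For what it's worth, the 252,968,960 bytes in the traceback match exactly what `nn.Embedding(config.vocab_size, embed_dim)` needs for the refiner's CLIP-G token-embedding table in fp32, assuming the standard CLIP vocabulary size of 49408 and the CLIP-G hidden size of 1280 (both values are assumptions inferred from the traceback; they are not printed in this log):

```python
# Sanity check of the failed allocation reported above.
# Assumed values: CLIP vocab_size = 49408, CLIP-G embed_dim = 1280.
vocab_size = 49408
embed_dim = 1280
bytes_per_param = 4  # torch.empty defaults to float32

alloc = vocab_size * embed_dim * bytes_per_param
print(alloc)                       # 252968960 -- the exact number in the error
print(f"{alloc / 2**20:.1f} MiB")  # ~241.2 MiB
```

The allocation itself is modest; the failure just suggests that, by the time the refiner's text encoder is built, loading the base model has already consumed nearly all of the combined RAM and page file.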

ludashi6789 commented 1 year ago

It worked! If the directory you installed Fooocus in is on the C drive, set the virtual-memory cache (page file) on the C drive; if it is on the D drive, set the cache on the D drive. (attached screenshot: 微信截图_20230820085627)
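
Before launching, it may also help to check how much headroom the combined RAM and page file actually leave; a minimal sketch using the third-party `psutil` package (an assumption — it is not bundled with Fooocus):

```python
# Pre-launch memory headroom check (assumes `pip install psutil`).
import psutil

vm = psutil.virtual_memory()  # physical RAM
sw = psutil.swap_memory()     # page file on Windows

print(f"RAM : {vm.available / 2**30:.1f} GiB free of {vm.total / 2**30:.1f} GiB")
print(f"Swap: {sw.free / 2**30:.1f} GiB free of {sw.total / 2**30:.1f} GiB")

# The SDXL base and refiner checkpoints together are roughly 13 GB on disk,
# so loading both needs well over that across RAM plus page file.
if vm.available + sw.free < 20 * 2**30:  # 20 GiB is an illustrative threshold
    print("Warning: low memory headroom -- consider enlarging the page file.")
```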

ZeroCool22 commented 1 year ago

UPDATE:

It works now with the latest update.

But it's still too slow on my GPU.