vladmandic / automatic

SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
https://github.com/vladmandic/automatic
GNU Affero General Public License v3.0

[Issue]: "Input type (struct c10::Half) and bias type (float) should be the same" on txt2img #1653

Closed pinanew closed 1 year ago

pinanew commented 1 year ago

Issue Description

Clicking the Generate button gives an instant error. Everything worked on 4152c2049bf94b7e2deae585586d8801198d160e. I think it's connected to my settings: [image]


```
Using VENV: d:\Programs\automatic\venv
16:00:25-163541 INFO     Starting SD.Next
16:00:25-179548 INFO     Python 3.10.11 on Windows
16:00:25-456747 INFO     Version: c3a4293f Wed Jul 12 13:02:42 2023 +0300
16:00:26-422041 DEBUG    Setting environment tuning
16:00:26-425218 DEBUG    Torch overrides: cuda=False rocm=False ipex=False diml=False
16:00:26-428219 DEBUG    Torch allowed: cuda=True rocm=True ipex=True diml=True
16:00:26-434215 INFO     nVidia CUDA toolkit detected
16:00:36-003270 INFO     Torch 2.0.1+cu118
16:00:36-036885 INFO     Torch backend: nVidia CUDA 11.8 cuDNN 8700
16:00:36-039889 INFO     Torch detected GPU: NVIDIA GeForce GTX 1060 6GB VRAM 6144 Arch (6, 1) Cores 10
16:00:36-518539 WARNING  Modified files: ['modules/lora', 'modules/lycoris']
16:00:36-614648 DEBUG    Repository update time: Wed Jul 12 13:02:42 2023
16:00:36-616647 DEBUG    Previous setup time: Wed Jul 12 15:40:57 2023
16:00:36-618647 INFO     Enabled extensions-builtin: ['a1111-sd-webui-lycoris', 'clip-interrogator-ext', 'LDSR', 'Lora', 'multidiffusion-upscaler-for-automatic1111', 'ScuNET', 'sd-dynamic-thresholding', 'sd-extension-aesthetic-scorer',
                         'sd-extension-steps-animation', 'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'sd-webui-model-converter', 'seed_travel', 'stable-diffusion-webui-images-browser',
                         'stable-diffusion-webui-rembg', 'SwinIR']
16:00:36-632643 INFO     Enabled extensions: ['sd-dynamic-prompts', 'stable-diffusion-webui-Prompt_Generator']
16:00:36-636640 DEBUG    Latest extensions time: Wed Jul 12 15:40:16 2023
16:00:36-638639 DEBUG    Timestamps: version:1689156162 setup:1689165657 extension:1689165616
16:00:36-640639 INFO     No changes detected: Quick launch active
16:00:36-687624 INFO     Extension preload: 0.0s D:\Programs\automatic\extensions-builtin
16:00:36-690623 INFO     Extension preload: 0.0s D:\Programs\automatic\extensions
16:00:36-734610 DEBUG    Memory used: 0.33 total: 15.94 Collected 0
16:00:36-737608 DEBUG    Starting module: <module 'webui' from 'd:\\Programs\\automatic\\webui.py'>
16:00:36-739607 INFO     Server arguments: ['--medvram', '--disable-queue', '--debug']
16:00:36-759601 DEBUG    Loading Torch
16:01:01-559776 DEBUG    Loading Gradio
16:01:06-278605 DEBUG    Loading Modules
16:01:10-503793 INFO     Pipeline: Backend.ORIGINAL
No module 'xformers'. Proceeding without it.
16:01:15-510198 DEBUG    Loaded styles: styles.csv 0
16:01:19-553679 DEBUG    Enumerated samplers: 22
16:01:19-825669 INFO     Libraries loaded
16:01:19-828852 DEBUG    Entering start sequence
16:01:19-973806 DEBUG    Version: {'app': 'sd.next', 'updated': '2023-07-12', 'hash': 'c3a4293f', 'url': 'https://github.com/vladmandic/automatic.git/tree/master'}
16:01:19-977804 INFO     Using data path: D:\Programs\automatic
16:01:19-979803 DEBUG    Event loop: <_WindowsSelectorEventLoop running=False closed=False debug=False>
16:01:19-982802 DEBUG    Entering initialize
16:01:19-987801 INFO     Available VAEs: D:\Programs\automatic\models\VAE 1
16:01:20-586883 INFO     Available models: D:\Programs\automatic\models\Stable-diffusion 26
16:01:21-116100 DEBUG    Loading scripts
16:01:27-153256 INFO     ControlNet v1.1.232
ControlNet v1.1.232
ControlNet preprocessor location: D:\Programs\automatic\extensions-builtin\sd-webui-controlnet\annotator\downloads
16:01:27-664755 INFO     ControlNet v1.1.232
ControlNet v1.1.232
16:01:32-192591 DEBUG    Scripts load: ['automatic:0.118s', 'a1111-sd-webui-lycoris:1.849s', 'clip-interrogator-ext:0.441s', 'LDSR:0.109s', 'Lora:0.503s', 'multidiffusion-upscaler-for-automatic1111:0.061s',
                         'sd-dynamic-thresholding:0.091s', 'sd-extension-aesthetic-scorer:0.12s', 'sd-extension-system-info:0.131s', 'sd-webui-agent-scheduler:1.838s', 'sd-webui-controlnet:1.291s', 'sd-webui-model-converter:0.091s',
                         'seed_travel:0.242s', 'stable-diffusion-webui-images-browser:0.449s', 'stable-diffusion-webui-rembg:3.188s', 'SwinIR:0.102s', 'ScuNET:0.098s', 'sd-dynamic-prompts:0.248s',
                         'stable-diffusion-webui-Prompt_Generator:0.082s']
Scripts load: ['automatic:0.118s', 'a1111-sd-webui-lycoris:1.849s', 'clip-interrogator-ext:0.441s', 'LDSR:0.109s', 'Lora:0.503s', 'multidiffusion-upscaler-for-automatic1111:0.061s', 'sd-dynamic-thresholding:0.091s', 'sd-extension-aesthetic-scorer:0.12s', 'sd-extension-system-info:0.131s', 'sd-webui-agent-scheduler:1.838s', 'sd-webui-controlnet:1.291s', 'sd-webui-model-converter:0.091s', 'seed_travel:0.242s', 'stable-diffusion-webui-images-browser:0.449s', 'stable-diffusion-webui-rembg:3.188s', 'SwinIR:0.102s', 'ScuNET:0.098s', 'sd-dynamic-prompts:0.248s', 'stable-diffusion-webui-Prompt_Generator:0.082s']
16:01:32-583283 INFO     Loading UI theme: name=gradio/default style=Auto
16:01:32-613658 DEBUG    Creating UI
16:01:32-793860 DEBUG    Extra networks: checkpoints items=26 subdirs=0
16:01:32-808855 DEBUG    Extra networks: lora items=9 subdirs=0
16:01:33-446400 DEBUG    Script: 0.13s ui_tabs D:\Programs\automatic\extensions-builtin\clip-interrogator-ext\scripts\clip_interrogator_ext.py
16:01:38-847484 DEBUG    Script: 5.32s ui_tabs D:\Programs\automatic\extensions-builtin\stable-diffusion-webui-images-browser\scripts\image_browser.py
16:01:38-913470 DEBUG    Extensions list loaded: D:\Programs\automatic\html\extensions.json
16:01:41-012788 INFO     Server queues disabled
Running on local URL:  http://127.0.0.1:7860
16:01:41-253788 INFO     Local URL: http://127.0.0.1:7860/
16:01:41-257080 DEBUG    Gradio registered functions: 1765
16:01:41-259080 INFO     Initializing middleware
16:01:41-265078 DEBUG    Creating API
16:01:41-561089 INFO     [AgentScheduler] Task queue is empty
16:01:41-563977 INFO     [AgentScheduler] Registering APIs
16:01:41-583970 DEBUG    Script: 0.19s app_started D:\Programs\automatic\extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py
16:01:41-744917 DEBUG    Scripts setup: ['Tiled Diffusion:0.039s', 'ControlNet:0.023s', 'Dynamic Prompts v2.12.6:0.067s', 'X/Y/Z grid:0.008s', 'Seed travel:0.007s', 'Alternative:0.014s']
16:01:41-748155 DEBUG    Scripts components: []
16:01:41-750157 DEBUG    Model metadata: D:\Programs\automatic\metadata.json no changes
16:01:42-024068 DEBUG    gc: collected=9208 device=cuda {'ram': {'used': 1.03, 'total': 15.94}, 'gpu': {'used': 0.91, 'total': 6.0}, 'retries': 0, 'oom': 0}
16:01:42-030888 DEBUG    Select checkpoint: model icbinpICantBelieveIts_afterburn.safetensors [65cd001daf]
16:01:42-301162 DEBUG    gc: collected=248 device=cuda {'ram': {'used': 1.03, 'total': 15.94}, 'gpu': {'used': 0.91, 'total': 6.0}, 'retries': 0, 'oom': 0}
16:01:42-304165 DEBUG    Load model weights: existing=False target=D:\Programs\automatic\models\Stable-diffusion\icbinpICantBelieveIts_afterburn.safetensors info=None
16:01:42-578269 DEBUG    gc: collected=248 device=cuda {'ram': {'used': 1.03, 'total': 15.94}, 'gpu': {'used': 0.91, 'total': 6.0}, 'retries': 0, 'oom': 0}
Loading weights: D:\Programs\automatic\models\Stable-diffusion\icbinpICantBelieveIts_afterburn.safetensors ---------------------------------------- 2.1/2.1 GB 0:00:00
16:01:44-009642 DEBUG    Load model: name=D:\Programs\automatic\models\Stable-diffusion\icbinpICantBelieveIts_afterburn.safetensors dict=True
16:01:44-012641 DEBUG    Verifying Torch settings
16:01:44-014640 DEBUG    Desired Torch parameters: dtype=FP32 no-half=False no-half-vae=False upscast=True
16:01:44-016639 INFO     Setting Torch parameters: dtype=torch.float32 vae=torch.float32 unet=torch.float32
16:01:44-019639 DEBUG    Torch default device: cuda
16:01:44-021638 DEBUG    Model dict loaded: {'ram': {'used': 3.02, 'total': 15.94}, 'gpu': {'used': 0.91, 'total': 6.0}, 'retries': 0, 'oom': 0}
16:01:44-046630 DEBUG    Model config loaded: {'ram': {'used': 3.02, 'total': 15.94}, 'gpu': {'used': 0.91, 'total': 6.0}, 'retries': 0, 'oom': 0}
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
16:01:45-318245 DEBUG    Model created from config: D:\Programs\automatic\configs\v1-inference.yaml
16:01:45-321242 DEBUG    Model weights loading: {'ram': {'used': 3.98, 'total': 15.94}, 'gpu': {'used': 0.91, 'total': 6.0}, 'retries': 0, 'oom': 0}
16:01:49-043031 DEBUG    Model weights loaded: {'ram': {'used': 6.79, 'total': 15.94}, 'gpu': {'used': 0.91, 'total': 6.0}, 'retries': 0, 'oom': 0}
16:01:49-063025 DEBUG    Model weights moved: {'ram': {'used': 6.79, 'total': 15.94}, 'gpu': {'used': 0.91, 'total': 6.0}, 'retries': 0, 'oom': 0}
16:01:49-077020 INFO     Applying scaled dot product cross attention optimization
16:01:49-315683 INFO     Embeddings: loaded=1 skipped=0
16:01:49-328677 INFO     Model loaded in 6.7s (load=1.4s create=1.3s apply=0.8s vae=2.9s embeddings=0.2s)
16:01:49-609586 DEBUG    gc: collected=275 device=cuda {'ram': {'used': 6.79, 'total': 15.94}, 'gpu': {'used': 0.91, 'total': 6.0}, 'retries': 0, 'oom': 0}
16:01:49-615206 INFO     Model load finished: {'ram': {'used': 6.79, 'total': 15.94}, 'gpu': {'used': 0.91, 'total': 6.0}, 'retries': 0, 'oom': 0} cached=0
16:01:50-040176 DEBUG    gc: collected=124 device=cuda {'ram': {'used': 4.81, 'total': 15.94}, 'gpu': {'used': 0.91, 'total': 6.0}, 'retries': 0, 'oom': 0}
16:01:50-046176 INFO     Startup time: 73.3s (torch=24.8s gradio=4.7s libraries=13.5s vae=0.2s models=0.6s codeformer=0.4s gfpgan=0.2s scripts=11.1s upscalers=0.1s onchange=0.3s ui-txt2img=0.3s ui-img2img=0.1s ui-settings=0.1s
                         ui-extensions=7.6s ui-defaults=0.1s launch=0.2s app-started=0.5s checkpoint=8.3s)
16:02:00-185896 DEBUG    Server alive=True Requests=15 memory used: 4.81 total: 15.94
16:02:44-494791 DEBUG    gc: collected=1680 device=cuda {'ram': {'used': 4.81, 'total': 15.94}, 'gpu': {'used': 0.91, 'total': 6.0}, 'retries': 0, 'oom': 0}
16:02:44-499788 DEBUG    txt2img: id_task=task(nzjbu57e1tsjyqz)|prompt=1girl|negative_prompt=ugly, obese, render, rendered, doll, blurry, jpeg artifacts, distorted, cropped, low quality, deformed, b&w, grayscale, signature, lowres, bad
                         anatomy,
                         Asian|prompt_styles=[]|steps=20|sampler_index=10|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=7|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=
                         0|seed_enable_extras=False|height=512|width=512|enable_hr=False|denoising_strength=0.7|hr_scale=2|hr_upscaler=Latent|hr_second_pass_steps=0|hr_resize_x=0|hr_resize_y=0|override_settings_texts=[]args=(0, False,
                         'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 1, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '',
                         '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0,
                         False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 960, 64, True, True,
                         True, False, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, False, 'x264', 'blend', 10, 0, 0, False, True, True, True, 'intermediate', 'animation', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit
                         object at 0x000001652F181480>, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, False, 'positive', 'comma', 0,
                         False, False, '', 7, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, 50, False, 4.0, '', 10.0, 'Linear', 3, False, 30.0, True, False, False, 0, 0.0, 'Lanczos', 1, True, 0,
                         0, 0.001, 75, 0.0, False, True)
16:02:44-624797 DEBUG    Script process: ['Tiled Diffusion:0.0s', 'Tiled VAE:0.0s', 'Dynamic Thresholding (CFG Scale Fix):0.0s', 'Steps animation:0.0s', 'Agent Scheduler:0.0s', 'ControlNet:0.0s', 'Dynamic Prompts v2.12.6:0.07s']
16:02:44-627796 DEBUG    Script before-process-batch: ['Tiled Diffusion:0.0s', 'Tiled VAE:0.0s', 'Dynamic Thresholding (CFG Scale Fix):0.0s', 'Steps animation:0.0s', 'Agent Scheduler:0.0s', 'ControlNet:0.0s', 'Dynamic Prompts
                         v2.12.6:0.0s']
16:02:44-632795 DEBUG    Script process-batch: ['Tiled Diffusion:0.0s', 'Tiled VAE:0.0s', 'Dynamic Thresholding (CFG Scale Fix):0.0s', 'Steps animation:0.0s', 'Agent Scheduler:0.0s', 'ControlNet:0.0s', 'Dynamic Prompts v2.12.6:0.0s']
Initializing ----------------------------------------   0% -:--:-- 0:00:02
16:02:50-635061 ERROR    Exception: Input type (struct c10::Half) and bias type (float) should be the same
16:02:50-638060 ERROR    Arguments: args=('task(nzjbu57e1tsjyqz)', '1girl', 'ugly, obese, render, rendered, doll, blurry, jpeg artifacts, distorted, cropped, low quality, deformed, b&w, grayscale, signature, lowres, bad anatomy,
                         Asian', [], 20, 10, False, False, 1, 1, 7, 1, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 1, 'None', 2, False,
                         10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background',
                         0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4,
                         0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 960, 64, True, True, True, False, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, False, 'x264',
                         'blend', 10, 0, 0, False, True, True, True, 'intermediate', 'animation', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001652F181480>, True, False, 1, False, False, False, 1.1, 1.5,
                         100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, False, 'positive', 'comma', 0, False, False, '', 7, '', [], 0, '', [], 0, '', [], True, False, False, False,
                         0, False, None, None, False, 50, False, 4.0, '', 10.0, 'Linear', 3, False, 30.0, True, False, False, 0, 0.0, 'Lanczos', 1, True, 0, 0, 0.001, 75, 0.0, False, True) kwargs={}
16:02:50-686045 ERROR    gradio call: RuntimeError
┌───────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────────────────────────────────────────────┐
│ D:\Programs\automatic\modules\call_queue.py:34 in f                                                                                                                                                  │
│                                                                                                                                                                                                      │
│    33 │   │   │   try:                                                                                                                                                                               │
│ >  34 │   │   │   │   res = func(*args, **kwargs)                                                                                                                                                    │
│    35 │   │   │   │   progress.record_results(id_task, res)                                                                                                                                          │
│                                                                                                                                                                                                      │
│ D:\Programs\automatic\modules\txt2img.py:56 in txt2img                                                                                                                                               │
│                                                                                                                                                                                                      │
│   55 │   if processed is None:                                                                                                                                                                       │
│ > 56 │   │   processed = processing.process_images(p)                                                                                                                                                │
│   57 │   p.close()                                                                                                                                                                                   │
│                                                                                                                                                                                                      │
│                                                                                       ... 30 frames hidden ...                                                                                       │
│                                                                                                                                                                                                      │
│ d:\Programs\automatic\venv\lib\site-packages\torch\nn\modules\conv.py:463 in forward                                                                                                                 │
│                                                                                                                                                                                                      │
│    462 │   def forward(self, input: Tensor) -> Tensor:                                                                                                                                               │
│ >  463 │   │   return self._conv_forward(input, self.weight, self.bias)                                                                                                                              │
│    464                                                                                                                                                                                               │
│                                                                                                                                                                                                      │
│ d:\Programs\automatic\venv\lib\site-packages\torch\nn\modules\conv.py:459 in _conv_forward                                                                                                           │
│                                                                                                                                                                                                      │
│    458 │   │   │   │   │   │   │   _pair(0), self.dilation, self.groups)                                                                                                                             │
│ >  459 │   │   return F.conv2d(input, weight, bias, self.stride,                                                                                                                                     │
│    460 │   │   │   │   │   │   self.padding, self.dilation, self.groups)                                                                                                                             │
└──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
RuntimeError: Input type (struct c10::Half) and bias type (float) should be the same
16:02:51-231390 DEBUG    gc: collected=175 device=cuda {'ram': {'used': 1.95, 'total': 15.94}, 'gpu': {'used': 4.21, 'total': 6.0}, 'retries': 0, 'oom': 0}
```

### Version Platform Description

Win10 22H2 Firefox 115.0.2
Version: c3a4293f Wed Jul 12 13:02:42 2023 +0300

### Acknowledgements

- [X] I have read the above and searched for existing issues
sullenfish commented 1 year ago

This started with: https://github.com/vladmandic/automatic/commit/ec99bad021fbbd308e1f86fa17ec7e9b92260aa8

I haven't had a chance to really look into the backend switching or the changes to sd-webui-agent-scheduler to determine the root cause.

sullenfish commented 1 year ago

This is working fine for me with --backend diffusers. Switching back to the original backend reproduces the same issue.

@pinanew have you tried starting up with the backend flag set?

I will admit, I haven't yet taken the time to fully understand what the backend switching is all about.

pinanew commented 1 year ago

Updated to 558b71f0884dee33edc74d9fd03cc0f04055a6b4, selected backend diffusers...

```
14:10:32-981512 ERROR    Exception: 'Options' object has no attribute 'schedulers_prediction_type'
14:10:32-984511 ERROR    Arguments: args=('task(bccb85k4x4mmmad)', '1girl', 'ugly, obese, render, rendered, doll, blurry, jpeg artifacts, distorted, cropped, low quality, deformed, b&w, grayscale, signature, lowres, bad
                         anatomy, Asian, cum,  easynegative', [], 22, 0, False, False, 1, 1, 5.5, 1, -1.0, -1.0, 0, 0, 0, False, 584, 512, True, 0.55, 2.1, 'Latent (antialiased)', 17, 0, 0, 5, 0.5, '', '', [], 0, False,
                         'x264', 'blend', 10, 0, 0, False, True, True, True, 'intermediate', 'animation', True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0,
                         'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, False, 'positive', 'comma', 0, False, False, '', 7, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, False, 4.0, '', 10.0,
                         'Linear', 3, False, 30.0, True, False, False, 0, 0.0, 'Lanczos', 1, True, 0, 0, 0.001, 75, 0.0, False, True) kwargs={}
14:10:33-010511 ERROR    gradio call: AttributeError
┌───────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────────────────────────────────────────────┐
│ D:\Programs\automatic\modules\call_queue.py:34 in f                                                                                                                                                  │
│                                                                                                                                                                                                      │
│    33 │   │   │   try:                                                                                                                                                                               │
│ >  34 │   │   │   │   res = func(*args, **kwargs)                                                                                                                                                    │
│    35 │   │   │   │   progress.record_results(id_task, res)                                                                                                                                          │
│                                                                                                                                                                                                      │
│ D:\Programs\automatic\modules\txt2img.py:60 in txt2img                                                                                                                                               │
│                                                                                                                                                                                                      │
│   59 │   if processed is None:                                                                                                                                                                       │
│ > 60 │   │   processed = processing.process_images(p)                                                                                                                                                │
│   61 │   p.close()                                                                                                                                                                                   │
│                                                                                                                                                                                                      │
│                                                                                       ... 4 frames hidden ...                                                                                        │
│                                                                                                                                                                                                      │
│ D:\Programs\automatic\modules\sd_samplers_diffusers.py:64 in __init__                                                                                                                                │
│                                                                                                                                                                                                      │
│   63 │   │   │   │   self.config[key] = value                                                                                                                                                        │
│ > 64 │   │   if opts.schedulers_prediction_type != 'default':                                                                                                                                        │
│   65 │   │   │   self.config['prediction_type'] = opts.schedulers_prediction_type                                                                                                                    │
│                                                                                                                                                                                                      │
│ D:\Programs\automatic\modules\shared.py:641 in __getattr__                                                                                                                                           │
│                                                                                                                                                                                                      │
│   640 │   │   │   return self.data_labels[item].default                                                                                                                                              │
│ > 641 │   │   return super(Options, self).__getattribute__(item) # pylint: disable=super-with-                                                                                                       │
│   642                                                                                                                                                                                                │
└──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
AttributeError: 'Options' object has no attribute 'schedulers_prediction_type'
```

And I don't want the new backend; in my tests it seems to consume more VRAM.
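
For context, this second failure is separate from the dtype issue: `opts.schedulers_prediction_type` is read unconditionally in `sd_samplers_diffusers.py`, but the option isn't present in this build's settings object. A defensive lookup would avoid the hard crash; this is a sketch only, with a stand-in `Opts` class for illustration, not SD.Next's actual code or fix:

```python
# Sketch of a defensive option lookup (stand-in Opts class; not SD.Next's Options object).
class Opts:
    pass  # schedulers_prediction_type deliberately missing, as in the failing build

opts = Opts()
config = {}

# getattr with a default avoids AttributeError when the option is unregistered
prediction_type = getattr(opts, "schedulers_prediction_type", "default")
if prediction_type != "default":
    config["prediction_type"] = prediction_type

print(config)  # {} -> falls back to the scheduler default instead of crashing
```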

sullenfish commented 1 year ago

I finally took the time to do some analysis, and for me, the breaking change is: https://github.com/vladmandic/automatic/blob/e002652ac6c5cbff11d56b2495a428d7db863b0b/modules/sd_models.py#L398

Uncommenting the line assigns torch.float16 to devices.dtype_unet, which sidesteps the input/bias type mismatch later down the line.

There's clearly a lot going on in load_model_weights, so I'd like to understand it a little bit better before attempting a PR.

But, in the meantime, uncommenting that line may provide a quick fix.
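
For reference, the error itself is plain PyTorch behavior: a convolution only accepts an input whose dtype matches its weight and bias, so a module left with a float32 bias cannot consume Half activations. A minimal, self-contained reproduction follows; it assumes a CUDA device (as in the issue) and is illustrative code, not SD.Next's own:

```python
# Minimal reproduction of the reported mismatch (assumes a CUDA device).
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3).cuda()
conv.weight.data = conv.weight.data.half()   # weight ends up fp16 (Half)
# conv.bias is left in float32, mimicking a partially converted module

x = torch.randn(1, 3, 64, 64, device="cuda", dtype=torch.float16)

try:
    conv(x)
except RuntimeError as e:
    print(e)   # Input type (c10::Half) and bias type (float) should be the same

# Making the dtypes agree on either side resolves it:
conv.half()                          # cast the whole module (weight + bias) to fp16 ...
print(conv(x).dtype)                 # torch.float16
print(conv.float()(x.float()).dtype) # ... or run everything in float32
```

In the log above the model is loaded with `dtype=torch.float32` while part of the pipeline produces Half tensors, which is the same mismatch.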

RuralRob commented 1 year ago

> I finally took the time to do some analysis, and for me, the breaking change is:
>
> https://github.com/vladmandic/automatic/blob/e002652ac6c5cbff11d56b2495a428d7db863b0b/modules/sd_models.py#L398
>
> Uncommenting the line assigns torch.float16 to devices.dtype_unet, which sidesteps the input/bias type mismatch later down the line.
>
> There's clearly a lot going on in load_model_weights, so I'd like to understand it a little bit better before attempting a PR.
>
> But, in the meantime, uncommenting that line may provide a quick fix.

Thank you! This UI has been broken on my Mac for the past couple of weeks, but uncommenting this line fixed it.

vladmandic commented 1 year ago

that change is there for a reason; there will be no pr to revert it. before the change, the desired dtype was overwritten by whatever the unet had internally, which means whatever you set in settings was ignored. now you can set anything you want: set fp32 or fp16 or whatever you need and that is what gets applied.

but in your case, it seems you already have fp32 set, yet you're applying it to a model that internally is not fp32. in that case, the unet may need to be upcast - you can try that, and if it works, that would be a valid pr.
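
A rough sketch of what such an upcast could look like in the original backend is below. The attribute paths (`sd_model.model.diffusion_model`, `devices.dtype_unet`) follow the usual ldm layout referenced in this thread and are assumptions, not a reviewed patch:

```python
# Hypothetical sketch: upcast the UNet to the dtype chosen in settings so that
# conv weights/bias later match the activations. Attribute names are assumed.
import torch

def maybe_upcast_unet(sd_model, desired_dtype: torch.dtype = torch.float32):
    unet = sd_model.model.diffusion_model    # ldm-style UNet location (assumption)
    current = next(unet.parameters()).dtype
    if current != desired_dtype:
        unet.to(desired_dtype)               # e.g. an fp16 checkpoint upcast to fp32
    return sd_model

# e.g. maybe_upcast_unet(shared.sd_model, devices.dtype_unet) after weights are loaded
```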

RuralRob commented 1 year ago

So, if I have a mix of fp16 and fp32 models, and I switch from one to another, is the expectation then that I also have to go into Settings and select the proper fp size to match? Seems like there should be a "use whatever matches the model" option.

(Or is there one already, and I'm completely missing it...)

vladmandic commented 1 year ago

the expectation is that autocast works and torch takes care of mixed modes - and it does on most gpus. some handle it badly, and i don't have every possible gpu to test.
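
For reference, this is the mixed-precision behavior being relied on: inside a `torch.autocast` region, ops like conv2d cast their float32 weights and Half inputs to a common dtype, so the mismatch never surfaces. A minimal illustration, assuming a CUDA device and not taken from SD.Next's code:

```python
# Minimal illustration of autocast bridging a Half input and float32 weights.
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3).cuda()                        # float32 weights/bias
x = torch.randn(1, 3, 64, 64, device="cuda", dtype=torch.float16)   # Half input

with torch.autocast(device_type="cuda", dtype=torch.float16):
    y = conv(x)            # autocast runs the conv in fp16, no dtype mismatch
print(y.dtype)             # torch.float16
```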

vladmandic commented 1 year ago

i've added a Use fixed UNet precision option in compute settings.

RuralRob commented 1 year ago

Awesome, thanks!

Love your UI, it's the best of all the "Automatic" variants I've tried so far.