vladmandic / automatic

SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
https://github.com/vladmandic/automatic
GNU Affero General Public License v3.0

[Issue]: App starts on M2 Mac/OSX, but throws error immediately after Generate - fresh install #1928

Closed gymdreams8 closed 11 months ago

gymdreams8 commented 12 months ago

Issue Description

I have not been able to run this on OSX with the latest update. I don’t know which version broke the install, because I haven’t pulled an update since June. I asked on Discord for help and after showing my debug logs, was asked to create an issue here.

Python 3.10.9 on Darwin
Version: 5f5a564d Thu Aug 3 22:38:43 2023 +0300

Steps to reproduce:

Because the log is quite long, I have put it in a gist:

https://gist.github.com/gymdreams8/c12c88b5f608f886f0d8d08005b07612

Workaround:

Right now, in order to keep running, I have checked out an earlier version. The only reason I even found a version that worked for me is that I recalled that on bootup it always showed me that diffusers was not at the latest version and would fetch it on boot.

So I searched the repo history for the last version that pinned it:

git grep 'diffusers==0.17.1' $(git rev-list --all)

Found the commit hash:

c90e9965c7b6b4d90bb3d63e3c58352309228e5c:requirements.txt:diffusers==0.17.1

https://github.com/vladmandic/automatic/commit/c90e9965c7b6b4d90bb3d63e3c58352309228e5c

And I am just using this version for now. That is quite a few commits in the past, but perhaps it at least helps you narrow down a possible issue.
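That git grep trick generalizes to any repository; a self-contained sketch on a throwaway repo (the paths and version pins below are illustrative, not SD.Next's):

```shell
# Build a tiny repo whose history pins two different diffusers versions,
# then locate the last commit that still pinned the old one, as in the
# workaround above.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo 'diffusers==0.17.1' > requirements.txt
git add requirements.txt && git commit -qm 'pin diffusers 0.17.1'
echo 'diffusers==0.19.0' > requirements.txt
git add requirements.txt && git commit -qm 'bump diffusers'
# Search every revision ever made for the old pin; output lines look like
# <commit-sha>:requirements.txt:diffusers==0.17.1
hits=$(git grep 'diffusers==0.17.1' $(git rev-list --all))
echo "$hits"
```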

If you let me know which versions contain fairly serious breaking changes, I can test a list of commits and let you know. I just don't want to blindly go through every commit myself.

Thanks very much! If you wish to chat with me directly, my username on Discord is gymdreams, without a discriminator (new-style username).

Version Platform Description

Python 3.10.9 on Darwin
Version: 5f5a564d Thu Aug 3 22:38:43 2023 +0300

Relevant log output

$ ./webui.sh --debug
Create and activate python venv
Launching launch.py...
08:03:28-829864 INFO     Starting SD.Next
08:03:28-832498 INFO     Python 3.10.9 on Darwin
08:03:28-845336 INFO     Version: 5f5a564d Thu Aug 3 22:38:43 2023 +0300
08:03:29-302646 DEBUG    Setting environment tuning
08:03:29-304135 DEBUG    Torch overrides: cuda=False rocm=False ipex=False diml=False
08:03:29-305060 DEBUG    Torch allowed: cuda=True rocm=True ipex=True diml=True
08:03:29-356838 WARNING  Modified files: ['config.json.back']
08:03:29-372737 DEBUG    Repository update time: Fri Aug  4 03:38:43 2023
08:03:29-373922 DEBUG    Previous setup time: Fri Aug  4 05:57:11 2023
08:03:29-374476 INFO     Disabled extensions: []
08:03:29-374981 INFO     Enabled extensions-builtin: ['SwinIR', 'clip-interrogator-ext', 'sd-dynamic-thresholding', 'sd-webui-controlnet', 'ScuNET', 'stable-diffusion-webui-rembg', 'sd-webui-agent-scheduler', 'Lora',
                         'sd-extension-system-info', 'stable-diffusion-webui-images-browser', 'LDSR', 'multidiffusion-upscaler-for-automatic1111', 'a1111-sd-webui-lycoris']
08:03:29-376927 INFO     Enabled extensions: []
08:03:29-377381 DEBUG    Latest extensions time: Fri Aug  4 05:57:06 2023
08:03:29-377842 DEBUG    Timestamps: version:1691091523 setup:1691099831 extension:1691099826
08:03:29-378619 INFO     No changes detected: Quick launch active
08:03:29-379155 INFO     Verifying requirements
08:03:29-387856 INFO     Disabled extensions: []
08:03:29-388393 INFO     Enabled extensions-builtin: ['SwinIR', 'clip-interrogator-ext', 'sd-dynamic-thresholding', 'sd-webui-controlnet', 'ScuNET', 'stable-diffusion-webui-rembg', 'sd-webui-agent-scheduler', 'Lora',
                         'sd-extension-system-info', 'stable-diffusion-webui-images-browser', 'LDSR', 'multidiffusion-upscaler-for-automatic1111', 'a1111-sd-webui-lycoris']
08:03:29-389804 INFO     Enabled extensions: []
08:03:29-392123 INFO     Extension preload: 0.0s /Users/gd/dev/automatic2/extensions-builtin
08:03:29-392734 INFO     Extension preload: 0.0s /Users/gd/dev/automatic2/extensions
08:03:29-398848 DEBUG    Memory used: 0.04 total: 96.0 Collected 0
08:03:29-399522 DEBUG    Starting module: <module 'webui' from '/Users/gd/dev/automatic2/webui.py'>
08:03:29-400022 INFO     Server arguments: ['--debug']
08:03:29-415741 DEBUG    Loading Torch
08:03:30-783762 DEBUG    Loading Gradio
08:03:31-308343 DEBUG    Loading Modules
No module 'xformers'. Proceeding without it.
08:03:32-045151 DEBUG    Reading: /Users/gd/dev/automatic2/config.json len=235
08:03:32-046342 INFO     Pipeline: Backend.ORIGINAL
08:03:32-047280 DEBUG    Loaded styles: /Users/gd/dev/automatic2/styles.csv 0
08:03:32-500786 DEBUG    Samplers enumerated: ['UniPC', 'DDIM', 'PLMS', 'Euler a', 'Euler', 'DPM++ 2S a', 'DPM++ 2S a Karras', 'DPM++ 2M', 'DPM++ 2M Karras', 'DPM++ SDE', 'DPM++ SDE Karras', 'DPM++ 2M SDE', 'DPM++ 2M SDE Karras', 'DPM
                         fast', 'DPM adaptive', 'DPM2', 'DPM2 Karras', 'DPM2 a', 'DPM2 a Karras', 'LMS', 'LMS Karras', 'Heun']
08:03:32-514550 INFO     Libraries loaded
08:03:32-515314 DEBUG    Entering start sequence
08:03:32-554799 DEBUG    Version: {'app': 'sd.next', 'updated': '2023-08-03', 'hash': '5f5a564d', 'url': 'https://github.com/vladmandic/automatic/tree/master'}
08:03:32-556684 INFO     Using data path: /Users/gd/dev/automatic2
08:03:32-557490 DEBUG    Event loop: <_UnixSelectorEventLoop running=False closed=False debug=False>
08:03:32-558086 DEBUG    Entering initialize
08:03:32-559140 INFO     Available VAEs: /Users/gd/dev/automatic2/models/VAE 0
08:03:32-560455 DEBUG    Reading: /Users/gd/dev/automatic2/cache.json len=1
08:03:32-561248 DEBUG    Reading: /Users/gd/dev/automatic2/metadata.json len=1
08:03:32-561724 INFO     Available models: /Users/gd/dev/automatic2/models/Stable-diffusion 1
08:03:32-588795 DEBUG    Loading scripts
08:03:33-373809 INFO     ControlNet v1.1.234
ControlNet v1.1.234
ControlNet preprocessor location: /Users/gd/dev/automatic2/extensions-builtin/sd-webui-controlnet/annotator/downloads
08:03:33-432472 INFO     ControlNet v1.1.234
ControlNet v1.1.234
08:03:33-858399 DEBUG    Scripts load: ['a1111-sd-webui-lycoris:0.339s', 'Lora:0.086s', 'sd-webui-agent-scheduler:0.192s', 'sd-webui-controlnet:0.139s', 'stable-diffusion-webui-rembg:0.351s']
Scripts load: ['a1111-sd-webui-lycoris:0.339s', 'Lora:0.086s', 'sd-webui-agent-scheduler:0.192s', 'sd-webui-controlnet:0.139s', 'stable-diffusion-webui-rembg:0.351s']
08:03:33-914570 INFO     Loading UI theme: name=black-orange style=Auto
08:03:33-916404 DEBUG    Creating UI
08:03:33-919926 DEBUG    Reading: /Users/gd/dev/automatic2/ui-config.json len=1315
08:03:33-932150 DEBUG    Extra networks: checkpoints items=1 subdirs=0
08:03:33-951372 DEBUG    UI interface: tab=txt2img batch=False seed=False advanced=False second_pass=False
08:03:34-003820 DEBUG    UI interface: tab=img2img seed=False resize=False batch=False denoise=True advanced=False
08:03:34-045085 DEBUG    Reading: /Users/gd/dev/automatic2/ui-config.json len=1315
08:03:34-381916 DEBUG    Script: 0.23s ui_tabs /Users/gd/dev/automatic2/extensions-builtin/stable-diffusion-webui-images-browser/scripts/image_browser.py
08:03:34-383408 DEBUG    Extensions list failed to load: /Users/gd/dev/automatic2/html/extensions.json
08:03:34-697777 DEBUG    Extension list refresh: processed=13 installed=13 enabled=13 disabled=0 visible=13 hidden=0
Running on local URL:  http://127.0.0.1:7861
08:03:34-843890 INFO     Local URL: http://127.0.0.1:7861/
08:03:34-844582 DEBUG    Gradio registered functions: 1700
08:03:34-844987 INFO     Initializing middleware
08:03:34-847248 DEBUG    Creating API
08:03:34-921901 INFO     [AgentScheduler] Task queue is empty
08:03:34-922596 INFO     [AgentScheduler] Registering APIs
08:03:34-988938 DEBUG    Scripts setup: ['Tiled Diffusion:0.014s', 'ControlNet:0.008s', 'Alternative:0.007s']
08:03:34-990917 DEBUG    Scripts components: []
08:03:34-991351 DEBUG    Model metadata: /Users/gd/dev/automatic2/metadata.json no changes
08:03:34-992200 WARNING  Selected checkpoint not found: model.ckpt
08:03:34-992856 DEBUG    Select checkpoint: v1-5-pruned-emaonly.safetensors [6ce0161689]
08:03:34-993391 DEBUG    Load model weights: existing=False target=/Users/gd/dev/automatic2/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors info=None
08:03:35-144950 DEBUG    gc: collected=10273 device=mps {'ram': {'used': 0.78, 'total': 96.0}}
Loading weights: /Users/gd/dev/automatic2/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/4.3 GB -:--:--
08:03:35-376537 DEBUG    Load model: name=/Users/gd/dev/automatic2/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors dict=True
08:03:35-377170 DEBUG    Verifying Torch settings
08:03:35-377588 DEBUG    Desired Torch parameters: dtype=FP32 no-half=False no-half-vae=False upscast=True
08:03:35-378114 INFO     Setting Torch parameters: dtype=torch.float32 vae=torch.float32 unet=torch.float32
08:03:35-378612 DEBUG    Torch default device: mps
08:03:35-379152 DEBUG    Model dict loaded: {'ram': {'used': 0.79, 'total': 96.0}}
08:03:35-387187 DEBUG    Model config loaded: {'ram': {'used': 0.79, 'total': 96.0}}
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
08:03:35-887804 DEBUG    Model created from config: /Users/gd/dev/automatic2/configs/v1-inference.yaml
08:03:35-888710 DEBUG    Model weights loading: {'ram': {'used': 1.76, 'total': 96.0}}
08:03:36-882777 DEBUG    Model weights loaded: {'ram': {'used': 9.49, 'total': 96.0}}
08:03:37-671133 DEBUG    Model weights moved: {'ram': {'used': 6.41, 'total': 96.0}}
08:03:37-677045 INFO     Applying Doggettx cross attention optimization
08:03:37-685908 INFO     Embeddings: loaded=0 skipped=0
08:03:37-689823 INFO     Model loaded in 2.5s (load=0.2s create=0.5s apply=0.5s vae=0.5s move=0.8s)
08:03:37-839299 DEBUG    gc: collected=24 device=mps {'ram': {'used': 6.41, 'total': 96.0}}
08:03:37-840317 INFO     Model load finished: {'ram': {'used': 6.41, 'total': 96.0}} cached=0
08:03:37-997936 DEBUG    gc: collected=0 device=mps {'ram': {'used': 2.44, 'total': 96.0}}
08:03:37-998927 INFO     Startup time: 8.6s (torch=1.4s gradio=0.5s libraries=1.2s scripts=1.3s onchange=0.1s ui-extensions=0.6s launch=0.1s app-started=0.1s checkpoint=3.0s)
08:04:00-086926 DEBUG    Server alive=True Requests=2 memory used: 2.44 total: 96.0
08:05:59-525943 DEBUG    Server alive=True Requests=2 memory used: 2.44 total: 96.0
08:07:59-951350 DEBUG    Server alive=True Requests=2 memory used: 2.44 total: 96.0
08:08:14-447905 DEBUG    txt2img:
                         id_task=task(tv6wg1brqrgdw1s)|prompt=something|negative_prompt=|prompt_styles=[]|steps=20|sampler_index=0|latent_index=None|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|
                         subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.7|hr_scale=2|hr_upscaler=Latent|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_
                         y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_start=0.8||refiner_prompt=|refiner_negative=|override_settings_texts=[]args=(0, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None',
                         2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '',
                         'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False,
                         0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 512, 64, True, True, True, False, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, True,
                         'MEAN', 'AD', 1, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x2cb4cbbb0>, False, False, 'positive', 'comma', 0, False, False, '', 0, '', [], 0, '', [], 0, '', [], True, False, False,
                         False, 0, False, None, None, False, 50)
08:08:14-462926 DEBUG    Script process: ['Tiled Diffusion:0.0s', 'Tiled VAE:0.0s', 'Dynamic Thresholding (CFG Scale Fix):0.0s', 'Agent Scheduler:0.0s', 'ControlNet:0.0s']
08:08:14-509781 DEBUG    Script before-process-batch: ['Tiled Diffusion:0.0s', 'Tiled VAE:0.0s', 'Dynamic Thresholding (CFG Scale Fix):0.0s', 'Agent Scheduler:0.0s', 'ControlNet:0.0s']
08:08:14-510685 DEBUG    Script process-batch: ['Tiled Diffusion:0.0s', 'Tiled VAE:0.0s', 'Dynamic Thresholding (CFG Scale Fix):0.0s', 'Agent Scheduler:0.0s', 'ControlNet:0.0s']
08:08:16-367096 DEBUG    Sampler: UniPC {}
Initializing ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━   0% -:--:-- 0:00:00
08:08:17-176375 ERROR    Exception: Input type (c10::Half) and bias type (float) should be the same
08:08:17-177542 ERROR    Arguments: args=('task(tv6wg1brqrgdw1s)', 'something', '', [], 20, 0, None, False, False, 1, 1, 6, 6, 0.7, 1, -1.0, -1.0, 0, 0, 0, 512, 512, False, 0.7, 2, 'Latent', 20, 0, 0, 0.8, '', '', [], 0, False,
                         'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '',
                         'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False,
                         0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 512, 64, True, True, True, False,
                         False, 7, 100, 'Constant', 0, 'Constant', 0, 4, True, 'MEAN', 'AD', 1, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x2cb4cbbb0>, False, False, 'positive', 'comma', 0, False, False, '', 0,
                         '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, 50) kwargs={}
08:08:17-181747 ERROR    gradio call: RuntimeError
╭───────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────────────────────────────────────────────╮
│ /Users/gd/dev/automatic2/modules/call_queue.py:34 in f                                                                                                                                              │
│                                                                                                                                                                                                      │
│    33 │   │   │   try:                                                                                                                                                                               │
│ ❱  34 │   │   │   │   res = func(*args, **kwargs)                                                                                                                                                    │
│    35 │   │   │   │   progress.record_results(id_task, res)                                                                                                                                          │
│                                                                                                                                                                                                      │
│ /Users/gd/dev/automatic2/modules/txt2img.py:64 in txt2img                                                                                                                                           │
│                                                                                                                                                                                                      │
│   63 │   if processed is None:                                                                                                                                                                       │
│ ❱ 64 │   │   processed = processing.process_images(p)                                                                                                                                                │
│   65 │   p.close()                                                                                                                                                                                   │
│                                                                                                                                                                                                      │
│                                                                                       ... 30 frames hidden ...                                                                                       │
│                                                                                                                                                                                                      │
│ /Users/gd/dev/automatic2/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py:463 in forward                                                                                                  │
│                                                                                                                                                                                                      │
│    462 │   def forward(self, input: Tensor) -> Tensor:                                                                                                                                               │
│ ❱  463 │   │   return self._conv_forward(input, self.weight, self.bias)                                                                                                                              │
│    464                                                                                                                                                                                               │
│                                                                                                                                                                                                      │
│ /Users/gd/dev/automatic2/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py:459 in _conv_forward                                                                                            │
│                                                                                                                                                                                                      │
│    458 │   │   │   │   │   │   │   _pair(0), self.dilation, self.groups)                                                                                                                             │
│ ❱  459 │   │   return F.conv2d(input, weight, bias, self.stride,                                                                                                                                     │
│    460 │   │   │   │   │   │   self.padding, self.dilation, self.groups)                                                                                                                             │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: Input type (c10::Half) and bias type (float) should be the same

Acknowledgements

vladmandic commented 12 months ago

the problem is that i don't have an m2 platform available for testing, so i can't reproduce the issue and really move forward - and community work on m2 has been slow at best. this kind of problem really requires a lot of tracing/debugging, even if the fix at the end may be just a single line.

i'd love to support m1/m2 better, but my hands are tied at the moment.

gymdreams8 commented 12 months ago

Understood — please let me know if there is anything that I can help with. This Github profile I’m using to write is semi-anon but I’m a Python developer and if you can point me to the potential issue, I can try to debug it myself.

– except the challenge for me is that I don’t use pytorch or tensorflow, so it would possibly take me a while to read through this library. And of course I don’t actually know the inner workings of the stable diffusion tech (ha)

BUT what I can do is see whether I get the same problem on the non-forked version of Automatic1111 (which I haven’t tried yet). If it doesn’t reproduce there, would that potentially help you solve the issue?

vladmandic commented 12 months ago

changes between sdnext and the original are too great by now, so checking that would not help. the first thing would be to get a much deeper traceback: search for max_frames and extra_lines, triple those values, and see what pops out.
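For context, the boxed traceback in the log above is produced by the rich library, whose rich.traceback.install() takes exactly these two parameters. A sketch of the tripled values (where SD.Next calls install() is left to the reader; the numbers below are simply rich's defaults of 3 and 100, tripled):

```python
# Sketch: deepen rich-rendered tracebacks by tripling the context and
# frame limits, as suggested in the comment above. Values are assumptions
# based on rich's documented defaults, not SD.Next's configuration.
from rich import traceback

previous_hook = traceback.install(
    extra_lines=9,    # source lines of context shown around each frame (default 3)
    max_frames=300,   # frames kept before "... N frames hidden ..." (default 100)
)
```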

fyi, root cause is that some part of the model is clearly running in fp32 and part is running in fp16. normally that is not a problem as autocast automatically adjusts, but autocast is broken on m1/m2 in torch itself, so all parts must be manually aligned.

alternatively, you can force everything to fp32 and it will work. if we cannot get to root cause, i might just do that (e.g. if platform is m1/m2, i can force fp32), but that comes at performance cost.
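The dtype clash described above can be reproduced in isolation; a minimal sketch (CPU tensors for portability - on M1/M2 the same mismatch surfaces on the mps device with the c10::Half wording seen in the log):

```python
# Minimal repro sketch of the fp16/fp32 mismatch behind the reported error.
# Illustrative only: SD.Next normally relies on torch autocast to align
# dtypes, which is what is reported broken for mps here.
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3)           # weight/bias default to float32
half_input = torch.randn(1, 3, 16, 16).half()   # fp16 activations

try:
    conv(half_input)                            # mixed dtypes: PyTorch refuses
    mismatch_raised = False
except RuntimeError:
    # e.g. "Input type (...Half) and bias type (float) should be the same"
    mismatch_raised = True

# The --no-half style fix: keep everything in fp32 before the forward pass.
out = conv(half_input.float())
print(out.dtype)
```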

gymdreams8 commented 12 months ago

Sounds good. I will see if I can look into it. I have never read this source, so I can’t promise anything. I’m also somewhat busy at work, so no promises that I’ll be able to look into it as promptly as you responded to my issue.

Also, thanks for the suggestion for the workaround.

sukualam commented 11 months ago

i have this bug too, but the original webui version runs fine on my macos

gymdreams8 commented 11 months ago

@sukualam interesting… I just cloned the original a1111 version and I don’t have any issue there. The only thing I had to do was add --no-half to the A1111 user startup script, otherwise it throws errors:

./webui-user.sh

export COMMANDLINE_ARGS="--no-half"

And then I also added an optimization by installing my own version of pytorch, using the helpful instructions from ComfyUI. I don’t think this is necessary, but in my experience it does seem to speed things up on M1/M2 (I am on an M2 Max):

Quoting verbatim here: https://github.com/comfyanonymous/ComfyUI

You can install ComfyUI in Apple Mac silicon (M1 or M2) with any recent macOS version

Install pytorch nightly. For instructions, read the Accelerated PyTorch training on Mac Apple Developer guide (make sure to install the latest pytorch nightly).

I use pyenv so I installed mine through pip:

pip3 install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu

I distinctly recall that this step was necessary for running automatic1111 a long time ago, and that it solved a lot of issues once I did it, though I don’t know whether it’s still required.

vladmandic commented 11 months ago

@gymdreams8 that's EXACTLY what i said earlier, except in sd.next you set it in settings and in a1111 you use the cmd line flag --no-half.

gymdreams8 commented 11 months ago

that's EXACTLY what i said earlier, except in sd.next you set it in settings and in a1111 you use the cmd line flag --no-half.

@vladmandic hey, sorry, I still haven’t found time to go through your suggestion, but may I ask which part of your earlier message corresponds to the --no-half setting? I don’t think, however, that my error had anything to do with that. Does it?

I am unfamiliar with any of this source code, as I have never looked into it. It doesn’t help that I haven’t even worked with pytorch before, so it would take some time for me to see what max_frames and extra_lines are about.

I promise i will get to it!

vladmandic commented 11 months ago

may I ask which part in your earlier message is corresponding to the --no-half settings

--no-half doesn't exist as a cmd flag in sd.next; it was moved to settings months ago, and in settings it's called "Use full precision for model (--no-half)"

which is what i was referring to when i earlier said

alternatively, you can force everything to fp32 and it will work
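For anyone hunting for the setting by hand: the toggle appears to correspond to SD.Next's no_half option. A hypothetical config.json fragment (the key name is inferred from the shared.opts.no_half usage quoted later in this thread, so treat it as an assumption):

```json
{
  "no_half": true
}
```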

fotoetienne commented 11 months ago

I was seeing this same issue on an M1 MBP. It was resolved by installing pytorch nightly (as mentioned by @gymdreams8 )

$ uname -mprsv
Darwin 21.6.0 Darwin Kernel Version 21.6.0: Mon Aug 22 20:19:52 PDT 2022; root:xnu-8020.140.49~2/RELEASE_ARM64_T6000 arm64 arm

$ pip3 install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu
Collecting torch
  Downloading https://download.pytorch.org/whl/nightly/cpu/torch-2.1.0.dev20230811-cp39-none-macosx_11_0_arm64.whl 
Collecting torchvision
  Downloading https://download.pytorch.org/whl/nightly/cpu/torchvision-0.16.0.dev20230811-cp39-cp39-macosx_11_0_arm64.whl (1.6 MB)
Collecting torchaudio
  Downloading https://download.pytorch.org/whl/nightly/cpu/torchaudio-2.1.0.dev20230811-cp39-cp39-macosx_11_0_arm64.whl (1.8 MB)
Collecting sympy
  Using cached sympy-1.12-py3-none-any.whl (5.7 MB)
Collecting fsspec
  Using cached fsspec-2023.6.0-py3-none-any.whl (163 kB)
Collecting jinja2
  Downloading https://download.pytorch.org/whl/nightly/Jinja2-3.1.2-py3-none-any.whl (133 kB)
Collecting networkx
  Using cached networkx-3.1-py3-none-any.whl (2.1 MB)
Collecting typing-extensions
  Using cached typing_extensions-4.7.1-py3-none-any.whl (33 kB)
Collecting filelock
  Using cached filelock-3.12.2-py3-none-any.whl (10 kB)
Collecting pillow!=8.3.*,>=5.3.0
  Using cached Pillow-10.0.0-cp39-cp39-macosx_11_0_arm64.whl (3.1 MB)
Collecting requests
  Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Collecting numpy
  Using cached numpy-1.25.2-cp39-cp39-macosx_11_0_arm64.whl (14.0 MB)
Collecting MarkupSafe>=2.0
  Using cached MarkupSafe-2.1.3-cp39-cp39-macosx_10_9_universal2.whl (17 kB)
Collecting urllib3<3,>=1.21.1
  Using cached urllib3-2.0.4-py3-none-any.whl (123 kB)
Collecting certifi>=2017.4.17
  Using cached certifi-2023.7.22-py3-none-any.whl (158 kB)
Collecting idna<4,>=2.5
  Downloading https://download.pytorch.org/whl/nightly/idna-3.4-py3-none-any.whl (61 kB)
Collecting charset-normalizer<4,>=2
  Using cached charset_normalizer-3.2.0-cp39-cp39-macosx_11_0_arm64.whl (124 kB)
Collecting mpmath>=0.19
  Using cached mpmath-1.3.0-py3-none-any.whl (536 kB)

I also needed to set Use full precision for model (--no-half)

MajorGruberth commented 11 months ago

Did both: Full Precision switch and PyTorch nightly and still cannot generate anything: "Input type (c10::Half) and bias type (float) should be the same". Never had that runtime error issue with Vlad/automatic nor with automatic 1111.

Mac M1

gymdreams8 commented 11 months ago

@MajorGruberth

Did both: Full Precision switch and PyTorch nightly and still cannot generate anything: "Input type (c10::Half) and bias type (float) should be the same". Never had that runtime error issue with Vlad/automatic nor with automatic 1111.

If you followed the convo above, you’ll see that the Full Precision switch and PyTorch nightly ONLY fix the Automatic1111 issue. They won’t fix the Vlad issue, which is separate and is what this issue is about.

But as Vlad mentioned, his fork now differs too much from Automatic1111, so what fixed A1111 will not fix the Vlad issue.

gymdreams8 commented 11 months ago

@fotoetienne which issue did it fix? Automatic1111 or Vlad? Because that solution only fixes the Automatic1111 main branch and does not fix Vlad’s fork. For now I just use A1111, because I need to be able to keep running my tasks. But I don’t think this Vlad issue is the same one, since it started with updates after June, and my environment already had those nightlies applied.

MajorGruberth commented 11 months ago

Both issues got fixed overnight, I wonder how... Today auto 1111 was running smoothly as well as Next

gymdreams8 commented 11 months ago

@MajorGruberth Well, the pytorch nightly will make everything faster because it’s optimized. But how does that fix Next??? Is it from the latest commit? I haven’t done a pull since reporting this issue…

vladmandic commented 11 months ago

SDnext is tested with torch nightly, I upgrade it every few weeks (not really every day, but close enough)

uxtechie commented 11 months ago

Hi!

It works when I disable the use of fp16, like this:

sd_models.py, line 458

def repair_config(sd_config):
    if "use_ema" not in sd_config.model.params:
        sd_config.model.params.use_ema = False
    if shared.opts.no_half:
        sd_config.model.params.unet_config.params.use_fp16 = False
    elif shared.opts.upcast_sampling:
        sd_config.model.params.unet_config.params.use_fp16 = False  # CHANGED: upstream sets this to True; forcing False disables fp16 under upcast sampling too
    if getattr(sd_config.model.params.first_stage_config.params.ddconfig, "attn_type", None) == "vanilla-xformers" and not shared.xformers_available:
        sd_config.model.params.first_stage_config.params.ddconfig.attn_type = "vanilla"

This emulates the behavior of the --no-half parameter.

vladmandic commented 11 months ago

but that changes upcast sampling so it's the same as no-half - what's the point of that? can't you just use no-half instead?

gymdreams8 commented 11 months ago

I don’t know what you have done with this, but I have just verified that everything is working again on a fresh install and new virtualenv. I wrote an article with all my steps:

https://docs.gymdreams8.com/mac_sdnext.html

The main thing is that I did install PyTorch Nightly for Apple Silicon before running SD.Next for the first time, but otherwise everything works out of the box without throwing an error.

I don’t know if you also install PyTorch Nightly when SD.Next first starts; if it does, I’ll just remove that step from my instructions, though I don’t see how installing it anyway would hurt.

The steps are based on the latest commit, which is https://github.com/vladmandic/automatic/commit/81129cc4b7e451701d6d5ed2127424e8f4ac6685

Since this resolves my issue, should I close it? It’s not an issue for me anymore, but I see that there are active discussions here, so I’m not sure. Let me know.

vladmandic commented 11 months ago

i made a change that i hoped would help - glad it did. re: torch - sdnext does not install the nightly build, it installs the latest release. but if you install torch yourself, it will detect it and try to use it, not force a reinstall - so installing the nightly build first is a good thing in this case. (btw, thanks for the link to your install docs)

i'll close the issue since the majority of the remaining thread is a lot of noise - if the issue persists for other users, let's start clean.

uxtechie commented 11 months ago

but that changes upcast sampling so its same as no-half - what's the point of that, can't you just use no-half instead?

Translation to English:

I tried the --no-half parameter from the CLI but it didn't work, so I forced it in the code to see if it would help. I see that it's already handled and works without needing the parameter, thanks!! :)

gymdreams8 commented 11 months ago

@vladmandic Btw, I know this is no longer an issue, but since I recently had to install ComfyUI for SDXL (to use SeargeSDXL, a very extensive workflow that’s insanely good), I saw this part of the doc that you might find interesting:

Launch ComfyUI by running python main.py --force-fp16. Note that --force-fp16 will only work if you installed the latest pytorch nightly.

https://github.com/comfyanonymous/ComfyUI

You talked about fp16 / fp32 above, and this would seem to imply that on Macs, fp16 is only possible if PyTorch Nightly is installed.