This started with: https://github.com/vladmandic/automatic/commit/ec99bad021fbbd308e1f86fa17ec7e9b92260aa8
I haven't had a chance to really look into the backend switching or the changes to sd-webui-agent-scheduler to determine the root cause.
This is working fine for me with --backend diffusers. Switching back to the original backend reproduces the same issue.
@pinanew have you tried starting up with the backend flag set?
I will admit, I haven't yet taken the time to fully understand what the backend switching is all about.
Updated to 558b71f0884dee33edc74d9fd03cc0f04055a6b4, selected backend diffusers...
14:10:32-981512 ERROR Exception: 'Options' object has no attribute 'schedulers_prediction_type'
14:10:32-984511 ERROR Arguments: args=('task(bccb85k4x4mmmad)', '1girl', 'ugly, obese, render, rendered, doll, blurry, jpeg artifacts, distorted, cropped, low quality, deformed, b&w, grayscale, signature, lowres, bad
anatomy, Asian, cum, easynegative', [], 22, 0, False, False, 1, 1, 5.5, 1, -1.0, -1.0, 0, 0, 0, False, 584, 512, True, 0.55, 2.1, 'Latent (antialiased)', 17, 0, 0, 5, 0.5, '', '', [], 0, False,
'x264', 'blend', 10, 0, 0, False, True, True, True, 'intermediate', 'animation', True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0,
'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, False, 'positive', 'comma', 0, False, False, '', 7, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, False, 4.0, '', 10.0,
'Linear', 3, False, 30.0, True, False, False, 0, 0.0, 'Lanczos', 1, True, 0, 0, 0.001, 75, 0.0, False, True) kwargs={}
14:10:33-010511 ERROR gradio call: AttributeError
┌───────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────────────────────────────────────────────┐
│ D:\Programs\automatic\modules\call_queue.py:34 in f │
│ │
│ 33 │ │ │ try: │
│ > 34 │ │ │ │ res = func(*args, **kwargs) │
│ 35 │ │ │ │ progress.record_results(id_task, res) │
│ │
│ D:\Programs\automatic\modules\txt2img.py:60 in txt2img │
│ │
│ 59 │ if processed is None: │
│ > 60 │ │ processed = processing.process_images(p) │
│ 61 │ p.close() │
│ │
│ ... 4 frames hidden ... │
│ │
│ D:\Programs\automatic\modules\sd_samplers_diffusers.py:64 in __init__ │
│ │
│ 63 │ │ │ │ self.config[key] = value │
│ > 64 │ │ if opts.schedulers_prediction_type != 'default': │
│ 65 │ │ │ self.config['prediction_type'] = opts.schedulers_prediction_type │
│ │
│ D:\Programs\automatic\modules\shared.py:641 in __getattr__ │
│ │
│ 640 │ │ │ return self.data_labels[item].default │
│ > 641 │ │ return super(Options, self).__getattribute__(item) # pylint: disable=super-with- │
│ 642 │
└──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
AttributeError: 'Options' object has no attribute 'schedulers_prediction_type'
And I don't want the new backend; in my tests it seems to consume more VRAM.
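For context, a minimal sketch of the lookup pattern in modules/shared.py, reconstructed from the two lines visible in the traceback above (not the project's verbatim code): user-set values win, then registered defaults, and any unregistered key falls through to __getattribute__, which raises the AttributeError seen here.

```python
# Sketch of the Options fallback lookup, reconstructed from the traceback;
# not the project's verbatim code.
class Options:
    def __init__(self):
        self.data = {}          # values loaded from the user's config
        self.data_labels = {}   # options registered for the active backend

    def __getattr__(self, item):
        if item in self.data:
            return self.data[item]
        if item in self.data_labels:
            return self.data_labels[item].default
        # unregistered keys end up here and raise AttributeError
        return super(Options, self).__getattribute__(item)

opts = Options()
try:
    opts.schedulers_prediction_type
except AttributeError as e:
    print(e)  # 'Options' object has no attribute 'schedulers_prediction_type'
```

Under the original backend, schedulers_prediction_type is presumably never registered in data_labels, so the lookup falls through and raises; a defensive getattr(opts, 'schedulers_prediction_type', 'default') at the call site in sd_samplers_diffusers.py would also avoid the crash.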
I finally took the time to do some analysis, and for me, the breaking change is: https://github.com/vladmandic/automatic/blob/e002652ac6c5cbff11d56b2495a428d7db863b0b/modules/sd_models.py#L398

Uncommenting the line assigns torch.float16 to devices.dtype_unet, which sidesteps the input/bias type mismatch later down the line. There's clearly a lot going on in load_model_weights, so I'd like to understand it a little bit better before attempting a PR. But, in the meantime, uncommenting that line may provide a quick fix.
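To see the failure that quick fix sidesteps, here is a minimal, self-contained illustration with plain tensors on CPU (not the project's code): matmul does not promote mixed dtypes the way elementwise ops do, so fp16 weights against fp32 inputs raise.

```python
import torch

# Minimal illustration of the input/bias dtype mismatch described above.
# Matmul requires both operands to share a dtype, just like a fp16 UNet
# fed fp32 inputs.
weight = torch.randn(4, 4, dtype=torch.float16)  # stands in for fp16 UNet weights
x = torch.randn(4, 4, dtype=torch.float32)       # stands in for fp32 inputs

try:
    weight @ x
except RuntimeError as e:
    print(e)  # mixed fp16/fp32 operands are rejected

# Making the dtypes agree - in either direction - resolves it, which is what
# pinning devices.dtype_unet (or upcasting the model) accomplishes.
print((weight.to(torch.float32) @ x).dtype)      # torch.float32
```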
Thank you! This UI has been broken on my Mac for the past couple of weeks, but uncommenting this line fixed it.
that change is there for a reason, and there is no PR to revert it. before the change, the desired dtype was overwritten by whatever the unet had internally, which means whatever you set in settings was ignored and overwritten. now you can set anything you want, so set fp32 or fp16 or whatever you need and that is what gets applied.
but in your case, it seems you have fp32 set while trying to apply it to a model that internally is not fp32. in that case, the unet may need to be upcast - you can try that, and if it works, that would be a valid PR.
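A hypothetical sketch of that upcast: sd_model.model.diffusion_model (the UNet) and the target dtype argument follow the repo's conventions, but the helper name and the exact hook point inside load_model_weights are assumptions, not the project's actual code.

```python
import torch

def upcast_unet_if_needed(sd_model, target_dtype: torch.dtype) -> None:
    """Hypothetical helper: cast the UNet to the configured dtype when the
    checkpoint's native precision disagrees with it. The call site inside
    load_model_weights is an assumption."""
    unet = sd_model.model.diffusion_model
    if next(unet.parameters()).dtype != target_dtype:
        unet.to(target_dtype)  # e.g. upcast a fp16 checkpoint when fp32 is configured
```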
So, if I have a mix of fp16 and fp32 models and I switch from one to another, is the expectation that I also have to go into Settings and select the matching precision? Seems like there should be a "use whatever matches the model" option.
(Or is there one already, and I'm completely missing it...)
the expectation is that autocast works and torch takes care of mixed modes - and it does on most gpus. some handle it badly, and i don't have every possible gpu to test.
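For reference, a minimal demonstration of that expectation (bfloat16 on CPU for portability; fp16 on CUDA behaves the same way): outside autocast, mixed dtypes raise; inside, torch casts operands to a common dtype.

```python
import torch

# Outside autocast the fp32 input hits bf16 weights and the matmul inside
# Linear raises; inside autocast both operands are cast to bf16.
lin = torch.nn.Linear(4, 4).to(torch.bfloat16)
x = torch.randn(1, 4)  # fp32

try:
    lin(x)
except RuntimeError as e:
    print("without autocast:", e)

with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    print("with autocast:", lin(x).dtype)  # torch.bfloat16
```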
i've added an option, Use fixed UNet precision, in compute settings.
Awesome, thanks!
Love your UI, it's the best of all the "Automatic" variants I've tried so far.
Issue Description
Clicking the Generate button gives an instant error. Everything worked on 4152c2049bf94b7e2deae585586d8801198d160e. I think it's connected with my settings: