vladmandic / automatic

SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
https://github.com/vladmandic/automatic
GNU Affero General Public License v3.0

[Issue]: LoRAModule.forward() got an unexpected keyword argument 'scale' #2247

Closed: mysticfall closed this issue 1 year ago

mysticfall commented 1 year ago

Issue Description

Image generation fails with the following trace:

00:54:57-794076 ERROR    Exception: LoRAModule.forward() got an unexpected keyword argument 'scale'                                                                                         
00:54:57-796998 ERROR    Arguments: args=('task(pzscyl84egjeera)', 'cat', '', [], 30, 2, 2, True, False, False, 2, 2, 6, 6, 0.7, 1, -1.0, -1.0, 0, 0, 0, 1024, 1024, True, 0.5, 2, 'None',  
                         False, 20, 0, 0, 5, 0.8, '', '', [], 0, False, False, 'positive', 'comma', 0, False, False, '', 0, '', [], 0, '', [], 0, '', [], True, False, False, False, 0,     
                         False) kwargs={}
...
│ /workspace/automatic/modules/call_queue.py:34 in f                                                                                                                                       │
│                                                                                                                                                                                          │
│   33 │   │   │   try:                                                                                                                                                                    │
│ ❱ 34 │   │   │   │   res = func(*args, **kwargs)                                                                                                                                         │
│   35 │   │   │   │   progress.record_results(id_task, res)                                                                                                                               │
│                                                                                                                                                                                          │
│ /workspace/automatic/modules/txt2img.py:66 in txt2img                                                                                                                                    │
│                                                                                                                                                                                          │
│   65 │   if processed is None:                                                                                                                                                           │
│ ❱ 66 │   │   processed = processing.process_images(p)                                                                                                                                    │
│   67 │   p.close()                                                                                                                                                                       │
│                                                                                                                                                                                          │
│ /workspace/automatic/modules/processing.py:626 in process_images                                                                                                                         │
│                                                                                                                                                                                          │
│    625 │   │   else:                                                                                                                                                                     │
│ ❱  626 │   │   │   res = process_images_inner(p)                                                                                                                                         │
│    627 │   finally:                                                                                                                                                                      │
│                                                                                                                                                                                          │
│ /workspace/automatic/modules/processing.py:785 in process_images_inner                                                                                                                   │
│                                                                                                                                                                                          │
│    784 │   │   │   │   from modules.processing_diffusers import process_diffusers                                                                                                        │
│ ❱  785 │   │   │   │   x_samples_ddim = process_diffusers(p, p.seeds, p.prompts, p.negative_pro                                                                                          │
│    786                                                                                                                                                                                   │
│                                                                                                                                                                                          │
│ /workspace/automatic/modules/processing_diffusers.py:358 in process_diffusers                                                                                                            │
│                                                                                                                                                                                          │
│   357 │   try:                                                                                                                                                                           │
│ ❱ 358 │   │   output = shared.sd_model(**base_args) # pylint: disable=not-callable                                                                                                       │
│   359 │   except AssertionError as e:                                                                                                                                                    │
│                                                                                                                                                                                          │
│                                                                                 ... 4 frames hidden ...                                                                                  │
│                                                                                                                                                                                          │
│ /workspace/automatic/venv/lib/python3.10/site-packages/torch/nn/modules/module.py:1501 in _call_impl                                                                                     │
│                                                                                                                                                                                          │
│   1500 │   │   │   │   or _global_forward_hooks or _global_forward_pre_hooks):                                                                                                           │
│ ❱ 1501 │   │   │   return forward_call(*args, **kwargs)                                                                                                                                  │
│   1502 │   │   # Do not call functions when jit is used                                                                                                                                  │
│                                                                                                                                                                                          │
│ /workspace/automatic/venv/lib/python3.10/site-packages/diffusers/models/unet_2d_blocks.py:1086 in forward                                                                                │
│                                                                                                                                                                                          │
│   1085 │   │   │   │   hidden_states = resnet(hidden_states, temb, scale=lora_scale)                                                                                                     │
│ ❱ 1086 │   │   │   │   hidden_states = attn(                                                                                                                                             │
│   1087 │   │   │   │   │   hidden_states,                                                                                                                                                │
│                                                                                                                                                                                          │
│ /workspace/automatic/venv/lib/python3.10/site-packages/torch/nn/modules/module.py:1501 in _call_impl                                                                                     │
│                                                                                                                                                                                          │
│   1500 │   │   │   │   or _global_forward_hooks or _global_forward_pre_hooks):                                                                                                           │
│ ❱ 1501 │   │   │   return forward_call(*args, **kwargs)                                                                                                                                  │
│   1502 │   │   # Do not call functions when jit is used                                                                                                                                  │
│                                                                                                                                                                                          │
│ /workspace/automatic/venv/lib/python3.10/site-packages/diffusers/models/transformer_2d.py:293 in forward                                                                                 │
│                                                                                                                                                                                          │
│   292 │   │   │   │   hidden_states = hidden_states.permute(0, 2, 3, 1).reshape(batch, height                                                                                            │
│ ❱ 293 │   │   │   │   hidden_states = self.proj_in(hidden_states, scale=lora_scale)                                                                                                      │
│   294                                                                                                                                                                                    │
│                                                                                                                                                                                          │
│ /workspace/automatic/venv/lib/python3.10/site-packages/torch/nn/modules/module.py:1501 in _call_impl                                                                                     │
│                                                                                                                                                                                          │
│   1500 │   │   │   │   or _global_forward_hooks or _global_forward_pre_hooks):                                                                                                           │
│ ❱ 1501 │   │   │   return forward_call(*args, **kwargs)                                                                                                                                  │
│   1502 │   │   # Do not call functions when jit is used                                                                                                                                  │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
TypeError: LoRAModule.forward() got an unexpected keyword argument 'scale'
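
For context, a minimal, illustrative sketch (not SD.Next's actual LoRA code) of why this TypeError appears: diffusers 0.21.x started passing a scale= keyword down into patched linear layers, so a monkey-patched forward() that does not declare that parameter fails exactly as in the trace above. The class names below are hypothetical stand-ins.

import torch
import torch.nn as nn

# Hypothetical stand-ins for a LoRA monkey-patch; names are illustrative only.
class OldLoRAModule(nn.Linear):
    def forward(self, x):                      # no `scale` parameter
        return super().forward(x)

class PatchedLoRAModule(nn.Linear):
    def forward(self, x, scale: float = 1.0):  # accepts (and here simply ignores) `scale`
        return super().forward(x)

x = torch.randn(1, 8)
PatchedLoRAModule(8, 8)(x, scale=0.7)          # works
OldLoRAModule(8, 8)(x, scale=0.7)              # raises the TypeError from the trace above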

Version Platform Description

Freshly installed dev branch on Linux (Runpod):

00:44:41-748381 INFO     Python 3.10.6 on Linux                                                                                                                                             
00:44:41-809469 INFO     Version: app=sd.next updated=2023-09-23 hash=940b4122 url=https://github.com/vladmandic/automatic.git/tree/dev                                                     
00:44:42-336257 INFO     Latest published version: 89ba8e3cf6f4e7697ffe0298b49bed7ea108b039 2023-09-20T12:39:56Z                                                                            
00:44:42-342012 INFO     Platform: arch=x86_64 cpu=x86_64 system=Linux release=5.15.0-82-generic python=3.10.6 

Relevant log output

No response

Backend

Diffusers

Model

SD-XL


vladmandic commented 1 year ago

You've upgraded diffusers to an unsupported version. Use the version specified in requirements.

mysticfall commented 1 year ago

You've upgraded diffusers to an unsupported version. Use the version specified in requirements.

I'm glad to learn how to fix the problem, but I'm confused. As I mentioned in my report, this was a fresh installation on a new RunPod instance.

I used a barebones template (runpod/pytorch) to create a new instance, nuked (i.e. rm -Rf automatic) the old installation on my network volume, and then just did git clone and ./webui.sh --share.

Could there be a version conflict in the current requirements.txt?

mysticfall commented 1 year ago

I rechecked my instance to see if I had a globally installed diffusers package somewhere, but found none. The log also seems to confirm that the loaded module was indeed from SD.Next's virtual environment (i.e. automatic/venv/lib/python3.10/site-packages/diffusers), which is version 0.21.2.

I looked at requirements.txt and it lists diffusers==0.21.2 as a dependency, so I don't see any version mismatch in my setup. Have I missed something obvious?
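
For anyone repeating this check, a quick way to confirm which diffusers module is actually loaded and its version (run with the SD.Next venv active):

import diffusers

print(diffusers.__file__)     # should point into automatic/venv/.../site-packages/diffusers
print(diffusers.__version__)  # 0.21.2 in this report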

vladmandic commented 1 year ago

diffusers==0.21.2 was in the main branch for less than a day and was then reverted because of multiple issues. It is still in the dev branch, but if you're using the dev branch you should not be reporting issues here. The main branch uses this: https://github.com/vladmandic/automatic/blob/89ba8e3cf6f4e7697ffe0298b49bed7ea108b039/requirements.txt#L49

UPDATE: I just checked your log and you're using the dev branch, so yes, that's expected. For any issues with dev, reach out on Discord first; otherwise, use the master branch.
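
A small sketch for checking this locally: it compares the installed diffusers version against the pin in requirements.txt. It assumes you run it from the automatic checkout directory, and the pinned version differs per branch.

import re
from importlib.metadata import version

installed = version("diffusers")
pinned = None
with open("requirements.txt", encoding="utf-8") as fh:
    for line in fh:
        match = re.match(r"diffusers==(\S+)", line.strip())
        if match:
            pinned = match.group(1)
            break
print(f"installed={installed} pinned={pinned} ok={installed == pinned}")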

mysticfall commented 1 year ago

UPDATE: I just checked your log and you're using the dev branch, so yes, that's expected. For any issues with dev, reach out on Discord first; otherwise, use the master branch.

I have no problem with running into issues while using the dev branch; I just didn't know the issue tracker was only for the master branch.

I'll check out the Discord server later. Thanks!

vladmandic commented 1 year ago

The dev branch is generally unstable and used mostly by testers who communicate on Discord. I don't mind others using the dev branch, but I really cannot support it on GitHub directly.