lshqqytiger / stable-diffusion-webui-amdgpu

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: Clip token limit bypass broken with onnx #329

Open OllieRG525 opened 7 months ago

OllieRG525 commented 7 months ago

Is there an existing issue for this?

What happened?

I have been trying to generate images from prompts with more than 75 tokens, but an error appears in the console and anything past the first 75 tokens has no influence on the image. Everything else works normally, and the token counter shows --/-- instead of the usual 75 limit. BREAK also does nothing in the prompt.

Steps to reproduce the problem

Try to generate an image with any model with a prompt over 75 tokens

What should have happened?

Automatic1111 should split the prompt into multiple chunks and feed them into CLIP separately, so prompts are not limited to 75 tokens.
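For context, this is roughly the chunking scheme upstream webui uses: CLIP's text encoder accepts exactly 77 token positions, two of which are the start/end markers, leaving 75 usable tokens per chunk; long prompts are split into multiple 77-token sequences that are encoded separately and the embeddings concatenated. A minimal sketch of the splitting step, assuming simulated token IDs (the function name is hypothetical; 49406/49407 are CLIP's standard start/end-of-text IDs):

```python
BOS, EOS = 49406, 49407   # CLIP start-of-text / end-of-text token IDs
CHUNK_SIZE = 75           # usable tokens per chunk (77 minus BOS and EOS)

def split_into_chunks(token_ids):
    """Split a long token sequence into 77-entry chunks, each wrapped
    in BOS/EOS and padded with EOS, so every chunk fits CLIP's
    77-token text encoder."""
    chunks = []
    for i in range(0, len(token_ids), CHUNK_SIZE):
        body = token_ids[i:i + CHUNK_SIZE]
        # pad the final (short) chunk with EOS up to 77 entries
        padded = [BOS] + body + [EOS] * (CHUNK_SIZE - len(body) + 1)
        chunks.append(padded)
    return chunks

# 100 dummy token IDs -> two chunks of exactly 77 entries each
chunks = split_into_chunks(list(range(100)))
```

The bug reported here is that the ONNX/Olive pipeline apparently skips this splitting step and truncates at 77 tokens instead, as the console warning below shows.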

Sysinfo

sysinfo.txt

What browsers do you use to access the UI?

Brave, Microsoft Edge

Console logs

(automatic1111_olive) C:\Users\#####\stable-diffusion-webui-directml>webui.bat --onnx --backend directml
venv "C:\Users\#####\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 | packaged by conda-forge | (main, Oct 24 2022, 16:02:16) [MSC v.1916 64 bit (AMD64)]
Version: 1.6.1
Commit hash: 03eec1791be011e087985ae93c1f66315d5a250e
Installing onnxruntime
Installing onnxruntime-directml
Installing Olive
Launching Web UI with arguments: --onnx --backend directml
no module 'xformers'. Processing without...
No SDP backend available, likely because you are running in pytorch versions < 2.0. In fact, you are using PyTorch 1.13.1+cpu. You might want to consider upgrading.
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
==============================================================================
You are running torch 1.13.1+cpu.
The program is tested to work with torch 2.0.0.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.

Use --skip-version-check commandline argument to disable this check.
==============================================================================
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
Model stable-diffusion-v1-5 loaded.
Applying attention optimization: InvokeAI... done.
C:\Users\Ollie\stable-diffusion-webui-directml\modules\ui.py:1665: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  with gr.Row().style(equal_height=False):
C:\Users\Ollie\stable-diffusion-webui-directml\modules\ui.py:1787: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  with gr.Row().style(equal_height=False):
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 7.3s (prepare environment: 3.0s, import torch: 2.0s, import gradio: 1.0s, setup paths: 1.3s, initialize shared: 0.8s, other imports: 0.4s, load scripts: 0.8s, create ui: 0.6s, gradio launch: 0.1s).
1001
1001
1001

The following part of your input was truncated because CLIP can only handle sequences up to 77 tokens: ['extra tokens to cause issue,']
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████| 29/29 [00:11<00:00,  2.54it/s]

Additional information

No response

Gonzalo1987 commented 6 months ago

Same problem here :( 77 tokens max

Cybernatus commented 4 months ago

I'm unfortunately facing the same issue, but with no solution so far.

Epyon01P commented 4 months ago

Can confirm. 77 tokens max, and BREAK or AND has no effect.

ElSchulzoML commented 3 months ago

If you don't mind, I'd like to reference https://github.com/ssube/onnx-web, which hopefully provides some orientation on this topic.