oobabooga / text-generation-webui

A Gradio web UI for Large Language Models.
GNU Affero General Public License v3.0

Automatic1111 / Stable Diffusion 1.9.x API change breaks compatibility #5993

Closed · nktice closed this 17 hours ago

nktice commented 1 month ago

Describe the bug

The Automatic1111 / Stable Diffusion 1.9.x series changed its API, removing calls used by current versions of Oobabooga and causing 404 errors...

Release info : https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases

Here is a request for them to change their system to allow the old calls: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15603

Here's a workaround folks can use until this issue is resolved; it pins their 1.8.0 version:

git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
git checkout bef51ae
git reset --hard
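
If you want to confirm whether your A1111 install is affected before downgrading, you can probe the endpoint the extension depends on. A minimal sketch (the helper names are hypothetical; the URL and endpoint are taken from the logs below) that builds the same kind of txt2img request the sd_api_pictures extension sends:

```python
import json
import urllib.request

API_URL = "http://127.0.0.1:7860"  # default A1111 address, as seen in the logs


def build_txt2img_request(prompt, steps=20):
    """Build the URL and a minimal payload for A1111's txt2img endpoint
    (hypothetical helper, not the extension's actual code)."""
    url = f"{API_URL}/sdapi/v1/txt2img"
    payload = {"prompt": prompt, "steps": steps}
    return url, payload


def probe(url, payload):
    """POST the payload. urlopen raises HTTPError on a 404 (endpoint gone
    in 1.9.x), much like requests' raise_for_status() in the traceback."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req).status


url, payload = build_txt2img_request("a lighthouse at dusk")
print(url)
```

A 404 from `probe()` against a fresh install reproduces the failure in the traceback below; against a 1.8.0 checkout it should return 200.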

Is there an existing issue for this?

Reproduction

Here's my install guide with full install instructions for those who want it: https://github.com/nktice/AMD-AI. A new Automatic1111 / Stable Diffusion install works on its own, but when it comes time for the usual calls from Oobabooga's TGW, it spits out errors...

Screenshot

No response

Logs

Output generated in 13.73 seconds (6.41 tokens/s, 88 tokens, context 3059, seed 2050555011)
Output generated in 7.48 seconds (11.76 tokens/s, 88 tokens, context 3059, seed 1973713489)
Prompting the image generator via the API on http://127.0.0.1:7860...
Output generated in 7.44 seconds (11.83 tokens/s, 88 tokens, context 3059, seed 480162400)
Prompting the image generator via the API on http://127.0.0.1:7860...
Traceback (most recent call last):
  File "/home/n/miniconda3/envs/textgen/lib/python3.11/site-packages/gradio/queueing.py", line 566, in process_events
    response = await route_utils.call_process_api(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/n/miniconda3/envs/textgen/lib/python3.11/site-packages/gradio/route_utils.py", line 261, in call_process_api
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/n/miniconda3/envs/textgen/lib/python3.11/site-packages/gradio/blocks.py", line 1786, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/n/miniconda3/envs/textgen/lib/python3.11/site-packages/gradio/blocks.py", line 1350, in call_function
    prediction = await utils.async_iteration(iterator)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/n/miniconda3/envs/textgen/lib/python3.11/site-packages/gradio/utils.py", line 583, in async_iteration
    return await iterator.__anext__()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/n/miniconda3/envs/textgen/lib/python3.11/site-packages/gradio/utils.py", line 576, in __anext__
    return await anyio.to_thread.run_sync(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/n/miniconda3/envs/textgen/lib/python3.11/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/n/miniconda3/envs/textgen/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "/home/n/miniconda3/envs/textgen/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/n/miniconda3/envs/textgen/lib/python3.11/site-packages/gradio/utils.py", line 559, in run_sync_iterator_async
    return next(iterator)
           ^^^^^^^^^^^^^^
  File "/home/n/miniconda3/envs/textgen/lib/python3.11/site-packages/gradio/utils.py", line 742, in gen_wrapper
    response = next(iterator)
               ^^^^^^^^^^^^^^
  File "/home/n/text-generation-webui/modules/chat.py", line 414, in generate_chat_reply_wrapper
    for i, history in enumerate(generate_chat_reply(text, state, regenerate, _continue, loading_message=True, for_ui=True)):
  File "/home/n/text-generation-webui/modules/chat.py", line 382, in generate_chat_reply
    for history in chatbot_wrapper(text, state, regenerate=regenerate, _continue=_continue, loading_message=loading_message, for_ui=for_ui):
  File "/home/n/text-generation-webui/modules/chat.py", line 350, in chatbot_wrapper
    output['visible'][-1][1] = apply_extensions('output', output['visible'][-1][1], state, is_chat=True)
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/n/text-generation-webui/modules/extensions.py", line 231, in apply_extensions
    return EXTENSION_MAP[typ](*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/n/text-generation-webui/modules/extensions.py", line 89, in _apply_string_extensions
    text = func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/n/text-generation-webui/extensions/sd_api_pictures/script.py", line 220, in output_modifier
    string = get_SD_pictures(string, state['character_menu']) + "\n" + text
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/n/text-generation-webui/extensions/sd_api_pictures/script.py", line 158, in get_SD_pictures
    response.raise_for_status()
  File "/home/n/miniconda3/envs/textgen/lib/python3.11/site-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http://127.0.0.1:7860/sdapi/v1/txt2img

System Info

AMD 5950x with dual AMD Radeon 7900 XTX GPUs... Running Ubuntu 23.10

Sythelux commented 1 month ago

They just split sampler_name and scheduler now. So you basically need to set "sampler_name": "DPM++ 2M" in the config and omit the "Karras" scheduler for now. In theory this will alter the images slightly, since whatever scheduler is set as the default will be used instead.
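
For anyone who would rather patch the request than downgrade, the split Sythelux describes can be handled client-side. A sketch (the helpers and the list of scheduler suffixes are illustrative assumptions, not the extension's actual code) that converts an old combined sampler name into the new separate fields:

```python
# Scheduler names that used to be appended to the sampler name
# (assumed list for illustration; check your A1111 install's options).
SCHEDULER_SUFFIXES = ("Karras", "Exponential", "SGM Uniform")


def split_sampler(combined):
    """Split an old-style combined name like 'DPM++ 2M Karras' into
    (sampler_name, scheduler). Names without a known suffix pass through."""
    for suffix in SCHEDULER_SUFFIXES:
        if combined.endswith(" " + suffix):
            return combined[: -len(suffix) - 1], suffix
    return combined, None


def adapt_payload(payload):
    """Rewrite a pre-1.9 txt2img payload for the split API
    (hypothetical helper)."""
    sampler, scheduler = split_sampler(payload.get("sampler_name", ""))
    payload["sampler_name"] = sampler
    if scheduler is not None:
        payload["scheduler"] = scheduler
    return payload


print(adapt_payload({"prompt": "test", "sampler_name": "DPM++ 2M Karras"}))
```

With this, `"DPM++ 2M Karras"` becomes `"sampler_name": "DPM++ 2M"` plus `"scheduler": "Karras"`, matching Sythelux's suggestion instead of dropping the scheduler entirely.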

oobabooga commented 6 days ago

Thanks for the info @Sythelux