xlinx / sd-webui-decadetw-auto-prompt-llm


[Forge] - Save_pil_to_file() got an unexpected keyword argument 'name' #7

Open · CCpt5 opened 4 weeks ago

CCpt5 commented 4 weeks ago

Thank you for your efforts on this project! I'm excited to get it running properly.

The LLM text generation seems to work fine, but when I try to use the vision tab I get the error below. Once this occurs, the text tab stops working as well, with the same error showing up in the console (until Forge is restarted).

I know Forge is going through a ton of code reworks right now, so if this is due to that, or to user error, please forgive me. I wanted to report it in case it's a bug that can be fixed.

One other thing: I think there may be a typo on the setup page in this line: "API-ModelName: LMStudio can be empty is fine select in LMStudio App; ollama should set like: ollama3.1 (cmd:ollama list)." The example says "ollama should set like: ollama3.1", but I think that should say "llama3.1". I wanted to mention it because, if it is a typo, it could cause confusion about what to put in the model field. I used "llama3.1", since I have that model, and it worked fine for the LLM text portion.
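For reference, here is a minimal sketch of how that model field is typically consumed, assuming ollama's OpenAI-compatible endpoint on its default port (the URL and prompt are illustrative, not taken from this extension's code):

```python
# Illustrative only: the model name must match an entry from `ollama list`,
# e.g. "llama3.1", not "ollama3.1". The endpoint assumes ollama's default
# OpenAI-compatible server on localhost:11434.
import requests

resp = requests.post(
    "http://localhost:11434/v1/chat/completions",
    json={
        "model": "llama3.1",  # a name reported by `ollama list`
        "messages": [
            {"role": "user", "content": "Describe a serene lake at dawn."}
        ],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```

Presumably, passing "ollama3.1" would just get a model-not-found error back from ollama.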

Thanks for any insight!!

Error Log

Traceback (most recent call last):
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\queueing.py", line 536, in process_events
    response = await route_utils.call_process_api(
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\route_utils.py", line 285, in call_process_api
    output = await app.get_blocks().process_api(
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\blocks.py", line 1919, in process_api
    inputs = await self.preprocess_data(
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\blocks.py", line 1650, in preprocess_data
    processed_input.append(block.preprocess(inputs_cached))
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\components\image.py", line 197, in preprocess
    return image_utils.format_image(
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\image_utils.py", line 30, in format_image
    path = processing_utils.save_pil_to_cache(
TypeError: save_pil_to_file() got an unexpected keyword argument 'name'
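For context, this is the generic Python failure when a caller passes a keyword argument that the callee's signature does not declare; the names below are made up to mirror the traceback:

```python
# Made-up minimal reproduction of the failure mode above: a replacement
# function with an older signature receives a keyword added by a newer caller.
def save_pil_to_file(pil_image, dir=None):
    return "/tmp/image.png"

save_pil_to_file(None, dir="/tmp", name="image")
# TypeError: save_pil_to_file() got an unexpected keyword argument 'name'
```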
System Info

[sysinfo-2024-08-18-20-05.json](https://github.com/user-attachments/files/16651629/sysinfo-2024-08-18-20-05.json)

xlinx commented 3 weeks ago
  1. Okay, let me install Forge to test it. By the way, FLUX is hot right now; I will test the vision tab on Forge.
  2. OK, you mean that ollama and llama are different, right?

By the way, we have another fun extension that can work alongside this one when you have the LLM generating images nonstop: https://github.com/xlinx/sd-webui-decadetw-auto-messaging-realtime

xlinx commented 3 weeks ago

> The example there says "ollama should set like: ollama3.1" - but I think that should say "llama3.1".

Fixed, thanks a lot.

Did you notice the SD image results are different after using the LLM to build prompts (e.g., details you'd never have thought of showing up)? Please also share your system prompt or model so I can add it to the README.

> TypeError: save_pil_to_file() got an unexpected keyword argument 'name'

After installing Forge I see the error too. It looks like a Gradio error; in the sd-webui folder, requirements_versions.txt pins the Gradio version.

automatic1111-webui uses gradio==3.41.2; forge-webui uses gradio==4.41.0.
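A quick way to confirm which Gradio the running webui actually imports, from its venv's Python:

```python
# Print the Gradio version the webui environment resolves; per this thread,
# A1111 pins 3.41.2 while Forge pins 4.41.0 in requirements_versions.txt.
import gradio
print(gradio.__version__)
```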

I tried 4.41.1 and it still doesn't work. Version 4 supports webcam input now, so maybe you can try using the webcam to input your image. The 3.x to 4.x jump is a big update, and if this is a bug, something is probably missing in the migration. https://www.gradio.app/changelog https://github.com/OpenTalker/SadTalker/issues/430
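If the cause is that the webui replaces Gradio's internal image-caching helper with an older-signature save_pil_to_file, a tolerant wrapper might work around it. This is only a sketch under that assumption (untested; the function name is taken from the traceback), not a confirmed fix:

```python
# Untested sketch, assuming the webui has replaced Gradio's
# processing_utils.save_pil_to_cache with an older save_pil_to_file helper
# whose signature predates the `name` keyword that Gradio 4.x now passes.
# The wrapper drops keywords the installed helper does not declare.
import inspect
import gradio.processing_utils as processing_utils

_installed = processing_utils.save_pil_to_cache  # whatever got patched in
_accepted = set(inspect.signature(_installed).parameters)

def _tolerant_save_pil_to_cache(*args, **kwargs):
    # Discard unsupported keywords (e.g. name=...) instead of raising TypeError.
    kwargs = {k: v for k, v in kwargs.items() if k in _accepted}
    return _installed(*args, **kwargs)

processing_utils.save_pil_to_cache = _tolerant_save_pil_to_cache
```

The real fix would be updating the patched helper for Gradio 4's signature; a shim like this only papers over the mismatch.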

(Screenshot 2024-08-20 164445)

CCpt5 commented 3 weeks ago

I'm not at my PC, but yeah, it does seem to be a compatibility issue with Gradio 4 (I attached a ChatGPT review of the error).

Perhaps it's something they need to work out. I really appreciate you taking a look and confirming it's not just my settings. (Screenshot_20240820-131014)

xlinx commented 3 weeks ago

Cool, you asked ChatGPT this question? Haha.

If you have time, please share how you use the extension, and did you notice a difference between LLM-generated prompts and your own?

tazztone commented 2 weeks ago

I had the same error, but somehow it works again now. Sorry for bothering you.