Open CCpt5 opened 4 weeks ago
btw, we have another fun extension that can work alongside this one when you use an LLM to generate images continuously: https://github.com/xlinx/sd-webui-decadetw-auto-messaging-realtime
One other thing, I think there may be a typo on the setup page for this line, "API-ModelName: LMStudio can be empty is fine select in LMStudio App; ollama should set like: ollama3.1 (cmd:ollama list)." The example there says "ollama should set like: ollama3.1" - but I think that should say "llama3.1". I wanted to mention that because if it is a typo, it could lead to confusion regarding what to put in the model field below. I used "llama3.1" as I have that model and that worked fine for the LLM text portion.
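For anyone else setting this up: the value for the model field should match a model name reported by `ollama list`. A purely illustrative fragment (the listed model and size are examples, not a real listing):

```
# `ollama list` prints installed models, roughly:
#   NAME              ID    SIZE    MODIFIED
#   llama3.1:latest   ...   4.7 GB  2 days ago
# Extension setting:
API-ModelName: llama3.1
```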
Fixed. thx a lot.
did u feel the SD image results are different (e.g., details you never thought of showing up) after using the LLM to prompt, compared to before? And plz share ur system prompt or model, and I will add it to the README.
TypeError: save_pil_to_file() got an unexpected keyword argument 'name'
After installing Forge I see the error too. It looks like a Gradio error; in the sd-webui folder, requirements_versions.txt pins the Gradio version.
AUTOMATIC1111 webui uses gradio==3.41.2; Forge webui uses gradio==4.41.0.
I have tried 4.41.1 and it still doesn't work. Version 4 supports webcam input now, so maybe u can try using the cam to input ur image... It feels strange, since 3.x to 4.x is a big update; if it's a bug, something may have gone missing. https://www.gradio.app/changelog https://github.com/OpenTalker/SadTalker/issues/430
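The `save_pil_to_file() got an unexpected keyword argument 'name'` error is the typical shape of a major-version break: a caller passing a keyword the newer function no longer accepts. A generic workaround pattern is to wrap the callable and drop the stale keyword — this is a sketch only, using a stand-in function rather than Gradio's real `save_pil_to_file`:

```python
import functools

def drop_kwargs(fn, *names):
    """Return fn wrapped so the listed keyword arguments are discarded."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        for name in names:
            kwargs.pop(name, None)  # silently drop unsupported keywords
        return fn(*args, **kwargs)
    return wrapper

# Stand-in for a newer API that no longer accepts `name=`:
def save_pil_to_file(pil_image, cache_dir=None):
    return f"saved:{pil_image}"

patched = drop_kwargs(save_pil_to_file, "name")
print(patched("img.png", name="legacy.png"))  # -> saved:img.png
```

If the break really is inside Gradio's helpers, the same pattern could be applied by monkey-patching the offending function before the extension runs, but pinning a Gradio version that both Forge and the extension accept in requirements_versions.txt is the cleaner fix.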
I'm not at my PC, but yeah, it does seem to be a compatibility issue with Gradio 4 (I attached a ChatGPT review of the error).
Perhaps it's something they need to work out. I really appreciate you taking a look and confirming it's not just my settings.
Cool, you asked ChatGPT about this? haha
If u have time, please share how you use it, and whether you noticed different results between LLM-generated prompts and your own.
I had the same error. Somehow it works again now. Sorry for bothering you.
Thank you for your efforts on this project! I'm excited to get it running properly.
The LLM text generation seems to work fine, but when I try to use the vision tab I get the error below. Once this occurs, the text tab stops working as well, failing with the same error in the console (until Forge is restarted).
I know Forge is going through a ton of code reworks right now so if this is due to that, or user error, please forgive me. I wanted to report this error in the event it is a bug that can be tweaked.
Thanks for any insight!!
Error Log
System Info
[sysinfo-2024-08-18-20-05.json](https://github.com/user-attachments/files/16651629/sysinfo-2024-08-18-20-05.json)