lks-ai / anynode

A Node for ComfyUI that does what you ask it to do

Error calling Chat RTX as localhost #25

Open · manassm opened this issue 3 weeks ago

manassm commented 3 weeks ago

I was trying to use Chat RTX as localhost, and I got the following errors:

[four screenshots of the error messages, dated 2024-06-07]

Each time I run it, the two error messages at the top alternate.

Thank you for your support.

lks-ai commented 3 weeks ago

I was reading the docs over at Chat RTX, and it doesn't seem to be compatible with the standard OpenAI chat completions endpoint.

You might need to use another LLM server like Ollama, vLLM, etc., or find a way to expose OpenAI-style chat endpoints for Chat RTX.
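For reference, here is a minimal sketch of the kind of OpenAI-style chat completions request that would need to work against the local server. It assumes Ollama is running with its OpenAI-compatible endpoint at http://localhost:11434/v1 and that a model named "llama3" (a placeholder, use whatever you have pulled) is available; adjust the base URL and model name for your setup.

```python
# Minimal sketch: an OpenAI-style chat completions call against a local server.
# Assumptions: Ollama is serving its OpenAI-compatible API at localhost:11434
# and the "llama3" model (placeholder name) has been pulled.
import requests

resp = requests.post(
    "http://localhost:11434/v1/chat/completions",
    json={
        "model": "llama3",
        "messages": [{"role": "user", "content": "Say hello"}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

If a request like this fails from outside ComfyUI as well, the server itself isn't exposing the endpoint, which appears to be the situation with Chat RTX.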

rexorp commented 3 weeks ago

I am getting a similar error pattern using Local LLM with Ollama, and I also tried LM Studio. I can see that the servers are talking to each other, but there is always an error about something not being defined, whether it's 'maths', 'sin', etc.; there is always an excuse.
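To illustrate what I mean, these look like Python NameErrors coming from the code the model generates rather than from the connection itself. A hypothetical minimal reproduction (the function and names here are made up, just to show the failure mode):

```python
# Hypothetical reproduction of the failure mode: the model emits code that uses
# names it never imported or defined, so executing it raises NameError.
generated = """
def compute(x):
    return sin(x) * 2  # 'sin' is never imported, so it is undefined here
"""

namespace = {}
exec(generated, namespace)

try:
    namespace["compute"](1.0)
except NameError as err:
    print(err)  # -> name 'sin' is not defined
```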

I was looking over the Ollama server logs and noticed this:

level=WARN source=server.go:230 msg="multimodal models don't support parallel requests yet"

lks-ai commented 3 weeks ago

What are your prompts?