Closed: x90slide closed this issue 3 months ago
Hi! Thanks for your issue, we will deal with your issue as soon as possible.
@x90slide For me, there is an issue with this model, "CohereForAI/c4ai-command-r-plus". Try switching to another LLM; that did the trick for me.
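If you want to script the model switch rather than do it by hand, the selection logic is trivial. The sketch below is plain Python; the model names are illustrative, and in hugchat you would get the real list via `chatbot.get_available_llm_models()` and apply the chosen index with `chatbot.switch_llm(i)` (both shown in the hugchat README; verify against your installed version).

```python
# Pure-Python helper: pick the index of a fallback model, skipping a broken one.
# Model names below are illustrative, not fetched from HuggingChat.

def pick_fallback(model_names, broken):
    """Return the index of the first model whose name is not `broken`."""
    for i, name in enumerate(model_names):
        if name != broken:
            return i
    raise ValueError("no alternative model available")

models = [
    "CohereForAI/c4ai-command-r-plus",        # the model reported as failing
    "meta-llama/Meta-Llama-3-70B-Instruct",   # hypothetical alternatives
    "mistralai/Mixtral-8x7B-Instruct-v0.1",
]
print(pick_fallback(models, "CohereForAI/c4ai-command-r-plus"))  # → 1
```

You would then pass that index to `switch_llm()` before creating a new conversation.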
I had to write my own library to get this to work. IDK why this one isn't working but mine does.
What kind of changes have you made?
I wrote my own from scratch. I don't have it with me now, but it was pretty trivial, so you could probably do it yourself. My guess is that the difference has something to do with API changes.
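For anyone attempting the same, the core of such a client is parsing the chunked response, which HuggingChat streams as newline-delimited JSON events. A minimal sketch of that parsing step, using an in-memory stream instead of a live HTTP response; note the `type`/`token` event keys are an assumption about the wire format, not a documented contract:

```python
import io
import json

def parse_ndjson_stream(stream):
    """Yield one decoded JSON object per non-empty line of a byte stream.

    This mirrors what a hand-rolled HuggingChat client has to do with the
    chunked HTTP body; the event shape below is assumed, not guaranteed.
    """
    for raw_line in stream:
        line = raw_line.strip()
        if line:
            yield json.loads(line)

# Simulated response body standing in for a live chunked HTTP stream.
body = io.BytesIO(
    b'{"type": "stream", "token": "Hello"}\n'
    b'{"type": "stream", "token": " world"}\n'
    b'{"type": "finalAnswer", "text": "Hello world"}\n'
)

tokens = [e["token"] for e in parse_ndjson_stream(body) if e.get("type") == "stream"]
print("".join(tokens))  # → Hello world
```

With a real request you would pass `stream=True` to `requests` and iterate the response lines the same way.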
This issue was marked as stale because of inactivity.
This issue was closed because of inactivity.
Describe the bug
No response when using chat() or query(). It was working perfectly the other day, which leads me to suspect new headers or validation have been implemented, or perhaps a change to the API. New conversations are created; however, visiting the webpage on HuggingChat for conversations started with the API gives error 500. Normal usage of the GUI gives a normal response.
```
Traceback (most recent call last):
  File "C:\Users\admin\PycharmProjects\videoautomatation\.venv\Lib\site-packages\requests\models.py", line 820, in generate
    yield from self.raw.stream(chunk_size, decode_content=True)
  File "C:\Users\admin\PycharmProjects\videoautomatation\.venv\Lib\site-packages\urllib3\response.py", line 1040, in stream
    yield from self.read_chunked(amt, decode_content=decode_content)
  File "C:\Users\admin\PycharmProjects\videoautomatation\.venv\Lib\site-packages\urllib3\response.py", line 1184, in read_chunked
    self._update_chunk_length()
  File "C:\Users\admin\PycharmProjects\videoautomatation\.venv\Lib\site-packages\urllib3\response.py", line 1119, in _update_chunk_length
    raise ProtocolError("Response ended prematurely") from None
urllib3.exceptions.ProtocolError: Response ended prematurely

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\admin\PycharmProjects\videoautomatation\gptscript.py", line 71, in <module>
    script = generatescript("CN tower")
  File "C:\Users\admin\PycharmProjects\videoautomatation\gptscript.py", line 46, in generatescript
    print(output.wait_until_done())
  File "C:\Users\admin\PycharmProjects\videoautomatation\.venv\Lib\site-packages\hugchat\message.py", line 196, in wait_until_done
    self.next()
  File "C:\Users\admin\PycharmProjects\videoautomatation\.venv\Lib\site-packages\hugchat\message.py", line 151, in next
    raise self.error
  File "C:\Users\admin\PycharmProjects\videoautomatation\.venv\Lib\site-packages\hugchat\message.py", line 97, in next
    a: dict = next(self.g)
  File "C:\Users\admin\PycharmProjects\videoautomatation\.venv\Lib\site-packages\hugchat\hugchat.py", line 708, in _stream_query
    print(resp.text)
  File "C:\Users\admin\PycharmProjects\videoautomatation\.venv\Lib\site-packages\requests\models.py", line 926, in text
    if not self.content:
  File "C:\Users\admin\PycharmProjects\videoautomatation\.venv\Lib\site-packages\requests\models.py", line 902, in content
    self._content = b"".join(self.iter_content(CONTENT_CHUNK_SIZE)) or b""
  File "C:\Users\admin\PycharmProjects\videoautomatation\.venv\Lib\site-packages\requests\models.py", line 822, in generate
    raise ChunkedEncodingError(e)
requests.exceptions.ChunkedEncodingError: Response ended prematurely
```
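The final `ChunkedEncodingError: Response ended prematurely` means the server closed the connection mid-stream, which is consistent with the error 500 the web UI shows for the same conversations, so the root cause is server-side. The only client-side mitigation is to retry; a generic retry wrapper, sketched here with a stub in place of the real hugchat call:

```python
import time

try:
    from requests.exceptions import ChunkedEncodingError
except ImportError:  # keep the sketch runnable even without requests installed
    class ChunkedEncodingError(Exception):
        pass

def retry(fn, attempts=3, delay=0.0):
    """Call fn(), retrying on truncated-stream errors; re-raise after the last try."""
    for i in range(attempts):
        try:
            return fn()
        except ChunkedEncodingError:
            if i == attempts - 1:
                raise
            time.sleep(delay)

# Stub standing in for chatbot.chat(...).wait_until_done(): fails twice, then succeeds.
calls = {"n": 0}
def flaky_query():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ChunkedEncodingError("Response ended prematurely")
    return "a response"

print(retry(flaky_query))  # → a response
```

If the server consistently returns 500 for the conversation, retrying won't help and the issue has to be fixed upstream.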
To Reproduce
Use the library normally.
Expected behavior
A response from the HuggingChat LLM.
Additional context
What Operating System are you using? Windows
What Python version are you using? (Found using `python3 --version` or `python --version`) Python 3.6
What version of hugchat are you using? (Found using `pip3 show hugchat` or `pip show hugchat`) latest, 0.4.6