oobabooga / text-generation-webui

A Gradio web UI for Large Language Models.
GNU Affero General Public License v3.0

"Recv failure: Connection reset by peer" when trying to run fresh oobabooga installation. #5711

Closed. Blabter closed this issue 1 month ago.

Blabter commented 3 months ago

Describe the bug

I get a "curl: (56) Recv failure: Connection reset by peer" error when trying to connect to the web UI.

If I connect from inside the Docker container instead, the web UI responds normally.
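For reference, this is what the failure looks like from a shell (the compose service name below is assumed from the "text-generation-webui_1" log prefix, and curl is assumed to be available inside the image):

# From the host, the connection is reset:
curl http://localhost:7860
# curl: (56) Recv failure: Connection reset by peer

# From inside the container, the UI responds:
docker compose exec text-generation-webui curl -s http://127.0.0.1:7860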

Is there an existing issue for this?

Reproduction

  1. Clone repository.
  2. ln -s docker/{nvidia/Dockerfile,nvidia/docker-compose.yml,.dockerignore} .
  3. cp docker/.env.example .env
  4. mkdir -p logs cache
  5. docker compose up --build

Screenshot

No response

Logs

text-generation-webui_1  | 22:56:28-270101 INFO     Starting Text generation web UI                        
text-generation-webui_1  | 22:56:28-273468 INFO     Loading the extension "gallery"                        
text-generation-webui_1  | 
text-generation-webui_1  | Running on local URL:  http://127.0.0.1:7860
text-generation-webui_1  |

System Info

OS: Ubuntu 20.04
oldmanjk commented 3 months ago

Same

INFO:httpx:HTTP Request: POST http://192.168.1.79:5000/v1/chat/completions "HTTP/1.1 200 OK"
Traceback (most recent call last):

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
    yield

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpx/_transports/default.py", line 113, in __iter__
    for part in self._httpcore_stream:

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 367, in __iter__
    raise exc from None

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 363, in __iter__
    for part in self._stream:

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpcore/_sync/http11.py", line 349, in __iter__
    raise exc

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpcore/_sync/http11.py", line 341, in __iter__
    for chunk in self._connection._receive_response_body(**kwargs):

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpcore/_sync/http11.py", line 210, in _receive_response_body
    event = self._receive_event(timeout=timeout)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpcore/_sync/http11.py", line 220, in _receive_event
    with map_exceptions({h11.RemoteProtocolError: RemoteProtocolError}):

  File "/home/j/miniconda3/lib/python3.12/contextlib.py", line 158, in __exit__
    self.gen.throw(value)

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc

httpcore.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):

  File "/home/j/miniconda3/bin/gpte", line 8, in <module>
    sys.exit(app())
             ^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/gpt_engineer/applications/cli/main.py", line 191, in main
    files_dict = agent.improve(files_dict, prompt)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/gpt_engineer/applications/cli/cli_agent.py", line 132, in improve
    files_dict = self.improve_fn(
                 ^^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/gpt_engineer/core/default/steps.py", line 172, in improve
    messages = ai.next(messages, step_name=curr_fn())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/gpt_engineer/core/ai.py", line 118, in next
    response = self.backoff_inference(messages)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/backoff/_sync.py", line 105, in retry
    ret = target(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/gpt_engineer/core/ai.py", line 162, in backoff_inference
    return self.llm.invoke(messages)  # type: ignore
           ^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 173, in invoke
    self.generate_prompt(

  File "/home/j/miniconda3/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 571, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 434, in generate
    raise e

  File "/home/j/miniconda3/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 424, in generate
    self._generate_with_cache(

  File "/home/j/miniconda3/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 608, in _generate_with_cache
    result = self._generate(
             ^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/langchain_openai/chat_models/base.py", line 455, in _generate
    return generate_from_stream(stream_iter)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 62, in generate_from_stream
    for chunk in stream:

  File "/home/j/miniconda3/lib/python3.12/site-packages/langchain_openai/chat_models/base.py", line 419, in _stream
    for chunk in self.client.create(messages=message_dicts, **params):

  File "/home/j/miniconda3/lib/python3.12/site-packages/openai/_streaming.py", line 46, in __iter__
    for item in self._iterator:

  File "/home/j/miniconda3/lib/python3.12/site-packages/openai/_streaming.py", line 61, in __stream__
    for sse in iterator:

  File "/home/j/miniconda3/lib/python3.12/site-packages/openai/_streaming.py", line 53, in _iter_events
    yield from self._decoder.iter(self.response.iter_lines())

  File "/home/j/miniconda3/lib/python3.12/site-packages/openai/_streaming.py", line 287, in iter
    for line in iterator:

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpx/_models.py", line 861, in iter_lines
    for text in self.iter_text():

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpx/_models.py", line 848, in iter_text
    for byte_content in self.iter_bytes():

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpx/_models.py", line 829, in iter_bytes
    for raw_bytes in self.iter_raw():

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpx/_models.py", line 883, in iter_raw
    for raw_stream_bytes in self.stream:

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpx/_client.py", line 126, in __iter__
    for chunk in self._stream:

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpx/_transports/default.py", line 112, in __iter__
    with map_httpcore_exceptions():

  File "/home/j/miniconda3/lib/python3.12/contextlib.py", line 158, in __exit__
    self.gen.throw(value)

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
    raise mapped_exc(message) from exc

httpx.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read)
zeroward commented 3 months ago

Digging into this: the container doesn't seem to use any of the command args specified in the .env file; it uses the args specified in the CMD_FLAGS.txt file instead. Throw your flags into that file and make sure it's in the current working directory.
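For example, a minimal CMD_FLAGS.txt that makes Gradio bind to 0.0.0.0 instead of 127.0.0.1, so the UI is reachable from outside the container (append whatever other flags you normally pass):

# CMD_FLAGS.txt, placed in the repository root
--listen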

oldmanjk commented 3 months ago

I have no idea what you're talking about. It's an obvious regression and needs to be rolled back. People need to start testing things before they push. This project has become so unreliable that I'm searching for an alternative.

GenUbu commented 3 months ago

> Digging into this: the container doesn't seem to use any of the command args specified in the .env file; it uses the args specified in the CMD_FLAGS.txt file instead. Throw your flags into that file and make sure it's in the current working directory.

Thank you. Adding --listen to CMD_FLAGS.txt solved the issue for me.

zeroward commented 3 months ago

> I have no idea what you're talking about. It's an obvious regression and needs to be rolled back. People need to start testing things before they push. This project has become so unreliable that I'm searching for an alternative.

When you're running via Docker, there's nothing in the CMD line of the Dockerfile to pull in the command flags from your .env file. However, the Dockerfile does ingest the CMD_FLAGS.txt file located at the top level of the repo. Just put your flags in there and you're golden.
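One caveat, assuming the flags are ingested when the image is built rather than at startup: after editing CMD_FLAGS.txt, rebuild so the change is picked up:

# Rebuild the image so the updated CMD_FLAGS.txt takes effect
docker compose up --build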

github-actions[bot] commented 1 month ago

This issue has been closed due to inactivity for 2 months. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment.