zou-group / textgrad

TextGrad: Automatic "Differentiation" via Text -- using large language models to backpropagate textual gradients.
http://textgrad.com/
MIT License

Connection error when using lm-studio #78

Closed HelloWorldLTY closed 1 week ago

HelloWorldLTY commented 1 month ago

Hi, I tried to use a local LLM with lm-studio, but it returned a connection error. My sample code is modified from

https://github.com/zou-group/textgrad/blob/main/textgrad/engine/local_model_openai_api.py

First, it seems that the example code does not work as written, because the argument to ChatExternalClient's generate method is named content, not prompt.

For the code:

        from openai import OpenAI
        from textgrad.engine.local_model_openai_api import ChatExternalClient

        client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
        engine = ChatExternalClient(client=client, model_string="your-model-name")
        print(engine.generate(max_tokens=40, prompt="What is the meaning of life?"))

I met such an error:

st_to, options, remaining_retries, stream, stream_cls)
   1011     log.debug("Raising connection error")
-> 1012     raise APIConnectionError(request=request) from err
   1014 log.debug(
   1015     'HTTP Response: %s %s "%i %s" %s',
   1016     request.method,
   (...)
   1020     response.headers,
   1021 )

APIConnectionError: Connection error.

Does this mean I cannot create a client from my local machine? Thanks.
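Since the failure is an APIConnectionError (raised before any HTTP response arrives), one quick sanity check is whether anything is listening on that port at all. A minimal sketch using only the standard library (the host and port are the ones from the snippet above; adjust to your setup):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Prints False unless an lm-studio server is actually listening on port 1234.
print(port_open("localhost", 1234))
```

If this prints False, the server is not running or is bound to a different host/port, and no openai-client configuration will fix the connection error.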

kgourgou commented 1 month ago

Hi,

It looks to me like there is an issue with your local server configuration.

Can you check that

response = client.completions.create(
  prompt="Write a tagline for an ice cream shop."
)

returns a response?

HelloWorldLTY commented 1 month ago

Hi, I get an error if I try that:

  File "<stdin>", line 1, in <module>
  File "/home/tl688/.conda/envs/evo/lib/python3.11/site-packages/openai/_utils/_utils.py", line 276, in wrapper
    raise TypeError(msg)
TypeError: Missing required arguments; Expected either ('model' and 'prompt') or ('model', 'prompt' and 'stream') arguments to be given
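This TypeError is raised client-side by the openai library's argument validation, before any request is sent, so it is separate from the connection problem. A toy sketch of the same "required argument combinations" pattern (hypothetical illustration, not the library's actual implementation):

```python
def create_completion(**kwargs):
    """Toy stand-in for an endpoint that requires either
    ('model', 'prompt') or ('model', 'prompt', 'stream')."""
    required_variants = [("model", "prompt"), ("model", "prompt", "stream")]
    if not any(all(key in kwargs for key in variant) for variant in required_variants):
        raise TypeError(
            "Missing required arguments; expected either ('model' and 'prompt') "
            "or ('model', 'prompt' and 'stream') arguments to be given"
        )
    return kwargs

# Passing both model and prompt satisfies the check.
create_completion(model="some-model", prompt="Write a tagline.")
```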

I think I cannot call this method without configuring the client with a model. Should I include a model argument in the client.completions call?

kgourgou commented 1 month ago

Oh yeah, apologies, you can of course add the model parameter. We just want to see whether the local server replies to requests.

HelloWorldLTY commented 1 month ago

The error is the same:

File ~/.conda/envs/evo/lib/python3.11/site-packages/openai/_base_client.py:1012, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
   1002         return self._retry_request(
   1003             input_options,
   1004             cast_to,
   (...)
   1008             response_headers=None,
   1009         )
   1011     log.debug("Raising connection error")
-> 1012     raise APIConnectionError(request=request) from err
   1014 log.debug(
   1015     'HTTP Response: %s %s "%i %s" %s',
   1016     request.method,
   (...)
   1020     response.headers,
   1021 )
   1022 log.debug("request_id: %s", response.headers.get("x-request-id"))

APIConnectionError: Connection error.

kgourgou commented 1 month ago

Interesting! When I start a local server with lm-studio, this is what I get:

[Screenshot: lm-studio local server page, 2024-07-18 21:50]

Could you check that the base_url and api_key in the instructions under "chat (python)" match those you used with the openai library? For example, for me:

# Example: reuse your existing OpenAI setup
from openai import OpenAI

# Point to the local server
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

completion = client.chat.completions.create(
  model="mlabonne/NeuralBeagle14-7B-GGUF",
  messages=[
    {"role": "system", "content": "Always answer in rhymes."},
    {"role": "user", "content": "Introduce yourself."}
  ],
  temperature=0.7,
)

print(completion.choices[0].message)

Also the server logs should mention whether the server is operating as expected or there is a problem with the port, etc.

HelloWorldLTY commented 1 month ago

Hi, thanks for your quick answer! My system does not have a graphical environment, so I cannot access the website you shared. Do you have any idea about using lm-studio on a non-graphical platform? Thanks. I may also post this question to the lm-studio or LocalAI people for their help.

kgourgou commented 1 month ago

> Do you have any idea about using lm studio in non-graphic platform?

Maybe you can use lms: https://lmstudio.ai/blog/lms
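For a headless setup, the lms CLI from that post can run the same local server without the GUI. A sketch, assuming lms is installed and on your PATH (command names are taken from the announcement post and may differ between versions; the model name is only a placeholder):

```shell
# List models already downloaded into LM Studio
lms ls

# Load a downloaded model into memory
lms load <model-name>

# Start the OpenAI-compatible local server (default port 1234)
lms server start
```

After this, the OpenAI client pointed at http://localhost:1234/v1 should be able to reach the server.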

HelloWorldLTY commented 1 month ago

Thanks. I will have a try!