YSocialTwin / YClient

Y Social Client
GNU General Public License v3.0

Invalid 'max_tokens': integer below minimum value. Expected a value >= 1, but got -1 instead #6

Open sinking8 opened 1 week ago

sinking8 commented 1 week ago

I got this error when I ran the simulation tool:

Traceback (most recent call last):
  File "C:\Users\ashwi\OneDrive\Desktop\y_social\YClient\y_client.py", line 122, in <module>
    experiment.run_simulation()
  File "C:\Users\ashwi\OneDrive\Desktop\y_social\YClient\y_client\clients\client_base.py", line 266, in run_simulation
    g.select_action(
  File "C:\Users\ashwi\OneDrive\Desktop\y_social\YClient\y_client\classes\agents.py", line 1144, in select_action
    u2.initiate_chat(
  File "C:\Users\ashwi\miniconda3\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 1018, in initiate_chat
    self.send(msg2send, recipient, silent=silent)
  File "C:\Users\ashwi\miniconda3\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 655, in send
    recipient.receive(message, self, request_reply, silent)
  File "C:\Users\ashwi\miniconda3\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 818, in receive
    reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ashwi\miniconda3\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 1972, in generate_reply
    final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ashwi\miniconda3\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 1340, in generate_oai_reply
    extracted_response = self._generate_oai_reply_from_client(
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ashwi\miniconda3\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 1359, in _generate_oai_reply_from_client
    response = llm_client.create(
               ^^^^^^^^^^^^^^^^^^
  File "C:\Users\ashwi\miniconda3\Lib\site-packages\autogen\oai\client.py", line 697, in create
    response = client.create(params)
               ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ashwi\miniconda3\Lib\site-packages\autogen\oai\client.py", line 306, in create
    response = completions.create(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ashwi\miniconda3\Lib\site-packages\openai\_utils\_utils.py", line 275, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ashwi\miniconda3\Lib\site-packages\openai\resources\chat\completions.py", line 829, in create
    return self._post(
           ^^^^^^^^^^^
  File "C:\Users\ashwi\miniconda3\Lib\site-packages\openai\_base_client.py", line 1278, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ashwi\miniconda3\Lib\site-packages\openai\_base_client.py", line 955, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "C:\Users\ashwi\miniconda3\Lib\site-packages\openai\_base_client.py", line 1059, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid 'max_tokens': integer below minimum value. Expected a value >= 1, but got -1 instead.", 'type': 'invalid_request_error', 'param': 'max_tokens', 'code': 'integer_below_min_value'}}
GiulioRossetti commented 6 days ago

max_tokens = -1 is used to avoid restricting the length of generated text. I've never seen this error with self-hosted models.

sinking8 commented 6 days ago

I am using ChatGPT and I have installed pyautogen==0.2.31.

GiulioRossetti commented 6 days ago

OK, it seems that non-self-hosted models require this parameter to be greater than 0 (to bound the length of the generated text).

Update YClient to v1.0.0 (released today) and change the config.json accordingly.
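For readers hitting the same error, a minimal sketch of the required change: replace the non-positive max_tokens (the "unlimited" convention that some self-hosted backends accept) with a finite cap before calling a hosted API. The helper name `fix_max_tokens` and the default cap of 2048 are illustrative assumptions, not part of YClient; check your config.json for the actual key layout.

```python
import json

def fix_max_tokens(cfg: dict, cap: int = 2048) -> dict:
    """Replace a non-positive max_tokens with a finite cap.

    Hosted APIs such as OpenAI's reject max_tokens < 1 with a 400
    ('integer_below_min_value'), so the -1 "unlimited" convention
    only works with some self-hosted backends.
    """
    if cfg.get("max_tokens", 1) < 1:
        cfg["max_tokens"] = cap
    return cfg

# Example usage (assumes max_tokens lives at the top level of config.json;
# adjust the path if your config nests it under a model section):
# with open("config.json") as f:
#     cfg = fix_max_tokens(json.load(f))
# with open("config.json", "w") as f:
#     json.dump(cfg, f, indent=2)
```

Any positive value works as long as it is within the model's context window; smaller caps simply truncate longer generations.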