OpenBMB / ChatDev

Create Customized Software using Natural Language Idea (through LLM-powered Multi-Agent Collaboration)
https://arxiv.org/abs/2307.07924
Apache License 2.0

Error max tokens is too large #380

Closed · TheNha closed this issue 4 months ago

TheNha commented 4 months ago
Traceback (most recent call last):
  File "/home/coreai/anaconda3/envs/softdev-llm/lib/python3.11/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^
  File "/home/coreai/nhant/GIT_CODE/LLMs/SoftwareDevelopment/ChatDev/camel/utils.py", line 154, in wrapper
    return func(self, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/coreai/nhant/GIT_CODE/LLMs/SoftwareDevelopment/ChatDev/camel/agents/chat_agent.py", line 240, in step
    response = self.model_backend.run(messages=openai_messages)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/coreai/nhant/GIT_CODE/LLMs/SoftwareDevelopment/ChatDev/camel/model_backend.py", line 101, in run
    response = client.chat.completions.create(*args, **kwargs, model=self.model_type.value,
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/coreai/anaconda3/envs/softdev-llm/lib/python3.11/site-packages/openai/_utils/_utils.py", line 277, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/coreai/anaconda3/envs/softdev-llm/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 579, in create
    return self._post(
           ^^^^^^^^^^^
  File "/home/coreai/anaconda3/envs/softdev-llm/lib/python3.11/site-packages/openai/_base_client.py", line 1240, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/coreai/anaconda3/envs/softdev-llm/lib/python3.11/site-packages/openai/_base_client.py", line 921, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/home/coreai/anaconda3/envs/softdev-llm/lib/python3.11/site-packages/openai/_base_client.py", line 1020, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': 'max_tokens is too large: 99549. This model supports at most 4096 completion tokens, whereas you provided 99549.', 'type': None, 'param': 'max_tokens', 'code': None}}

I got this error. I used the gpt-4-turbo model; num_prompt_tokens is 451 and num_max_completion_tokens is 99549. I see that self.model_config_dict['max_tokens'] = num_max_completion_tokens.
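
For context, the oversized value lines up with ChatDev budgeting the completion as (token limit - prompt tokens): 451 + 99549 = 100000. A minimal sketch of that arithmetic, assuming a 100,000-token budget is configured for gpt-4-turbo (names are illustrative, not ChatDev's exact code):

    token_limit = 100_000                  # assumed budget configured for gpt-4-turbo
    num_prompt_tokens = 451                # from the error message above
    num_max_completion_tokens = token_limit - num_prompt_tokens   # -> 99549
    # Sending max_tokens=99549 triggers the 400 error, because the served model
    # caps completion tokens at 4096 regardless of its context window.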

thinkwee commented 4 months ago

Please use the official version instead of the preview version. gpt-4-turbo-preview limits the output to 4096 tokens.
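
One way to verify which snapshot your API provider actually serves for the gpt-4-turbo alias is to inspect the model field of a response. A minimal sketch with the official openai Python client (whether the alias resolves to a preview build is the assumption being tested):

    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{"role": "user", "content": "ping"}],
        max_tokens=1,
    )
    # Prints the resolved snapshot; a preview build here would explain the 4096 cap.
    print(resp.model)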

TheNha commented 4 months ago

I'm using the latest version of the code, and the default config is gpt-4-turbo.

How can I fix this error? Thank you. @thinkwee

TheNha commented 4 months ago

If I comment out the max_tokens configuration here (see the screenshot and sketch below), it works. I don't understand why this configuration was added. How does it still run for everyone else?

[screenshot: the max_tokens configuration in camel/model_backend.py]
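
Roughly, commenting that line out means model_config_dict no longer carries max_tokens, so the request omits it and the API falls back to its own default completion limit. A sketch based on the assignment quoted in the first comment (the actual code may differ):

    # Before (as quoted in the first comment):
    # self.model_config_dict['max_tokens'] = num_max_completion_tokens
    #
    # After commenting it out, client.chat.completions.create(...) is called
    # without max_tokens, so the API applies its own default instead of 99549.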

thinkwee commented 4 months ago

Hi TheNha,

  1. You can manually set num_max_token = 4096 at line 96, or just comment it out as you have tried.
  2. We added the max_tokens parameter since older versions of the OpenAI API required it to be explicitly defined.
  3. The gpt-4-turbo you are using is not the real gpt-4-turbo but gpt-4-turbo-preview, which is why it returns "This model supports at most 4096 completion tokens". You can check this with your OpenAI API provider.
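
For option 1, the change is a one-line cap; a hedged sketch (variable name taken from the reply above, the surrounding code in camel/model_backend.py may differ):

    # Cap the completion budget at the model's hard completion limit
    # instead of (token limit - prompt tokens):
    num_max_token = 4096
    # or, more defensively, clamp whatever was computed:
    # num_max_token = min(num_max_token, 4096)
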
TheNha commented 4 months ago

Thank you @thinkwee. I commented it out and it works for me.