OpenBMB / ChatDev

Create Customized Software using Natural Language Idea (through LLM-powered Multi-Agent Collaboration)
https://arxiv.org/abs/2307.07924
Apache License 2.0

max_tokens_exceeded_by_camel #25

Closed wdy06 closed 11 months ago

wdy06 commented 11 months ago

Thanks for the nice project!

I wanted to run this project, but I got the error below while running run.py. Is there any way to avoid it?

Traceback (most recent call last):
  File "/home/jovyan/ghq/github.com/OpenBMB/ChatDev/run.py", line 111, in <module>
    chat_chain.execute_chain()
  File "/home/jovyan/ghq/github.com/OpenBMB/ChatDev/chatdev/chat_chain.py", line 164, in execute_chain
    self.execute_step(phase_item)
  File "/home/jovyan/ghq/github.com/OpenBMB/ChatDev/chatdev/chat_chain.py", line 153, in execute_step
    self.chat_env = compose_phase_instance.execute(self.chat_env)
  File "/home/jovyan/ghq/github.com/OpenBMB/ChatDev/chatdev/composed_phase.py", line 150, in execute
    chat_env = self.phases[phase].execute(chat_env,
  File "/home/jovyan/ghq/github.com/OpenBMB/ChatDev/chatdev/phase.py", line 294, in execute
    self.chatting(chat_env=chat_env,
  File "/home/jovyan/ghq/github.com/OpenBMB/ChatDev/chatdev/utils.py", line 77, in wrapper
    return func(*args, **kwargs)
  File "/home/jovyan/ghq/github.com/OpenBMB/ChatDev/chatdev/phase.py", line 137, in chatting
    if isinstance(assistant_response.msg, ChatMessage):
  File "/home/jovyan/ghq/github.com/OpenBMB/ChatDev/camel/agents/chat_agent.py", line 53, in msg
    raise RuntimeError("error in ChatAgentResponse, info:{}".format(str(self.info)))
RuntimeError: error in ChatAgentResponse, info:{'id': None, 'usage': None, 'termination_reasons': ['max_tokens_exceeded_by_camel'], 'num_tokens': 16398}
thinkwee commented 11 months ago

Thank you for your feedback. Could you please provide the log file (which is under the /WareHouse path)?

wdy06 commented 11 months ago

Thank you for your response. This is the log file: othello_v2_DefaultOrganization_20230908103617.log

thinkwee commented 11 months ago

The reason for this error is that the agents write long code, and in each review turn they see the current code plus all of its old versions, which makes the context exceed the 16384-token limit of GPT-3.5 Turbo.

I just made a small fix to the completion-token calculation, which may resolve the false token-exceeded problem (the context does not actually exceed the limit, but an exceeded error is reported) in multi-turn chatting. However, it does not solve your case, since in your log the token count really does exceed the limit. Here are some suggestions:

  1. Try again! I just repeated your task prompt and did not encounter the exceeded problem; the log is here: test_DefaultOrganization_20230908101138.log
  2. Use GPT-4-32k, which has a context of 32768 tokens. Money always solves the problem :)
  3. Lower the number of chatting turns in the review Phase (a config sketch follows this list).
  4. Disable self-improve. Self-improve can produce higher-quality prompts, but it often proposes too many requirements and makes the agents write long, complex code.
  5. In your task prompt, try expressions like "write a simple software" or "no more than 50 lines in each code file".
  6. We may consider using git diff messages in the review Phase, which would greatly reduce the tokens.
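
To illustrate suggestions 3 and 4, here is a minimal sketch of how one might lower the review cycle count and disable self-improve by editing a local copy of the chat-chain config. The file path and the key names ("chain", "phase", "phaseType", "cycleNum", "self_improve") are assumptions based on the default ChatChainConfig.json layout; check them against your local file before running.

```python
# Sketch only: adjust the review cycles and the self-improve flag in a config copy.
# The path and key names below are assumptions; verify them against your
# ChatChainConfig.json before use.
import json

config_path = "CompanyConfig/Default/ChatChainConfig.json"  # assumed default location

with open(config_path) as f:
    config = json.load(f)

# Suggestion 4: disable self-improve so the task prompt stays short.
config["self_improve"] = "False"

# Suggestion 3: lower the number of review cycles (the failing run looped 5 times).
for phase in config.get("chain", []):
    if phase.get("phase") == "CodeReview" and phase.get("phaseType") == "ComposedPhase":
        phase["cycleNum"] = 2

with open(config_path, "w") as f:
    json.dump(config, f, indent=2)
```
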
wdy06 commented 11 months ago

Thank you for your quick fix and helpful suggestions! I'll try again!

wdy06 commented 11 months ago

I got a new error while running. It seems to be caused by the previous modification.

Traceback (most recent call last):
  File "/cache/pypoetry/virtualenvs/chatdev-JP5cuhws-py3.10/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "/home/jovyan/ghq/github.com/OpenBMB/ChatDev/camel/utils.py", line 145, in wrapper
    return func(self, *args, **kwargs)
  File "/home/jovyan/ghq/github.com/OpenBMB/ChatDev/camel/agents/chat_agent.py", line 191, in step
    response = self.model_backend.run(messages=openai_messages)
  File "/home/jovyan/ghq/github.com/OpenBMB/ChatDev/camel/model_backend.py", line 69, in run
    response = openai.ChatCompletion.create(*args, **kwargs,
  File "/cache/pypoetry/virtualenvs/chatdev-JP5cuhws-py3.10/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/cache/pypoetry/virtualenvs/chatdev-JP5cuhws-py3.10/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/cache/pypoetry/virtualenvs/chatdev-JP5cuhws-py3.10/lib/python3.10/site-packages/openai/api_requestor.py", line 298, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/cache/pypoetry/virtualenvs/chatdev-JP5cuhws-py3.10/lib/python3.10/site-packages/openai/api_requestor.py", line 700, in _interpret_response
    self._interpret_response_line(
  File "/cache/pypoetry/virtualenvs/chatdev-JP5cuhws-py3.10/lib/python3.10/site-packages/openai/api_requestor.py", line 763, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: -145 is less than the minimum of 1 - 'max_tokens'
thinkwee commented 11 months ago

Could you please share the log? It seems that the num_max_completion_tokens is less than 0, which means the number of tokens sent to GPT still exceeds the limit.
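
For context on where the negative max_tokens comes from, here is a simplified illustration (not the actual camel/model_backend.py code) of the arithmetic. The prompt size of 16529 tokens is a hypothetical value chosen only to reproduce the -145 from the error message.

```python
# Simplified illustration, not ChatDev's actual code: the completion budget is
# computed as the model's context limit minus the prompt size, so once the prompt
# alone exceeds the limit the remainder goes negative and OpenAI rejects it.
MODEL_TOKEN_LIMIT = 16384      # gpt-3.5-turbo-16k context window
num_prompt_tokens = 16529      # hypothetical prompt size from the review phase

num_max_completion_tokens = MODEL_TOKEN_LIMIT - num_prompt_tokens
print(num_max_completion_tokens)  # -145, matching "-145 is less than the minimum of 1"

# Clamping the value would only hide the real issue: the prompt itself must shrink
# (fewer review turns, shorter code, or a larger-context model).
if num_max_completion_tokens < 1:
    raise ValueError("prompt already exceeds the model context; shorten it instead")
```
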

wdy06 commented 11 months ago

The log is here: othello_v2_DefaultOrganization_20230908135830.log

thinkwee commented 11 months ago

It is the same problem: the agents write too much code, and in the review Phase the context exceeds the limit. You can check how many tokens are sent to and received from the LLM in the log by looking for the OpenAI_Usage_Info messages. For example, the last token statistic in your log is:

[2023-08-09 14:05:29 INFO] [OpenAI_Usage_Info Receive]
prompt_tokens: 16248
completion_tokens: 10
total_tokens: 16258

In the next turn the chatting exceeds the limit of 16384. I also found that in your log the review loops 5 times; you can lower this number (the default is 3).
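
If you want to check a prompt against the limit before sending it, here is a small sketch using tiktoken. The message contents and the per-message overhead constants are illustrative approximations, not part of ChatDev.

```python
# Rough pre-flight token count for a list of OpenAI chat messages, using tiktoken.
# The +4 per message and +2 reply priming are approximations; exact accounting
# varies slightly by model version.
import tiktoken

def count_message_tokens(messages, model="gpt-3.5-turbo"):
    enc = tiktoken.encoding_for_model(model)
    num_tokens = 0
    for message in messages:
        num_tokens += 4  # approximate per-message overhead
        for value in message.values():
            num_tokens += len(enc.encode(value))
    return num_tokens + 2  # approximate reply priming

messages = [
    {"role": "system", "content": "You are a code reviewer."},
    {"role": "user", "content": "Review the following code..."},
]
print(count_message_tokens(messages))  # compare against the 16384-token limit
```
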

wdy06 commented 11 months ago

Thank you for your reply! I'll try various configs!