geekan / MetaGPT

🌟 The Multi-Agent Framework: First AI Software Company, Towards Natural Language Programming
https://deepwisdom.ai/
MIT License
43.33k stars 5.15k forks

Available OpenAI models? gpt-3.5-turbo-16k has failed #303

Closed yhyu13 closed 11 months ago

yhyu13 commented 11 months ago

Hi,

Maybe a stupid question: since the API we are using is

openai.ChatCompletion.acreate

the chat models listed on this page https://platform.openai.com/docs/models/overview should be the models available to us at the moment, right?
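One way to answer this without guessing from the docs page is to ask the API itself which models the key can use. A minimal sketch, assuming the pre-1.0 `openai` SDK (the same one MetaGPT's `openai.ChatCompletion.acreate` call comes from); the filter and the sample listing below are illustrative, not from MetaGPT:

```python
def chat_model_ids(model_ids):
    """Keep only the chat-completion model ids (the gpt-* family)."""
    return sorted(m for m in model_ids if m.startswith("gpt-"))

# Against the live API (pre-1.0 SDK, OPENAI_API_KEY set) it would be roughly:
#   import openai
#   ids = [m["id"] for m in openai.Model.list()["data"]]
#   print(chat_model_ids(ids))

# Offline example with a hypothetical listing:
sample = ["gpt-4", "text-davinci-003", "gpt-3.5-turbo", "gpt-3.5-turbo-16k", "whisper-1"]
print(chat_model_ids(sample))  # ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-4']
```

Only models returned by that listing are usable with `ChatCompletion`, regardless of what the overview page shows.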

yhyu13 commented 11 months ago

I tried gpt-3.5-turbo-16k with the docker meta-gpt:v.0.3.1 cli_snake example. It fails, but gpt-4 succeeded.

It seems to me that gpt-3.5 failed a code syntax check? Am I getting it wrong?

Below is the command output:

```
2023-09-09 14:17:27.444 | INFO | metagpt.config:__init__:44 - Config loading done.
2023-09-09 14:17:29.222 | INFO | metagpt.software_company:invest:39 - Investment: $3.0.
2023-09-09 14:17:29.222 | INFO | metagpt.roles.role:_act:166 - Alice(Product Manager): ready to WritePRD
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/tenacity/_asyncio.py", line 50, in __call__
    result = await fn(*args, **kwargs)
  File "/app/metagpt/metagpt/actions/action.py", line 62, in _aask_v1
    instruct_content = output_class(**parsed_data)
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
```

Original Requirements:

The boss wants a command-line interface (CLI) snake game.

Product Goals:

yhyu13 commented 11 months ago

Another try, with gpt-3.5-turbo (4096-token context): MetaGPT fails by exceeding the token limit, even though I set MAX_TOKEN in the YAML to 1500, which is the default option:

```
2023-09-09 14:32:04.840 | INFO | metagpt.config:__init__:44 - Config loading done.
2023-09-09 14:32:06.661 | INFO | metagpt.software_company:invest:39 - Investment: $3.0.
2023-09-09 14:32:06.661 | INFO | metagpt.roles.role:_act:166 - Alice(Product Manager): ready to WritePRD
2023-09-09 14:32:37.106 | INFO | metagpt.roles.role:_act:166 - Bob(Architect): ready to WriteDesign
2023-09-09 14:33:10.803 | INFO | metagpt.utils.mermaid:mermaid_to_file:40 - Generating /app/metagpt/workspace/cli_snake_game/resources/competitive_analysis.pdf..
2023-09-09 14:33:11.884 | INFO | metagpt.utils.mermaid:mermaid_to_file:40 - Generating /app/metagpt/workspace/cli_snake_game/resources/competitive_analysis.svg..
2023-09-09 14:33:12.930 | INFO | metagpt.utils.mermaid:mermaid_to_file:40 - Generating /app/metagpt/workspace/cli_snake_game/resources/competitive_analysis.png..
2023-09-09 14:33:14.123 | INFO | metagpt.actions.design_api:_save_prd:110 - Saving PRD to /app/metagpt/workspace/cli_snake_game/docs/prd.md
2023-09-09 14:33:14.125 | INFO | metagpt.utils.mermaid:mermaid_to_file:40 - Generating /app/metagpt/workspace/cli_snake_game/resources/data_api_design.pdf..
2023-09-09 14:33:15.291 | INFO | metagpt.utils.mermaid:mermaid_to_file:40 - Generating /app/metagpt/workspace/cli_snake_game/resources/data_api_design.svg..
2023-09-09 14:33:16.437 | INFO | metagpt.utils.mermaid:mermaid_to_file:40 - Generating /app/metagpt/workspace/cli_snake_game/resources/data_api_design.png..
2023-09-09 14:33:17.760 | INFO | metagpt.utils.mermaid:mermaid_to_file:40 - Generating /app/metagpt/workspace/cli_snake_game/resources/seq_flow.pdf..
2023-09-09 14:33:18.946 | INFO | metagpt.utils.mermaid:mermaid_to_file:40 - Generating /app/metagpt/workspace/cli_snake_game/resources/seq_flow.svg..
2023-09-09 14:33:20.042 | INFO | metagpt.utils.mermaid:mermaid_to_file:40 - Generating /app/metagpt/workspace/cli_snake_game/resources/seq_flow.png..
2023-09-09 14:33:21.408 | INFO | metagpt.actions.design_api:_save_system_design:119 - Saving System Designs to /app/metagpt/workspace/cli_snake_game/docs/system_design.md
2023-09-09 14:33:21.411 | INFO | metagpt.roles.role:_act:166 - Eve(Project Manager): ready to WriteTasks
2023-09-09 14:33:45.424 | INFO | metagpt.actions.write_code:run:77 - Writing snake.py..
2023-09-09 14:33:59.786 | INFO | metagpt.actions.write_code:run:77 - Writing food.py..
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/tenacity/_asyncio.py", line 50, in __call__
    result = await fn(*args, **kwargs)
  File "/app/metagpt/metagpt/actions/write_code.py", line 71, in write_code
    code_rsp = await self._aask(prompt)
  File "/app/metagpt/metagpt/actions/action.py", line 47, in _aask
    return await self.llm.aask(prompt, system_msgs)
  File "/app/metagpt/metagpt/provider/base_gpt_api.py", line 44, in aask
    rsp = await self.acompletion_text(message, stream=True)
  File "/app/metagpt/metagpt/provider/openai_api.py", line 32, in wrapper
    return await f(*args, **kwargs)
  File "/app/metagpt/metagpt/provider/openai_api.py", line 218, in acompletion_text
    return await self._achat_completion_stream(messages)
  File "/app/metagpt/metagpt/provider/openai_api.py", line 151, in _achat_completion_stream
    response = await openai.ChatCompletion.acreate(
  File "/usr/local/lib/python3.9/site-packages/openai/api_resources/chat_completion.py", line 45, in acreate
    return await super().acreate(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 217, in acreate
    response, _, api_key = await requestor.arequest(
  File "/usr/local/lib/python3.9/site-packages/openai/api_requestor.py", line 382, in arequest
    resp, got_stream = await self._interpret_async_response(result, stream)
  File "/usr/local/lib/python3.9/site-packages/openai/api_requestor.py", line 726, in _interpret_async_response
    self._interpret_response_line(
  File "/usr/local/lib/python3.9/site-packages/openai/api_requestor.py", line 763, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, you requested 4308 tokens (2808 in the messages, 1500 in the completion). Please reduce the length of the messages or completion.
```

The above exception was the direct cause of the following exception:
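The arithmetic in that error is the whole story: the request asks for the prompt tokens plus the configured completion budget, and 2808 + 1500 = 4308 exceeds gpt-3.5-turbo's 4097-token window. A minimal sketch of the workaround (cap the completion budget to whatever the window has left; the function name is mine, not MetaGPT's):

```python
def completion_budget(context_limit, prompt_tokens, requested_max_tokens):
    """Largest max_tokens value that still fits the model's context window."""
    remaining = context_limit - prompt_tokens
    if remaining <= 0:
        raise ValueError("prompt alone exceeds the context window")
    return min(requested_max_tokens, remaining)

# Numbers straight from the error above:
print(2808 + 1500)                       # 4308, over the 4097 window
print(completion_budget(4097, 2808, 1500))  # 1289, a request that would fit
```

So a fixed MAX_TOKEN of 1500 can still blow the limit whenever the accumulated prompt grows past 2597 tokens, which is why switching to gpt-3.5-turbo-16k or gpt-4 (larger windows) avoids this particular failure.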

shenchucheng commented 11 months ago

Due to the uncertainty of the LLM's output, we cannot guarantee a successful outcome on every run. What we can say is that the success rate of GPT-4 is higher than that of GPT-3.5-turbo.

HuntZhaozq commented 10 months ago

```
pydantic.error_wrappers.ValidationError: 5 validation errors for prd
Requirement Pool -> 0
  value is not a valid tuple (type=type_error.tuple)
Requirement Pool -> 1
  value is not a valid tuple (type=type_error.tuple)
Requirement Pool -> 2
  value is not a valid tuple (type=type_error.tuple)
Requirement Pool -> 3
  value is not a valid tuple (type=type_error.tuple)
Anything UNCLEAR
  field required (type=value_error.missing)
```

Hello, I also hit this problem when replacing GPT-4 with a local LLM. Have you solved it? How should I handle this error? Thank you!
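For context on what that ValidationError means: MetaGPT parses the model's markdown reply into a dict and validates it with pydantic (v1), so the errors say the "Requirement Pool" items were not (priority, requirement) pairs and the "Anything UNCLEAR" section was missing, i.e. the local LLM did not follow the output format. A stdlib sketch of the same two checks (the function and sample data are illustrative, not MetaGPT code):

```python
def validate_prd(parsed):
    """Mimic the two pydantic v1 checks that fail in the error above."""
    errors = []
    for i, item in enumerate(parsed.get("Requirement Pool", [])):
        # pydantic's Tuple[str, str] accepts a two-element tuple/list;
        # a bare string (what weaker models tend to emit) is rejected.
        if not isinstance(item, (tuple, list)) or len(item) != 2:
            errors.append(f"Requirement Pool -> {i}: value is not a valid tuple")
    if "Anything UNCLEAR" not in parsed:
        errors.append("Anything UNCLEAR: field required")
    return errors

bad = {"Requirement Pool": ["P0: draw the board", "P1: move the snake"]}
good = {"Requirement Pool": [("P0", "draw the board")], "Anything UNCLEAR": "None"}
print(validate_prd(bad))   # two tuple errors plus the missing field
print(validate_prd(good))  # []
```

In practice the fixes people try are prompting the local model harder to emit the exact section format, or retrying the action until the output parses; the model itself, not the validation, is what failed here.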