Significant-Gravitas / AutoGPT

AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
https://agpt.co
MIT License

Maximum context length exceeded after `google_search` #2871

Closed. xixixi2000 closed this issue 1 year ago.

xixixi2000 commented 1 year ago

⚠️ Search for existing issues first ⚠️

Which Operating System are you using?

Linux

Which version of Auto-GPT are you using?

Latest Release

GPT-3 or GPT-4?

GPT-3.5

Steps to reproduce 🕹

After Auto-GPT makes 2 or 3 repeated queries, the individual results are not that large and split_text is never called, yet the error still occurs. Does Auto-GPT concatenate the 2-3 results together?

THOUGHTS: I think we should start by searching for some popular tourist destinations in Xinjiang and then plan our itinerary accordingly. We can also look for some car rental services in the area.
REASONING: By searching for popular tourist destinations, we can get an idea of what places are worth visiting and plan our itinerary accordingly. Additionally, by looking for car rental services, we can determine the feasibility of renting a car for the trip.
PLAN:

Traceback (most recent call last):
  File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/root/Auto-GPT/autogpt/__main__.py", line 5, in <module>
    autogpt.cli.main()
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1635, in invoke
    rv = super().invoke(ctx)
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/click/decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/root/Auto-GPT/autogpt/cli.py", line 151, in main
    agent.start_interaction_loop()
  File "/root/Auto-GPT/autogpt/agent/agent.py", line 75, in start_interaction_loop
    assistant_reply = chat_with_ai(
  File "/root/Auto-GPT/autogpt/chat.py", line 85, in chat_with_ai
    else permanent_memory.get_relevant(str(full_message_history[-9:]), 10)
  File "/root/Auto-GPT/autogpt/memory/local.py", line 124, in get_relevant
    embedding = create_embedding_with_ada(text)
  File "/root/Auto-GPT/autogpt/llm_utils.py", line 155, in create_embedding_with_ada
    return openai.Embedding.create(
  File "/usr/local/lib/python3.9/dist-packages/openai/api_resources/embedding.py", line 33, in create
    response = super().create(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/usr/local/lib/python3.9/dist-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/usr/local/lib/python3.9/dist-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/usr/local/lib/python3.9/dist-packages/openai/api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 9564 tokens (9564 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
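The traceback shows the overflow happening in the memory lookup, not the chat call: `permanent_memory.get_relevant(str(full_message_history[-9:]), 10)` stringifies the last nine messages and embeds them in a single request, and the embedding model rejects anything over 8191 tokens. A minimal workaround, purely a sketch and not the project's actual fix, is to clip the text to the model's token budget before embedding. Since an exact tokenizer may not be available here, this sketch approximates roughly 4 characters per token, a common heuristic for English text:

```python
# Sketch of a pre-embedding guard (hypothetical helper, not from the repo).
# The 8191 limit comes straight from the error message; the 4-chars-per-token
# ratio is a rough heuristic, not an exact tokenizer.
MAX_EMBED_TOKENS = 8191
CHARS_PER_TOKEN = 4

def truncate_for_embedding(text: str, max_tokens: int = MAX_EMBED_TOKENS) -> str:
    """Clip text to an approximate token budget before calling the embedding API."""
    budget = max_tokens * CHARS_PER_TOKEN
    return text if len(text) <= budget else text[:budget]
```

In `create_embedding_with_ada`, the input could be passed through such a guard so that oversized search results degrade gracefully instead of raising InvalidRequestError; a real fix would use an exact tokenizer such as tiktoken rather than the character heuristic.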

Current behavior 😯

No response

Expected behavior 🤔

No response

Your prompt 📝

# Paste your prompt here

Name: 小吉导游 ("Xiaoji Tour Guide") Role: Design a detailed 10-day Xinjiang tour itinerary starting and ending in Urumqi; reply and search in Chinese. Goals: ['Reply in Chinese and search with Chinese keywords', 'Include detailed routes and a price budget', 'Suitable for a trip in late July during the summer holiday', 'Relaxed pace, not too rushed', 'Suitable for a chartered car or self-driving']

Your Logs 📒

(The logs are the same THOUGHTS output and traceback shown under "Steps to reproduce" above.)

Tonic3 commented 1 year ago

I just got a similar issue, I think, on Windows:

Request failed with status code 401. Response content: b'{"detail":{"status":"quota_exceeded","message":"This request exceeds your quota. You have 19 characters remaining, while 43 characters are required for this request.","character_used":10024,"character_limit":10000}}'

xixixi2000 commented 1 year ago

> I just got a similar issue, I think, on Windows:
>
> Request failed with status code 401. Response content: b'{"detail":{"status":"quota_exceeded","message":"This request exceeds your quota. You have 19 characters remaining, while 43 characters are required for this request.","character_used":10024,"character_limit":10000}}'

Try checking https://platform.openai.com/account/usage

Tonic3 commented 1 year ago

Usage is fine (under $1.00) and all good there, but I continue to see this error pop up, and it then gets stuck in a loop. This happens in GPT-3.5 mode only.

xixixi2000 commented 1 year ago

Got this error today after pulling the newest code:

THOUGHTS: We can use the google command to search for travel routes and price budgets, as well as information about self-driving or chartering a car. We can search with the following keywords: Xinjiang travel routes, Xinjiang travel budget, Xinjiang self-driving guide, Xinjiang chartered-car travel guide.
REASONING: By using the google command to search for travel routes, price budgets, and self-driving/chartered-car information, we can plan the trip better.
PLAN:

Traceback (most recent call last):
  File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/root/Auto-GPT/autogpt/__main__.py", line 5, in <module>
    autogpt.cli.main()
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1635, in invoke
    rv = super().invoke(ctx)
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/click/decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/root/Auto-GPT/autogpt/cli.py", line 151, in main
    agent.start_interaction_loop()
  File "/root/Auto-GPT/autogpt/agent/agent.py", line 75, in start_interaction_loop
    assistant_reply = chat_with_ai(
  File "/root/Auto-GPT/autogpt/chat.py", line 85, in chat_with_ai
    else permanent_memory.get_relevant(str(full_message_history[-9:]), 10)
  File "/root/Auto-GPT/autogpt/memory/local.py", line 124, in get_relevant
    embedding = create_embedding_with_ada(text)
  File "/root/Auto-GPT/autogpt/llm_utils.py", line 155, in create_embedding_with_ada
    return openai.Embedding.create(
  File "/usr/local/lib/python3.9/dist-packages/openai/api_resources/embedding.py", line 33, in create
    response = super().create(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/usr/local/lib/python3.9/dist-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/usr/local/lib/python3.9/dist-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/usr/local/lib/python3.9/dist-packages/openai/api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 8743 tokens (8743 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
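The error recurs after updating because the code path is unchanged: chat.py always stringifies the last nine messages for the memory lookup, regardless of how large the accumulated search results are. An alternative mitigation, again only an illustrative sketch under the same rough 4-characters-per-token assumption and not code from the repository, is to keep only as many of the most recent messages as fit the embedding budget instead of a fixed count of nine:

```python
# Sketch of a size-aware history window (hypothetical helper, not from the repo).
MAX_EMBED_TOKENS = 8191
CHARS_PER_TOKEN = 4  # rough heuristic, not an exact tokenizer

def trim_history_to_budget(messages: list, max_tokens: int = MAX_EMBED_TOKENS) -> list:
    """Keep the most recent messages whose combined string length fits the budget."""
    budget = max_tokens * CHARS_PER_TOKEN
    kept, used = [], 0
    for msg in reversed(messages):  # walk newest-first
        cost = len(str(msg))
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order
```

With a window like this, one oversized google_search result would shrink the history passed to get_relevant instead of pushing the embedding request over the 8191-token limit.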