corca-ai / EVAL

EVAL (Elastic Versatile Agent with Langchain) will execute all your requests. Just like an eval method!
MIT License

openai.error.InvalidRequestError: Invalid URL (POST /v1/chat/completions) #46

Closed: calseus closed this issue 1 year ago

calseus commented 1 year ago

[2023-04-11 21:45:59,565: INFO/Process-1] Task task_execute[1ea47a5e-9bd1-47fc-a845-102dfd26e997] received
[2023-04-11 21:46:00,180: INFO/ForkPoolWorker-7] Entering new chain.
[2023-04-11 21:46:00,180: INFO/ForkPoolWorker-7] Prompted Text: say hello
[2023-04-11 21:46:00,523: INFO/ForkPoolWorker-7] error_code=None error_message='Invalid URL (POST /v1/chat/completions)' error_param=None error_type=invalid_request_error message='OpenAI API error received' stream_error=False
[2023-04-11 21:46:00,524: ERROR/ForkPoolWorker-7] Chain Error: Invalid URL (POST /v1/chat/completions)
[2023-04-11 21:46:00,529: ERROR/ForkPoolWorker-7] Task task_execute[1ea47a5e-9bd1-47fc-a845-102dfd26e997] raised unexpected: InvalidRequestError(message='Invalid URL (POST /v1/chat/completions)', param=None, code=None, http_status=404, request_id=None)
Traceback (most recent call last):
  File "/app/.venv/lib/python3.10/site-packages/celery/app/trace.py", line 451, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/app/.venv/lib/python3.10/site-packages/celery/app/trace.py", line 734, in __protected_call__
    return self.run(*args, **kwargs)
  File "/app/api/worker.py", line 22, in task_execute
    response = executor({"input": prompt})
  File "/app/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/app/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
  File "/app/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 505, in _call
    next_step_output = self._take_next_step(
  File "/app/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 409, in _take_next_step
    output = self.agent.plan(intermediate_steps, **inputs)
  File "/app/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 105, in plan
    action = self._get_next_action(full_inputs)
  File "/app/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 66, in _get_next_action
    full_output = self.llm_chain.predict(**full_inputs)
  File "/app/.venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 151, in predict
    return self(kwargs)[self.output_key]
  File "/app/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/app/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
  File "/app/.venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 57, in _call
    return self.apply([inputs])[0]
  File "/app/.venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 118, in apply
    response = self.generate(input_list)
  File "/app/.venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 62, in generate
    return self.llm.generate_prompt(prompts, stop)
  File "/app/.venv/lib/python3.10/site-packages/langchain/chat_models/base.py", line 72, in generate_prompt
    raise e
  File "/app/.venv/lib/python3.10/site-packages/langchain/chat_models/base.py", line 69, in generate_prompt
    output = self.generate(prompt_messages, stop=stop)
  File "/app/.venv/lib/python3.10/site-packages/langchain/chat_models/base.py", line 50, in generate
    results = [self._generate(m, stop=stop) for m in messages]
  File "/app/.venv/lib/python3.10/site-packages/langchain/chat_models/base.py", line 50, in <listcomp>
    results = [self._generate(m, stop=stop) for m in messages]
  File "/app/core/agents/llm.py", line 283, in _generate
    response = self.completion_with_retry(messages=message_dicts, **params)
  File "/app/core/agents/llm.py", line 252, in completion_with_retry
    return _completion_with_retry(**kwargs)
  File "/app/.venv/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "/app/.venv/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "/app/.venv/lib/python3.10/site-packages/tenacity/__init__.py", line 314, in iter
    return fut.result()
  File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/app/.venv/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "/app/core/agents/llm.py", line 248, in _completion_with_retry
    response = self.client.create(**kwargs)
  File "/app/.venv/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/app/.venv/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/app/.venv/lib/python3.10/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/app/.venv/lib/python3.10/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/app/.venv/lib/python3.10/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: Invalid URL (POST /v1/chat/completions)
INFO: 172.18.0.1:56778 - "GET /api/execute/async/1ea47a5e-9bd1-47fc-a845-102dfd26e997 HTTP/1.1" 200 OK
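For context: in the openai-python 0.x SDK, a 404 with "Invalid URL (POST /v1/chat/completions)" usually means the request reached a server that does not serve that path, most often because openai.api_base (or the OPENAI_API_BASE environment variable) was overridden without the "/v1" suffix, or points at a proxy or Azure-style endpoint. A minimal sketch to check which URL the client would hit, assuming the SDK's URL construction; resolve_chat_url is a hypothetical helper that only mimics it:

```python
import os

def resolve_chat_url(api_base: str) -> str:
    # Hypothetical helper: approximates how the openai-python 0.x SDK joins
    # openai.api_base with the chat-completions path. If api_base lacks the
    # "/v1" suffix, the resulting URL is one the API server answers with 404.
    return api_base.rstrip("/") + "/chat/completions"

# The SDK default is "https://api.openai.com/v1"; an OPENAI_API_BASE
# override (e.g. from a .env file in this project) takes precedence.
api_base = os.environ.get("OPENAI_API_BASE", "https://api.openai.com/v1")
print("chat completions URL:", resolve_chat_url(api_base))
```

If the printed URL is missing "/v1" (e.g. "https://api.openai.com/chat/completions"), fixing the base URL in the environment or config should resolve the 404.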

adldotori commented 1 year ago

Could you check this page?