Open · CGH20171006 opened this issue 6 days ago
Hi @CGH20171006, thanks for the issue. It looks like the gpt-3.5-turbo inside ToolSelector is somehow selecting tool names, but not any tool arguments:

TypeError: PaperSearch.paper_search() missing 3 required positional arguments: 'query', 'min_year', and 'max_year'

It looks like you are trying to use gpt-3.5-turbo on chatapi.midjourney-vip. Do you know if that API properly supports tool calling?
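One way to sanity-check a provider is to inspect the assistant message it returns and verify the tool calls are actually usable. The helper below is a sketch (the function name and payload shapes are my own, not part of paper-qa, aviary, or LiteLLM); it checks an OpenAI-style message dict for tool calls whose arguments are non-empty, JSON-parseable objects:

```python
import json


def has_usable_tool_call(message: dict) -> bool:
    """Return True if the assistant message carries at least one tool call
    whose arguments are a non-empty, JSON-parseable object."""
    for call in message.get("tool_calls") or []:
        fn = call.get("function", {})
        if not fn.get("name"):
            return False
        try:
            args = json.loads(fn.get("arguments") or "")
        except json.JSONDecodeError:
            return False
        if not isinstance(args, dict) or not args:
            return False
    return bool(message.get("tool_calls"))


# A proxy that names the tool but drops the arguments fails the check:
broken = {"tool_calls": [{"function": {"name": "paper_search", "arguments": ""}}]}
working = {
    "tool_calls": [
        {
            "function": {
                "name": "paper_search",
                "arguments": '{"query": "bispecific antibodies", "min_year": 2015, "max_year": 2024}',
            }
        }
    ]
}
print(has_usable_tool_call(broken))   # False
print(has_usable_tool_call(working))  # True
```

Running this against the `message` from a live completion through the proxy would show whether arguments are being dropped.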
Yes, I know exactly how this API is called via LiteLLM, and it passes the following test:
import litellm

response = litellm.completion(
    model="gpt-3.5-turbo",
    api_base="xxx",
    api_key="xxx",
    messages=[
        {
            "role": "user",
            "content": "Hey, how's it going",
        }
    ],
)
print(response)
ModelResponse(id='chatcmpl-AAS8gP8VymPABZVRAqpE6PG86oskP', choices=[Choices(finish_reason='stop', index=0, message=Message(content="Hello! I'm just a computer program so I don't have feelings, but I'm here to help. How can I assist you today?", role='assistant', tool_calls=None, function_call=None))], created=1727054810, model='gpt-3.5-turbo-0125', object='chat.completion', system_fingerprint=None, usage=Usage(completion_tokens=29, prompt_tokens=14, total_tokens=43, completion_tokens_details=CompletionTokensDetails(reasoning_tokens=0)), service_tier=None)
What you have there is a chat completion, which is different from tool calling.
Tool calling (also known as function calling) is a different API behavior than chatting. Check this: https://platform.openai.com/docs/guides/function-calling
You mean there are two APIs with different functions, one for chat completion and another for tool calling. Is that the case?
It's a different API behavior, same endpoint but different behavior: https://platform.openai.com/docs/api-reference/chat/create
Try running this with your model provider, and let us know what it prints:
import litellm


def get_current_weather(location: str, unit: str = "fahrenheit") -> dict:
    return {
        "location": location,
        "temperature": "72",
        "unit": unit,
        "forecast": ["sunny", "windy"],
    }


response = litellm.completion(
    messages=[{"role": "user", "content": "What's the weather like in New York City?"}],
    functions=[
        {
            "name": "get_current_weather",
            "description": "Get the current weather information.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "description": "Location of the weather.",
                        "title": "Location",
                        "type": "string",
                    },
                    "unit": {
                        "default": "fahrenheit",
                        "description": "Units, either 'celsius' or 'fahrenheit'.",
                        "title": "Unit",
                        "type": "string",
                    },
                },
                "required": ["location"],
            },
        }
    ],
    model="gpt-4o",
)
choice_0 = response.choices[0]
assert choice_0.finish_reason == "function_call"
tool_call = choice_0.message.function_call
print(tool_call.name)       # Should print: get_current_weather
print(tool_call.arguments)  # Should print: {"location":"New York City"}
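Note that the snippet above uses the legacy `functions=` parameter; newer OpenAI-style endpoints expect the same schema wrapped in a `tools=` list, and some third-party proxies only implement one of the two shapes. A minimal sketch of the wrapping (`as_tools` is my own helper, not a LiteLLM API):

```python
def as_tools(function_schemas: list[dict]) -> list[dict]:
    """Wrap legacy function-calling schemas into the modern tools format."""
    return [{"type": "function", "function": schema} for schema in function_schemas]


weather_schema = {
    "name": "get_current_weather",
    "description": "Get the current weather information.",
    "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
}
tools = as_tools([weather_schema])
print(tools[0]["type"])  # function
# A modern request would then look like:
#   litellm.completion(..., tools=tools, tool_choice="auto")
# and the reply carries choice.message.tool_calls instead of .function_call.
```

If the proxy only handles one shape, trying both variants can narrow down where the arguments are lost.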
I set the key through this method:
os.environ["OPENAI_API_KEY"] = "sk-xxxx"
os.environ["OPENAI_API_BASE"] = "https://api.fast-tunnel.one/v1"
It outputs the following: get_current_weather {"location": "New York City"}
What is the difference between your original api.fast-tunnel.one and chatapi.midjourney-vip?
The reason I ask is your original error message shows that tools are being called without any arguments:
...
Failed to execute tool call for tool gather_evidence.
Traceback (most recent call last):
File "D:\Study\Anaconda\envs\Py311\Lib\site-packages\aviary\env.py", line 197, in _exec_tool_call
content = await tool._tool_fn(
^^^^^^^^^^^^^^
TypeError: GatherEvidence.gather_evidence() missing 1 required positional argument: 'question'
Failed to execute tool call for tool paper_search.
Traceback (most recent call last):
File "D:\Study\Anaconda\envs\Py311\Lib\site-packages\aviary\env.py", line 197, in _exec_tool_call
content = await tool._tool_fn(
^^^^^^^^^^^^^^
TypeError: PaperSearch.paper_search() missing 3 required positional arguments: 'query', 'min_year', and 'max_year'
...
Failed to execute tool call for tool gen_answer.
Traceback (most recent call last):
File "D:\Study\Anaconda\envs\Py311\Lib\site-packages\aviary\env.py", line 197, in _exec_tool_call
content = await tool._tool_fn(
^^^^^^^^^^^^^^
TypeError: GenerateAnswer.gen_answer() missing 1 required positional argument: 'question'
So it leads me to believe the model being used isn't calling tools correctly
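The failure mode can be reproduced locally without any API. The dispatcher below is a sketch (my own simplification, not aviary's actual `_exec_tool_call`): when the provider returns an empty `arguments` string, the tool function gets called with no keyword arguments, and Python raises exactly the kind of "missing required positional arguments" TypeError shown above.

```python
import json


def paper_search(query: str, min_year: int, max_year: int) -> str:
    # Stand-in for a real tool function with required parameters.
    return f"searching {query} ({min_year}-{max_year})"


def dispatch(tool_fn, raw_arguments: str):
    """Parse the model-supplied arguments JSON and invoke the tool."""
    kwargs = json.loads(raw_arguments) if raw_arguments else {}
    return tool_fn(**kwargs)


# With real arguments, the call succeeds:
print(dispatch(paper_search, '{"query": "CRISPR", "min_year": 2020, "max_year": 2024}'))
# With the empty arguments string the proxy returns, we get the reported error:
try:
    dispatch(paper_search, "")
except TypeError as exc:
    print(exc)  # missing 3 required positional arguments: ...
```

So the traceback is consistent with the proxy returning tool names but empty argument payloads.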
Not sure what happened, but now my code reports this error
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True`.
[12:26:22] Agent <aviary.tools.utils.ToolSelector object at 0x000001D68EAE7D90> failed.
┌─────────────────────────────────── Traceback (most recent call last) ────────────────────────────────────┐
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\httpx\_transports\default.py:72 in │
│ map_httpcore_exceptions │
│ │
│ 69 @contextlib.contextmanager │
│ 70 def map_httpcore_exceptions() -> typing.Iterator[None]: │
│ 71 │ try: │
│ > 72 │ │ yield │
│ 73 │ except Exception as exc: │
│ 74 │ │ mapped_exc = None │
│ 75 │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\httpx\_transports\default.py:377 in │
│ handle_async_request │
│ │
│ 374 │ │ │ extensions=request.extensions, │
│ 375 │ │ ) │
│ 376 │ │ with map_httpcore_exceptions(): │
│ > 377 │ │ │ resp = await self._pool.handle_async_request(req) │
│ 378 │ │ │
│ 379 │ │ assert isinstance(resp.stream, typing.AsyncIterable) │
│ 380 │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\httpcore\_async\connection_pool.py:216 in │
│ handle_async_request │
│ │
│ 213 │ │ │ │ closing = self._assign_requests_to_connections() │
│ 214 │ │ │ │
│ 215 │ │ │ await self._close_connections(closing) │
│ > 216 │ │ │ raise exc from None │
│ 217 │ │ │
│ 218 │ │ # Return the response. Note that in this case we still have to manage │
│ 219 │ │ # the point at which the response is closed. │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\httpcore\_async\connection_pool.py:196 in │
│ handle_async_request │
│ │
│ 193 │ │ │ │ │
│ 194 │ │ │ │ try: │
│ 195 │ │ │ │ │ # Send the request on the assigned connection. │
│ > 196 │ │ │ │ │ response = await connection.handle_async_request( │
│ 197 │ │ │ │ │ │ pool_request.request │
│ 198 │ │ │ │ │ ) │
│ 199 │ │ │ │ except ConnectionNotAvailable: │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\httpcore\_async\connection.py:99 in │
│ handle_async_request │
│ │
│ 96 │ │ │ │ │ │ ) │
│ 97 │ │ except BaseException as exc: │
│ 98 │ │ │ self._connect_failed = True │
│ > 99 │ │ │ raise exc │
│ 100 │ │ │
│ 101 │ │ return await self._connection.handle_async_request(request) │
│ 102 │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\httpcore\_async\connection.py:76 in │
│ handle_async_request │
│ │
│ 73 │ │ try: │
│ 74 │ │ │ async with self._request_lock: │
│ 75 │ │ │ │ if self._connection is None: │
│ > 76 │ │ │ │ │ stream = await self._connect(request) │
│ 77 │ │ │ │ │ │
│ 78 │ │ │ │ │ ssl_object = stream.get_extra_info("ssl_object") │
│ 79 │ │ │ │ │ http2_negotiated = ( │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\httpcore\_async\connection.py:154 in _connect │
│ │
│ 151 │ │ │ │ │ │ "timeout": timeout, │
│ 152 │ │ │ │ │ } │
│ 153 │ │ │ │ │ async with Trace("start_tls", logger, request, kwargs) as trace: │
│ > 154 │ │ │ │ │ │ stream = await stream.start_tls(**kwargs) │
│ 155 │ │ │ │ │ │ trace.return_value = stream │
│ 156 │ │ │ │ return stream │
│ 157 │ │ │ except (ConnectError, ConnectTimeout): │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\httpcore\_backends\anyio.py:68 in start_tls │
│ │
│ 65 │ │ │ anyio.BrokenResourceError: ConnectError, │
│ 66 │ │ │ anyio.EndOfStream: ConnectError, │
│ 67 │ │ } │
│ > 68 │ │ with map_exceptions(exc_map): │
│ 69 │ │ │ try: │
│ 70 │ │ │ │ with anyio.fail_after(timeout): │
│ 71 │ │ │ │ │ ssl_stream = await anyio.streams.tls.TLSStream.wrap( │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\contextlib.py:158 in __exit__ │
│ │
│ 155 │ │ │ │ # tell if we get the same exception back │
│ 156 │ │ │ │ value = typ() │
│ 157 │ │ │ try: │
│ > 158 │ │ │ │ self.gen.throw(typ, value, traceback) │
│ 159 │ │ │ except StopIteration as exc: │
│ 160 │ │ │ │ # Suppress StopIteration *unless* it's the same exception that │
│ 161 │ │ │ │ # was passed to throw(). This prevents a StopIteration │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\httpcore\_exceptions.py:14 in map_exceptions │
│ │
│ 11 │ except Exception as exc: # noqa: PIE786 │
│ 12 │ │ for from_exc, to_exc in map.items(): │
│ 13 │ │ │ if isinstance(exc, from_exc): │
│ > 14 │ │ │ │ raise to_exc(exc) from exc │
│ 15 │ │ raise # pragma: nocover │
│ 16 │
│ 17 │
└──────────────────────────────────────────────────────────────────────────────────────────────────────────┘
ConnectError
The above exception was the direct cause of the following exception:
┌─────────────────────────────────── Traceback (most recent call last) ────────────────────────────────────┐
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\openai\_base_client.py:1562 in _request │
│ │
│ 1559 │ │ │ kwargs["auth"] = self.custom_auth │
│ 1560 │ │ │
│ 1561 │ │ try: │
│ > 1562 │ │ │ response = await self._client.send( │
│ 1563 │ │ │ │ request, │
│ 1564 │ │ │ │ stream=stream or self._should_stream_response_body(request=request), │
│ 1565 │ │ │ │ **kwargs, │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\httpx\_client.py:1674 in send │
│ │
│ 1671 │ │ │
│ 1672 │ │ auth = self._build_request_auth(request, auth) │
│ 1673 │ │ │
│ > 1674 │ │ response = await self._send_handling_auth( │
│ 1675 │ │ │ request, │
│ 1676 │ │ │ auth=auth, │
│ 1677 │ │ │ follow_redirects=follow_redirects, │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\httpx\_client.py:1702 in _send_handling_auth │
│ │
│ 1699 │ │ │ request = await auth_flow.__anext__() │
│ 1700 │ │ │ │
│ 1701 │ │ │ while True: │
│ > 1702 │ │ │ │ response = await self._send_handling_redirects( │
│ 1703 │ │ │ │ │ request, │
│ 1704 │ │ │ │ │ follow_redirects=follow_redirects, │
│ 1705 │ │ │ │ │ history=history, │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\httpx\_client.py:1739 in │
│ _send_handling_redirects │
│ │
│ 1736 │ │ │ for hook in self._event_hooks["request"]: │
│ 1737 │ │ │ │ await hook(request) │
│ 1738 │ │ │ │
│ > 1739 │ │ │ response = await self._send_single_request(request) │
│ 1740 │ │ │ try: │
│ 1741 │ │ │ │ for hook in self._event_hooks["response"]: │
│ 1742 │ │ │ │ │ await hook(response) │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\httpx\_client.py:1776 in _send_single_request │
│ │
│ 1773 │ │ │ ) │
│ 1774 │ │ │
│ 1775 │ │ with request_context(request=request): │
│ > 1776 │ │ │ response = await transport.handle_async_request(request) │
│ 1777 │ │ │
│ 1778 │ │ assert isinstance(response.stream, AsyncByteStream) │
│ 1779 │ │ response.request = request │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\httpx\_transports\default.py:376 in │
│ handle_async_request │
│ │
│ 373 │ │ │ content=request.stream, │
│ 374 │ │ │ extensions=request.extensions, │
│ 375 │ │ ) │
│ > 376 │ │ with map_httpcore_exceptions(): │
│ 377 │ │ │ resp = await self._pool.handle_async_request(req) │
│ 378 │ │ │
│ 379 │ │ assert isinstance(resp.stream, typing.AsyncIterable) │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\contextlib.py:158 in __exit__ │
│ │
│ 155 │ │ │ │ # tell if we get the same exception back │
│ 156 │ │ │ │ value = typ() │
│ 157 │ │ │ try: │
│ > 158 │ │ │ │ self.gen.throw(typ, value, traceback) │
│ 159 │ │ │ except StopIteration as exc: │
│ 160 │ │ │ │ # Suppress StopIteration *unless* it's the same exception that │
│ 161 │ │ │ │ # was passed to throw(). This prevents a StopIteration │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\httpx\_transports\default.py:89 in │
│ map_httpcore_exceptions │
│ │
│ 86 │ │ │ raise │
│ 87 │ │ │
│ 88 │ │ message = str(exc) │
│ > 89 │ │ raise mapped_exc(message) from exc │
│ 90 │
│ 91 │
│ 92 HTTPCORE_EXC_MAP = { │
└──────────────────────────────────────────────────────────────────────────────────────────────────────────┘
ConnectError
The above exception was the direct cause of the following exception:
┌─────────────────────────────────── Traceback (most recent call last) ────────────────────────────────────┐
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\litellm\llms\OpenAI\openai.py:961 in acompletion │
│ │
│ 958 │ │ │ │ │ }, │
│ 959 │ │ │ │ ) │
│ 960 │ │ │ │ │
│ > 961 │ │ │ │ headers, response = await self.make_openai_chat_completion_request( │
│ 962 │ │ │ │ │ openai_aclient=openai_aclient, data=data, timeout=timeout │
│ 963 │ │ │ │ ) │
│ 964 │ │ │ │ stringified_response = response.model_dump() │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\litellm\llms\OpenAI\openai.py:658 in │
│ make_openai_chat_completion_request │
│ │
│ 655 │ │ │ response = raw_response.parse() │
│ 656 │ │ │ return headers, response │
│ 657 │ │ except Exception as e: │
│ > 658 │ │ │ raise e │
│ 659 │ │
│ 660 │ def make_sync_openai_chat_completion_request( │
│ 661 │ │ self, │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\litellm\llms\OpenAI\openai.py:646 in │
│ make_openai_chat_completion_request │
│ │
│ 643 │ │ """ │
│ 644 │ │ try: │
│ 645 │ │ │ raw_response = ( │
│ > 646 │ │ │ │ await openai_aclient.chat.completions.with_raw_response.create( │
│ 647 │ │ │ │ │ **data, timeout=timeout │
│ 648 │ │ │ │ ) │
│ 649 │ │ │ ) │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\openai\_legacy_response.py:370 in wrapped │
│ │
│ 367 │ │ │
│ 368 │ │ kwargs["extra_headers"] = extra_headers │
│ 369 │ │ │
│ > 370 │ │ return cast(LegacyAPIResponse[R], await func(*args, **kwargs)) │
│ 371 │ │
│ 372 │ return wrapped │
│ 373 │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\openai\resources\chat\completions.py:1412 in │
│ create │
│ │
│ 1409 │ │ timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN, │
│ 1410 │ ) -> ChatCompletion | AsyncStream[ChatCompletionChunk]: │
│ 1411 │ │ validate_response_format(response_format) │
│ > 1412 │ │ return await self._post( │
│ 1413 │ │ │ "/chat/completions", │
│ 1414 │ │ │ body=await async_maybe_transform( │
│ 1415 │ │ │ │ { │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\openai\_base_client.py:1829 in post │
│ │
│ 1826 │ │ opts = FinalRequestOptions.construct( │
│ 1827 │ │ │ method="post", url=path, json_data=body, files=await │
│ async_to_httpx_files(files), **options │
│ 1828 │ │ ) │
│ > 1829 │ │ return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls) │
│ 1830 │ │
│ 1831 │ async def patch( │
│ 1832 │ │ self, │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\openai\_base_client.py:1523 in request │
│ │
│ 1520 │ │ else: │
│ 1521 │ │ │ retries_taken = 0 │
│ 1522 │ │ │
│ > 1523 │ │ return await self._request( │
│ 1524 │ │ │ cast_to=cast_to, │
│ 1525 │ │ │ options=options, │
│ 1526 │ │ │ stream=stream, │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\openai\_base_client.py:1596 in _request │
│ │
│ 1593 │ │ │ │ ) │
│ 1594 │ │ │ │
│ 1595 │ │ │ log.debug("Raising connection error") │
│ > 1596 │ │ │ raise APIConnectionError(request=request) from err │
│ 1597 │ │ │
│ 1598 │ │ log.debug( │
│ 1599 │ │ │ 'HTTP Request: %s %s "%i %s"', request.method, request.url, │
│ response.status_code, response.reason_phrase │
└──────────────────────────────────────────────────────────────────────────────────────────────────────────┘
APIConnectionError: Connection error.
During handling of the above exception, another exception occurred:
┌─────────────────────────────────── Traceback (most recent call last) ────────────────────────────────────┐
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\litellm\main.py:428 in acompletion │
│ │
│ 425 │ │ │ │ │ response = ModelResponse(**init_response) │
│ 426 │ │ │ │ response = init_response │
│ 427 │ │ │ elif asyncio.iscoroutine(init_response): │
│ > 428 │ │ │ │ response = await init_response │
│ 429 │ │ │ else: │
│ 430 │ │ │ │ response = init_response # type: ignore │
│ 431 │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\litellm\llms\OpenAI\openai.py:1008 in │
│ acompletion │
│ │
│ 1005 │ │ │ except Exception as e: │
│ 1006 │ │ │ │ status_code = getattr(e, "status_code", 500) │
│ 1007 │ │ │ │ error_headers = getattr(e, "headers", None) │
│ > 1008 │ │ │ │ raise OpenAIError( │
│ 1009 │ │ │ │ │ status_code=status_code, message=str(e), headers=error_headers │
│ 1010 │ │ │ │ ) │
│ 1011 │
└──────────────────────────────────────────────────────────────────────────────────────────────────────────┘
OpenAIError: Connection error.
During handling of the above exception, another exception occurred:
┌─────────────────────────────────── Traceback (most recent call last) ────────────────────────────────────┐
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\paperqa\agents\main.py:212 in run_aviary_agent │
│ │
│ 209 │ │ │ │
│ 210 │ │ │ while not done: │
│ 211 │ │ │ │ agent_state.messages += obs │
│ > 212 │ │ │ │ for attempt in Retrying( │
│ 213 │ │ │ │ │ stop=stop_after_attempt(5), │
│ 214 │ │ │ │ │ retry=retry_if_exception_type(MalformedMessageError), │
│ 215 │ │ │ │ │ before_sleep=before_sleep_log(logger, logging.WARNING), │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\tenacity\__init__.py:347 in __iter__ │
│ │
│ 344 │ │ │
│ 345 │ │ retry_state = RetryCallState(self, fn=None, args=(), kwargs={}) │
│ 346 │ │ while True: │
│ > 347 │ │ │ do = self.iter(retry_state=retry_state) │
│ 348 │ │ │ if isinstance(do, DoAttempt): │
│ 349 │ │ │ │ yield AttemptManager(retry_state=retry_state) │
│ 350 │ │ │ elif isinstance(do, DoSleep): │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\tenacity\__init__.py:314 in iter │
│ │
│ 311 │ │ │
│ 312 │ │ is_explicit_retry = fut.failed and isinstance(fut.exception(), TryAgain) │
│ 313 │ │ if not (is_explicit_retry or self.retry(retry_state)): │
│ > 314 │ │ │ return fut.result() │
│ 315 │ │ │
│ 316 │ │ if self.after is not None: │
│ 317 │ │ │ self.after(retry_state) │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\concurrent\futures\_base.py:449 in result │
│ │
│ 446 │ │ │ │ if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]: │
│ 447 │ │ │ │ │ raise CancelledError() │
│ 448 │ │ │ │ elif self._state == FINISHED: │
│ > 449 │ │ │ │ │ return self.__get_result() │
│ 450 │ │ │ │ │
│ 451 │ │ │ │ self._condition.wait(timeout) │
│ 452 │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\concurrent\futures\_base.py:401 in __get_result │
│ │
│ 398 │ def __get_result(self): │
│ 399 │ │ if self._exception: │
│ 400 │ │ │ try: │
│ > 401 │ │ │ │ raise self._exception │
│ 402 │ │ │ finally: │
│ 403 │ │ │ │ # Break a reference cycle with the exception in self._exception │
│ 404 │ │ │ │ self = None │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\paperqa\agents\main.py:219 in run_aviary_agent │
│ │
│ 216 │ │ │ │ │ reraise=True, │
│ 217 │ │ │ │ ): │
│ 218 │ │ │ │ │ with attempt: # Retrying if ToolSelector fails to select a tool │
│ > 219 │ │ │ │ │ │ action = await agent(agent_state.messages, tools) │
│ 220 │ │ │ │ agent_state.messages = [*agent_state.messages, action] │
│ 221 │ │ │ │ if on_agent_action_callback: │
│ 222 │ │ │ │ │ await on_agent_action_callback(action, agent_state) │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\aviary\tools\utils.py:54 in __call__ │
│ │
│ 51 │ │ self, messages: list[Message], tools: list[Tool] │
│ 52 │ ) -> ToolRequestMessage: │
│ 53 │ │ """Run a completion that selects a tool in tools given the messages.""" │
│ > 54 │ │ model_response = await self._bound_acompletion( │
│ 55 │ │ │ messages=MessagesAdapter.dump_python( │
│ 56 │ │ │ │ messages, exclude_none=True, by_alias=True │
│ 57 │ │ │ ), │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\litellm\router.py:735 in acompletion │
│ │
│ 732 │ │ │ │ │ original_exception=e, │
│ 733 │ │ │ │ ) │
│ 734 │ │ │ ) │
│ > 735 │ │ │ raise e │
│ 736 │ │
│ 737 │ async def _acompletion( │
│ 738 │ │ self, model: str, messages: List[Dict[str, str]], **kwargs │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\litellm\router.py:723 in acompletion │
│ │
│ 720 │ │ │ ): │
│ 721 │ │ │ │ response = await self.schedule_acompletion(**kwargs) │
│ 722 │ │ │ else: │
│ > 723 │ │ │ │ response = await self.async_function_with_fallbacks(**kwargs) │
│ 724 │ │ │ │
│ 725 │ │ │ return response │
│ 726 │ │ except Exception as e: │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\litellm\router.py:3039 in │
│ async_function_with_fallbacks │
│ │
│ 3036 │ │ │ │ │ │ ) │
│ 3037 │ │ │ │ │ ) │
│ 3038 │ │ │ │
│ > 3039 │ │ │ raise original_exception │
│ 3040 │ │
│ 3041 │ async def async_function_with_retries(self, *args, **kwargs): │
│ 3042 │ │ verbose_router_logger.debug( │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\litellm\router.py:2893 in │
│ async_function_with_fallbacks │
│ │
│ 2890 │ │ │ │ │ │ Context_Policy_Fallbacks={content_policy_fallbacks}", │
│ 2891 │ │ │ │ ) │
│ 2892 │ │ │ │
│ > 2893 │ │ │ response = await self.async_function_with_retries(*args, **kwargs) │
│ 2894 │ │ │ verbose_router_logger.debug(f"Async Response: {response}") │
│ 2895 │ │ │ return response │
│ 2896 │ │ except Exception as e: │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\litellm\router.py:3170 in │
│ async_function_with_retries │
│ │
│ 3167 │ │ │ │ setattr(original_exception, "max_retries", num_retries) │
│ 3168 │ │ │ │ setattr(original_exception, "num_retries", current_attempt) │
│ 3169 │ │ │ │
│ > 3170 │ │ │ raise original_exception │
│ 3171 │ │
│ 3172 │ def should_retry_this_error( │
│ 3173 │ │ self, │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\litellm\router.py:3083 in │
│ async_function_with_retries │
│ │
│ 3080 │ │ │ │ │ message=f"This is a mock exception for model={model_group}, to │
│ trigger a rate limit error.", │
│ 3081 │ │ │ │ ) │
│ 3082 │ │ │ # if the function call is successful, no exception will be raised and we'll │
│ break out of the loop │
│ > 3083 │ │ │ response = await original_function(*args, **kwargs) │
│ 3084 │ │ │ return response │
│ 3085 │ │ except Exception as e: │
│ 3086 │ │ │ current_attempt = None │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\litellm\router.py:874 in _acompletion │
│ │
│ 871 │ │ │ ) │
│ 872 │ │ │ if model_name is not None: │
│ 873 │ │ │ │ self.fail_calls[model_name] += 1 │
│ > 874 │ │ │ raise e │
│ 875 │ │
│ 876 │ async def abatch_completion( │
│ 877 │ │ self, │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\litellm\router.py:846 in _acompletion │
│ │
│ 843 │ │ │ │ await self.async_routing_strategy_pre_call_checks( │
│ 844 │ │ │ │ │ deployment=deployment, logging_obj=logging_obj │
│ 845 │ │ │ │ ) │
│ > 846 │ │ │ │ response = await _response │
│ 847 │ │ │ │
│ 848 │ │ │ ## CHECK CONTENT FILTER ERROR ## │
│ 849 │ │ │ if isinstance(response, ModelResponse): │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\litellm\utils.py:1589 in wrapper_async │
│ │
│ 1586 │ │ │ │ │ else: │
│ 1587 │ │ │ │ │ │ kwargs["model"] = context_window_fallback_dict[model] │
│ 1588 │ │ │ │ │ return await original_function(*args, **kwargs) │
│ > 1589 │ │ │ raise e │
│ 1590 │ │
│ 1591 │ is_coroutine = inspect.iscoroutinefunction(original_function) │
│ 1592 │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\litellm\utils.py:1409 in wrapper_async │
│ │
│ 1406 │ │ │ │ │ │ │ ).start() │
│ 1407 │ │ │ │ │ │ │ return final_embedding_cached_response │
│ 1408 │ │ │ # MODEL CALL │
│ > 1409 │ │ │ result = await original_function(*args, **kwargs) │
│ 1410 │ │ │ end_time = datetime.datetime.now() │
│ 1411 │ │ │ if "stream" in kwargs and kwargs["stream"] is True: │
│ 1412 │ │ │ │ if ( │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\litellm\main.py:450 in acompletion │
│ │
│ 447 │ │ return response │
│ 448 │ except Exception as e: │
│ 449 │ │ custom_llm_provider = custom_llm_provider or "openai" │
│ > 450 │ │ raise exception_type( │
│ 451 │ │ │ model=model, │
│ 452 │ │ │ custom_llm_provider=custom_llm_provider, │
│ 453 │ │ │ original_exception=e, │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\litellm\utils.py:8199 in exception_type │
│ │
│ 8196 │ │ # don't let an error with mapping interrupt the user from receiving an error │
│ from the llm api calls │
│ 8197 │ │ if exception_mapping_worked: │
│ 8198 │ │ │ setattr(e, "litellm_response_headers", litellm_response_headers) │
│ > 8199 │ │ │ raise e │
│ 8200 │ │ else: │
│ 8201 │ │ │ for error_type in litellm.LITELLM_EXCEPTION_TYPES: │
│ 8202 │ │ │ │ if isinstance(e, error_type): │
│ │
│ C:\Users\20171006\.conda\envs\paperqa\Lib\site-packages\litellm\utils.py:6583 in exception_type │
│ │
│ 6580 │ │ │ │ │ │ ) │
│ 6581 │ │ │ │ │ else: │
│ 6582 │ │ │ │ │ │ exception_mapping_worked = True │
│ > 6583 │ │ │ │ │ │ raise APIError( │
│ 6584 │ │ │ │ │ │ │ status_code=original_exception.status_code, │
│ 6585 │ │ │ │ │ │ │ message=f"APIError: {exception_provider} - {message}", │
│ 6586 │ │ │ │ │ │ │ llm_provider=custom_llm_provider, │
└──────────────────────────────────────────────────────────────────────────────────────────────────────────┘
APIError: litellm.APIError: APIError: OpenAIException - Connection error.
Received Model Group=my_llm_model
Available Model Group Fallbacks=None LiteLLM Retried: 1 times, LiteLLM Max Retries: 2
[12:26:26] Answer:
I'm very sorry to bother you all, but when I call ask as proposed in #433, it suggests that the question-related parameter is missing. In fact, I made a lot of attempts to bypass the ask method, e.g. using docs, but still failed (#451).
import os
from paperqa import Docs, ask
from paperqa.settings import Settings, AgentSettings, AnswerSettings
import paperscraper

# os.environ["OPENAI_API_KEY"] = "sk-xxxxxxx"
# os.environ["OPENAI_API_BASE"] = "https://chatapi.midjourney-vip.cn/v1"

local_llm_config = dict(
    model_list=[
        dict(
            model_name="my_llm_model",
            litellm_params=dict(
                model="gpt-3.5-turbo",
                api_base="https://chatapi.midjourney-vip.cn/v1",
                api_key="sk-d55CRFWuSZtCU6Nv7a3505525a9b4b0f820f215b0545504d",
                temperature=0.1,
                frequency_penalty=1.5,
                max_tokens=512,
            ),
        )
    ]
)

settings = Settings(
    llm="my_llm_model",
    llm_config=local_llm_config,
    summary_llm="my_llm_model",
    summary_llm_config=local_llm_config,
    paper_directory="E:\\Programing\\pythonProject",
    agent=AgentSettings(
        agent_llm_config=local_llm_config,
        agent_llm="my_llm_model",
        agent_type="ToolSelector",
    ),
    answer=AnswerSettings(evidence_k=3),  # optional
)

answer = ask(
    "What manufacturing challenges are unique to bispecific antibodies?",
    settings=settings,
)
Error message:
[00:25:31] Could not find cost for model my_llm_model.
Failed to execute tool call for tool gather_evidence.
Traceback (most recent call last):
  File "D:\Study\Anaconda\envs\Py311\Lib\site-packages\aviary\env.py", line 197, in _exec_tool_call
    content = await tool._tool_fn(
              ^^^^^^^^^^^^^^
TypeError: GatherEvidence.gather_evidence() missing 1 required positional argument: 'question'
Failed to execute tool call for tool paper_search.
Traceback (most recent call last):
  File "D:\Study\Anaconda\envs\Py311\Lib\site-packages\aviary\env.py", line 197, in _exec_tool_call
    content = await tool._tool_fn(
              ^^^^^^^^^^^^^^
TypeError: PaperSearch.paper_search() missing 3 required positional arguments: 'query', 'min_year', and 'max_year'
[00:25:32] Could not find cost for model my_llm_model.
Answer: Failed to execute tool call for tool gen_answer.
Traceback (most recent call last):
  File "D:\Study\Anaconda\envs\Py311\Lib\site-packages\aviary\env.py", line 197, in _exec_tool_call
    content = await tool._tool_fn(
              ^^^^^^^^^^^^^^
TypeError: GenerateAnswer.gen_answer() missing 1 required positional argument: 'question'
Have you solved this problem yet? That is, how to run this project with a different large model.