FahaxikiHoney opened 3 days ago
I found a similar issue that might help you resolve the error you're encountering. The error suggests that the `ask` function is causing issues with asynchronous operations. To resolve this, use the `agent_query` function instead of `ask`. The `ask` function is a synchronous wrapper; for asynchronous operations, you need to call `agent_query` directly.

Here is an example of how to use `agent_query` for asynchronous operations:
```python
import asyncio

from paperqa import Settings, agent_query, QueryRequest

async def main():
    local_llm_config = {
        "model_list": [
            {
                "model_name": "ollama/llama3",
                "litellm_params": {
                    "model": "ollama/llama3",
                    "api_base": "http://localhost:11434",
                },
            }
        ]
    }
    answer = await agent_query(
        QueryRequest(
            query="What manufacturing challenges are unique to bispecific antibodies?",
            settings=Settings(
                llm="ollama/llama3",
                llm_config=local_llm_config,
                summary_llm="ollama/llama3",
                summary_llm_config=local_llm_config,
            ),
        )
    )
    print(answer.formatted_answer)

# Run the async function
asyncio.run(main())
```
This code snippet sets up the `agent_query` function with the appropriate settings for the `ollama/llama3` model and `litellm`, and runs it asynchronously. The `ask` function internally uses `get_loop().run_until_complete` to run `agent_query` synchronously, which can cause issues in an already running event loop. Therefore, using `agent_query` directly in an asynchronous context is the correct approach [1][2].
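To see why the synchronous-wrapper pattern fails inside a live loop, here is a minimal, paperqa-independent sketch using plain `asyncio` (no paperqa imports): calling `run_until_complete` on a loop that is already running is rejected by asyncio itself.

```python
import asyncio

async def demo() -> str:
    """Attempt the synchronous-wrapper pattern from inside a running loop."""
    loop = asyncio.get_running_loop()
    coro = asyncio.sleep(0)
    try:
        # This mirrors what a sync wrapper like ask() effectively does;
        # asyncio refuses to nest run_until_complete in a running loop.
        loop.run_until_complete(coro)
    except RuntimeError as exc:
        coro.close()  # avoid a "coroutine was never awaited" warning
        return str(exc)
    return "no error raised"

message = asyncio.run(demo())
print(message)  # e.g. "This event loop is already running"
```

This is the same `RuntimeError` you would hit when calling `ask` from a context that already has a running loop (e.g. Jupyter).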
Based on your error, it looks like you hit CTRL-C (per the `KeyboardInterrupt`) during the run.
```python
import os

from paperqa import Settings, ask

os.environ["OPENAI_API_KEY"] = "EMPTY"

local_llm_config = {
    "model_list": [
        {
            "model_name": "ollama/llama3",
            "litellm_params": {
                "model": "ollama/llama3",
                "api_base": ...,  # value missing in the original post
            },
        }
    ]
}

answer = ask(
    "What manufacturing challenges are unique to bispecific antibodies?",
    settings=Settings(
        llm="ollama/llama3",
        llm_config=local_llm_config,
        summary_llm="ollama/llama3",
        summary_llm_config=local_llm_config,
    ),
)
```
When I run the above code, I get the following error:
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
```
Traceback (most recent call last):
  File "D:\Study\论文\检索增强\paper-qa-main\1.py", line 16, in <module>
    answer = ask(
             ^^^^
  File "D:\Study\论文\检索增强\paper-qa-main\paperqa\agents\__init__.py", line 92, in ask
    return get_loop().run_until_complete(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Anaconda\Lib\asyncio\base_events.py", line 674, in run_until_complete
    self.run_forever()
  File "D:\Anaconda\Lib\asyncio\windows_events.py", line 322, in run_forever
    super().run_forever()
  File "D:\Anaconda\Lib\asyncio\base_events.py", line 641, in run_forever
    self._run_once()
  File "D:\Anaconda\Lib\asyncio\base_events.py", line 1948, in _run_once
    event_list = self._selector.select(timeout)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Anaconda\Lib\asyncio\windows_events.py", line 445, in select
    self._poll(timeout)
  File "D:\Anaconda\Lib\asyncio\windows_events.py", line 774, in _poll
    status = _overlapped.GetQueuedCompletionStatus(self._iocp, ms)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
KeyboardInterrupt
```
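A traceback ending in `KeyboardInterrupt` at the selector's blocking wait is exactly what CTRL-C looks like while the event loop is idle waiting on the LLM call, rather than a bug in paperqa itself. A minimal sketch, plain `asyncio` with the interrupt simulated by an explicit `raise` instead of a real signal, shows that the interrupt propagates out of the loop like any other `BaseException`:

```python
import asyncio

async def long_run() -> None:
    # Simulate pressing CTRL-C mid-run; a real interrupt would instead
    # surface from the selector's blocking wait, as in the traceback above.
    raise KeyboardInterrupt

try:
    asyncio.run(long_run())
    outcome = "completed"
except KeyboardInterrupt:
    outcome = "interrupted"

print(outcome)  # prints "interrupted"
```

If you did not press CTRL-C intentionally, check whether the Ollama server at the configured `api_base` is actually reachable, since a hung request leaves the loop blocked at this exact wait.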