fahdmirza opened this issue 2 months ago
Hey @fahdmirza! I'm here to help you with any bugs, questions, or contributions. Let's tackle this together!
To use Version 5 with LiteLLM and Ollama, follow these steps:
Install PaperQA2:
pip install paper-qa
Set up the environment:
export OPENAI_API_KEY=sk-...
Use the CLI to ask a question:
pqa ask 'What manufacturing challenges are unique to bispecific antibodies?'
Example Python code for integration:
from paperqa import Settings, ask

answer = ask(
    "What manufacturing challenges are unique to bispecific antibodies?",
    settings=Settings(temperature=0.5, paper_directory="my_papers"),
)
print(answer.formatted_answer)
This example demonstrates how to install PaperQA2, set up the necessary environment variables, and use both the CLI and Python code to ask questions using LiteLLM [1].
To continue talking to Dosu, mention @dosu.
That doesn't give the answer. Please disable this bot, as it's annoying and very misleading.
Same question here. I am trying this:
from paperqa import Settings, ask

local_llm_config = dict(
    model_list=dict(
        model_name="llama3",
        litellm_params=dict(
            model="ollama/llama3.1",
            api_base="http://localhost:11434",
            temperature=0.1,
            frequency_penalty=1.5,
            max_tokens=512,
        ),
    )
)

answer = ask(
    "What manufacturing challenges are unique to bispecific antibodies?",
    settings=Settings(
        llm="llama3",
        llm_config=local_llm_config,
        summary_llm="llama3",
        summary_llm_config=local_llm_config,
    ),
)
I am getting this error: TypeError: string indices must be integers, not 'str'
Traceback (most recent call last):
File "/path/paper2.py", line 16, in <module>
answer = ask(
^^^^
File "/path/.venv/lib/python3.11/site-packages/paperqa/agents/__init__.py", line 92, in ask
return get_loop().run_until_complete(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bengonzalez/.pyenv/versions/3.11.9/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/path/.venv/lib/python3.11/site-packages/paperqa/agents/main.py", line 81, in agent_query
response = await run_agent(docs, query, agent_type, **env_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/.venv/lib/python3.11/site-packages/paperqa/agents/main.py", line 206, in run_agent
answer, agent_status = await run_fake_agent(query, docs, **env_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/.venv/lib/python3.11/site-packages/paperqa/agents/main.py", line 238, in run_fake_agent
_, tools = await env.reset()
^^^^^^^^^^^^^^^^^
File "/path/.venv/lib/python3.11/site-packages/paperqa/agents/env.py", line 134, in reset
self.state, self.tools = self.make_initial_state_and_tools()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/.venv/lib/python3.11/site-packages/paperqa/agents/env.py", line 124, in make_initial_state_and_tools
self.tools = settings_to_tools(
^^^^^^^^^^^^^^^^^^
File "/path/.venv/lib/python3.11/site-packages/paperqa/agents/env.py", line 40, in settings_to_tools
llm_model = llm_model or settings.get_llm()
^^^^^^^^^^^^^^^^^^
File "/path/.venv/lib/python3.11/site-packages/paperqa/settings.py", line 497, in get_llm
return LiteLLMModel(
^^^^^^^^^^^^^
File "/path/.venv/lib/python3.11/site-packages/pydantic/main.py", line 209, in __init__
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/.venv/lib/python3.11/site-packages/paperqa/llms.py", line 392, in maybe_set_config_attribute
and len({m["model_name"] for m in data["config"]["model_list"]}) > 1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/.venv/lib/python3.11/site-packages/paperqa/llms.py", line 392, in <setcomp>
and len({m["model_name"] for m in data["config"]["model_list"]}) > 1
~^^^^^^^^^^^^^^
TypeError: string indices must be integers, not 'str'
Maybe their documentation is not perfect yet; it seems that it's expecting a list instead of a dict...
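For what it's worth, here is a minimal, hypothetical reproduction (not PaperQA code) of why a dict there raises that TypeError: iterating over a dict yields its string keys, so m["model_name"] ends up indexing a string.

# Hypothetical reproduction of the TypeError above (not PaperQA code)
config = {"model_list": {"model_name": "llama3"}}  # dict instead of a list of dicts
names = {m["model_name"] for m in config["model_list"]}  # m is the key "model_name", a str
# TypeError: string indices must be integers, not 'str'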
I managed to make this work:
from paperqa import Settings, ask

local_llm_config = {
    "model_list": [
        {
            "model_name": "ollama/llama3.1",
            "litellm_params": {
                "model": "ollama/llama3.1",
                "api_base": "http://localhost:11434",
            },
        }
    ]
}

answer = ask(
    "What manufacturing challenges are unique to bispecific antibodies?",
    settings=Settings(
        llm="ollama/llama3.1",
        llm_config=local_llm_config,
        summary_llm="ollama/llama3.1",
        summary_llm_config=local_llm_config,
    ),
)
I am unable to make it work locally. I am using a llamafile, which works with litellm but not with paperqa. The code for using only litellm is:
import os

from litellm import completion

os.environ["OPENAI_API_KEY"] = "i-am-not-used-but-must-be-here"

messages = [{"content": "Write a limerick about ClickHouse", "role": "user"}]

response = completion(
    model="command-nightly",
    messages=messages,
    api_base="http://localhost:11434/v1",
    custom_llm_provider="openai",
)
print(response.choices[0].message.content)
The code that doesn't work with paperqa is:
from paperqa import Settings, ask
import os

os.environ["OPENAI_API_KEY"] = "i-am-not-used-but-must-be-here"

local_llm_config = {
    "model_list": [
        {
            "model_name": "one",
            "litellm_params": {
                "model": "commander",
                "api_base": "http://localhost:11434/v1",
                "custom_llm_provider": "openai",
            },
        }
    ]
}

answer = ask(
    "What manufacturing challenges are unique to bispecific antibodies?",
    settings=Settings(
        llm="one",
        llm_config=local_llm_config,
        summary_llm="one",
        summary_llm_config=local_llm_config,
    ),
)
Error given:
raise litellm.exceptions.BadRequestError( # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=one
Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers
If I put something like "openai/local" as the llm, it complains that the authorization key is not correct. How do I fix this?
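One untested idea, following LiteLLM's openai/ prefix convention and the working Ollama config above: keep the dummy key, but supply it in litellm_params rather than only in the environment. The model name openai/local below is just a placeholder, and I have not confirmed that paperqa forwards api_key this way.

# Untested sketch for an OpenAI-compatible server (e.g. a llamafile).
# "openai/local" and the api_key value are placeholders, not confirmed settings.
from paperqa import Settings, ask

local_llm_config = {
    "model_list": [
        {
            "model_name": "openai/local",
            "litellm_params": {
                "model": "openai/local",
                "api_base": "http://localhost:11434/v1",
                "api_key": "i-am-not-used-but-must-be-here",
            },
        }
    ]
}

answer = ask(
    "What manufacturing challenges are unique to bispecific antibodies?",
    settings=Settings(
        llm="openai/local",
        llm_config=local_llm_config,
        summary_llm="openai/local",
        summary_llm_config=local_llm_config,
    ),
)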
I managed to make this work:
from paperqa import Settings, ask

local_llm_config = {
    "model_list": [
        {
            "model_name": "ollama/llama3.1",
            "litellm_params": {
                "model": "ollama/llama3.1",
                "api_base": "http://localhost:11434",
            },
        }
    ]
}

answer = ask(
    "What manufacturing challenges are unique to bispecific antibodies?",
    settings=Settings(
        llm="ollama/llama3.1",
        llm_config=local_llm_config,
        summary_llm="ollama/llama3.1",
        summary_llm_config=local_llm_config,
    ),
)
It worked, and I did, but you need to run the local model with Ollama first. You can search your browser for how to run local models with Ollama, then execute this code once the model is running. Thanks to the person who provided the method. You're welcome to ask me; I'm free these days.
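If you're not sure whether the local server is actually up before calling ask(), a quick check like this (a hypothetical helper, not part of PaperQA) can save a round of debugging; it assumes the default Ollama address of localhost:11434.

# Sketch: confirm the Ollama server is reachable before running paperqa
import urllib.request

try:
    with urllib.request.urlopen("http://localhost:11434", timeout=5) as resp:
        # Ollama's root endpoint normally returns a 200 with "Ollama is running"
        print("Ollama server responded with status", resp.status)
except OSError as exc:
    print("Ollama does not appear to be running on localhost:11434:", exc)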
Hi, can you help me look into this issue?
from paperqa import Settings, ask
import os

os.environ["OPENAI_API_KEY"] = "EMPTY"

local_llm_config = {
    "model_list": [
        {
            "model_name": "ollama/llama3",
            "litellm_params": {
                "model": "ollama/llama3",
                "api_base": "https://ap",
            },
        }
    ]
}

answer = ask(
    "What manufacturing challenges are unique to bispecific antibodies?",
    settings=Settings(
        llm="ollama/llama3",
        llm_config=local_llm_config,
        summary_llm="ollama/llama3",
        summary_llm_config=local_llm_config,
    ),
)
I want to change the GPT model to llama3. The code is as above, but an error occurred when running it; the error is as follows.
raise client_error(req.connection_key, exc) from exc
aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host localhost:11434 ssl:default [The remote computer refused the network connection.]
Received Model Group=ollama/llama3 Available Model Group Fallbacks=None LiteLLM Retried: 2 times, LiteLLM Max Retries: 3
Hi, is there any complete example of how to use version 5 with LiteLLM?
I have seen your comments on various issues with Ollama-based models, but the documentation doesn't say how to actually use it. Once LiteLLM is installed and integrated with Ollama, how do we use it with paperqa?
Please advise. Thanks.