Future-House / paper-qa

High accuracy RAG for answering questions from scientific documents with citations
Apache License 2.0

LLM #481

Open FahaxikiHoney opened 1 week ago

FahaxikiHoney commented 1 week ago

from paperqa import Settings, ask
import os

os.environ["OPENAI_API_KEY"] = "EMPTY"

local_llm_config = {
    "model_list": [
        {
            "model_name": "ollama/llama3",
            "litellm_params": {
                "model": "ollama/llama3",
                "api_base": "https://ap"
            }
        }
    ]
}

answer = ask(
    "What manufacturing challenges are unique to bispecific antibodies?",
    settings=Settings(
        llm="ollama/llama3",
        llm_config=local_llm_config,
        summary_llm="ollama/llama3",
        summary_llm_config=local_llm_config,
    ),
)

I want to switch from the GPT model to llama3. The code is shown above, but it fails when I run it with the following error:

raise client_error(req.connection_key, exc) from exc
aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host localhost:11434 ssl:default [The remote computer refused the network connection.]

Received Model Group=ollama/llama3 Available Model Group Fallbacks=None LiteLLM Retried: 2 times, LiteLLM Max Retries: 3

dosubot[bot] commented 1 week ago

I found a similar unsolved discussion that might be relevant to your issue: Local LLM issue [1].

The error you're encountering, ClientConnectorError: Cannot connect to host localhost:11434, suggests that the application is trying to connect to a local server that is either not running or not accessible. Here are some steps to troubleshoot and resolve this issue:

  1. Check if the Local Server is Running: Ensure that the server hosting the ollama/llama3 model is running and accessible at localhost:11434. You can verify this by accessing the server directly via a web browser or with a tool like curl (see the Python sketch after this list).

  2. Verify API Base URL: The api_base parameter in your configuration is currently set to an empty string. You need to set it to the correct URL of the server hosting the ollama/llama3 model. For example:

    "api_base": "http://localhost:11434/v1"
  3. Network Configuration: Ensure that there are no firewall rules or network configurations blocking the connection to localhost:11434.

  4. SSL Configuration: If the server requires SSL, ensure that the SSL certificates are correctly configured. If SSL is not required, you can disable it by setting the appropriate parameters in your configuration.
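For step 1, here is a minimal connectivity check you can run from Python instead of curl (a sketch, assuming the default Ollama port 11434; a running Ollama server normally answers a GET on its root URL with "Ollama is running"):

import urllib.request

try:
    # Query the Ollama root endpoint; a healthy server returns HTTP 200.
    with urllib.request.urlopen("http://localhost:11434", timeout=5) as resp:
        print(resp.status, resp.read().decode())
except OSError as exc:
    # URLError is a subclass of OSError, so connection refusals land here.
    print("Ollama server is not reachable:", exc)

If this fails with the same "connection refused" message, the problem is with the Ollama server itself rather than with paper-qa.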

Here is an updated version of your code with the api_base parameter set:

from paperqa import Settings, ask
import os

os.environ["OPENAI_API_KEY"] = "EMPTY"

local_llm_config = {
    "model_list": [
        {
            "model_name": "ollama/llama3",
            "litellm_params": {
                "model": "ollama/llama3",
                "api_base": "http://localhost:11434/v1"
            }
        }
    ]
}

answer = ask(
    "What manufacturing challenges are unique to bispecific antibodies?",
    settings=Settings(
        llm="ollama/llama3",
        llm_config=local_llm_config,
        summary_llm="ollama/llama3",
        summary_llm_config=local_llm_config,
    ),
)
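Note: depending on your LiteLLM version, the native ollama/ provider may expect the api_base without the /v1 suffix (i.e. "http://localhost:11434"); the /v1 path is typically used for OpenAI-compatible endpoints. If the connection still fails with /v1, try dropping it.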

If the issue persists, you might want to check the server logs for any errors or additional information that could help diagnose the problem.

To continue talking to Dosu, mention @dosu.