thetechinsight opened this issue 1 week ago
Hello, @thetechinsight! I'm here to help you with any bugs, questions, or contributions you have. Let's work together to solve this issue.
I found some similar issues and discussions that might help you diagnose and resolve the `openai.AuthenticationError` you are encountering:

Similar Unsolved Discussion:

Closed Issues with Solutions:

- [Bug]: openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: xxxxx****xxxx [2]
In `llama-index==0.9.48`, if you use AzureOpenAI, you need to set up the global service context correctly:

```python
from llama_index import ServiceContext, set_global_service_context
from llama_index.llms import AzureOpenAI  # import needed for the snippet below

service_context = ServiceContext.from_defaults(
    llm=AzureOpenAI(engine="gpt-4-32k", model="gpt-4-32k"),
)
set_global_service_context(service_context)
```
General Issue with Defaulting to OpenAI: The `LangchainEmbedding` class might default to using OpenAI if it is not properly configured. Ensure that the `OllamaEmbedding` class is set up correctly to avoid this fallback.

To resolve your issue, make sure the `OllamaEmbedding` model is configured correctly and that no part of your code or its dependencies is attempting to use OpenAI's services. Here is an example of how to set up the `OllamaEmbedding` model:
```python
from llama_index.legacy.embeddings.ollama_embedding import OllamaEmbedding

# Initialize the OllamaEmbedding model
ollama_embedding = OllamaEmbedding(
    model_name="gemma2:2b",
    base_url="http://localhost:11434",  # Ensure this is the correct URL for your Ollama server
    ollama_additional_kwargs={"mirostat": 0},  # Optional additional kwargs
)

# Get an embedding for a single text (use the public API rather than
# the private _get_text_embedding method)
text = "Sample text to embed"
embedding = ollama_embedding.get_text_embedding(text)
print(embedding)

# Get embeddings for a list of texts
texts = ["Sample text 1", "Sample text 2"]
embeddings = ollama_embedding.get_text_embedding_batch(texts)
print(embeddings)
```
Ensure that the `base_url` is correctly set to your Ollama server's host URL to avoid any unintended API calls to OpenAI [3][4].
Additionally, double-check your environment variables and configuration to ensure that no OpenAI API key is being used or required by any part of your code or its dependencies.
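A quick way to do that check is to scan the process environment for OpenAI-related credentials before running the evaluation. This is a minimal, self-contained sketch; the list of variable names is an assumption about which settings could be picked up, so adjust it for your setup:

```python
import os

# Variable names that OpenAI-based clients commonly read from the
# environment (assumed list; extend as needed for your setup).
SUSPECT_VARS = ["OPENAI_API_KEY", "OPENAI_API_BASE", "AZURE_OPENAI_API_KEY"]

def find_openai_env_vars(environ=os.environ):
    """Return the names of suspect variables that are currently set."""
    return [name for name in SUSPECT_VARS if environ.get(name)]

if __name__ == "__main__":
    found = find_openai_env_vars()
    if found:
        print(f"OpenAI-related variables set: {found}")
    else:
        print("No OpenAI-related environment variables found.")
```

If any of these are set, unset them (or explicitly configure every LLM and embedding component) so that nothing silently falls back to OpenAI.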
To continue talking to Dosu, mention @dosu.
Question
Hello everyone,
I've developed two RAG applications (Naive RAG and Advanced RAG) using the following technologies:

- LLM Framework: llama-index
- LLM Model: Llama3.1 (8b) via Ollama
- Embedding Model: OllamaEmbedding
I'm encountering the following error when running the evaluation. I'm not using the OpenAI API directly in my code; however, it seems that llama-index might be using OpenAI internally.
In the following code, I create a vector database and store it locally:
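The original snippet is not reproduced here, but for context, a minimal sketch of how such an index is typically built and persisted with llama-index while keeping everything on Ollama. The model names, directory paths, and global `Settings` usage below are assumptions for illustration, not the author's actual code:

```python
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.ollama import OllamaEmbedding
from llama_index.llms.ollama import Ollama

# Configure BOTH the LLM and the embedding model globally; if either is
# left unset, llama-index falls back to OpenAI and raises a 401 when no
# valid OPENAI_API_KEY is present.
Settings.llm = Ollama(model="llama3.1:8b", base_url="http://localhost:11434")
Settings.embed_model = OllamaEmbedding(
    model_name="gemma2:2b", base_url="http://localhost:11434"
)

# Build the index from local documents and persist it to disk
# ("data" and "./storage" are placeholder paths).
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
index.storage_context.persist(persist_dir="./storage")
```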
To evaluate the RAG applications I am using Promptfoo; here is the retriever.py file referenced in my promptfooconfig.yaml:
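The actual retriever.py is not shown here. For reference, a Promptfoo Python provider is expected to expose a `call_api(prompt, options, context)` function that returns a dict with an `"output"` key. The skeleton below stubs out the retrieval step (the `retrieve` helper is hypothetical; a real version would query the persisted llama-index vector store):

```python
# Hypothetical skeleton of a promptfoo Python provider (retriever.py).
# promptfoo imports this file and calls call_api(prompt, options, context)
# for each test case, expecting a dict with an "output" key.

def retrieve(query: str) -> str:
    """Stubbed retrieval step; a real version would load the persisted
    llama-index store and return the retrieved context instead."""
    return f"retrieved context for: {query}"

def call_api(prompt: str, options: dict, context: dict) -> dict:
    """Entry point invoked by promptfoo."""
    return {"output": retrieve(prompt)}

if __name__ == "__main__":
    print(call_api("What is RAG?", {}, {}))
```

Making sure this file never constructs a default (OpenAI-backed) llama-index component is one way to keep the Promptfoo run from triggering the 401.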
Could you please help me diagnose why this error is occurring and how I might resolve it?
Thank you! Ankur