Closed: @52THANOS closed this issue 3 months ago
Hey there, @52THANOS! I'm here to assist you with any bugs or questions you have. Let's work together to resolve this issue.
To resolve the ValueError related to the OpenAI API key when using Ollama and a local embedding model with llama-index, you need to either set the OpenAI API key correctly or disable the OpenAI LLM entirely. Here are the steps you can follow:
Set the OPENAI_API_KEY environment variable:
export OPENAI_API_KEY='your_openai_api_key'
Set the API key directly in your code before initialization:
import openai
openai.api_key = 'your_openai_api_key'
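Either way, it can help to fail fast with a clear message before any llama-index objects are constructed. Here is a minimal sketch; the helper name `require_openai_key` is made up for illustration and is not part of any library:

```python
import os

def require_openai_key() -> str:
    """Illustrative helper: raise early if no OpenAI key is configured."""
    key = os.environ.get("OPENAI_API_KEY", "")
    if not key:
        raise ValueError(
            "No API key found for OpenAI. Set the OPENAI_API_KEY environment "
            "variable or openai.api_key prior to initialization."
        )
    return key

os.environ["OPENAI_API_KEY"] = "sk-example"  # stand-in value for this sketch
print(require_openai_key())  # → sk-example
```

If you plan to run fully local (Ollama plus a HuggingFace embedding model), skip the key entirely and disable the OpenAI LLM as shown in the next step.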
Disable the OpenAI LLM by setting llm=None:
Modify your code to disable the LLM if you do not intend to use OpenAI's models. Here is how you can do it:
from llama_index.core import Settings
from llama_index.legacy.embeddings import HuggingFaceEmbedding
from llama_index.legacy.llms import Ollama
Settings.llm = Ollama(model="llama3", request_timeout=360.0)
Settings.embed_model = HuggingFaceEmbedding("BAAI/bge-small-en-v1.5")
from llama_index.legacy import SimpleDirectoryReader
from llama_index.legacy.node_parser import SentenceSplitter
# load documents
documents = SimpleDirectoryReader(input_files=["assets/metagpt.pdf"]).load_data()
splitter = SentenceSplitter(chunk_size=1024)
nodes = splitter.get_nodes_from_documents(documents)
from llama_index.legacy import SummaryIndex, VectorStoreIndex
# Disable the LLM by setting llm=None
summary_index = SummaryIndex(nodes, embed_model=Settings.embed_model, llm=None)
vector_index = VectorStoreIndex(nodes, embed_model=Settings.embed_model, llm=None)
summary_query_engine = summary_index.as_query_engine(
    response_mode="tree_summarize",
    use_async=True,
    llm=Settings.llm,
)
vector_query_engine = vector_index.as_query_engine(llm=Settings.llm)
from llama_index.legacy.tools import QueryEngineTool
summary_tool = QueryEngineTool.from_defaults(
    query_engine=summary_query_engine,
    description=(
        "Useful for summarization questions related to MetaGPT"
    ),
)
vector_tool = QueryEngineTool.from_defaults(
    query_engine=vector_query_engine,
    description=(
        "Useful for retrieving specific context from the MetaGPT paper."
    ),
)
from llama_index.legacy.query_engine.router_query_engine import RouterQueryEngine
from llama_index.legacy.selectors import LLMSingleSelector
from llama_index.core.callbacks.base import CallbackManager
query_engine = RouterQueryEngine(
    selector=LLMSingleSelector.from_defaults(),
    query_engine_tools=[
        summary_tool,
        vector_tool,
    ],
    verbose=True,
)
response = query_engine.query("What is the summary of the document?")
print(str(response))
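For intuition, the router above works by handing each tool's description to an LLM-backed selector, which picks one query engine per question. Here is a toy sketch of that control flow with a stub selector standing in for LLMSingleSelector; all names are illustrative, not the llama-index API:

```python
# Toy sketch of a router query engine: a selector picks one tool
# based on its description, then that tool answers the query.

def stub_selector(query: str, descriptions: list[str]) -> int:
    # Stand-in for an LLM selector: route summarization questions to
    # the summary tool (index 0), everything else to retrieval (index 1).
    return 0 if "summary" in query.lower() else 1

def route(query: str, tools: list) -> str:
    descriptions = [desc for desc, _ in tools]
    choice = stub_selector(query, descriptions)
    _, engine = tools[choice]
    return engine(query)

tools = [
    ("Useful for summarization questions related to MetaGPT",
     lambda q: "summary answer"),
    ("Useful for retrieving specific context from the MetaGPT paper.",
     lambda q: "retrieved context"),
]

print(route("What is the summary of the document?", tools))  # → summary answer
```

In the real RouterQueryEngine, the selection step is itself an LLM call, which is why the router also fails without a working LLM configured.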
By setting llm=None, you disable the OpenAI LLM, which should prevent the error related to the missing OpenAI API key [1][2].
To continue the conversation, mention @dosu.
@52THANOS legacy doesn't support Settings (it's a copy of llama-index before v0.10.x)
Use proper non-legacy packages and imports
pip install llama-index-embeddings-huggingface llama-index-llms-ollama
from llama_index.core import Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.ollama import Ollama
Settings.llm = Ollama(model="llama3", request_timeout=360.0)
Settings.embed_model = HuggingFaceEmbedding("BAAI/bge-small-en-v1.5")
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex, SummaryIndex
from llama_index.core.node_parser import SentenceSplitter
And so on
I figured it out, thank you. It works now.
How can I use Ollama function calling in a llama-index agent?
Ollama doesn't really support full function calling in their API, only through some raw prompting with a specific model.
Use the ReAct agent instead (I'm pretty sure I answered this in another thread somewhere, maybe that was you too).
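For context on that suggestion: since Ollama models lack native function calling, a ReAct agent drives tools through plain prompting: the model emits a Thought/Action, the framework runs the tool and feeds back an Observation, looping until a final answer. Here is a toy sketch of that loop with a scripted stand-in for the LLM; this is the general ReAct pattern, not the llama-index ReActAgent API:

```python
def react_loop(llm, tools: dict, question: str, max_steps: int = 5) -> str:
    """Minimal ReAct-style loop: alternate LLM steps and tool observations."""
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        step = llm(transcript)  # model emits either an Action or a Final Answer
        if step.startswith("Final Answer:"):
            return step[len("Final Answer:"):].strip()
        # Expect "Action: tool_name[input]" from the model
        name, arg = step[len("Action: "):].split("[", 1)
        observation = tools[name](arg.rstrip("]"))
        transcript += f"\n{step}\nObservation: {observation}"
    return "no answer"

# Scripted stand-in for a raw-prompted local model: act once, then answer.
def scripted_llm(transcript: str) -> str:
    if "Observation:" not in transcript:
        return "Action: summary_tool[MetaGPT]"
    return "Final Answer: MetaGPT is a multi-agent framework."

tools = {"summary_tool": lambda arg: f"summary of {arg}"}
print(react_loop(scripted_llm, tools, "What is MetaGPT?"))
```

The real agent does the same thing with a parsing layer over the model's raw text, which is why it works with models that have no structured tool-calling API.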
Bug Description
ValueError:
Could not load OpenAI model. If you intended to use OpenAI, please check your OPENAI_API_KEY. Original error: No API key found for OpenAI. Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization. API keys can be found or created at https://platform.openai.com/account/api-keys
To disable the LLM entirely, set llm=None.
Version
llama-index 0.10.39
llama-index-agent-openai 0.2.5
llama-index-cli 0.1.12
llama-index-core 0.10.39.post1
llama-index-embeddings-langchain 0.1.2
llama-index-embeddings-openai 0.1.10
llama-index-indices-managed-llama-cloud 0.1.6
llama-index-legacy 0.9.48
llama-index-llms-openai 0.1.21
llama-index-multi-modal-llms-openai 0.1.6
llama-index-program-openai 0.1.6
llama-index-question-gen-openai 0.1.3
llama-index-readers-file 0.1.23
llama-index-readers-llama-parse 0.1.4
Steps to Reproduce
from llama_index.core import Settings
from llama_index.legacy.embeddings import HuggingFaceEmbedding
from llama_index.legacy.llms import Ollama

Settings.llm = Ollama(model="llama3", request_timeout=360.0)
Settings.embed_model = HuggingFaceEmbedding("BAAI/bge-small-en-v1.5")

from llama_index.legacy import SimpleDirectoryReader
from llama_index.legacy.node_parser import SentenceSplitter

# load documents
documents = SimpleDirectoryReader(input_files=["assets/metagpt.pdf"]).load_data()
print(documents)

splitter = SentenceSplitter(chunk_size=1024)
nodes = splitter.get_nodes_from_documents(documents)

from llama_index.legacy import SummaryIndex, VectorStoreIndex

# At this line the error occurred, saying I need an OpenAI key
summary_index = SummaryIndex(nodes, embed_model=Settings.embed_model)
vector_index = VectorStoreIndex(nodes, embed_model=Settings.embed_model)
summary_query_engine = summary_index.as_query_engine(
    response_mode="tree_summarize",
    use_async=True,
    llm=Settings.llm,
)
vector_query_engine = vector_index.as_query_engine(llm=Settings.llm)

from llama_index.legacy.tools import QueryEngineTool

summary_tool = QueryEngineTool.from_defaults(
    query_engine=summary_query_engine,
    description=(
        "Useful for summarization questions related to MetaGPT"
    ),
)
vector_tool = QueryEngineTool.from_defaults(
    query_engine=vector_query_engine,
    description=(
        "Useful for retrieving specific context from the MetaGPT paper."
    ),
)

from llama_index.legacy.query_engine.router_query_engine import RouterQueryEngine
from llama_index.legacy.selectors import LLMSingleSelector
from llama_index.core.callbacks.base import CallbackManager

query_engine = RouterQueryEngine(
    selector=LLMSingleSelector.from_defaults(),
    query_engine_tools=[
        summary_tool,
        vector_tool,
    ],
    verbose=True,
)

response = query_engine.query("What is the summary of the document?")
print(str(response))
Relevant Logs/Tracebacks