run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

[Question]: Joint QA Summary Query Engine no openAI #12749

Open Aekansh-Ak opened 3 months ago

Aekansh-Ak commented 3 months ago


Question

Code:

```python
llm = HuggingFaceLLM(
    model_name=model_id,
    tokenizer_name=model_id,
    query_wrapper_prompt=PromptTemplate(PROMPT),
    context_window=3900,
    max_new_tokens=256,
    model_kwargs={"torch_dtype": torch.float16},
    tokenizer_kwargs={},
    generate_kwargs={"temperature": 0.3, "top_p": 0.90},
    device_map="auto",
)

Settings.llm = llm
Settings.embed_model = embeddings
# Note: no trailing commas here -- `Settings.context_window = 4096,`
# would assign a one-element tuple instead of an int.
Settings.context_window = 4096
Settings.num_output = 1500
Settings.chunk_overlap_ratio = 0.3

vsi = VectorStoreIndex.from_vector_store(vector_store=vector_store)

query_engine_builder = QASummaryQueryEngineBuilder(
    llm=llm,
)
```

Error:

```
Traceback (most recent call last):
  File "/home/seceon/opensearch_lm_index/model2.py", line 123, in <module>
    query_engine_builder = QASummaryQueryEngineBuilder(
  File "/usr/local/lib/python3.10/site-packages/llama_index/core/composability/joint_qa_summary.py", line 77, in __init__
    self._service_context = service_context or ServiceContext.from_defaults()
  File "/usr/local/lib/python3.10/site-packages/deprecated/classic.py", line 285, in wrapper_function
    return wrapped(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/llama_index/core/service_context.py", line 187, in from_defaults
    llm_predictor = llm_predictor or LLMPredictor(
  File "/usr/local/lib/python3.10/site-packages/llama_index/core/service_context_elements/llm_predictor.py", line 108, in __init__
    self._llm = resolve_llm(llm, callback_manager=callback_manager)
  File "/usr/local/lib/python3.10/site-packages/llama_index/core/llms/utils.py", line 48, in resolve_llm
    raise ValueError(
ValueError:
Could not load OpenAI model. If you intended to use OpenAI, please check
your OPENAI_API_KEY. Original error: No API key found for OpenAI. Please
set either the OPENAI_API_KEY environment variable or openai.api_key prior
to initialization. API keys can be found or created at
https://platform.openai.com/account/api-keys

To disable the LLM entirely, set llm=None.
```


This time I defined QASummaryQueryEngineBuilder after setting up the LLM, but I am still getting this error.

dosubot[bot] commented 3 months ago

The error you're encountering suggests that the LlamaIndex framework is attempting to use an OpenAI model by default, which requires an API key. Since you're not using OpenAI's models, you need to explicitly specify the model you're using or disable the LLM to avoid this error.
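The traceback makes the mechanism clear: the builder's `__init__` does `service_context or ServiceContext.from_defaults()`, so any LLM configured elsewhere is never consulted when no service context is passed in. Here is a minimal, library-free sketch of that `or`-fallback pitfall; all names (`Builder`, `resolve_llm`, `MockLLM`) are hypothetical stand-ins for the real llama_index classes, not the library's actual code:

```python
class MockLLM:
    """Stand-in for a locally resolved (non-OpenAI) LLM."""

def resolve_llm(llm):
    # Stand-in for llama_index's LLM resolution: with no explicit LLM,
    # it tries the OpenAI default, which needs an API key.
    if llm == "default":
        raise ValueError("No API key found for OpenAI.")
    return llm if llm is not None else MockLLM()

class Builder:
    def __init__(self, service_context=None, llm="default"):
        # The `or` fallback seen in the traceback: a missing
        # service_context rebuilds defaults, and the defaults resolve
        # the LLM independently of the `llm` argument.
        self.service_context = service_context or self._defaults()

    def _defaults(self):
        return {"llm": resolve_llm("default")}

# Passing llm= alone does not prevent the default resolution:
try:
    Builder(llm=MockLLM())
except ValueError as e:
    print("raised:", e)  # raised: No API key found for OpenAI.

# Supplying the service context explicitly avoids the fallback:
b = Builder(service_context={"llm": MockLLM()})
print(type(b.service_context["llm"]).__name__)  # MockLLM
```

This is why "defining the builder after the LLM" does not help: ordering is irrelevant when the fallback path never looks at the separately configured LLM.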

To resolve this, ensure you're specifying your model correctly when initializing or calling functions related to the LLMPredictor. If you're using a specific model, such as a local LlamaCPP model, you should specify the model path directly, like so:

```python
# resolve_llm must be imported first; the traceback shows it lives in
# llama_index.core.llms.utils.
from llama_index.core.llms.utils import resolve_llm

llm = resolve_llm("local:/path/to/your/model")
```

Make sure the path is correct and that you have the necessary dependencies installed for your model.

Alternatively, if you want to disable the LLM entirely (for example, for testing purposes), you can set llm=None to use a mock model, bypassing the need for an OpenAI API key:

```python
from llama_index.core.llms.utils import resolve_llm

llm = resolve_llm(None)  # returns a MockLLM
```

This approach uses a MockLLM, allowing you to proceed without specifying an API key.

For more detailed guidance, refer to the documentation or source code of the LlamaIndex framework, particularly around the resolve_llm function, to ensure your setup aligns with the expected configuration. If the issue persists, providing more details about your setup and the steps leading to the error could help in identifying a more specific solution.
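Given the traceback, the most direct workaround is to hand the builder a service context built from the local models, so `ServiceContext.from_defaults()` is never called without them. This is a hedged sketch, assuming the installed QASummaryQueryEngineBuilder accepts the `service_context` keyword shown in the traceback (ServiceContext is deprecated in newer llama_index releases in favor of `Settings`); `llm` and `embeddings` are the HuggingFaceLLM and embedding model defined earlier in the question:

```python
from llama_index.core import ServiceContext
from llama_index.core.composability import QASummaryQueryEngineBuilder

# Build the defaults *with* the local models, so the LLM resolution
# never falls back to OpenAI.
service_context = ServiceContext.from_defaults(
    llm=llm,
    embed_model=embeddings,
)

query_engine_builder = QASummaryQueryEngineBuilder(
    service_context=service_context,
)
```

If upgrading to a recent llama-index is an option, that may also resolve this, since newer versions of the builder read the LLM from `Settings` directly.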


#### About Dosu

This response is meant to be useful and save you time. It is **not meant to be a precise solution**, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
Aekansh-Ak commented 3 months ago

```
NameError: name 'resolve_llm' is not defined
```