run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

[Documentation]: OpenLLM #12644

Closed whoo9112 closed 5 months ago

whoo9112 commented 5 months ago

Documentation Issue Description

OpenLLM cannot be used.

code:

    from vllm.model_executor.parallel_utils.parallel_state import destroy_model_parallel

    destroy_model_parallel()
    llm = OpenLLM("qwen/Qwen-7B-Chat", trust_remote_code=True)
    answer = llm.complete("hello")

which raises the error below:

/usr/local/lib/python3.10/dist-packages/pydantic/main.cpython-310-x86_64-linux-gnu.so in pydantic.main.BaseModel.__setattr__()

ValueError: "OpenLLM" object has no field "_llm"

Documentation Link

https://docs.llamaindex.ai/en/stable/api_reference/llms/openllm/

dosubot[bot] commented 5 months ago

The error you're encountering, ValueError: "OpenLLM" object has no field "_llm", is likely due to a typo or an inconsistency in attribute naming, either within the OpenLLM class itself or at the point where the _llm attribute is accessed on an OpenLLM instance.

In the provided code snippet, the OpenLLM class correctly initializes an attribute named _llm in its __init__ method:

self._llm = openllm.LLM[Any, Any](
    model_id,
    model_version=model_version,
    model_tag=model_tag,
    prompt_template=prompt_template,
    system_message=system_prompt,
    backend=backend,
    quantize=quantize,
    serialisation=serialization,
    trust_remote_code=trust_remote_code,
    embedded=True,
    **attrs,
)

This means the OpenLLM object does indeed have a field named _llm, and it is initialized as an instance of openllm.LLM.

If you're encountering this error, it could be due to one of the following reasons:

  1. Access Before Initialization: You might be trying to access the _llm attribute before it is initialized. Ensure that you are not accessing _llm before calling OpenLLM's constructor.

  2. Typo in Access: There might be a typo or incorrect reference when accessing the _llm attribute. Ensure that the attribute name is spelled correctly wherever it is accessed.

  3. External Modification: If the codebase is large or involves dynamic attribute handling, it's possible that _llm is being deleted or modified outside the OpenLLM class definition. Check for any external modifications to the OpenLLM instances.

Given the provided context, the class definition and initialization of _llm are correct. If the error persists, review how and where the OpenLLM instances are being used and ensure that _llm is accessed after the object has been properly initialized.
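
The pydantic mechanism behind this error can be sketched as follows. This is an illustrative example, not llama-index code: the Wrapper class and its attribute values are assumptions. Pydantic models reject assignment to undeclared underscore attributes in __setattr__, which is why private attributes like _llm must be declared with PrivateAttr:

```python
from typing import Any
from pydantic import BaseModel, PrivateAttr

class Wrapper(BaseModel):
    # Illustrative stand-in for an LLM wrapper; not the real OpenLLM class.
    name: str
    # Declaring the private attribute is what makes the assignment in
    # __init__ legal under pydantic.
    _llm: Any = PrivateAttr(default=None)

    def __init__(self, **data: Any) -> None:
        super().__init__(**data)
        # Without the PrivateAttr declaration above, pydantic's __setattr__
        # rejects this assignment with:
        #   ValueError: "Wrapper" object has no field "_llm"
        self._llm = f"backend-for-{self.name}"

w = Wrapper(name="qwen/Qwen-7B-Chat")
print(w._llm)  # backend-for-qwen/Qwen-7B-Chat
```

Since the released OpenLLM class already declares and initializes _llm, seeing this error usually points to a version mismatch or to the attribute being accessed on an object that never completed initialization.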

To avoid a ValueError when using the OpenLLM class, make sure the object is fully initialized before calling any of its methods. The __init__ method must run first, since it sets up the necessary configuration and initializes the underlying OpenLLM model. After initialization, you can safely call the complete, chat, stream_complete, or stream_chat methods, depending on your requirements.

Here's a brief overview of the sequence:

  1. Initialization: Create an instance of OpenLLM with the required parameters.
    llm = OpenLLM(model_id="HuggingFaceH4/zephyr-7b-alpha")
  2. Usage: After initialization, you can use methods like complete, chat, stream_complete, or stream_chat.
    response = llm.complete("To infinity, and beyond")

This sequence ensures that the OpenLLM class is ready and configured correctly before attempting to perform any operations, thus avoiding potential ValueErrors related to uninitialized or misconfigured attributes.

If you're still encountering issues, could you provide more information about your environment? In particular, the version of LlamaIndex you're using and any other relevant details about your setup will help in diagnosing the issue further.
