run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

[Bug]: Not able to find 'callback_manager_from_settings_or_context' and 'service_context' not available #15981

Open mw19930312 opened 2 months ago

mw19930312 commented 2 months ago

Bug Description

I recently upgraded llama-index from 0.10.32 to 0.11.8 and encountered two errors:

  1. ImportError: cannot import name 'callback_manager_from_settings_or_context' from 'llama_index.core.settings'.
  2. Unexpected keyword argument 'service_context' in function call

The relevant code is provided under Steps to Reproduce below.

Version

0.11.8

Steps to Reproduce

from llama_index.core.settings import (
    Settings,
    callback_manager_from_settings_or_context,
)

@classmethod
def from_args(
    cls,
    index: BaseGPTIndex,
    llm: LLM | None = None,
    response_synthesizer: BaseSynthesizer | None = None,
    citation_chunk_size: int = DEFAULT_CITATION_CHUNK_SIZE,
    citation_chunk_overlap: int = DEFAULT_CITATION_CHUNK_OVERLAP,
    text_splitter: TextSplitter | None = None,
    citation_qa_template: BasePromptTemplate = CITATION_QA_TEMPLATE,
    citation_refine_template: BasePromptTemplate = CITATION_REFINE_TEMPLATE,
    retriever: BaseRetriever | None = None,
    node_postprocessors: list[BaseNodePostprocessor] | None = None,
    response_mode: ResponseMode = ResponseMode.COMPACT,
    use_async: bool = False,
    streaming: bool = False,
    metadata_mode: MetadataMode = MetadataMode.NONE,
    **kwargs: any,
) -> "CustomCitationQueryEngine":
    """Initialize a CitationQueryEngine object."""
    retriever = retriever or index.as_retriever(**kwargs)

    response_synthesizer = response_synthesizer or get_response_synthesizer(
        llm=llm,
        service_context=index.service_context,
        text_qa_template=citation_qa_template,
        refine_template=citation_refine_template,
        response_mode=response_mode,
        use_async=use_async,
        streaming=streaming,
    )

    return cls(
        retriever=retriever,
        llm=llm,
        response_synthesizer=response_synthesizer,
        callback_manager=callback_manager_from_settings_or_context(
            Settings, index.service_context
        ),
        citation_chunk_size=citation_chunk_size,
        citation_chunk_overlap=citation_chunk_overlap,
        text_splitter=text_splitter,
        node_postprocessors=node_postprocessors,
        metadata_mode=metadata_mode,
        user_message_id=kwargs.get("user_message_id"),
    )

Relevant Logs/Tracebacks

No response

dosubot[bot] commented 2 months ago

I found a similar issue regarding the callback_manager_from_settings_or_context function. Unfortunately, I couldn't find a direct solution for this specific function in the repository. However, I can help you resolve the ImportError and the unexpected keyword argument service_context after upgrading llama_index from 0.10.32 to 0.11.8.

Resolving ImportError for callback_manager_from_settings_or_context

It seems that the callback_manager_from_settings_or_context function is no longer available in 0.11.x; you will need to update your code accordingly.
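
Here is a minimal migration sketch, assuming the callback manager is now read from the global Settings object:

# Old code (before upgrade)
# from llama_index.core.settings import (
#     Settings,
#     callback_manager_from_settings_or_context,
# )
# callback_manager = callback_manager_from_settings_or_context(Settings, service_context)

# New code (after upgrade): read the callback manager from the global Settings
from llama_index.core.settings import Settings

callback_manager = Settings.callback_manager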

Resolving Unexpected Keyword Argument service_context

The service_context module has been moved to a legacy path for backward compatibility. Here is how you can update your imports and usage:

  1. Update Import Statements: Replace the old import paths with the new legacy paths provided for backward compatibility.

    # Old import (before upgrade)
    # from llama_index.service_context import ServiceContext
    
    # New import (after upgrade)
    from llama_index.legacy.service_context import ServiceContext
  2. Update Function Calls: Ensure that you are not using deprecated or removed functions. If callback_manager_from_settings_or_context has been removed or renamed, you need to find the new equivalent function in the updated documentation or codebase.

  3. Remove or Replace Deprecated Arguments: If service_context is no longer a valid keyword argument, you need to remove it or replace it with the correct argument as per the new version's API.

Here is an example of how you might update your code:

# Old code (before upgrade)
from llama_index.service_context import ServiceContext

# Assuming you have a function call like this:
result = some_function(service_context=my_service_context)

# New code (after upgrade)
from llama_index.legacy.service_context import ServiceContext

# Update the function call to remove or replace the deprecated argument
result = some_function()  # Adjust this line based on the new API requirements

Additionally, the ServiceContext class and related functions have been moved to the llama_index.legacy module. You can use the ServiceContext.from_defaults or ServiceContext.from_service_context methods to create or update service contexts. Here is an example of how to use these methods:

from llama_index.legacy.service_context import ServiceContext

# Create a new ServiceContext with default settings
service_context = ServiceContext.from_defaults()

# Update an existing ServiceContext
updated_service_context = ServiceContext.from_service_context(service_context)

Make sure to check the updated documentation for any additional changes or deprecations in the new version [1][2].
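
Note that in 0.11.x the global Settings object is the intended replacement for ServiceContext. A minimal sketch of configuring it (the OpenAI model and embedding choices below are placeholders, and assume the corresponding integration packages are installed):

from llama_index.core import Settings
from llama_index.llms.openai import OpenAI  # assumes llama-index-llms-openai is installed
from llama_index.embeddings.openai import OpenAIEmbedding  # assumes llama-index-embeddings-openai is installed

# Configure global defaults once; components read from Settings
# instead of receiving a service_context argument.
Settings.llm = OpenAI(model="gpt-4o-mini")
Settings.embed_model = OpenAIEmbedding()
Settings.chunk_size = 512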

New Valid Keyword Arguments for get_response_synthesizer

The valid keyword arguments for the get_response_synthesizer function in llama_index version 0.11.8 are shown in the signature below.

Supporting evidence from the provided code:

def get_response_synthesizer(
    service_context: Optional[ServiceContext] = None,
    text_qa_template: Optional[BasePromptTemplate] = None,
    refine_template: Optional[BasePromptTemplate] = None,
    summary_template: Optional[BasePromptTemplate] = None,
    simple_template: Optional[BasePromptTemplate] = None,
    response_mode: ResponseMode = ResponseMode.COMPACT,
    callback_manager: Optional[CallbackManager] = None,
    use_async: bool = False,
    streaming: bool = False,
    structured_answer_filtering: bool = False,
    output_cls: Optional[BaseModel] = None,
    program_factory: Optional[Callable[[PromptTemplate], BasePydanticProgram]] = None,
    verbose: bool = False,
) -> BaseSynthesizer:
    """Get a response synthesizer."""

[3][4]
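
For example, assuming the signature above, a call that omits service_context might look like this sketch:

from llama_index.core import get_response_synthesizer
from llama_index.core.response_synthesizers import ResponseMode

# Build a synthesizer without passing service_context; in 0.11.x the
# LLM and callback manager come from the global Settings by default.
synthesizer = get_response_synthesizer(
    response_mode=ResponseMode.COMPACT,
    use_async=False,
    streaming=False,
)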

To continue talking to Dosu, mention @dosu.

logan-markewich commented 2 months ago

@mw19930312 the service context is completely removed in v0.11.x -- as well as helper methods related to it

Here's your updated code:

from llama_index.core.settings import Settings

@classmethod
def from_args(
    cls,
    index: BaseGPTIndex,
    llm: LLM | None = None,
    response_synthesizer: BaseSynthesizer | None = None,
    citation_chunk_size: int = DEFAULT_CITATION_CHUNK_SIZE,
    citation_chunk_overlap: int = DEFAULT_CITATION_CHUNK_OVERLAP,
    text_splitter: TextSplitter | None = None,
    citation_qa_template: BasePromptTemplate = CITATION_QA_TEMPLATE,
    citation_refine_template: BasePromptTemplate = CITATION_REFINE_TEMPLATE,
    retriever: BaseRetriever | None = None,
    node_postprocessors: list[BaseNodePostprocessor] | None = None,
    response_mode: ResponseMode = ResponseMode.COMPACT,
    use_async: bool = False,
    streaming: bool = False,
    metadata_mode: MetadataMode = MetadataMode.NONE,
    **kwargs: any,
) -> "CustomCitationQueryEngine":
    """Initialize a CitationQueryEngine object."""
    retriever = retriever or index.as_retriever(**kwargs)

    response_synthesizer = response_synthesizer or get_response_synthesizer(
        llm=llm,
        text_qa_template=citation_qa_template,
        refine_template=citation_refine_template,
        response_mode=response_mode,
        use_async=use_async,
        streaming=streaming,
    )

    return cls(
        retriever=retriever,
        llm=llm,
        response_synthesizer=response_synthesizer,
        callback_manager=Settings.callback_manager,
        citation_chunk_size=citation_chunk_size,
        citation_chunk_overlap=citation_chunk_overlap,
        text_splitter=text_splitter,
        node_postprocessors=node_postprocessors,
        metadata_mode=metadata_mode,
        user_message_id=kwargs.get("user_message_id"),
    )
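
For reference, usage of the updated method might look like this sketch (the documents variable and the CustomCitationQueryEngine class are assumed to exist):

from llama_index.core import VectorStoreIndex

# Hypothetical usage: the engine now picks up the callback manager
# from the global Settings rather than from a service context.
index = VectorStoreIndex.from_documents(documents)
query_engine = CustomCitationQueryEngine.from_args(index, citation_chunk_size=512)
response = query_engine.query("What does the document say about X?")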