Open mw19930312 opened 2 months ago
I found a similar issue regarding the `callback_manager_from_settings_or_context` function. Unfortunately, I couldn't find a direct solution for this specific function in the repository. However, I can help you resolve the `ImportError` and the unexpected keyword argument `service_context` after upgrading llama_index from 0.10.32 to 0.11.8.
### `callback_manager_from_settings_or_context`

It seems that the `callback_manager_from_settings_or_context` function is no longer available in the new version; you will need to update your code accordingly.
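The removed helper used to resolve a callback manager from either the global settings or a `ServiceContext`. In 0.11.x the global `Settings` object is the single source of truth, so reading `Settings.callback_manager` directly covers the same need (this is also what the maintainer's reply below does); a minimal before/after sketch, with the 0.10.x lines shown commented out:

```python
from llama_index.core.settings import Settings

# 0.10.x pattern (helper removed in 0.11.x):
# from llama_index.core.settings import callback_manager_from_settings_or_context
# callback_manager = callback_manager_from_settings_or_context(Settings, service_context)

# 0.11.x: read the callback manager directly from the global Settings
callback_manager = Settings.callback_manager
```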
### `service_context`

The `service_context` module has been moved to a legacy path for backward compatibility. Here is how you can update your imports and usage:
1. **Update Import Statements:** Replace the old import paths with the new legacy paths provided for backward compatibility.

```python
# Old import (before upgrade)
# from llama_index.service_context import ServiceContext

# New import (after upgrade)
from llama_index.legacy.service_context import ServiceContext
```
2. **Update Function Calls:** Ensure that you are not using deprecated or removed functions. If `callback_manager_from_settings_or_context` has been removed or renamed, you need to find the new equivalent function in the updated documentation or codebase.

3. **Remove or Replace Deprecated Arguments:** If `service_context` is no longer a valid keyword argument, remove it or replace it with the correct argument as per the new version's API.
Here is an example of how you might update your code:
```python
# Old code (before upgrade):
# from llama_index.service_context import ServiceContext
# Assuming you had a function call like this:
# result = some_function(service_context=my_service_context)

# New code (after upgrade)
from llama_index.legacy.service_context import ServiceContext

# Update the function call to remove or replace the deprecated argument
result = some_function()  # Adjust this line based on the new API requirements
```
Additionally, the `ServiceContext` class and related functions have been moved to the `llama_index.legacy` module. You can use the `ServiceContext.from_defaults` or `ServiceContext.from_service_context` methods to create or update service contexts. Here is an example of how to use these methods:
```python
from llama_index.legacy.service_context import ServiceContext

# Create a new ServiceContext with default settings
service_context = ServiceContext.from_defaults()

# Update an existing ServiceContext
updated_service_context = ServiceContext.from_service_context(service_context)
```
Make sure to check the updated documentation for any additional changes or deprecations in the new version [1][2].
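Note that the maintainer's reply below states that `ServiceContext` is removed entirely in 0.11.x, so the non-legacy migration path is the global `Settings` object rather than the legacy import. A minimal sketch of that pattern, assuming the separately installed `llama-index-llms-openai` integration (the model name is a placeholder):

```python
from llama_index.core import Settings
from llama_index.llms.openai import OpenAI

# 0.10.x: service_context = ServiceContext.from_defaults(llm=OpenAI(...))
# 0.11.x: configure the global Settings once; components read from it implicitly
Settings.llm = OpenAI(model="gpt-4o-mini")
Settings.chunk_size = 512
```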
### `get_response_synthesizer`

In llama_index version 0.11.8, the valid keyword arguments for the `get_response_synthesizer` function are:
- `service_context`
- `text_qa_template`
- `refine_template`
- `summary_template`
- `simple_template`
- `response_mode`
- `callback_manager`
- `use_async`
- `streaming`
- `structured_answer_filtering`
- `output_cls`
- `program_factory`
- `verbose`
Supporting evidence from the provided code:
```python
def get_response_synthesizer(
    service_context: Optional[ServiceContext] = None,
    text_qa_template: Optional[BasePromptTemplate] = None,
    refine_template: Optional[BasePromptTemplate] = None,
    summary_template: Optional[BasePromptTemplate] = None,
    simple_template: Optional[BasePromptTemplate] = None,
    response_mode: ResponseMode = ResponseMode.COMPACT,
    callback_manager: Optional[CallbackManager] = None,
    use_async: bool = False,
    streaming: bool = False,
    structured_answer_filtering: bool = False,
    output_cls: Optional[BaseModel] = None,
    program_factory: Optional[Callable[[PromptTemplate], BasePydanticProgram]] = None,
    verbose: bool = False,
) -> BaseSynthesizer:
    """Get a response synthesizer."""
```
To continue talking to Dosu, mention @dosu.
@mw19930312 the service context is completely removed in v0.11.x -- as well as the helper methods related to it.

Here's your code, updated:
```python
from typing import Any

from llama_index.core.settings import Settings


@classmethod
def from_args(
    cls,
    index: BaseGPTIndex,
    llm: LLM | None = None,
    response_synthesizer: BaseSynthesizer | None = None,
    citation_chunk_size: int = DEFAULT_CITATION_CHUNK_SIZE,
    citation_chunk_overlap: int = DEFAULT_CITATION_CHUNK_OVERLAP,
    text_splitter: TextSplitter | None = None,
    citation_qa_template: BasePromptTemplate = CITATION_QA_TEMPLATE,
    citation_refine_template: BasePromptTemplate = CITATION_REFINE_TEMPLATE,
    retriever: BaseRetriever | None = None,
    node_postprocessors: list[BaseNodePostprocessor] | None = None,
    response_mode: ResponseMode = ResponseMode.COMPACT,
    use_async: bool = False,
    streaming: bool = False,
    metadata_mode: MetadataMode = MetadataMode.NONE,
    **kwargs: Any,
) -> "CustomCitationQueryEngine":
    """Initialize a CitationQueryEngine object."""
    retriever = retriever or index.as_retriever(**kwargs)
    response_synthesizer = response_synthesizer or get_response_synthesizer(
        llm=llm,
        text_qa_template=citation_qa_template,
        refine_template=citation_refine_template,
        response_mode=response_mode,
        use_async=use_async,
        streaming=streaming,
    )
    return cls(
        retriever=retriever,
        llm=llm,
        response_synthesizer=response_synthesizer,
        # Replaces the removed callback_manager_from_settings_or_context helper
        callback_manager=Settings.callback_manager,
        citation_chunk_size=citation_chunk_size,
        citation_chunk_overlap=citation_chunk_overlap,
        text_splitter=text_splitter,
        node_postprocessors=node_postprocessors,
        metadata_mode=metadata_mode,
        user_message_id=kwargs.get("user_message_id"),
    )
```
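For reference, a hypothetical usage sketch of the updated factory, assuming `my_index` is an existing index and `CustomCitationQueryEngine` is the class the method above belongs to:

```python
from llama_index.llms.openai import OpenAI  # assumed LLM integration

engine = CustomCitationQueryEngine.from_args(
    index=my_index,
    llm=OpenAI(model="gpt-4o-mini"),
    streaming=True,
)
response = engine.query("What does the report conclude?")
```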
### Bug Description

I recently upgraded llama_index from 0.10.32 to 0.11.8 and encountered two errors: an `ImportError` for `callback_manager_from_settings_or_context` and an unexpected keyword argument `service_context`. The related code is provided below.
### Version

0.11.8
### Steps to Reproduce

```python
from llama_index.core.settings import (
    Settings,
    callback_manager_from_settings_or_context,
)


@classmethod
def from_args(
    cls,
    index: BaseGPTIndex,
    llm: LLM | None = None,
    response_synthesizer: BaseSynthesizer | None = None,
    citation_chunk_size: int = DEFAULT_CITATION_CHUNK_SIZE,
    citation_chunk_overlap: int = DEFAULT_CITATION_CHUNK_OVERLAP,
    text_splitter: TextSplitter | None = None,
    citation_qa_template: BasePromptTemplate = CITATION_QA_TEMPLATE,
    citation_refine_template: BasePromptTemplate = CITATION_REFINE_TEMPLATE,
    retriever: BaseRetriever | None = None,
    node_postprocessors: list[BaseNodePostprocessor] | None = None,
    response_mode: ResponseMode = ResponseMode.COMPACT,
    use_async: bool = False,
    streaming: bool = False,
    metadata_mode: MetadataMode = MetadataMode.NONE,
    **kwargs: any,
) -> "CustomCitationQueryEngine":
    """Initialize a CitationQueryEngine object."""
    retriever = retriever or index.as_retriever(**kwargs)
```
### Relevant Logs/Tracebacks
No response