langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License
94.12k stars · 15.21k forks

How to get the final output from the load_summarize_chain async run? #5176

Closed · axiangcoding closed this issue 1 year ago

axiangcoding commented 1 year ago

Discussed in https://github.com/hwchase17/langchain/discussions/5159

Originally posted by **axiangcoding** May 24, 2023

Code example here:

```python
async def summary(callback: BaseCallbackHandler):
    llm = AzureChatOpenAI(
        deployment_name=os.environ["OPENAI_GPT35_DEPLOYMENT_NAME"],
    )
    text_splitter = NLTKTextSplitter(chunk_size=1000)
    texts = text_splitter.split_text(content)
    docs = [Document(page_content=t) for t in texts]
    chain = load_summarize_chain(llm, chain_type="map_reduce", verbose=False)
    await chain.arun(docs, callbacks=[callback])
```

And the callback is defined here:

```python
class SummaryCallback(BaseCallbackHandler):
    def on_chain_end(self, outputs: Dict[str, Any], *, run_id: UUID,
                     parent_run_id: Optional[UUID] = None, **kwargs: Any) -> Any:
        logger.info(f"on_chain_end: {outputs}, {run_id}, {parent_run_id}, {kwargs}")

    def on_tool_end(self, output: str, *, run_id: UUID,
                    parent_run_id: Optional[UUID] = None, **kwargs: Any) -> Any:
        logger.info(f"on_tool_end: {output}, {run_id}, {parent_run_id}, {kwargs}")

    def on_llm_end(self, response: LLMResult, *, run_id: UUID,
                   parent_run_id: Optional[UUID] = None, **kwargs: Any) -> Any:
        logger.info(f"on_llm_end: {response}, {run_id}, {parent_run_id}, {kwargs}")
```

When I test it, the console shows:

```
2023-05-24 08:42:46.143 | INFO | routers.v1.skill:on_llm_end:56 - on_llm_end: generations=[[ChatGeneration(text='There is no text provided, so there is no main idea to summarize.', generation_info=None, message=AIMessage(content='There is no text provided, so there is no main idea to summarize.', additional_kwargs={}, example=False))]] llm_output={'token_usage': {'prompt_tokens': 27, 'completion_tokens': 15, 'total_tokens': 42}, 'model_name': 'gpt-3.5-turbo'}, b9cb89c9-3e89-4335-93e9-8ac8104f9de1, 08558b5a-399c-4ff8-b64a-5856439df7e0, {}
2023-05-24 08:42:46.144 | INFO | routers.v1.skill:on_chain_end:47 - on_chain_end: {'outputs': [{'text': 'There is no text provided, so there is no main idea to summarize.'}]}, 08558b5a-399c-4ff8-b64a-5856439df7e0, 4a9fe8e7-dfd9-4c7c-a610-513da156071f, {}
2023-05-24 08:42:48.537 | INFO | routers.v1.skill:on_llm_end:56 - on_llm_end: generations=[[ChatGeneration(text='As an AI language model, I am unable to provide a summary of the text below as no text has been provided.', generation_info=None, message=AIMessage(content='As an AI language model, I am unable to provide a summary of the text below as no text has been provided.', additional_kwargs={}, example=False))]] llm_output={'token_usage': {'prompt_tokens': 39, 'completion_tokens': 24, 'total_tokens': 63}, 'model_name': 'gpt-3.5-turbo'}, 3471ac9f-2290-494e-a939-406bc7b5b8a1, bfe3f758-1275-4662-a553-5e4889aa3958, {}
2023-05-24 08:42:48.539 | INFO | routers.v1.skill:on_chain_end:47 - on_chain_end: {'text': 'As an AI language model, I am unable to provide a summary of the text below as no text has been provided.'}, bfe3f758-1275-4662-a553-5e4889aa3958, 12bc5030-dced-4243-a841-be44fa411d03, {}
2023-05-24 08:42:48.539 | INFO | routers.v1.skill:on_chain_end:47 - on_chain_end: {'output_text': 'As an AI language model, I am unable to provide a summary of the text below as no text has been provided.'}, 12bc5030-dced-4243-a841-be44fa411d03, 4a9fe8e7-dfd9-4c7c-a610-513da156071f, {}
2023-05-24 08:42:48.539 | INFO | routers.v1.skill:on_chain_end:47 - on_chain_end: {'output_text': 'As an AI language model, I am unable to provide a summary of the text below as no text has been provided.'}, 4a9fe8e7-dfd9-4c7c-a610-513da156071f, None, {}
```

`on_chain_end` and `on_llm_end` are printed several times. Which one is the final output?
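A side note, stated as an assumption about the LangChain version in use: `chain.arun(docs)` itself returns the chain's final output, so capturing the awaited return value may be simpler than inspecting callbacks at all. A stdlib-only sketch of that capture pattern follows; the `run_chain` stand-in is illustrative (it mimics the map/reduce shape of the summarize chain), not LangChain's implementation:

```python
import asyncio


async def run_chain(docs):
    """Stand-in for chain.arun(docs): summarize each doc ("map"),
    then combine the partial summaries into one result ("reduce")."""
    partials = [f"summary({d})" for d in docs]   # "map" step
    return " | ".join(partials)                  # "reduce" step


async def main():
    # The awaited call returns the final output directly,
    # no callback bookkeeping required.
    final = await run_chain(["doc1", "doc2"])
    return final


result = asyncio.run(main())
print(result)  # summary(doc1) | summary(doc2)
```

If the assumption holds for the real chain, `final = await chain.arun(docs, callbacks=[callback])` would already contain the finished summary.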
axiangcoding commented 1 year ago

I guess that `parent_run_id` being `None` marks the end of the outermost chain, and that its output is the final output of the whole chain. Am I right?
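That heuristic can be sketched without any LangChain dependency. The class below is a hypothetical handler (the name `FinalOutputCallback` is mine, not from the library) that mirrors the `on_chain_end` signature from the post and keeps only the output of the run whose `parent_run_id` is `None`, i.e. the outermost chain:

```python
from typing import Any, Dict, Optional
from uuid import UUID, uuid4


class FinalOutputCallback:
    """Keeps only the top-level chain output.

    Mirrors the on_chain_end signature from the post: nested
    sub-chains report a parent_run_id, while the outermost chain
    reports parent_run_id=None.
    """

    def __init__(self) -> None:
        self.final_output: Optional[Dict[str, Any]] = None

    def on_chain_end(self, outputs: Dict[str, Any], *, run_id: UUID,
                     parent_run_id: Optional[UUID] = None,
                     **kwargs: Any) -> Any:
        if parent_run_id is None:  # outermost chain finished
            self.final_output = outputs


# Simulate the callback sequence seen in the log above:
cb = FinalOutputCallback()
root, child = uuid4(), uuid4()
cb.on_chain_end({"text": "intermediate summary"},
                run_id=child, parent_run_id=root)   # ignored: nested run
cb.on_chain_end({"output_text": "final summary"},
                run_id=root, parent_run_id=None)    # kept: top-level run
print(cb.final_output)  # {'output_text': 'final summary'}
```

If the assumption holds, passing an instance of this pattern via `callbacks=[...]` would leave the map_reduce chain's final `{'output_text': ...}` dict in `final_output` after `arun` completes.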

dosubot[bot] commented 1 year ago

Hi, @axiangcoding! I'm Dosu, and I'm here to help the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

From what I understand, the issue you opened is about determining the final output from the `load_summarize_chain` async run in the LangChain repository. You were unsure which output, from `on_chain_end` or `on_llm_end`, is the final one. The issue has not been resolved yet, and there was one comment from you asking whether `parent_run_id` being `None` means the end of the chain and its output is the final output.

Before we close this issue, we wanted to check with you if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on the issue. Otherwise, feel free to close the issue yourself, or the issue will be automatically closed in 7 days.

Thank you for your contribution to the LangChain repository!