pipecat-ai / pipecat

Open Source framework for voice and multimodal conversational AI
BSD 2-Clause "Simplified" License

OpenAILLMContext has no attribute 'append' when used with LangchainProcessor #361

Open agilebean opened 1 month ago

agilebean commented 1 month ago

Current Code

I used the pipecat example code here to define the context and pass it to OpenAILLMContext.

        messages = [
            {
                "role": "system",
                "content": "You are a helpful LLM...",
            },
        ]

        context = OpenAILLMContext(messages, tools)
        tma_in = LLMUserContextAggregator(context)
        tma_out = LLMAssistantContextAggregator(context)

Expected Behavior

The system message should be passed to the LLM by LangchainProcessor the same way it is by OpenAILLMService. If so, the system role would be recorded as follows:

Generating chat: [{"role": "system", "content": "Role:\nYou are an experienced ...

Current Behavior

At the first invocation of the LLM, this error is thrown:

AttributeError: 'OpenAILLMContext' object has no attribute 'append'

with this traceback:

.../python3.12/site-packages/pipecat/processors/aggregators/llm_response.py", line 146, in _push_aggregation
    self._messages.append({"role": self._role, "content": self._aggregation})
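The traceback pattern can be reproduced without pipecat: if an aggregator stores whatever it receives in `self._messages` and later calls `.append()` on it, then passing an object that *wraps* a message list (instead of the list itself) raises exactly this AttributeError. A minimal sketch with hypothetical stand-in classes (not the real pipecat implementations):

```python
# Stand-ins for illustration only; names and behavior are assumptions,
# not the actual pipecat classes.

class FakeContext:
    """Wraps a message list, like OpenAILLMContext; has no .append()."""
    def __init__(self, messages):
        self._messages = messages


class FakeAggregator:
    """Stores whatever it receives and later appends to it,
    mimicking llm_response.py line 146 in the traceback."""
    def __init__(self, messages):
        self._messages = messages

    def push_aggregation(self, role, aggregation):
        self._messages.append({"role": role, "content": aggregation})


# Passing a plain list works, because lists implement .append():
ok = FakeAggregator([{"role": "system", "content": "You are helpful."}])
ok.push_aggregation("user", "hello")

# Passing the wrapping context object fails the same way as the issue:
broken = FakeAggregator(FakeContext([]))
try:
    broken.push_aggregation("user", "hello")
except AttributeError as e:
    print(e)  # 'FakeContext' object has no attribute 'append'
```

This suggests the ContextAggregators hand the context object itself to code that expects the underlying message list, which would match the working/failing split described in the caveat below.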

Caveat

Further testing confirmed: this error occurs when OpenAILLMContext is used with LangchainProcessor. In contrast, using an LLM instantiated directly from the OpenAI API works:

    llm = OpenAILLMService(
        api_key=os.getenv("OPENAI_API_KEY"),
        model="gpt-4o")

    context = OpenAILLMContext(messages=messages)

    tma_in = LLMUserResponseAggregator(messages)
    tma_out = LLMAssistantResponseAggregator(messages)

    async def on_first_participant_joined(transport, participant):
        transport.capture_participant_transcription(participant["id"])
        # chain.set_participant_id(participant["id"])

        id = participant["id"]

        time.sleep(1.5)

        print(f"Context is: {context}")
        await task.queue_frames([OpenAILLMContextFrame(context)])
agilebean commented 1 month ago

Just found out that

await task.queue_frames([OpenAILLMContextFrame(context)])

produces no response at all if the LLM is instantiated as a LangchainProcessor.

In conclusion, the above error message is most likely generated by

LLMUserContextAggregator(context)
LLMAssistantContextAggregator(context)