Open mlindemu opened 2 months ago
Also experiencing this, still using the workaround: `[msg.to_dict() for msg in chat_history.messages]`
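The workaround relies on each message exposing `to_dict()`, which returns only JSON-safe fields. A minimal, self-contained sketch of the pattern — the `FakeMessage` class below is a hypothetical stand-in for `ChatMessageContent`, since the real class is exactly what fails to serialize:

```python
import json

class FakeMessage:
    """Hypothetical stand-in for semantic_kernel's ChatMessageContent."""
    def __init__(self, role, content, inner_content=None):
        self.role = role
        self.content = content
        # inner_content may hold a raw ChatCompletion object, which is
        # what trips up pydantic's model_dump_json()
        self.inner_content = inner_content

    def to_dict(self):
        # Only JSON-safe fields are included, mirroring the workaround
        return {"role": self.role, "content": self.content}

messages = [FakeMessage("user", "add 1 + 2", inner_content=object())]
# json.dumps succeeds because inner_content never reaches the payload
payload = json.dumps([msg.to_dict() for msg in messages])
```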
I think this is an issue with pydantic v2 as per https://github.com/pydantic/pydantic/issues/7713 and https://github.com/openai/openai-python/issues/1306
Repro below using MathPlugin, with some hackery to patch the history so it can be dumped (though I'm not sure this is the right approach).
It looks to be messages with `inner_content=ChatCompletion()`, and then, once those are nulled out, a subsequent serialization error on message items of `FunctionResultContent` and `KernelParameterMetadata` (depending on the plugin).
```python
#!/usr/bin/env python
import asyncio
import os

from semantic_kernel.agents import ChatCompletionAgent
from semantic_kernel.connectors.ai.function_choice_behavior import FunctionChoiceBehavior
from semantic_kernel.connectors.ai.open_ai.services.azure_chat_completion import AzureChatCompletion
from semantic_kernel.contents import (
    AuthorRole,
    ChatHistory,
    ChatMessageContent,
    FunctionResultContent,
)
from semantic_kernel.core_plugins import MathPlugin
from semantic_kernel.kernel import Kernel


def try_dump(history):
    """Attempt to serialize the history; return 'succeeded' or the error text."""
    try:
        history.model_dump_json()
        msg = 'succeeded'
    except Exception as e:
        msg = str(e)
    print(f'model_dump_json: {msg}')
    return msg


async def main():
    kernel = Kernel()
    kernel.add_service(AzureChatCompletion(
        service_id='agent',
        deployment_name=os.getenv('AZURE_OPENAI_DEPLOYMENT'),
        api_key=os.getenv('AZURE_OPENAI_API_KEY'),
        endpoint=os.getenv('AZURE_OPENAI_ENDPOINT'),
    ))
    kernel.add_plugin(plugin=MathPlugin(), plugin_name='math')

    settings = kernel.get_prompt_execution_settings_from_service_id(
        service_id='agent'
    )
    settings.function_choice_behavior = FunctionChoiceBehavior.Required()

    agent = ChatCompletionAgent(
        service_id='agent',
        kernel=kernel,
        name='agent',
        instructions='you are an agent that can use tools to respond to the prompt',
        execution_settings=settings,
    )

    history = ChatHistory()
    user_msg = 'add 1 + 2'
    history.add_message(
        ChatMessageContent(role=AuthorRole.USER, content=user_msg)
    )
    print(f'> user: {user_msg}')

    async for content in agent.invoke(history=history):
        if content.role != AuthorRole.TOOL:
            print(f'< {content.role}: {content.content}')

    # First failure: messages carry inner_content=ChatCompletion(...)
    assert try_dump(history) == (
        "Error serializing to JSON: TypeError: 'MockValSer' "
        "object cannot be converted to 'SchemaSerializer'"
    )
    # from rich.pretty import pprint; pprint(history)

    # Hack 1: null out message-level inner_content
    for msg in history.messages:
        if hasattr(msg, 'inner_content'):
            msg.inner_content = None
    assert try_dump(history) == "Unable to serialize unknown type: <class 'type'>"

    # Hack 2: null out FunctionResultContent.inner_content
    for msg in history.messages:
        for item in msg.items:
            if isinstance(item, FunctionResultContent):
                item.inner_content = None
    assert try_dump(history) == 'succeeded'


if __name__ == "__main__":
    asyncio.run(main())
```
output:

```
> user: add 1 + 2
< assistant: 1 + 2 equals 3.
model_dump_json: Error serializing to JSON: TypeError: 'MockValSer' object cannot be converted to 'SchemaSerializer'
model_dump_json: Unable to serialize unknown type: <class 'type'>
model_dump_json: succeeded
```
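The two hacks in the repro can be folded into a single helper. A sketch using duck typing so it can be exercised without semantic_kernel — the `sanitize_history` name and the `_Obj` mocks are hypothetical; against a real `ChatHistory` the loop bodies match the repro above:

```python
def sanitize_history(history):
    """Null out raw SDK objects that pydantic cannot serialize."""
    for msg in history.messages:
        # Hack 1: message-level inner_content (raw ChatCompletion)
        if hasattr(msg, 'inner_content'):
            msg.inner_content = None
        # Hack 2: item-level inner_content (e.g. FunctionResultContent);
        # the repro checks isinstance(item, FunctionResultContent),
        # here hasattr keeps the sketch dependency-free
        for item in getattr(msg, 'items', []):
            if hasattr(item, 'inner_content'):
                item.inner_content = None
    return history

# Exercise with simple mocks standing in for ChatHistory / messages
class _Obj:
    def __init__(self, **kw):
        self.__dict__.update(kw)

history = _Obj(messages=[
    _Obj(inner_content=object(), items=[_Obj(inner_content=object())]),
])
sanitize_history(history)
```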
Any workarounds for this one? I was thinking of using this to store the history in a Redis cache so that I can manage the memory for each session. Now that this is a known bug, I am thinking of maintaining a custom chat history.
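For the Redis idea, one possible shape — a sketch under assumptions: a plain dict stands in for the Redis client, `_Msg` is a hypothetical stand-in for `ChatMessageContent`, and messages are reduced to dicts via the `to_dict()` workaround before caching:

```python
import json

cache = {}  # stand-in for a Redis client; real code would use redis.Redis

def save_history(session_id, messages):
    # Reduce each message to plain JSON-safe dicts before caching,
    # sidestepping the pydantic serialization bug entirely
    cache[session_id] = json.dumps([m.to_dict() for m in messages])

def load_history(session_id):
    # Returns a list of plain dicts; rebuilding ChatMessageContent
    # objects from them is left to the caller
    return json.loads(cache.get(session_id, '[]'))

class _Msg:
    """Hypothetical stand-in for ChatMessageContent."""
    def __init__(self, role, content):
        self.role, self.content = role, content
    def to_dict(self):
        return {'role': self.role, 'content': self.content}

save_history('session-1', [_Msg('user', 'add 1 + 2')])
```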
**Describe the bug**
Using the new `FunctionChoiceBehavior.Auto()` class for my `execution_settings` with GPT-4o. Works great! However, calling the `ChatHistory.serialize()` method raises an exception:

```
semantic_kernel.exceptions.content_exceptions.ContentSerializationError: Unable to serialize ChatHistory to JSON: Error serializing to JSON: TypeError: 'MockValSer' object cannot be converted to 'SchemaSerializer'
```

Parsing through each message in the `ChatHistory`, it looks like it can't serialize any message that relates to calling a function. Example message extracted from the debugger:

**To Reproduce**
Steps to reproduce the behavior:

**Expected behavior**
Standard JSON should be returned by the `serialize()` method.

**Screenshots**
N/A

**Platform**

**Additional context**
Possibly related to Issue #7340