Fix memory summarization. Memory summarization was broken for models that use inner_thoughts_in_kwargs.
For summarization, we construct a standard user/assistant message flow that does not include inner thoughts, function calls, etc. When the LLMConfig requires inner_thoughts_in_kwargs, this leaves the assistant message content as None, which triggers an error on the LLM API.
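A minimal sketch of the failure and the fix (function and field names here are illustrative, not the actual codebase API): when inner thoughts are packed into function-call kwargs, the assistant message's content can end up as None, so the summarization flow needs to fall back to a non-null string before sending to the LLM API.

```python
def build_summarization_messages(messages):
    """Convert agent messages into a plain user/assistant flow for the summarizer.

    Hypothetical helper: with inner_thoughts_in_kwargs, assistant text may have
    been moved into function-call kwargs, leaving content == None. Substitute an
    empty string so the LLM API does not reject the message.
    """
    flow = []
    for m in messages:
        content = m.get("content")
        if content is None:
            content = ""
        flow.append({"role": m["role"], "content": content})
    return flow


# Example: an assistant message whose text was packed into kwargs
msgs = [
    {"role": "user", "content": "hello"},
    {"role": "assistant", "content": None},  # would previously error on the API
]
print(build_summarization_messages(msgs))
```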
Testing:
Tested manually via CLI and added a CI test for the live gpt-4o capabilities test.