When pushing function call context we need to make sure not to add text content items that are empty.
I'm not sure if this is a regression that snuck into Pipecat code, or whether this is new model behavior we haven't seen before. But either way, this is a small but important check. (If we push an empty text content message, all inference requests subsequently fail.)
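The guard described above can be sketched as follows. This is a minimal illustration, not Pipecat's actual API; the function and message shape here are hypothetical.

```python
# Hypothetical sketch: skip empty text content items when building
# function call context. Pushing an empty text content message causes
# all subsequent inference requests to fail.

def append_text_content(message: dict, text: str) -> None:
    # Guard: ignore empty or whitespace-only text instead of appending it.
    if not text or not text.strip():
        return
    message.setdefault("content", []).append({"type": "text", "text": text})

msg = {"role": "assistant"}
append_text_content(msg, "")       # skipped: empty
append_text_content(msg, "   ")    # skipped: whitespace-only
append_text_content(msg, "Hello")  # appended
```

After these calls, `msg` contains a single text content item rather than three.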
Also started testing with gemini-exp-1114. Text, audio, image, and function calling inference all work with no changes to the GoogleLLMService.