Definitely need more documentation here.
1. You can insert the tool output into the chat history using the `MESSAGES:` keyword and a tool message like `{"role": "tool", "name": tool.__class__.__name__, "content": tool.fn(**tool.args)}` (this is OpenAI format).
2. You can supply multiple tools in the `tools` list of the call_params and then access all tools called by the model by using `response.tools` (note: `response.tool` just grabs the first tool).
3. We should add something like `response.tool.tool_message` that outputs the correct tool message (for the provider in use) so the user doesn't have to manually write the message as in number 1. We should also likely add similar functionality for normal responses (i.e. to get the assistant message to inject into chat history without having to write it manually). This would also likely make our chat history example even more compelling.

One more question @offchan42:
For streaming messages when using tools, you can't tell whether the model will use the tool before you make the call, which makes your first point difficult (unless you force it to use the tool, of course, in which case you can just make the call, but this isn't always the case).
We offer support for streaming tools as well (currently only OpenAI and Anthropic are supported). This way you can just always call `stream` and then act based on the model's response. Does this make sense?
Regardless, we should likely provide documentation for how one might do this, with detailed examples.
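For example, point 1 above might look like this minimal sketch (assuming a `response` from a call whose call_params include tools, and a `history` list that gets injected via the `MESSAGES:` keyword, as in the examples later in this thread):

```python
tool = response.tool  # first tool the model called, if any
if tool:
    # Manually construct the tool message in OpenAI format and append it to
    # the history injected via the MESSAGES: keyword on the next call.
    history.append(
        {
            "role": "tool",
            "name": tool.__class__.__name__,
            "content": tool.fn(**tool.args),
            "tool_call_id": tool.tool_call.id,
        }
    )
```

Note the `tool_call_id` field, which OpenAI requires on tool messages even though it isn't shown in the snippet in point 1.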
I want something similar to this tool streaming example, but it should allow for normal text streaming as well:
```python
import os

from mirascope.openai import OpenAICall, OpenAICallParams, OpenAIToolStream

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"


def print_book(title: str, author: str, description: str):
    """Prints the title and author of a book."""
    return f"Title: {title}\nAuthor: {author}\nDescription: {description}"


class BookRecommender(OpenAICall):
    prompt_template = "Please recommend some books to read."

    call_params = OpenAICallParams(tools=[print_book])


stream = BookRecommender().stream()
tool_stream = OpenAIToolStream.from_stream(stream)
for tool in tool_stream:
    print(tool.fn(**tool.args))
#> Title: The Name of the Wind\nAuthor: Patrick Rothfuss\nDescription: ...
#> Title: Dune\nAuthor: Frank Herbert\nDescription: ...
#> ...
```
In the above code, I see that it always assumes the response from OpenAI is going to be about tools. What if OpenAI returns a normal text response instead of tools? How can we handle that?
This is the API I want:
```python
stream = BookRecommender().stream()
if is_tool_stream(stream):
    tool_stream = OpenAIToolStream.from_stream(stream)
    for tool in tool_stream:
        tool.fn(**tool.args)
else:  # stream of text instead of stream of tools
    for chunk in stream:
        print(chunk, end="", flush=True)
```
Is there a way to do something like this?
FYI, @offchan42 is my personal account and @off6atomic is my work account.
This is my attempt at inserting tool output into the chat history (no streaming):
```python
from typing import Literal

from openai.types.chat import ChatCompletionMessageParam

from mirascope.openai import OpenAICall, OpenAICallParams


def get_current_weather(
    location: str, unit: Literal["celsius", "fahrenheit"] = "fahrenheit"
):
    """Get the current weather in a given location."""
    if "tokyo" in location.lower():
        return f"It is 10 degrees {unit} in Tokyo, Japan"
    elif "san francisco" in location.lower():
        return f"It is 72 degrees {unit} in San Francisco, CA"
    elif "paris" in location.lower():
        return f"It is 22 degrees {unit} in Paris, France"
    else:
        return f"I'm not sure what the weather is like in {location}"


class Forecast(OpenAICall):
    prompt_template = """
    MESSAGES: {history}
    USER: {question}
    """

    question: str
    history: list[ChatCompletionMessageParam] = []

    call_params = OpenAICallParams(model="gpt-4-turbo", tools=[get_current_weather])


# Do the first call to get the assistant to call the tool.
forecast = Forecast(question="What's the weather in Tokyo Japan?")
response = forecast.call()
tool = response.tool
if tool:
    print("Tool arguments:", tool.args)
    output = tool.fn(**tool.args)
    print("Tool output:", output)
    # > It is 10 degrees fahrenheit in Tokyo, Japan

# Curate history for the second call.
history = []
history.append({"role": "user", "content": forecast.question})
# I think the way I create the dictionary here is quite verbose.
# There's probably a shorter, more succinct way.
history.append({"role": "assistant", "tool_calls": [tool.tool_call.model_dump()]})
history.append(
    {
        "role": "tool",
        "content": output,
        "tool_call_id": tool.tool_call.id,
        "name": tool.__class__.__name__,
    }
)

# Do the second call to get the assistant's response.
forecast2 = Forecast(question="Is that cold or hot?", history=history)
response2 = forecast2.call()
print("Response 2:", response2)
```
This code is a modified version of the official Tools example. My goal is to ask the chatbot for the weather, have it call the tool, and then put the tool output into the chat history. Then I call the chatbot again to let it summarize whether the weather is cold or hot.
AFAIK, I need to insert 3 messages into the chat history: the user message, the assistant message with the tool calls, and the tool message with the tool output.
I got the error `KeyError: 'content'` when running the above code.
Can you help modify this example to make it work?
First, for the streaming, you can do the following (it'll iterate through the chunks twice, and we should look into optimizing this, but this should work as a stop-gap for now):
```python
from typing import Generator

from mirascope.openai import (
    OpenAICall,
    OpenAICallParams,
    OpenAICallResponseChunk,
    OpenAIToolStream,
)


def print_book(title: str, author: str, description: str):
    """Prints the title and author of a book."""
    return f"Title: {title}\nAuthor: {author}\nDescription: {description}"


class BookRecommender(OpenAICall):
    prompt_template = "Please recommend some books to read."

    call_params = OpenAICallParams(tools=[print_book])


def regenerate(
    chunk: OpenAICallResponseChunk,
    stream: Generator[OpenAICallResponseChunk, None, None],
) -> Generator[OpenAICallResponseChunk, None, None]:
    """Rebuilds the stream after peeking at its first chunk."""
    yield chunk
    for chunk in stream:
        yield chunk


stream = BookRecommender().stream()
first_chunk = next(stream)
generator = regenerate(first_chunk, stream)
if (
    first_chunk.delta.content is None
):  # note: the content property is always a string, but delta.content can be None
    tool_stream = OpenAIToolStream.from_stream(generator)
    for tool in tool_stream:
        if tool:
            output = tool.fn(**tool.args)
            print(output)
    # > Title: The Name of the Wind\nAuthor: Patrick Rothfuss\nDescription: ...
    # > Title: Dune\nAuthor: Frank Herbert\nDescription: ...
else:
    for chunk in generator:
        print(chunk.content, end="", flush=True)
    # > I'd be happy to recommend some books! Here are a few options: ...
```
We should file a separate feature request issue to push the regeneration into the library in some way.
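As a rough sketch of what that could look like, the peek-and-rebuild step in `regenerate` is equivalent to `itertools.chain` from the standard library, which is one shape a library-side helper could take:

```python
from itertools import chain

# Same idea as regenerate() above: peek at the first chunk to decide how to
# handle the stream, then stitch the chunk back onto the front so consumers
# still see every chunk exactly once.
stream = BookRecommender().stream()
first_chunk = next(stream)
generator = chain([first_chunk], stream)
```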
For the tools, there's actually a bug that I just filed: #261
I will work on a fix for this ASAP and release it in a patch version. The code in that bug outlines how to do it.
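In the meantime, a possible stop-gap (assuming the error comes from the assistant message being the only one in the history without a `content` key) is to set it explicitly:

```python
# Stop-gap sketch: include an explicit (null) content key so that history
# parsing does not hit KeyError: 'content' on the assistant message.
history.append(
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [tool.tool_call.model_dump()],
    }
)
```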
Thank you for the quick and complete response! BTW, do you also need to bump the version number for this? I want to update to the latest version.
Just released the fix for this in version v0.13.4.
Docs are here: https://docs.mirascope.io/latest/concepts/tools_%28function_calling%29/#inserting-tools-back-into-the-chat-messages
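Roughly, the goal of the fix is to replace the hand-written dictionaries from the earlier example with messages the library builds for you. A hypothetical sketch (the helper names here are illustrative, based on the `tool_message` idea proposed earlier in this thread, not the confirmed v0.13.4 API; see the linked docs):

```python
# Hypothetical sketch -- helper names are illustrative, not the real API.
response = forecast.call()
tool = response.tool
if tool:
    output = tool.fn(**tool.args)
    history.append({"role": "user", "content": forecast.question})
    history.append(response.message.model_dump())  # assistant message with tool_calls
    history.append(tool.tool_message(output))  # provider-correct tool message
```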
Description
It'd be nice if there were example code showing how to insert tool output back into the chat history so the LLM can generate a formatted response given the tool output.
This is an example dialogue:
Making the example more full-fledged: