langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License

DOC: No explanation of OPENAI_FUNCTIONS agent #6178

Closed: francisjervis closed this issue 1 year ago

francisjervis commented 1 year ago

Issue with current documentation:

This is not documented. An example is provided, with no explanation whatsoever. As such this page contributes nothing over and above the source code.

Idea or request for content:

Actually document this feature.

exiao commented 1 year ago

The feature was just released, so we can definitely cut LangChain some slack, but it was a struggle to figure out.

I read the source code for StructuredTool and the new function-calling support on the OpenAI chat model class and langchain.schema. Here's some sample code with comments explaining each portion.

The TL;DR is that you need to:

  1. define your functions as tools
  2. convert them using format_tool_to_openai_function
  3. load your HumanMessage, AIMessage, SystemMessage, and FunctionMessage if you need history
  4. call llm.predict_messages to get your response
  5. if response contains function_call, use it to call your function, then append the response as a FunctionMessage and call the llm again.

import json

from pydantic import BaseModel, Field
from langchain.chat_models import ChatOpenAI
from langchain.tools import StructuredTool, format_tool_to_openai_function
from my_library.utils import save_user  # this is the function you are trying to get OpenAI to call
from langchain.schema import (
    HumanMessage,
    AIMessage,
    SystemMessage,
    FunctionMessage,
    BaseMessage,
)

# By defining the args_schema, you can tell OpenAI what each of the fields means
class SaveUserInput(BaseModel):
    name: str = Field(description="The full name of the user")

# By setting up the tool, you can use its name, description, and args_schema as the inputs required for the functions
save_user_tool = StructuredTool.from_function(
    name="save_user",
    func=save_user,
    description="Save user data to database",
    args_schema=SaveUserInput,
)
tools = [save_user_tool]
functions = [format_tool_to_openai_function(t) for t in tools]
llm = ChatOpenAI(  # ChatOpenAI (not the deprecated OpenAIChat) supports function calling
    model="gpt-3.5-turbo-0613",
    temperature=0,
)

# Construct the messages to send to OpenAI using the langchain.schema classes
# (HumanMessage, AIMessage, FunctionMessage). I build the messages by hand
# instead of using a memory class, since memory is not supported here yet and,
# frankly, adds a lot of magic and overhead that makes it harder to understand and use.
messages = [SystemMessage(content="You are a chatbot trying to get the name of the user so you can save it to the database")]
messages.append(<ADD YOUR MESSAGE HISTORY HERE>)

# Returns an AIMessage from the LLM. If the model decides to call a function, it looks like:
# AIMessage(content='', additional_kwargs={'function_call': {'name': 'save_user', 'arguments': '{\n  "name": "Jane Doe"\n}'}}, example=False)
response_message = llm.predict_messages(
    messages=messages,
    functions=functions,
)

# Code below runs the function the model asked for
function_json = response_message.additional_kwargs.get("function_call")
if function_json:
    # Call your function based on the data returned by the LLM
    function_name = function_json.get("name")
    arguments = json.loads(function_json.get("arguments"))  # the arguments arrive as a JSON string
    if function_name == "save_user":
        <INSERT YOUR CODE HERE TO CALL YOUR FUNCTION BASED ON THE NAME AND ARGS>
        response_json = <YOUR FUNCTION>
        # Append the result as a FunctionMessage and call the LLM again for a final answer
        function_msg = FunctionMessage(name=function_name, content=response_json)
        messages.append(function_msg)
        response_message = llm.predict_messages(
            messages=messages,
            functions=functions,
        )
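
The dispatch in step 5 can be illustrated without any LangChain dependency. This is a minimal sketch under stated assumptions: the `save_user` stub and the `FUNCTIONS` registry are hypothetical names for illustration, not LangChain or OpenAI APIs; only the shape of the `function_call` payload matches what the model returns.

```python
import json

# Hypothetical local function the model is allowed to call.
def save_user(name):
    return json.dumps({"status": "saved", "name": name})

# Registry mapping function names the model may emit to real callables.
FUNCTIONS = {"save_user": save_user}

def dispatch(function_call):
    """Run the function named in an OpenAI-style function_call payload."""
    name = function_call["name"]
    # The model returns arguments as a JSON *string*, so decode it first.
    arguments = json.loads(function_call["arguments"])
    return FUNCTIONS[name](**arguments)

result = dispatch({"name": "save_user", "arguments": '{"name": "Ada Lovelace"}'})
```

The registry keeps the name-to-callable mapping in one place instead of a growing `if function_name == ...` chain.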
sumanthdonthula commented 1 year ago

Hey! @francisjervis

I've added comments explaining the given example in the notebook and created a pull request.

Meanwhile, please refer to the notebook here:

https://github.com/sumanthdonthula/langchain/blob/master/docs/modules/agents/agents/examples/openai_functions_agent.ipynb

Let me know in case of any issues.

francisjervis commented 1 year ago

Cool, thanks for adding that. Unfortunately, the functions feature itself seems to be deeply flawed at the model level. The instruction following is so subpar that it's not really usable (e.g. ignoring a "yes" | "no" constraint and returning "true", or simply not returning JSON at all). I would strongly advise people not to migrate from "old-fashioned" approaches just yet.
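
Until the model behaves, one workaround is to validate and coerce the `arguments` string before trusting it. The sketch below is an illustration, not a LangChain API: the coercion table and the `allowed` constraint format are assumptions chosen to match the failure modes described above ("true" instead of "yes"/"no", non-JSON output).

```python
import json

# Coercions for common model mistakes, e.g. returning "true"/true where
# the schema asked for "yes"/"no". Extend as needed (assumption, not exhaustive).
COERCIONS = {"true": "yes", "false": "no", True: "yes", False: "no"}

def parse_arguments(raw, allowed):
    """Defensively parse a function_call 'arguments' payload.

    `allowed` maps each field name to its set of permitted values.
    Returns the cleaned argument dict, or None if the payload is not
    JSON or a field cannot be coerced into spec.
    """
    try:
        args = json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return None  # the model did not return JSON at all
    for field, permitted in allowed.items():
        value = args.get(field)
        if value in permitted:
            continue
        coerced = COERCIONS.get(value) if isinstance(value, (str, bool)) else None
        if coerced in permitted:
            args[field] = coerced  # rescue an out-of-spec but recognizable value
        else:
            return None  # reject the call rather than pass bad data downstream
    return args
```

On a `None` result, you can append a corrective SystemMessage and re-ask the model, or fall back to the "old-fashioned" prompting approach.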

dosubot[bot] commented 1 year ago

Hi, @francisjervis! I'm Dosu, and I'm helping the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

From what I understand, the issue you raised was about the lack of proper documentation for the OPENAI_FUNCTIONS agent in the LangChain repository. User "exiao" provided a sample code and comments explaining each portion of the feature, and user "sumanthdonthula" added comments to the example in the notebook and created a pull request. However, you pointed out that the functions feature itself seems to be flawed at the model level and advised against migrating from "old fashioned" approaches.

Before we close this issue, we wanted to check with you if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.

Thank you for your contribution to the LangChain repository, and please don't hesitate to reach out if you have any further questions or concerns.