langchain-ai / langchainjs

🦜🔗 Build context-aware reasoning applications 🦜🔗
https://js.langchain.com/docs/
MIT License

Functionality for prompts based on decision tree / flow charts. #2425

Closed · jsbryaniv closed 9 months ago

jsbryaniv commented 1 year ago

I am not sure whether this is best suited to an issue, a discussion, or the Discord channel, but the Contributing guidelines say to open an issue for feature requests, so I will start here.

I want to request functionality for decision tree prompts. I know I am not the first one to request this, but so far I haven't been able to find any functionality in LangChain that supports this sort of thing. So I made a mini example to show that it wouldn't be tremendously difficult.

I created a simple example of how this could be done using two additional classes: PromptTree and PromptBranch. Basically, it works by giving the LLM a tool that allows it to switch branches in the conversation. The user then only has to supply which branches the AI has access to, along with a short description of each branch. I also give the AI an option to record specific "insights", which could in principle serve as keys for future prompts further down the prompt tree.

Here is my example:


# Import libraries
import ast
from pydantic.v1 import BaseModel, Field
from langchain.tools import Tool
from langchain.schema import HumanMessage, AIMessage, SystemMessage, FunctionMessage
from langchain.tools import format_tool_to_openai_function
from langchain.chat_models import ChatOpenAI

### Define PromptBranch ###

# Declare function name variable
SELECT_BRANCH = 'select_branch'
UPDATE_INSIGHT = 'update_insight'

# Create PromptTreeBranch class
class PromptBranch:
    """A branch in the PromptTree."""

    # Declare PromptBranch variables
    description = None  # Default description of the branch
    header = None       # Header prompt
    footer = None       # Footer prompt
    children = {}       # Dictionary of children branches with descriptions. Format={name: description (None for default)}
    initial_state = {}  # Initial state of the branch
    pass_info = {}      # Additional info to be passed to children
    insights = {}       # Dictionary of insights that the AI can update. Format={name: description}

    # Get branch ID
    @property
    def branch_id(self):
        """Get the branch ID."""
        return type(self).__name__

    def __init__(self, parent, **kwargs):
        """Initialize the PromptBranch."""
        self.parent = parent
        self.initialize_state(**kwargs)
        return

    def initialize_state(self, **kwargs):
        """Initialize the branch state."""
        # We allow kwargs to be passed in case the branch needs to be initialized with additional info
        self.state = {
            **self.initial_state,
            'insights': {x: None for x in self.insights.keys()}  # Initialize insights to None
        }
        return

    def __call__(self, messages):
        """Call the PromptBranch."""
        return (
            self.get_prompt(messages),
            self.get_tools(),
        )

    def get_pass_info(self):
        """Pass info to children."""
        return self.pass_info

    def get_prompt(self, messages):
        """Get the prompt."""

        # Initialize prompt
        prompt = []

        # Add preamble
        preamble = self.parent.preamble
        if preamble is not None:
            prompt.append(SystemMessage(content=preamble))

        # Add header
        header = self.get_header()
        if header is not None:
            prompt.append(SystemMessage(content=header))

        # Add messages
        prompt += messages

        # Add footer
        footer = self.get_footer()
        if footer is not None:
            prompt.append(SystemMessage(content=footer))

        # Add insights
        insights = self.get_insights()
        if insights is not None:
            prompt.append(SystemMessage(content=insights))

        # Return
        return prompt

    def get_header(self):
        """Get header."""
        return self.header

    def get_footer(self):
        """Get footer."""
        return self.footer

    def get_insights(self):
        """Get insights."""
        if len(self.insights) == 0:
            return None
        else:
            insights = f"Your insights so far are:"
            for name, state in self.state['insights'].items():
                insights += f"\n{name}: {state}"
            return insights

    def get_tools(self):
        """Get tools."""

        # Initialize tools
        tools = []

        # Add switch branch tool
        if len(self.children) > 0:
            tools.append(self._tool_switch_branch())

        # Add update insights tool
        if len(self.insights) > 0:
            tools.append(self._tool_update_insight())

        # Return
        return tools

    def _tool_switch_branch(self):
        """Create tool to select next branch."""

        # Get variables
        tool_name = SELECT_BRANCH
        children = self.children

        # Create tool function
        tool_func = self.switch_branch

        # Create tool description
        tool_description = "Select the next branch to continue the conversation. Your options are:"
        for branch_id, branch_description in children.items():
            if branch_description is None:
                branch_description = self.parent.all_branches[branch_id].description
            tool_description += f"\n{branch_id}: {branch_description}"

        # Create tool schema
        class ToolSchema(BaseModel):
            branch: str = Field(
                description="Select next branch.",
                enum=list(children.keys()),
            )

        # Create tool
        tool_obj = Tool(
            name=tool_name,
            func=tool_func,
            description=tool_description,
            args_schema=ToolSchema,
        )

        # Return
        return tool_obj

    def _tool_update_insight(self):
        """Create tool to update an insight."""

        # Get variables
        tool_name = UPDATE_INSIGHT
        insights = self.insights

        # Create tool function
        tool_func = self.update_insight

        # Create tool description
        tool_description = "Update an insight. You can choose to update any of the following insights:"
        for name, state in insights.items():
            tool_description += f"\n{name}: {state}"

        # Create tool schema
        class ToolSchema(BaseModel):
            insight: str = Field(
                description="Select insight to update.",
                enum=list(insights.keys()),
            )
            value: str = Field(
                description="New value of the insight.",
            )

        # Create tool
        tool_obj = Tool(
            name=tool_name,
            func=tool_func,
            description=tool_description,
            args_schema=ToolSchema,
        )

        # Return
        return tool_obj

    def switch_branch(self, branch):
        """Switch to a new branch."""

        # Switch parent tree branch
        self.parent.branch = self.parent.all_branches[branch](parent=self.parent, **self.get_pass_info())

        # Return function message
        message = FunctionMessage(
            name=SELECT_BRANCH,
            content=f"You have switched to the {branch} branch.",
            additional_kwargs={'internal_function': True},
        )
        return message

    def update_insight(self, insight, value):
        """Update an insight."""

        # Update insight
        self.state['insights'][insight] = value

        # Return function message
        message = FunctionMessage(
            name=UPDATE_INSIGHT,
            content=f"You have updated the {insight} insight to {value}.",
            additional_kwargs={'internal_function': True},
        )
        return message

### Define PromptTree ###

# Create PromptTree class
class PromptTree:
    """A decision tree for prompting the AI."""

    # Declare PromptTree variables
    preamble = None      # System prompt to put before each branch prompt
    first_branch = None  # Name of first branch to start the prompt tree
    all_branches = {}    # Dictionary of all branches in the tree. Format={branch_id: branch_class}

    def __init__(self):
        """Initialize the PromptTree branch state."""
        self.branch = self.all_branches[self.first_branch](parent=self)
        return

    def __call__(self, messages, **kwargs):
        """Call the PromptTree."""
        return self.branch(messages, **kwargs)

    def get_state(self):
        """Get the current branch state."""
        return {
            'branch_id': self.branch.branch_id,
            'branch_state': self.branch.state,
        }

    def load_state(self, state):
        """Load a branch from the state."""
        branch_id = state['branch_id']
        branch_state = state['branch_state']
        if branch_id not in self.all_branches:
            raise ValueError(f"Unknown branch_id: {branch_id}")
        self.branch = self.all_branches[branch_id](parent=self)
        self.branch.state = branch_state
        return

### Define TreeAgent ###

# Create TreeAgent class
class TreeAgent:
    """An AI agent based on the PromptTree class."""

    def __init__(self, tree, model):
        """Initialize the TreeAgent."""
        self.tree = tree
        self.model = model
        return

    def __call__(self, messages, **kwargs):
        """Call the TreeAgent."""
        return self.respond(messages, **kwargs)

    def get_state(self):
        """Get the current state of the TreeAgent."""
        return self.tree.get_state()

    def load_state(self, state):
        """Load the state of the TreeAgent."""
        self.tree.load_state(state)
        return

    def respond(self, messages):
        """Respond to the messages."""

        # Initialize new messages
        new_messages = []

        # Loop until no function calls
        while True:

            # Get the prompt
            prompt, tools = self.tree(messages+new_messages)

            # Get the response
            funcs = [format_tool_to_openai_function(t) for t in tools]
            response = self.model.predict_messages(prompt, functions=funcs)
            new_messages.append(response)

            # Check for function calls
            if 'function_call' in new_messages[-1].additional_kwargs:
                # Get function call
                func_call = new_messages[-1].additional_kwargs['function_call']
                func_name = func_call['name']
                func_args = ast.literal_eval(func_call['arguments'])
                func = [x.func for x in tools if x.name == func_name][0]

                # Call the function
                func_response = func(**func_args)
                new_messages.append(func_response)
                continue
            else:
                # If no function call, break
                break

        # Return
        return new_messages

####################################################################################################
####################################################################################################

### EXAMPLE ###

# Create PromptBranches
class BranchA(PromptBranch):
    header = "You love icecream, but you only like vanilla icecream."
    footer = "If you choose to respond make sure you mention icecream."
    description = "A Branch to talk about icecream."
    children = {
        'BranchB': 'If someone mentions anything fancy, be sure to switch to this branch.',
        'BranchC': None,
    }
class BranchB(PromptBranch):
    header = "You love fine wines, but only if they are over 10 years old."
    footer = "If you choose to respond make sure you mention wine."
    description = "A Branch to talk about wine."
    children = {
        'BranchA': None,
        'BranchC': None,
    }
class BranchC(PromptBranch):
    header = "You love going to the beach all the time no matter what."
    footer = "If you choose to respond make sure you mention that you love the beach."
    description = "A Branch to talk about the beach."
    children = {
        'BranchA': None,
        'BranchB': None,
    }

# Create PromptTree
class MyPromptTree(PromptTree):
    preamble = "You are an AI who is obsessed with a few things."
    first_branch = 'BranchA'
    all_branches = {
        'BranchA': BranchA,
        'BranchB': BranchB,
        'BranchC': BranchC,
    }

### CONVERSATION ###

# Initialize the AI
llm = ChatOpenAI(model="gpt-3.5-turbo-0613")
tree = MyPromptTree()
agent = TreeAgent(tree, llm)

# Create sample conversation
messages = []
while True:

    # Human input
    user_message = input("You: ")
    messages += [HumanMessage(content=user_message)]

    # AI response
    new_messages = agent(messages)
    for m in new_messages:
        print("AI:", m)
    messages += new_messages

I know it's a lot of code to unpack, so maybe the best way to see how it works is just to try it. The AI starts off talking about ice cream, but if you change the topic to the beach or "something fancy" you will see how it switches to a new conversation branch where it could, in principle, have access to different information in its prompt. I know this is a silly example, but I hope you take it as a proof of principle.

I would love to see something like this implemented in the core of LangChain. I have no experience contributing to open-source projects, so I'm not sure where to start with this.

Please let me know what you think!

dosubot[bot] commented 1 year ago

🤖

Hello,

Thank you for your interest in contributing to the LangChain project. Your idea of adding decision tree prompt functionality sounds like a great addition.

Based on the current structure of the LangChain codebase, you would need to create a new module for your feature. This involves creating a new directory with an index file and adding an entry for your feature to the entrypoints object. You can use the create-entrypoints.js script to generate the necessary files and update the configuration files for your new module.

Here's a simplified example of how you might structure your new feature:

// Adding a new feature entry to the entrypoints object
const entrypoints = {
  // ...existing entrypoints...
  prompt_tree: "prompt_tree/index",
  // ...
};
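
For reference, the index file that the entrypoint points at would just re-export the feature's public surface. A minimal sketch, assuming a hypothetical prompt_tree directory and file names (none of these exist in the codebase yet):

// langchain/src/prompt_tree/index.ts (hypothetical)
// Re-export the public classes so the "prompt_tree/index" entrypoint resolves here.
export { PromptTree } from "./prompt_tree.js";
export { PromptBranch } from "./prompt_branch.js";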

Once you've created your new module and added it to the entrypoints object, you can start implementing your feature. Make sure to follow the coding standards and practices of the LangChain project, and use the provided tooling for dependency management, linting, formatting, testing, and documentation.

When you're ready to contribute your feature, you should follow the "fork and pull request" workflow. This involves creating a fork of the LangChain repository, making your changes in your fork, and then submitting a pull request to the LangChain repository. Make sure to provide a detailed description of your changes in your pull request.

Here's a simplified example of how you might contribute your feature:

  1. Fork the LangChain repository.
  2. Clone your fork to your local machine.
  3. Create a new branch for your feature.
  4. Make your changes in your branch.
  5. Push your changes to your fork.
  6. Submit a pull request to the LangChain repository.

# Fork the LangChain repository on GitHub
# Clone your fork to your local machine
git clone https://github.com/yourusername/langchainjs.git
# Create a new branch for your feature
git checkout -b my-feature
# Make your changes in your branch
# ...
# Push your changes to your fork
git push origin my-feature
# Submit a pull request to the LangChain repository on GitHub

Please make sure to read and follow the contribution guidelines for the LangChain project. These guidelines provide detailed instructions on how to contribute to the project, including how to use the provided tooling and how to submit a pull request.

If you have any questions or need further assistance, feel free to ask. We're here to help!

Best, Dosu

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

jsbryaniv commented 1 year ago

@dosu-bot

How can I talk to a human admin or contributor to discuss where in the library this would be most appropriate?

dosubot[bot] commented 1 year ago

🤖

Hello,

Thank you for your interest in contributing to the LangChain project. Your decision tree prompt functionality sounds like a great addition. However, as LangChain is a JavaScript/TypeScript framework, this feature would need to be implemented in JavaScript or TypeScript to be integrated into the existing codebase.

Given the current structure of the LangChain codebase, it seems like your feature would fit best under the "Prompts" component, as it deals with creating new types of prompts for the language models. However, the exact placement and integration would depend on the specifics of your implementation.
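
To make the translation concrete, here is a minimal, framework-agnostic TypeScript sketch of the branch-switching idea. The names below (PromptTree, BranchConfig, buildSystemPrompt, switchBranch, and the select_branch tool they refer to) simply mirror the Python example above; they are illustrative assumptions, not existing LangChain.js APIs.

// Framework-agnostic sketch of the branch-switching idea in TypeScript.
// None of these names exist in LangChain.js today; they mirror the Python
// PromptTree/PromptBranch example above.

interface BranchConfig {
  description: string;
  header?: string;
  footer?: string;
  // Reachable child branch ids, with optional override descriptions.
  children?: Record<string, string | undefined>;
}

class PromptTree {
  private current: string;

  constructor(
    private branches: Record<string, BranchConfig>,
    firstBranch: string
  ) {
    this.current = firstBranch;
  }

  // System prompt parts for the active branch (preamble + header + footer).
  buildSystemPrompt(preamble?: string): string[] {
    const branch = this.branches[this.current];
    return [preamble, branch.header, branch.footer].filter(
      (part): part is string => part !== undefined
    );
  }

  // Branches the model may switch to from here, used to build the
  // description of a hypothetical "select_branch" tool.
  availableSwitches(): Record<string, string> {
    const options: Record<string, string> = {};
    const children = this.branches[this.current].children ?? {};
    for (const [id, override] of Object.entries(children)) {
      options[id] = override ?? this.branches[id].description;
    }
    return options;
  }

  // Handler for the hypothetical "select_branch" tool call.
  switchBranch(id: string): void {
    if (!(id in (this.branches[this.current].children ?? {}))) {
      throw new Error(`Branch ${id} is not reachable from ${this.current}`);
    }
    this.current = id;
  }
}

In a real integration, switchBranch would presumably be wired up as the handler of a structured tool so the model can trigger the branch change itself, much like the Python example does with its select_branch function.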

To discuss this further with a human admin or contributor, you can follow these steps:

  1. Open a new issue in the LangChain repository describing your feature and your questions about integrating it into the codebase. Be sure to provide as much detail as possible about your implementation and how you believe it could fit into the existing structure.

  2. Once the issue is open, you can request a review from a specific contributor or admin by mentioning them in the issue using the "@" symbol followed by their GitHub username. If you're not sure who to ask for a review, you can leave it open and someone should get back to you.

  3. You can also use the "assignees" option on the right side of the issue page to assign the issue to a specific contributor or admin. This will notify them of the issue and request their review.

  4. Be sure to follow the contribution guidelines as outlined in the CONTRIBUTING.md file in the repository. This includes following the "fork and pull request" workflow, discussing new abstractions in an issue before implementing them, and using the provided tooling for dependency management, linting, and testing.

I hope this helps guide you in the right direction. If you have any other questions or need further clarification, feel free to ask.

Best, Dosu

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

khanguslee commented 1 year ago

I can't be of much help, but I was also looking for a similar feature in LangchainJS, since it's currently the most 'mature' AI framework in the JS ecosystem. However, after much searching I could not find anything.

In the Python ecosystem, LLMFlows seems to implement 'prompt flows/trees' very well, and could be something that LangChain JS picks up too.

jsbryaniv commented 1 year ago

@khanguslee I would really like to contribute this as a feature. Do you know who I should talk to about it? Or do I just build a prototype in a fork and submit a pull request with an explanation?

dosubot[bot] commented 9 months ago

Hi, @jsbryaniv! I'm Dosu, and I'm here to help the langchainjs team manage our backlog. I wanted to let you know that we are marking this issue as stale.

From what I understand, you are requesting functionality for decision tree prompts in LangChain. You provided a mini example using two additional classes, PromptTree and PromptBranch, to demonstrate how this could be implemented. You were unsure of how to contribute to open source projects, and I provided guidance on creating a new module and following the "fork and pull request" workflow. You also asked how to discuss the placement of the feature with a human admin or contributor, and I provided steps for initiating that discussion. Additionally, khanguslee mentioned a similar feature in the Python framework LLMFlows that could be a reference for LangChain JS. You expressed interest in contributing and asked for guidance on how to proceed.

Before we close this issue, I wanted to check with you if it is still relevant to the latest version of the langchainjs repository. If it is, please let us know by commenting on this issue. Otherwise, feel free to close the issue yourself or it will be automatically closed in 7 days.

Thank you for your contribution and interest in LangChain JS! Let us know if you have any further questions or need any assistance.