Closed JoffreyLemery closed 7 months ago
🤖
Hello @JoffreyLemery,
I'm Dosu, an AI bot here to assist you with your LangChain issues, answer any questions you might have, and guide you on becoming a contributor. While we wait for a human maintainer, I'm available to help you out.
I'm currently looking into your issue and will provide you with a detailed response shortly.
🤖
Based on the information you've provided and the context from the LangChain repository, it seems the issue you're experiencing is related to how the `CustomJiraTicketWriting` and `CustomJiraTicketPOST` tools handle their input parameters.
In LangChain, the tools use the `get_from_dict_or_env` function to fetch the 'email', 'url', and 'token' fields from either the input dictionary or the environment variables. If a field is found in neither place, it falls back to `None`, which is likely why you're seeing the error message indicating that these fields are required.
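For reference, the lookup order works roughly like this (a simplified sketch, not the exact LangChain implementation; the real `get_from_dict_or_env` also differs in how it reports missing keys):

```python
import os

def get_from_dict_or_env(data: dict, key: str, env_key: str, default=None):
    """Simplified sketch of the lookup order: input dict first, then env."""
    if key in data and data[key]:
        return data[key]
    # Fall back to the environment; `default` (here None) is the last resort.
    return os.environ.get(env_key, default)

# Resolves from the input dict when the key is present...
print(get_from_dict_or_env({"jira_username": "me"}, "jira_username", "JIRA_USERNAME"))
# ...and yields None when neither the dict nor the environment provides it.
print(get_from_dict_or_env({}, "jira_username", "SOME_UNSET_VAR_XYZ"))
```

This is why a missing key in both the dict and the environment surfaces later as a "field required" error.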
To resolve this issue, you need to ensure that the input dictionary contains the keys 'jira_username', 'jira_api_token', and 'jira_instance_url' with the correct values. If you are relying on environment variables, make sure the 'JIRA_USERNAME', 'JIRA_API_TOKEN', and 'JIRA_INSTANCE_URL' variables are set.
Here is an example of how you can set these fields in the input dictionary:
```python
jira_wrapper = JiraAPIWrapper(
    jira_username="your_username",
    jira_api_token="your_api_token",
    jira_instance_url="your_instance_url",
)
```
And here is how you can set the environment variables in Python:
```python
import os

os.environ["JIRA_USERNAME"] = "your_username"
os.environ["JIRA_API_TOKEN"] = "your_api_token"
os.environ["JIRA_INSTANCE_URL"] = "your_instance_url"
```
Please replace "your_username", "your_api_token", and "your_instance_url" with your actual Jira username, API token, and instance URL.
This solution is based on the code found in the libs/langchain/langchain/utilities/jira.py file in the LangChain repository.
Additionally, there was a similar issue titled "Tools with structured inputs are broken with new input parser logic when using REACT agents" in the LangChain repository. The solution suggested there was to update the `_run()` method in the `SendMessageTool` class to match the `action_input` schema. This change was necessary because a recent update in LangChain supports stronger function typing on tool inputs.
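To illustrate that fix pattern in plain Python (all names here are hypothetical, not the actual `SendMessageTool`): the `_run()` method must declare one keyword argument per field of the structured `action_input`, otherwise the arguments the agent parses never bind to the tool.

```python
class CustomJiraTicketPOST:
    """Hypothetical tool whose _run() mirrors the structured action_input schema."""
    name = "jira_ticket_post"

    # Before the fix, a signature like _run(self, query: str) silently dropped
    # the extra fields; after the fix, every schema field is an explicit parameter.
    def _run(self, summary: str, description: str,
             email: str, url: str, token: str) -> str:
        # A real implementation would call the Jira REST API here.
        return f"Posted '{summary}' to {url} as {email}"

action_input = {
    "summary": "Login fails",
    "description": "Users cannot log in since the last deploy.",
    "email": "me@example.com",
    "url": "https://jira.example.com",
    "token": "secret-token",
}
print(CustomJiraTicketPOST()._run(**action_input))
```

The same idea applies to any custom tool driven by a structured-input agent: the schema and the `_run()` signature have to agree field for field.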
I hope this helps! If you have any other questions or need further clarification, please let me know.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
@dosu-bot.
Sorry, but the idea is not to use the actual `JiraAPIWrapper`. Having my own Jira tools gives me better control over the quality of the ticket.
My question can be summed up as: is the OPENAI_MULTI_FUNCTIONS agent able to perform two actions one after the other based on a single query?
🤖
Based on the information you've provided and the context from the LangChain repository, the `OPENAI_MULTI_FUNCTIONS` agent is designed to execute multiple actions sequentially based on a single query. It does this by parsing the AI message and extracting the function calls and their arguments in the `_parse_ai_message` function. Each function call and its arguments are used to create an instance of `_FunctionsAgentAction`, which represents a tool invocation. These instances are collected into a list that `_parse_ai_message` returns.
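As a rough illustration of that parsing step (a simplified sketch; the real `_parse_ai_message` also handles plain-text replies and malformed JSON), the multi-functions agent unpacks a single `tool_selection` function call whose arguments carry a list of actions:

```python
import json

def parse_ai_message(function_call: dict) -> list:
    """Sketch: turn one tool_selection call into a list of (name, args) pairs."""
    args = json.loads(function_call["arguments"])
    # Each entry becomes one tool invocation, to be executed in order.
    return [(a["action_name"], a["action"]) for a in args["actions"]]

message = {
    "name": "tool_selection",
    "arguments": json.dumps({"actions": [
        {"action_name": "CustomJiraTicketWriting", "action": {"query": "draft a ticket"}},
        {"action_name": "CustomJiraTicketPOST", "action": {"summary": "Login fails"}},
    ]}),
}
for name, args in parse_ai_message(message):
    print(name, args)
```

Note that the agent plans both invocations from one model reply; it does not automatically thread the first tool's output into the second tool's arguments.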
However, the issue you're experiencing, where the 'email', 'url', and 'token' fields are missing when the `CustomJiraTicketWriting` and `CustomJiraTicketPOST` tools are used together in a chain, might be due to the way the `LLMChain` object manages the sequence of function calls. Currently, the `LLMChain` object takes a list of functions and a prompt as input, and it uses the `ChatOpenAI` model to generate the output. It does not pass the output of one function as the input to the next function in the list.
To resolve this issue, you might need to modify the `LLMChain` object or create a new agent type that supports this behavior. For instance, you could alter the `run` method of the `LLMChain` object to pass the output of one function as the input to the next function in the list.
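One way to sketch that idea outside of LangChain (every name below is illustrative, not a library API) is a small wrapper that explicitly feeds the first tool's output, plus the fixed credentials, into the second tool:

```python
def write_ticket(query: str) -> dict:
    """First tool: draft a ticket from the user's query (stubbed)."""
    return {"summary": "Login fails", "description": query}

def post_ticket(summary: str, description: str,
                email: str, url: str, token: str) -> str:
    """Second tool: post the drafted ticket (stubbed)."""
    return f"POST {url}: {summary} (by {email})"

def run_sequential(query: str, email: str, url: str, token: str) -> str:
    draft = write_ticket(query)
    # Forward the first tool's output and the credentials explicitly,
    # instead of relying on the agent to re-extract them from the prompt.
    return post_ticket(**draft, email=email, url=url, token=token)

print(run_sequential("Users cannot log in",
                     "me@example.com", "https://jira.example.com", "secret"))
```

Wiring the hand-off yourself, rather than leaving it to the agent's prompt, removes the failure mode where the second tool's required fields go missing.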
Here are the relevant sources for your reference:
Please note that this is only a potential solution and may require modifications to the `LLMChain` object. If you find this solution to be correct and it is not covered in the existing documentation, I encourage you to make a pull request to update the documentation. This would greatly benefit other users who might encounter the same issue.
Hi, @JoffreyLemery
I'm helping the LangChain team manage their backlog and am marking this issue as stale. From what I understand, the issue involves an `OPENAI_MULTI_FUNCTIONS` agent struggling to retrieve data from the second input (chain) and having difficulties using the information provided in the prompt. There have been discussions about modifying the `LLMChain` object or creating a new agent type to support the desired behavior, along with references to relevant sources for further investigation.
Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days. Thank you!
System Info
Name: langchain; Version: 0.0.308; macOS 13.2.1
Who can help?
@hwchase17 @agola11
Information
Related Components
Reproduction
Hi everybody !
I have set up an OPENAI_MULTI_FUNCTIONS agent which seems to do great with a few tokens, but even though all the information is in the prompt, it is not able to use it for the tool input.
What is the issue:
I have two custom tools.
As presented below, the agent uses the first tool, writes the ticket, and injects it into the new prompt; but after that, when the agent decides to use the second tool, it misses some of the required information.
Expected behavior
Expected behaviour:
Actual behaviour:
All the data is inside the input, but the agent has trouble using it.
And about the prompt
Thus, all the information is in the input, but the agent has difficulties using it. How should I handle this situation?
Best regards