NikhilKosare opened this issue 6 months ago
Use

```python
agent = create_sql_agent(
    llm=llm,
    db=db,
    prompt=full_prompt,
    agent_type="openai-tools",
    verbose=True,
)
```

and try again.
@liugddx - Thank you for your inputs!
I tried that and got the error below:

```
NotFoundError: Error code: 404 - {'error': {'message': 'Unrecognized request argument supplied: tools', 'type': 'invalid_request_error', 'param': None, 'code': None}}
```

I am bound to the resources available on the Azure platform, so I am not sure whether I can use "openai-tools" as the agent_type.
This is how I am initializing the model:

```python
llm = AzureChatOpenAI(
    azure_deployment='abc',
    openai_api_key=aoai_api_key,
    azure_endpoint=aoai_endpoint,
    openai_api_version=aoai_api_version,
)
```
Hi @NikhilKosare,
You can use openai-tools with an AzureOpenAI endpoint. Use the latest langchain and langchain_openai versions and initialize the LLM like below:
```python
import os

from langchain_openai import AzureChatOpenAI

os.environ["AZURE_OPENAI_API_KEY"] = " "
os.environ["AZURE_OPENAI_ENDPOINT"] = " "

llm = AzureChatOpenAI(
    openai_api_version="2023-05-15",
    azure_deployment="gpt-35-turbo",
)
```
You can follow this article for more info: https://python.langchain.com/docs/use_cases/sql/agents#using-a-dynamic-few-shot-prompt
Let me know if this works for you.
I am trying to use the posted example, but modified to use Ollama + Mistral instead of OpenAI. It works if I do not supply a prompt to create_sql_agent, but I get the posted error when I try to use the dynamic few-shot prompt as described in the link. Any help is much appreciated.
Getting the same error!
Guys, the key to adapting it is to build the full prompt from scratch, following this format:
This is a screenshot from the source of langchain's create_sql_agent method: it builds the prompt itself if none is provided, but if you do supply one, it must be complete and in the proper form. This differs from the "openai-tools" agent type, which takes your prompt and inserts it into its own prompt for you. When using a different LLM, however, we have to build the full ReAct prompt from scratch:
You can find the prefix and suffix used in create_sql_agent here: https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/agents/mrkl/prompt.py
I suggest putting the SQL prompt "system_prefix" from the tutorial (https://python.langchain.com/docs/use_cases/sql/agents/) in the prefix, moving the "Here are some examples of user inputs and their corresponding SQL queries:" part to the end of format_instructions, and then creating the few-shot object like this:
```python
example_prompt = ChatPromptTemplate.from_messages(
    [
        ("human", "{input}"),
        ("ai", "{output}"),
    ]
)
few_shot_prompt = FewShotChatMessagePromptTemplate(
    example_prompt=example_prompt,
    examples=examples,
)
```

And after this, call it in the template using .format():

```python
template = "\n\n".join(
    [
        PREFIX,
        "{tools}",
        format_instructions,
        few_shot_prompt.format(),
        SUFFIX,
    ]
)
prompt = PromptTemplate.from_template(template)
```
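A note on why `.format()` is called at this point (my reading of the snippet above): `few_shot_prompt.format()` renders the examples into plain text *before* the pieces are joined, so `PromptTemplate.from_template` only sees the placeholders that should survive (`{tools}`, `{input}`, etc.), not the per-example `{input}`/`{output}` slots. A minimal pure-Python sketch of that pre-rendering, without langchain (the `Human:`/`AI:` layout is an assumption about how the chat messages render):

```python
# Sketch: pre-render few-shot examples into plain text before assembling
# the final template. `examples` mimics the tutorial's structure.
examples = [
    {"input": "List all artists.", "query": "SELECT * FROM Artist;"},
]

rendered_examples = "\n".join(
    f"Human: {ex['input']}\nAI: {ex['query']}" for ex in examples
)

# Because the examples are already rendered, joining them into the final
# template string introduces no new {placeholders}.
template = "\n\n".join(["PREFIX", "{tools}", rendered_examples, "SUFFIX"])
```

If the examples were joined in unrendered, their `{input}`/`{output}` slots would become extra input variables of the final `PromptTemplate`.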
@langonifelipe - Thank you so much for this! I am using Ollama locally and have followed your advice, getting it to work. For the avoidance of any doubt, the prompt should be like below:
```python
from langchain_core.prompts import (
    ChatPromptTemplate,
    FewShotChatMessagePromptTemplate,
    PromptTemplate,
)
from langchain.agents.mrkl import prompt as react_prompt

examples = [
    {
        "input": "List all the characters in Anna Karenina",
        "query": "SELECT DISTINCT characterNames FROM novels WHERE novelName = 'Anna Karenina'",
    },
]

system_prefix = """
You are an agent designed to interact with a SQL database.
Given an input question, create a syntactically correct query to run, then look at the results of the query and
return the answer. Unless the user specifies a specific number of examples they wish to obtain, limit your query
to a reasonable number of results.
You can order the results by a relevant column to return the most interesting examples in the database.
Never query for all the columns from a specific table, only ask for the relevant columns given the question.
You have access to tools for interacting with the database.
Only use the given tools. Only use the information returned by the tools to construct your final answer.
You MUST double check your query before executing it. If you get an error while executing a query, rewrite the query
and try again.
DO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.
If the question does not seem related to the database, just return "I don't know" as the answer.
"""

basic_suffix = """
Begin!

Question: {input}
Thought: I should look at the tables in the database to see what I can query. Then I should query the schema of the most relevant tables.
{agent_scratchpad}
"""

example_prompt = ChatPromptTemplate.from_messages(
    [
        ("human", "{input}"),
        ("ai", "{query}"),
    ]
)
few_shot_prompt = FewShotChatMessagePromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    input_variables=["input", "agent_scratchpad"],
)

# react_prompt.FORMAT_INSTRUCTIONS contains the {tool_names} placeholder,
# which create_react_agent requires.
format_instructions = (
    react_prompt.FORMAT_INSTRUCTIONS
    + "\nHere are some examples of user inputs and their corresponding SQL queries:\n"
)

template = "\n\n".join(
    [
        system_prefix,
        "{tools}",
        format_instructions,
        few_shot_prompt.format(),
        basic_suffix,
    ]
)
prompt = PromptTemplate.from_template(template=template)
```
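Before handing the prompt to create_sql_agent, it can save a debugging round-trip to check which placeholders the assembled template actually exposes, since the ReAct path requires `{tools}`, `{tool_names}` and `{agent_scratchpad}`. A small stdlib-only sketch of that check (the template string here is a simplified stand-in for the one built above, not langchain's actual prompt text):

```python
import string

# Simplified stand-in for the assembled template above; the [{tool_names}]
# line is the part that FORMAT_INSTRUCTIONS contributes.
template = "\n\n".join([
    "You are an agent designed to interact with a SQL database.",
    "{tools}",
    "Use one of [{tool_names}].",
    "Question: {input}\n{agent_scratchpad}",
])

# Extract the placeholder names, as PromptTemplate.from_template would.
variables = {
    field for _, field, _, _ in string.Formatter().parse(template)
    if field is not None
}

required = {"tools", "tool_names", "agent_scratchpad"}
missing = required - variables
```

If `missing` is non-empty here, create_sql_agent will raise the "Prompt missing required variables" ValueError reported further down this thread.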
@avisionh, I am also trying to create a SQL agent with Ollama and few-shot learning and am facing the same issue. Would you be able to share the full code?
Thank you! @avisionh @langonifelipe I sat with this for a day, haha, but your template finally got it to work :)
@avisionh I'm also using Ollama offline with llama3 and following the same prompt template; can you suggest what tool setup to use? I've been hitting the following error:
```
agent_executor = create_sql_agent(
  File "/Users/yiyangwan/venv/lib/python3.10/site-packages/langchain_community/agent_toolkits/sql/base.py", line 181, in create_sql_agent
    runnable=create_react_agent(llm, tools, prompt),
  File "/Users/yiyangwan/venv/lib/python3.10/site-packages/langchain/agents/react/agent.py", line 117, in create_react_agent
    raise ValueError(f"Prompt missing required variables: {missing_vars}")
ValueError: Prompt missing required variables: {'tool_names'}
```
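For anyone hitting this: the error comes from the validation inside create_react_agent visible in the traceback, which checks that the prompt's input variables include `tools`, `tool_names` and `agent_scratchpad`. When only `tool_names` is missing, the usual cause is that `react_prompt.FORMAT_INSTRUCTIONS` (which carries the `[{tool_names}]` placeholder) was left out of the joined template. The check itself reduces to a set difference, sketched here mirroring the traceback:

```python
# Sketch of the set-difference validation from create_react_agent, as shown
# in the traceback; not the library function itself.
def check_prompt_variables(input_variables):
    missing_vars = {"tools", "tool_names", "agent_scratchpad"}.difference(input_variables)
    if missing_vars:
        raise ValueError(f"Prompt missing required variables: {missing_vars}")

# A template without {tool_names} fails the check:
try:
    check_prompt_variables(["tools", "agent_scratchpad", "input"])
except ValueError as exc:
    print(exc)

# Including {tool_names} (e.g. via FORMAT_INSTRUCTIONS) passes:
check_prompt_variables(["tools", "tool_names", "agent_scratchpad", "input"])
```

So the fix is to make sure the final template string still contains a literal `{tool_names}` placeholder before `PromptTemplate.from_template` is called.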
Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[32], line 1
----> 1 agent = create_sql_agent(
      2     llm=llm,
      3     db=db,
      4     prompt=full_prompt,
      5     verbose=True
      6 )

File ~\anaconda3\Lib\site-packages\langchain_community\agent_toolkits\sql\base.py:182, in create_sql_agent(llm, toolkit, agent_type, callback_manager, prefix, suffix, format_instructions, input_variables, top_k, max_iterations, max_execution_time, early_stopping_method, verbose, agent_executor_kwargs, extra_tools, db, prompt, **kwargs)
    172 template = "\n\n".join(
    173     [
    174         react_prompt.PREFIX,
        (...)
    178     ]
    179 )
    180 prompt = PromptTemplate.from_template(template)
    181 agent = RunnableAgent(
--> 182     runnable=create_react_agent(llm, tools, prompt),
    183     input_keys_arg=["input"],
    184     return_keys_arg=["output"],
    185 )
    187 elif agent_type == AgentType.OPENAI_FUNCTIONS:
    188     if prompt is None:

File ~\anaconda3\Lib\site-packages\langchain\agents\react\agent.py:97, in create_react_agent(llm, tools, prompt)
     93 missing_vars = {"tools", "tool_names", "agent_scratchpad"}.difference(
     94     prompt.input_variables
     95 )
     96 if missing_vars:
---> 97     raise ValueError(f"Prompt missing required variables: {missing_vars}")
     99 prompt = prompt.partial(
    100     tools=render_text_description(list(tools)),
    101     tool_names=", ".join([t.name for t in tools]),
    102 )
    103 llm_with_stop = llm.bind(stop=["\nObservation"])

ValueError: Prompt missing required variables: {'tools', 'tool_names'}
```
Description
create_sql_agent is throwing an error when a custom prompt is supplied.
System Info
langchain 0.1.8
langchain-community 0.0.21
langchain-core 0.1.25
langchain-experimental 0.0.52
langchain-openai 0.0.6