Closed RamlahAziz closed 4 weeks ago
I am observing the same issue. When I tracked the API calls with GraphSignal, the completion returned by GPT-3.5-turbo really is only Action: list_tables_sql_db, ''
and is missing the "Action Input:" section, which causes the parser error. The issue does not occur with the Davinci model.
When the same prompt is passed to the LLM via the OpenAI Playground web app, it produces a complete and correct answer (with both davinci and GPT-3.5-turbo).
I have the same problem.
I have the same problem. Do you suggest moving to davinci?
Same problem. Looking for a solution to use GPT-4 with the SQL Toolkit.
I ran into this today too. Setting up a SQLDatabaseChain and running my own interactive prompt around that seems to work fine, but the SQL agent is throwing the same kinds of errors as the posters above.
I don't think it makes much sense to invest time in fixing this now. Instead, it should be reimplemented using the new function calling API.
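To illustrate the suggestion above: with function calling, the SQL tools would be described as JSON schemas, so the model returns structured arguments instead of free-text "Action:" lines that need fragile parsing. The tool names and schemas below are illustrative assumptions, not an actual LangChain implementation.

```python
def sql_tool_schemas() -> list:
    """Describe the SQL tools as OpenAI function-calling schemas (a sketch)."""
    return [
        {
            "name": "sql_db_list_tables",
            "description": "List all table names in the database.",
            "parameters": {"type": "object", "properties": {}, "required": []},
        },
        {
            "name": "sql_db_query",
            "description": "Run a SQL query against the database and return the rows.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    ]

# With these schemas, a chat completion request that includes
# functions=sql_tool_schemas() makes the model return a function_call
# with JSON arguments, so no free-text Action/Action Input parsing is needed.
```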
I just ended up making my own chain using OpenAI chat completion.
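For anyone considering the same route, a hand-rolled chain might look roughly like the sketch below: build the chat messages yourself, ask the model for a bare SQL statement, and extract it from the reply. The prompt wording and the helper functions are my own assumptions, not the commenter's actual code.

```python
def build_messages(schema: str, question: str) -> list:
    """Build a chat-completions message list that asks for a single SQL query."""
    return [
        {
            "role": "system",
            "content": "You are a SQL assistant. Reply with exactly one SQL "
                       "statement and nothing else.\nSchema:\n" + schema,
        },
        {"role": "user", "content": question},
    ]

def extract_sql(reply: str) -> str:
    """Pull the SQL statement out of the model reply, tolerating code fences."""
    lines = [ln for ln in reply.strip().splitlines()
             if not ln.lstrip().startswith("`")]
    return "\n".join(lines).strip()

# The actual call would look something like this (requires the `openai`
# package and an API key; not executed here):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-3.5-turbo",
#     messages=build_messages(schema, question),
# )
# sql = extract_sql(resp.choices[0].message.content)
```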
I was testing the code again today and the recent updates to langchain seem to have fixed this issue for me at least.
For those still facing it, someone suggested passing handle_parsing_errors=True to the create_sql_agent function.
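As a minor extension of that workaround: in the AgentExecutor, handle_parsing_errors also accepts a string or a callable, so instead of True you can feed the model a corrective message when its output fails to parse. The message text below is an assumption of mine, not from the thread.

```python
def on_parse_error(error) -> str:
    """Turn an OutputParserException into the observation sent back to the LLM."""
    # The returned text is fed back to the model, nudging it to emit a
    # properly formatted Action / Action Input pair on the next step.
    return ("Your previous reply could not be parsed. Respond again using "
            "exactly this format:\n"
            "Action: <one of the tool names>\n"
            "Action Input: <the input to the tool>")

# Sketch of wiring it in (assumes the pre-0.1 create_sql_agent signature
# discussed in this thread; not executed here):
# agent_executor = create_sql_agent(
#     llm=llm,
#     toolkit=toolkit,
#     verbose=True,
#     handle_parsing_errors=on_parse_error,  # or simply True
# )
```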
I'm getting this same error while trying to use a HuggingFace model. Does anybody know if create_sql_agent works with non-OpenAI models?
I did my best to look through the docs and code and don't see why it wouldn't work with other integrations, but thought I'd double check. Thanks!
I am facing the same issue with AzureOpenAI. Below is the code snippet. Tried using some of the recommendations above but they don't seem to fix the issue.
Python: 3.10.6, LangChain: 0.0.266
langchain.schema.output_parser.OutputParserException: Could not parse LLM output: Action: sql_db_list_tables, ''
db = SQLDatabase.from_uri(url_object)
llm = AzureOpenAI(model_name="gpt-35-turbo", deployment_name="gpt35", temperature=0, verbose=True)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent_executor = create_sql_agent(
    llm=llm,
    toolkit=toolkit,
    verbose=True,
    handle_parsing_errors=True,
)
agent_executor.run("List the total sales per country. Which country's customers spent the most?")
I see the same error when using a fine-tuned GPT-3.5 model with create_sql_agent. Was anyone able to fix this?
I'm also facing this issue intermittently. @RamlahAziz's suggestion helps keep the code from crashing, which is very welcome, but it's not a real fix.
Hi, @RamlahAziz
I'm helping the LangChain team manage their backlog and am marking this issue as stale. It seems that recent updates to LangChain have resolved the parsing error you were encountering when using GPT-4 or GPT-3.5 models with the SQL Database Agent. Additionally, passing handle_parsing_errors=True to the create_sql_agent function has been suggested as a workaround.
Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days. Thank you!
I am also facing this issue
@baskaryan, could you please assist @Keekteng with the parsing error issue they are encountering when using GPT-4 or GPT-3.5 models with the SQL Database Agent in the LangChain repository? Thank you!
im also facing this issue
Facing the same issue. I passed handle_parsing_errors=True and even tried increasing max_iterations to 1000, but with no luck. In my case, with longer iteration cycles, one AI action is passed as an error.
🤖
I'm glad to hear that the recent updates to LangChain have resolved the issue for you! It's always great to see improvements making a positive impact. Thank you for sharing the tip about passing handle_parsing_errors=True to the create_sql_agent function. This could be helpful for others encountering similar issues. If you have any more feedback or questions about LangChain, feel free to ask!
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
Hello Everyone,
handle_parsing_errors doesn't seem to work in the latest versions, and I am getting the following warning:
/home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages/langchain_community/agent_toolkits/sql/base.py:138: UserWarning: Received additional kwargs {'handle_parsing_errors': True} which are no longer supported.
  warnings.warn(
Any alternatives to fix this?
I think it now goes into the agent_executor_kwargs argument, like this:
agent_executor = create_sql_agent(
    llm,
    db=db,
    agent_type="openai-tools",
    agent_executor_kwargs={"handle_parsing_errors": True},
    verbose=True,
)
This seems to be the latest way to handle it according to the langchain repository
Using agent_executor_kwargs={'handle_parsing_errors': True} still does not work if a "toolkit" is passed to create_sql_agent:
agent = create_sql_agent(
    llm=llm,
    toolkit=sql_toolkit,
    agent_type=agent_type,
    max_iterations=max_iterations,
    verbose=True,
    agent_executor_kwargs={'handle_parsing_errors': True},
)
but if there is no toolkit (passing db directly), it works. So is this still an issue?
System Info
Python 3.10, Ubuntu 22.04.2 LTS, LangChain 0.0.194
Who can help?
@eyurtsev
Information
Related Components
Reproduction
Expected behavior
I am using the SQL Database Agent to query a Postgres database. I want to use GPT-4 or GPT-3.5 models in the OpenAI LLM passed to the agent, but it says I must use ChatOpenAI. Using ChatOpenAI throws parsing errors.
The reasons for wanting to switch models are reduced cost, better performance, and most importantly the token limit: the max token size is 4k for 'text-davinci-003' and I need at least double that.
When I do, it throws an error in the chain midway saying
If I change the model to gpt-4, it runs one step then throws the error on the Thought for the next step
The error is inconsistent and sometimes the script runs normally.
streaming=True, thinking that might be the cause. Please let me know if I can provide any further information. Thanks!