langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License

LangChain SQL agent with Azure SQL & Azure OpenAI fails on invoke method, returning Internal Server Error 500 #24504

Open abhinabsarkar opened 4 months ago

abhinabsarkar commented 4 months ago

Checked other resources

Example Code

I am trying to use LangChain to query Azure SQL using Azure OpenAI. The code is based on the samples provided in GitHub - Langchain to query Azure SQL using Azure OpenAI. I have already tested connectivity with Azure SQL using LangChain, and it works. I also tested connectivity with Azure OpenAI using LangChain, and that works as well. I am using API version 2023-08-01-preview, per the comment that "Azure OpenAI on your own data is only supported by the 2023-08-01-preview API version." I referred to this link.
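
A minimal sketch of how that API version is pinned on the client (the endpoint below is a placeholder, and an AZURE_OPENAI_API_KEY or AAD token provider is assumed to be configured; in langchain_openai, api_version is an alias for openai_api_version and can also be supplied via the OPENAI_API_VERSION environment variable):

import os
from langchain_openai.chat_models import AzureChatOpenAI

# Placeholder endpoint -- substitute your own Azure OpenAI resource.
os.environ.setdefault("AZURE_OPENAI_ENDPOINT", "https://my-resource.openai.azure.com/")

llm_pinned = AzureChatOpenAI(
    deployment_name="my-deployment-name-gpt-35-turbo-1106",
    api_version="2023-08-01-preview",  # pin the preview API version explicitly
    temperature=0,
)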

After I create the SQL agent and execute the invoke method, it fails with an internal server error (status code 500).

import os   
from sqlalchemy.engine.url import URL  
from langchain_community.utilities import SQLDatabase  
from langchain_openai.chat_models import AzureChatOpenAI  
from langchain.agents.agent_types import AgentType  
from langchain_community.agent_toolkits import create_sql_agent, SQLDatabaseToolkit
from azure.identity import EnvironmentCredential, get_bearer_token_provider
from langchain.prompts.chat import ChatPromptTemplate

# Set up SQLAlchemy connection  
db_config = {  
    'drivername': 'mssql+pyodbc',  
    'username': os.getenv("SQL_SERVER_USERNAME") + '@' + os.getenv("SQL_SERVER"),  
    'password': os.getenv("SQL_SERVER_PASSWORD"),  
    'host': os.getenv("SQL_SERVER_ENDPOINT"),  
    'port': 1433,  
    'database': os.getenv("SQL_SERVER_DATABASE"),  
    'query': {'driver': 'ODBC Driver 18 for SQL Server'}  
}  
db_url = URL.create(**db_config)  
db = SQLDatabase.from_uri(db_url)  

# Authenticate using the Service Principal
token_provider = get_bearer_token_provider(
    EnvironmentCredential(),
    "https://cognitiveservices.azure.com/.default"
)

# Set up Azure OpenAI  
llm = AzureChatOpenAI(
    deployment_name="my-deployment-name-gpt-35-turbo-1106",
    azure_ad_token_provider=token_provider,
    temperature=0,
    max_tokens=4000,
)

final_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", 
         """
          You are a helpful AI assistant expert in querying SQL Database to find answers to user's question about SQL tables.
         """
         ),
        ("user", "{question}\n ai: "),
    ]
)

# Set up SQL toolkit for LangChain Agent  
toolkit = SQLDatabaseToolkit(db=db, llm=llm)  
toolkit.get_tools()

# Initialize and run the Agent  
agent_executor = create_sql_agent(  
    llm=llm,  
    toolkit=toolkit,  
    verbose=True,  
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,  
    streaming=True,
    agent_executor_kwargs={'handle_parsing_errors':True},
)

agent_executor.invoke(final_prompt.format(
        question="count the rows in the titanic table."))
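
(For comparison, the conventional AgentExecutor invocation passes a dict keyed by "input" rather than a pre-formatted prompt string; a minimal equivalent:)

agent_executor.invoke({"input": "count the rows in the titanic table."})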

Error Message and Stack Trace (if applicable)

Entering new SQL Agent Executor chain...
Traceback (most recent call last):
  File "test.py", line 62, in <module>
    agent_executor.invoke(final_prompt.format(
  File "/home/user/.local/lib/python3.8/site-packages/langchain/chains/base.py", line 166, in invoke
    raise e
  File "/home/user/.local/lib/python3.8/site-packages/langchain/chains/base.py", line 156, in invoke
    self._call(inputs, run_manager=run_manager)
  File "/home/user/.local/lib/python3.8/site-packages/langchain/agents/agent.py", line 1636, in _call
    next_step_output = self._take_next_step(
  File "/home/user/.local/lib/python3.8/site-packages/langchain/agents/agent.py", line 1342, in _take_next_step
    [
  File "/home/user/.local/lib/python3.8/site-packages/langchain/agents/agent.py", line 1342, in <listcomp>
    [
  File "/home/user/.local/lib/python3.8/site-packages/langchain/agents/agent.py", line 1370, in _iter_next_step
    output = self.agent.plan(
  File "/home/user/.local/lib/python3.8/site-packages/langchain/agents/agent.py", line 463, in plan
    for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
  File "/home/user/.local/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 3251, in stream
    yield from self.transform(iter([input]), config, **kwargs)
  File "/home/user/.local/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 3238, in transform
    yield from self._transform_stream_with_config(
  File "/home/user/.local/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 2052, in _transform_stream_with_config
    chunk: Output = context.run(next, iterator)  # type: ignore
  File "/home/user/.local/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 3200, in _transform
    for output in final_pipeline:
  File "/home/user/.local/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 1270, in transform
    for ichunk in input:
  File "/home/user/.local/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 5262, in transform
    yield from self.bound.transform(
  File "/home/user/.local/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 1288, in transform
    yield from self.stream(final, config, **kwargs)
  File "/home/user/.local/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py", line 360, in stream
    raise e
  File "/home/user/.local/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py", line 340, in stream
    for chunk in self._stream(messages, stop=stop, **kwargs):
  File "/home/user/.local/lib/python3.8/site-packages/langchain_openai/chat_models/base.py", line 489, in _stream
    with self.client.create(**payload) as response:
  File "/home/user/.local/lib/python3.8/site-packages/openai/_utils/_utils.py", line 277, in wrapper
    return func(*args, **kwargs)
  File "/home/user/.local/lib/python3.8/site-packages/openai/resources/chat/completions.py", line 643, in create
    return self._post(
  File "/home/user/.local/lib/python3.8/site-packages/openai/_base_client.py", line 1266, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/home/user/.local/lib/python3.8/site-packages/openai/_base_client.py", line 942, in request
    return self._request(
  File "/home/user/.local/lib/python3.8/site-packages/openai/_base_client.py", line 1031, in _request
    return self._retry_request(
  File "/home/user/.local/lib/python3.8/site-packages/openai/_base_client.py", line 1079, in _retry_request
    return self._request(
  File "/home/user/.local/lib/python3.8/site-packages/openai/_base_client.py", line 1031, in _request
    return self._retry_request(
  File "/home/user/.local/lib/python3.8/site-packages/openai/_base_client.py", line 1079, in _retry_request
    return self._request(
  File "/home/user/.local/lib/python3.8/site-packages/openai/_base_client.py", line 1046, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.InternalServerError: Error code: 500 - {'statusCode': 500, 'message': 'Internal server error', 'activityId': 'xxx-yyy-zzz'}

Description

System Info

Langchain version

langchain==0.2.10
langchain-community==0.2.9
langchain-core==0.2.22
langchain-openai==0.1.16
langchain-text-splitters==0.2.2

Platform

Windows 11

Python version

Python 3.8.10

Garett601 commented 4 months ago

Exactly the same issue. It worked last week; when I tried to test it this week, I got a consistent 500 Internal Server Error.

I have tried our chat-completions agent and our completions agent standalone, and they both work. It is only once we incorporate the SQL agent and string it all together that we get the error.
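
For anyone reproducing that isolation test, a minimal sketch (the deployment name is a placeholder; the endpoint and credentials are assumed to come from environment variables):

from langchain_openai.chat_models import AzureChatOpenAI

llm = AzureChatOpenAI(deployment_name="my-chat-deployment", temperature=0)

# Standalone call -- this succeeds.
print(llm.invoke("Say hello").content)

# Wiring the same llm into create_sql_agent(...).invoke(...) is what triggers the 500.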

mickey-ventura commented 4 months ago

> Exactly the same issue. It worked last week; when I tried to test it this week, I got a consistent 500 Internal Server Error.
>
> I have tried our chat-completions agent and our completions agent standalone, and they both work. It is only once we incorporate the SQL agent and string it all together that we get the error.

Same here.

Garett601 commented 4 months ago

https://github.com/langchain-ai/langchain/discussions/24488#discussion-6962911

I am having the exact same problem. I have tested my chat LLM and my DB LLM individually, and they work. Once I put them together in an agent chain, we run into the error:

openai.InternalServerError: Error code: 500 - {'statusCode': 500, 'message': 'Internal server error', 'activityId': 'xxx-xxx-xxx-xxx'}

We have followed all the API documentation on how to build a SQL agent; this was all working perfectly until Thursday last week (25/07/2024).

from typing import List

from langchain.agents import AgentExecutor, create_react_agent
from langchain.memory import ConversationSummaryBufferMemory
from langchain.prompts import PromptTemplate
from langchain_openai import AzureChatOpenAI

# NOTE: ToolType is a project-specific alias for the tool classes passed to
# the agent; it is not a LangChain export.

def init_langchain_client(
    chat: AzureChatOpenAI,
    memory: ConversationSummaryBufferMemory,
    suffix: str,
    tools: List[ToolType],
    system_prefix: str,
) -> AgentExecutor:
    """
    Initialise the Langchain client with necessary configurations.

    Returns
    -------
    AgentExecutor
        The agent executor configured with SQL database tools and Azure OpenAI models.
    """
    prompt = PromptTemplate.from_template(system_prefix + suffix)
    agent = create_react_agent(
        llm=chat,
        tools=tools,
        prompt=prompt,
    )
    agent_chain = AgentExecutor.from_agent_and_tools(
        agent=agent,
        tools=tools,
        verbose=True,
        memory=memory,
        handle_parsing_errors=True,
        return_intermediate_steps=True,
    )

    return agent_chain


# chat, memory, tools, SUFFIX_REACT, and system_prefix are defined elsewhere in our application.
langchain_agent_chain = init_langchain_client(
    chat=chat,
    memory=memory,
    suffix=SUFFIX_REACT,
    tools=tools,
    system_prefix=system_prefix,
)
langchain_agent_chain.invoke({"input":"Who are you?"})
---------------------------------------------------------------------------
InternalServerError                       Traceback (most recent call last)
Cell In[4], line 1
----> 1 langchain_agent_chain.invoke({"input":"Who are you?"})

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain/chains/base.py:163, in Chain.invoke(self, input, config, **kwargs)
    161 except BaseException as e:
    162     run_manager.on_chain_error(e)
--> 163     raise e
    164 run_manager.on_chain_end(outputs)
    166 if include_run_info:

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain/chains/base.py:153, in Chain.invoke(self, input, config, **kwargs)
    150 try:
    151     self._validate_inputs(inputs)
    152     outputs = (
--> 153         self._call(inputs, run_manager=run_manager)
    154         if new_arg_supported
    155         else self._call(inputs)
    156     )
    158     final_outputs: Dict[str, Any] = self.prep_outputs(
    159         inputs, outputs, return_only_outputs
    160     )
    161 except BaseException as e:

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain/agents/agent.py:1432, in AgentExecutor._call(self, inputs, run_manager)
   1430 # We now enter the agent loop (until it returns something).
   1431 while self._should_continue(iterations, time_elapsed):
-> 1432     next_step_output = self._take_next_step(
   1433         name_to_tool_map,
   1434         color_mapping,
   1435         inputs,
   1436         intermediate_steps,
   1437         run_manager=run_manager,
   1438     )
   1439     if isinstance(next_step_output, AgentFinish):
   1440         return self._return(
   1441             next_step_output, intermediate_steps, run_manager=run_manager
   1442         )

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain/agents/agent.py:1138, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
   1129 def _take_next_step(
   1130     self,
   1131     name_to_tool_map: Dict[str, BaseTool],
   (...)
   1135     run_manager: Optional[CallbackManagerForChainRun] = None,
   1136 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
   1137     return self._consume_next_step(
-> 1138         [
   1139             a
   1140             for a in self._iter_next_step(
   1141                 name_to_tool_map,
   1142                 color_mapping,
   1143                 inputs,
   1144                 intermediate_steps,
   1145                 run_manager,
   1146             )
   1147         ]
   1148     )

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain/agents/agent.py:1166, in AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
   1163     intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
   1165     # Call the LLM to see what to do.
-> 1166     output = self.agent.plan(
   1167         intermediate_steps,
   1168         callbacks=run_manager.get_child() if run_manager else None,
   1169         **inputs,
   1170     )
   1171 except OutputParserException as e:
   1172     if isinstance(self.handle_parsing_errors, bool):

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain/agents/agent.py:397, in RunnableAgent.plan(self, intermediate_steps, callbacks, **kwargs)
    389 final_output: Any = None
    390 if self.stream_runnable:
    391     # Use streaming to make sure that the underlying LLM is invoked in a
    392     # streaming
   (...)
    395     # Because the response from the plan is not a generator, we need to
    396     # accumulate the output into final output and return that.
--> 397     for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
    398         if final_output is None:
    399             final_output = chunk

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain_core/runnables/base.py:2875, in RunnableSequence.stream(self, input, config, **kwargs)
   2869 def stream(
   2870     self,
   2871     input: Input,
   2872     config: Optional[RunnableConfig] = None,
   2873     **kwargs: Optional[Any],
   2874 ) -> Iterator[Output]:
-> 2875     yield from self.transform(iter([input]), config, **kwargs)

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain_core/runnables/base.py:2862, in RunnableSequence.transform(self, input, config, **kwargs)
   2856 def transform(
   2857     self,
   2858     input: Iterator[Input],
   2859     config: Optional[RunnableConfig] = None,
   2860     **kwargs: Optional[Any],
   2861 ) -> Iterator[Output]:
-> 2862     yield from self._transform_stream_with_config(
   2863         input,
   2864         self._transform,
   2865         patch_config(config, run_name=(config or {}).get("run_name") or self.name),
   2866         **kwargs,
   2867     )

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain_core/runnables/base.py:1881, in Runnable._transform_stream_with_config(self, input, transformer, config, run_type, **kwargs)
   1879 try:
   1880     while True:
-> 1881         chunk: Output = context.run(next, iterator)  # type: ignore
   1882         yield chunk
   1883         if final_output_supported:

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain_core/runnables/base.py:2826, in RunnableSequence._transform(self, input, run_manager, config)
   2817 for step in steps:
   2818     final_pipeline = step.transform(
   2819         final_pipeline,
   2820         patch_config(
   (...)
   2823         ),
   2824     )
-> 2826 for output in final_pipeline:
   2827     yield output

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain_core/runnables/base.py:1282, in Runnable.transform(self, input, config, **kwargs)
   1279 final: Input
   1280 got_first_val = False
-> 1282 for ichunk in input:
   1283     # The default implementation of transform is to buffer input and
   1284     # then call stream.
   1285     # It'll attempt to gather all input into a single chunk using
   1286     # the `+` operator.
   1287     # If the input is not addable, then we'll assume that we can
   1288     # only operate on the last chunk,
   1289     # and we'll iterate until we get to the last chunk.
   1290     if not got_first_val:
   1291         final = ichunk

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain_core/runnables/base.py:4736, in RunnableBindingBase.transform(self, input, config, **kwargs)
   4730 def transform(
   4731     self,
   4732     input: Iterator[Input],
   4733     config: Optional[RunnableConfig] = None,
   4734     **kwargs: Any,
   4735 ) -> Iterator[Output]:
-> 4736     yield from self.bound.transform(
   4737         input,
   4738         self._merge_configs(config),
   4739         **{**self.kwargs, **kwargs},
   4740     )

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain_core/runnables/base.py:1300, in Runnable.transform(self, input, config, **kwargs)
   1297             final = ichunk
   1299 if got_first_val:
-> 1300     yield from self.stream(final, config, **kwargs)

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:249, in BaseChatModel.stream(self, input, config, stop, **kwargs)
    242 except BaseException as e:
    243     run_manager.on_llm_error(
    244         e,
     245         response=LLMResult(
    246             generations=[[generation]] if generation else []
    247         ),
    248     )
--> 249     raise e
    250 else:
    251     run_manager.on_llm_end(LLMResult(generations=[[generation]]))

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:229, in BaseChatModel.stream(self, input, config, stop, **kwargs)
    227 generation: Optional[ChatGenerationChunk] = None
    228 try:
--> 229     for chunk in self._stream(messages, stop=stop, **kwargs):
    230         if chunk.message.id is None:
    231             chunk.message.id = f"run-{run_manager.run_id}"

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain_openai/chat_models/base.py:480, in BaseChatOpenAI._stream(self, messages, stop, run_manager, **kwargs)
    477 params = {**params, **kwargs, "stream": True}
    479 default_chunk_class = AIMessageChunk
--> 480 with self.client.create(messages=message_dicts, **params) as response:
    481     for chunk in response:
    482         if not isinstance(chunk, dict):

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/openai/_utils/_utils.py:277, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
    275             msg = f"Missing required argument: {quote(missing[0])}"
    276     raise TypeError(msg)
--> 277 return func(*args, **kwargs)

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/openai/resources/chat/completions.py:643, in Completions.create(self, messages, model, frequency_penalty, function_call, functions, logit_bias, logprobs, max_tokens, n, parallel_tool_calls, presence_penalty, response_format, seed, service_tier, stop, stream, stream_options, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)
    609 @required_args(["messages", "model"], ["messages", "model", "stream"])
    610 def create(
    611     self,
   (...)
    641     timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
    642 ) -> ChatCompletion | Stream[ChatCompletionChunk]:
--> 643     return self._post(
    644         "/chat/completions",
    645         body=maybe_transform(
    646             {
    647                 "messages": messages,
    648                 "model": model,
    649                 "frequency_penalty": frequency_penalty,
    650                 "function_call": function_call,
    651                 "functions": functions,
    652                 "logit_bias": logit_bias,
    653                 "logprobs": logprobs,
    654                 "max_tokens": max_tokens,
    655                 "n": n,
    656                 "parallel_tool_calls": parallel_tool_calls,
    657                 "presence_penalty": presence_penalty,
    658                 "response_format": response_format,
    659                 "seed": seed,
    660                 "service_tier": service_tier,
    661                 "stop": stop,
    662                 "stream": stream,
    663                 "stream_options": stream_options,
    664                 "temperature": temperature,
    665                 "tool_choice": tool_choice,
    666                 "tools": tools,
    667                 "top_logprobs": top_logprobs,
    668                 "top_p": top_p,
    669                 "user": user,
    670             },
    671             completion_create_params.CompletionCreateParams,
    672         ),
    673         options=make_request_options(
    674             extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
    675         ),
    676         cast_to=ChatCompletion,
    677         stream=stream or False,
    678         stream_cls=Stream[ChatCompletionChunk],
    679     )

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/openai/_base_client.py:1250, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
   1236 def post(
   1237     self,
   1238     path: str,
   (...)
   1245     stream_cls: type[_StreamT] | None = None,
   1246 ) -> ResponseT | _StreamT:
   1247     opts = FinalRequestOptions.construct(
   1248         method="post", url=path, json_data=body, files=to_httpx_files(files), **options
   1249     )
-> 1250     return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/openai/_base_client.py:931, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
    922 def request(
    923     self,
    924     cast_to: Type[ResponseT],
   (...)
    929     stream_cls: type[_StreamT] | None = None,
    930 ) -> ResponseT | _StreamT:
--> 931     return self._request(
    932         cast_to=cast_to,
    933         options=options,
    934         stream=stream,
    935         stream_cls=stream_cls,
    936         remaining_retries=remaining_retries,
    937     )

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/openai/_base_client.py:1015, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
   1013 if retries > 0 and self._should_retry(err.response):
   1014     err.response.close()
-> 1015     return self._retry_request(
   1016         options,
   1017         cast_to,
   1018         retries,
   1019         err.response.headers,
   1020         stream=stream,
   1021         stream_cls=stream_cls,
   1022     )
   1024 # If the response is streamed then we need to explicitly read the response
   1025 # to completion before attempting to access the response text.
   1026 if not err.response.is_closed:

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/openai/_base_client.py:1063, in SyncAPIClient._retry_request(self, options, cast_to, remaining_retries, response_headers, stream, stream_cls)
   1059 # In a synchronous context we are blocking the entire thread. Up to the library user to run the client in a
   1060 # different thread if necessary.
   1061 time.sleep(timeout)
-> 1063 return self._request(
   1064     options=options,
   1065     cast_to=cast_to,
   1066     remaining_retries=remaining,
   1067     stream=stream,
   1068     stream_cls=stream_cls,
   1069 )

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/openai/_base_client.py:1015, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
   1013 if retries > 0 and self._should_retry(err.response):
   1014     err.response.close()
-> 1015     return self._retry_request(
   1016         options,
   1017         cast_to,
   1018         retries,
   1019         err.response.headers,
   1020         stream=stream,
   1021         stream_cls=stream_cls,
   1022     )
   1024 # If the response is streamed then we need to explicitly read the response
   1025 # to completion before attempting to access the response text.
   1026 if not err.response.is_closed:

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/openai/_base_client.py:1063, in SyncAPIClient._retry_request(self, options, cast_to, remaining_retries, response_headers, stream, stream_cls)
   1059 # In a synchronous context we are blocking the entire thread. Up to the library user to run the client in a
   1060 # different thread if necessary.
   1061 time.sleep(timeout)
-> 1063 return self._request(
   1064     options=options,
   1065     cast_to=cast_to,
   1066     remaining_retries=remaining,
   1067     stream=stream,
   1068     stream_cls=stream_cls,
   1069 )

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/openai/_base_client.py:1030, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
   1027         err.response.read()
   1029     log.debug("Re-raising status error")
-> 1030     raise self._make_status_error_from_response(err.response) from None
   1032 return self._process_response(
   1033     cast_to=cast_to,
   1034     options=options,
   (...)
   1037     stream_cls=stream_cls,
   1038 )

openai.InternalServerError: Error code: 500 - {'statusCode': 500, 'message': 'Internal server error', 'activityId': 'xxx-xxx-xxx-xxx'}

abhinabsarkar commented 4 months ago

Hello, is there any update on this issue that can be shared, please?

hk102 commented 3 months ago

I don't see a 'streaming' parameter in the create_sql_agent() method's signature. Please try removing it and checking again.

Reference: libs/community/langchain_community/agent_toolkits/sql/base.py.
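
For example, dropping that keyword from the repro above leaves (a sketch, otherwise unchanged):

agent_executor = create_sql_agent(
    llm=llm,
    toolkit=toolkit,
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    agent_executor_kwargs={'handle_parsing_errors': True},
)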

Also, please re-check connectivity by running the following:

  1. db.run()
  2. llm()

This will help verify the connection and query execution.
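
A minimal sketch of those checks against the repro objects (the SQL statement is an assumed placeholder, and the chat model is exercised via invoke rather than called directly):

# 1. Verify the database connection and query execution.
print(db.run("SELECT COUNT(*) FROM titanic"))

# 2. Verify the LLM connection with a direct call.
print(llm.invoke("Reply with OK").content)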

dosubot[bot] commented 4 days ago

Hi, @abhinabsarkar. I'm Dosu, and I'm helping the LangChain team manage their backlog. I'm marking this issue as stale.

Issue Summary:

Next Steps:

Thank you for your understanding and contribution!