Closed: jyoti194 closed this issue 6 months ago.
🤖
Hey @jyoti194, nice to see you back here! How's everything going on your end?
It seems like you're trying to use the ConversationBufferMemory class to maintain the context of the conversation in your SQLDatabaseChain. However, based on the error messages you're receiving, it appears that the LLMChain and SQLDatabaseChain classes are not properly handling the memory object.
In the first case, where you're using SQLDatabaseChain.from_llm(llm, db, verbose=True, memory=memory), the error message 'input' suggests that the ConversationBufferMemory object is not being properly initialized or used. It's possible that the input key is not being found in the memory object, which is causing the error.
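One thing worth checking is whether the memory's keys line up with what the chain actually passes in. As a minimal sketch (assuming SQLDatabaseChain's default input key, "query", and a prompt that exposes a {history} variable), you could pin the keys explicitly:

from langchain.memory import ConversationBufferMemory

# Hedged sketch: pin the memory keys so they match the chain's expected
# input key ("query" for SQLDatabaseChain) and the prompt's {history} variable.
memory = ConversationBufferMemory(memory_key="history", input_key="query")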
In the second case, where you're using:

db_chain = SQLDatabaseChain(
    llm_chain=LLMChain(llm, memory=memory),
    database=db,
    verbose=True
)

the error message TypeError: Serializable.__init__() takes 1 positional argument but 2 were given suggests that the LLMChain constructor is not being called the way it expects: it does not accept the language model as a positional argument.
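If you do want to build the LLMChain yourself, note that these chain classes are pydantic models and only accept keyword arguments. A rough sketch (here prompt stands for whatever SQL prompt template you are using, and attaching memory at this level still requires the prompt to accept the memory's variables):

from langchain.chains import LLMChain

# Sketch: pydantic-based chains take keyword arguments only,
# and LLMChain also requires a prompt template.
llm_chain = LLMChain(llm=llm, prompt=prompt, memory=memory)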
Unfortunately, without more information about the implementation of these classes, it's difficult to provide a more specific solution. However, based on the context provided, it seems that the ConversationBufferMemory class is intended to be used with the RunnablePassthrough.assign function to load the history of the conversation, and with the save_context method to save the context of the conversation after the SQL query is run and the output is parsed.
Here is the relevant code from the context:
memory = ConversationBufferMemory(return_messages=True)

# Chain to query with memory
sql_chain = (
    RunnablePassthrough.assign(
        schema=get_schema,
        history=RunnableLambda(lambda x: memory.load_memory_variables(x)["history"]),
    )
    | prompt
    | llm.bind(stop=["\nSQLResult:"])
    | StrOutputParser()
)

def save(input_output):
    output = {"output": input_output.pop("output")}
    memory.save_context(input_output, output)
    return output["output"]
In this code, memory is an instance of the ConversationBufferMemory class. The load_memory_variables method is used to load the history of the conversation, and the save_context method is used to save the context of the conversation.
You might want to try a similar approach in your code, using the RunnablePassthrough.assign function to load the history from the memory object, and the save_context method to save the context after the SQL query is run.
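As a rough usage sketch (assuming the prompt in the snippet above takes a question variable; the question text below is only an illustration), you could invoke the chain and then persist the turn yourself:

# Hedged sketch: run the LCEL chain, then store the exchange in memory.
question = "what is employee name of AD22050853"   # illustrative input
answer = sql_chain.invoke({"question": question})
memory.save_context({"input": question}, {"output": answer})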
If you're still having trouble, could you provide more information about how the LLMChain and SQLDatabaseChain classes are implemented, and how they're supposed to interact with the ConversationBufferMemory class? This would help in providing a more specific solution.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
@dosubot can you give me the method for how to add memory to SQLDatabaseChain, or to the above code?
🤖
From your code, it seems like you're trying to use the ConversationBufferMemory class as the memory for your SQLDatabaseChain. However, the ConversationBufferMemory class is designed to be used with the ChatOpenAI class, not with the SQLDatabaseChain class.
The SQLDatabaseChain class doesn't have a built-in way to handle memory. If you want to add memory to the SQLDatabaseChain, you would need to create a custom memory class that inherits from the BaseMemory class and implements the abstract methods.
Here's a basic example of how you can create a custom memory class:
from langchain.memory import BaseMemory
from typing import Dict, Any, List

class CustomMemory(BaseMemory):
    memories: Dict[str, Any] = dict()

    @property
    def memory_variables(self) -> List[str]:
        return list(self.memories.keys())

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        return self.memories

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        pass

    def clear(self) -> None:
        self.memories.clear()
In this example, the memory_variables property returns the keys of the memories dictionary, the load_memory_variables method loads the memory variables from the dictionary, the save_context method is a placeholder that you need to implement based on your requirements (one possible implementation is sketched below), and the clear method clears the memory contents.
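For example, if you want the memory to accumulate a plain-text history of question/answer pairs, save_context could be filled in along these lines (a sketch only; it assumes the chain passes its default "query" input key and "result" output key):

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        # Append the latest turn to a running "history" entry.
        # "query" and "result" are assumptions based on SQLDatabaseChain's default keys.
        previous = self.memories.get("history", "")
        question = inputs.get("query", "")
        answer = outputs.get("result", "")
        self.memories["history"] = f"{previous}Human: {question}\nAI: {answer}\n"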
You can then use this custom memory class with your SQLDatabaseChain like this:

memory = CustomMemory()

db_chain = SQLDatabaseChain(
    llm_chain=LLMChain(llm),
    database=db,
    verbose=True,
    memory=memory
)
Please note that this is a basic example and might need to be adjusted based on your specific requirements.
For more information about the BaseMemory class and how to implement a custom memory class, you can refer to the LangChain documentation and the source code of the BaseMemory class.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
from langchain.memory import BaseMemory
ImportError: cannot import name 'BaseMemory' from 'langchain.memory'
@dosubot here is my code for adding memory to the chatbot working with SQLDatabaseChain:

import pandas as pd
import sqlalchemy as sal
import os, sys
#openai
import constants
from langchain.sql_database import SQLDatabase
from langchain.llms.openai import OpenAI
from langchain_experimental.sql import SQLDatabaseChain
from sqlalchemy import create_engine, text, exc
from langchain.chat_models import ChatOpenAI
from typing import List, Optional
import openai
from secret_key import openapi_key
from langchain.prompts import PromptTemplate
from secret_key import openapi_key
from fpdf import FPDF
from langchain_experimental.tools import PythonREPLTool
from langchain_experimental.tools.python import tool

import dotenv
dotenv.load_dotenv(override=False)
from langchain import SQLDatabase
from langchain.agents import (AgentExecutor, Tool, ZeroShotAgent, initialize_agent, load_tools)
from langchain.chains.conversation.memory import ConversationBufferMemory
from typing import Dict, Any, List
from langchain.utilities import SQLDatabase
from langchain.prompts import PromptTemplate

os.environ['OPENAI_API_KEY'] = openapi_key

from urllib.parse import quote_plus
server_name = constants.server_name
database_name = constants.database_name
username = constants.username
password = constants.password
encoded_password = quote_plus(password)

connection_uri = f"mssql+pyodbc://{username}:{encoded_password}@{server_name}/{database_name}?driver=ODBC+Driver+17+for+SQL+Server"
engine = create_engine(connection_uri)

model_name="gpt-3.5-turbo-instruct"
include_tables=['EGV_emp_departments_ChatGPT']
db = SQLDatabase(engine, view_support=True, include_tables=include_tables)

llm = OpenAI(temperature=0, verbose=True, model=model_name)
memory = ConversationBufferMemory()

PROMPT_SUFFIX = """Only use the following tables:
{table_info}

Previous Conversation:
{history}

Question: {input}"""

_DEFAULT_TEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer. Unless the user specifies in his question a specific number of examples he wishes to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.

Never query for all the columns from a specific table, only ask for a the few relevant columns given the question.

Pay attention to use only the column names that you can see in the schema description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.

Use the following format:

Question: Question here
SQLQuery: SQL Query to run
SQLResult: Result of the SQLQuery
Answer: Final answer here

"""

PROMPT = PromptTemplate.from_template(
    _DEFAULT_TEMPLATE + PROMPT_SUFFIX,
)

db_chain = SQLDatabaseChain.from_llm(llm, db, prompt=PROMPT, verbose=True, memory=memory)

db_chain.run("to which department they belong too")
print(memory.load_memory_variables({}))
Entering new SQLDatabaseChain chain...
what is employee name of AD22050853
SQLQuery:SELECT EmployeeName FROM EGV_emp_departments_ChatGPT WHERE EmployeeID = 'AD####'
SQLResult: [('Harin V#####',)]
Answer:Harin V#####
Finished chain.
{'history': 'Human: what is employee name of AD####\nAI: Harin V#####'}
But after running the 2nd question, it's not taking the 1st question into consideration; instead it's treated like a new question:
Entering new SQLDatabaseChain chain...
to which department they belong too
SQLQuery:SELECT DepartmentName FROM EGV_emp_departments_ChatGPT WHERE EmployeeName = 'S####'
SQLResult: [('Facility Management',)]
Answer:Facility Management
Finished chain.
{'history': 'Human: to which department they belong too\nAI: Facility Management'}
🤖
Based on the code you've provided, it seems like you're trying to use the ConversationBufferMemory class to add memory to the SQLDatabaseChain. However, the ConversationBufferMemory class is designed to store conversation memory, not to be used directly with the SQLDatabaseChain.

The error message "Unknown Error Occured" related to the 'input' key might be due to an overlap between the keys in the memory and the 'input' key. The validate_prompt_input_variables method in the ConversationChain class checks for this condition and raises a ValueError if there's an overlap. To avoid this, you need to ensure that the 'input' key you're using doesn't overlap with the keys in the memory.
Here's how you can do it:
from langchain.chains import ConversationChain
from langchain.llms import OpenAI
from langchain.memory.sql import SQLDatabaseMemory # Assuming this is the SQL memory class
# Initialize the memory with non-overlapping keys
memory = SQLDatabaseMemory(keys=['key1', 'key2', 'key3'])
# Initialize the conversation chain with the memory
conversation = ConversationChain(llm=OpenAI(), memory=memory)
In the above code, replace 'key1', 'key2', and 'key3' with the actual keys you want to use. Make sure that these keys don't include 'input'.
As for the TypeError you're encountering when using the LLMChain class, without the specific error message and the code that's causing it, it's hard to provide a solution. Could you please provide more details about this error?
Regarding the ImportError you're getting when trying to import BaseMemory from langchain.memory, it seems like BaseMemory might not be a part of the langchain.memory module in the version of LangChain you're using. You might want to check the documentation or the source code of the LangChain library to see if BaseMemory is available and how to use it.
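For what it's worth, depending on the installed LangChain version the base class is usually exposed from langchain.schema (or, in newer package splits, from langchain_core); a version-dependent sketch:

# One of these imports should work, depending on your LangChain version:
from langchain.schema import BaseMemory
# from langchain_core.memory import BaseMemory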
I hope this helps! If you have any more questions or need further clarification, feel free to ask.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
hi @dosubot here is my code which is using ConversationBufferMemory to store the memory:

os.environ['OPENAI_API_KEY'] = openapi_key

from urllib.parse import quote_plus
server_name = constants.server_name
database_name = constants.database_name
username = constants.username
password = constants.password
encoded_password = quote_plus(password)
connection_uri = f"mssql+pyodbc://{username}:{encoded_password}@{server_name}/{database_name}?driver=ODBC+Driver+17+for+SQL+Server"

engine = create_engine(connection_uri)
model_name="gpt-3.5-turbo-16k"
db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_ChatGPT'])

db_chain = None

llm = ChatOpenAI(temperature=0, verbose=False, model=model_name)

PROMPT_SUFFIX = """Only use the following tables:
{table_info}

Previous Conversation:
{history}

Question: {input}"""

_DEFAULT_TEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer. Unless the user specifies in his question a specific number of examples he wishes to obtain, If a column name is not present, refrain from writing the SQL query. column like UAN number, PF number are not not present do not consider such columns. You can order the results by a relevant column to return the most interesting examples in the database.

Never query for all the columns from a specific table, only ask for a the few relevant columns given the question.

Pay attention to use only the column names that you can see in the schema description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table. Return the answer in user friendly form.
"""

PROMPT = PromptTemplate.from_template(_DEFAULT_TEMPLATE + PROMPT_SUFFIX)

memory = None
def chat1(question):
    global db_chain
    global memory

    if memory == None:
        # llm = ChatOpenAI(temperature=0, verbose=False, model=model_name)
        memory = ConversationBufferMemory()
        # db_chain = SQLDatabaseChain.from_llm(llm, db, prompt=PROMPT, verbose=True, memory=memory)
        db_chain = SQLDatabaseChain.from_llm(llm, db, prompt=PROMPT, verbose=True, memory=memory)

    # while True:
    #     try:
    #         print("*****")
    #         print(memory.load_memory_variables({})['history'])
    #         question = input("Enter your Question : ")

    greetings = ["hi", "hello", "hey"]
    if any(greeting == question.lower() for greeting in greetings):
        print(question)
        print("Hello! How can I assist you today?")
        return "Hello! How can I assist you today?"
    else:
        answer = db_chain.run(question)
        print(memory.load_memory_variables({}))
        return answer
But here, when I'm asking the 2nd question, it's throwing the error "pyodbc.ProgrammingError: ('42000', "[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Incorrect syntax near 'Jyothi'.")". However, when I'm asking a 3rd question, which is not related to the first 2, it's giving the answer and storing memory, like this:
what is jyothi employee id
Entering new SQLDatabaseChain chain...
what is jyothi employee id
SQLQuery:SELECT EmployeeID FROM EGV_emp_departments_ChatGPT WHERE EmployeeName = 'Jyothi'
SQLResult: [('AD23020933',)]
Answer:Jyothi's employee ID is AD23020933.
Finished chain.
{'history': "Human: what is jyothi employee id\nAI: Jyothi's employee ID is AD23020933."}
Jyothi's employee ID is AD23020933.
127.0.0.1 - - [05/Feb/2024 11:01:28] "GET /get_answer?questions=what%20is%20jyothi%20employee%20id HTTP/1.1" 200 -

what is her mail id
Entering new SQLDatabaseChain chain...
what is her mail id
SQLQuery:SELECT UserMail FROM EGV_emp_departments_ChatGPT WHERE EmployeeName = 'Jyothi' Answer: Jyothi's email ID is jyothi@example.com.
[2024-02-05 11:01:45,039] ERROR in app: Exception on /get_answer [GET]
Traceback (most recent call last):
  File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\engine\base.py", line 1969, in _exec_single_context
    self.dialect.do_execute(
  File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\engine\default.py", line 922, in do_execute
    cursor.execute(statement, parameters)
pyodbc.ProgrammingError: ('42000', "[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Incorrect syntax near 'Jyothi'. (102) (SQLExecDirectW); [42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Unclosed quotation mark after the character string 's email ID is jyothi@example.com.'. (105)")
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\flask\app.py", line 1455, in wsgi_app
    response = self.full_dispatch_request()
  File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\flask\app.py", line 869, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\flask_cors\extension.py", line 176, in wrapped_function
    return cors_after_request(app.make_response(f(*args, **kwargs)))
  File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\flask\app.py", line 867, in full_dispatch_request
    rv = self.dispatch_request()
  File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\flask\app.py", line 852, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "c:\Users\rndbcpsoft\OneDrive\Desktop\test\flask8.py", line 46, in generate_answer
    answer = chat1(questions)
  File "c:\Users\rndbcpsoft\OneDrive\Desktop\test\main5.py", line 180, in chat1
    answer = db_chain.run(question)
  File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\chains\base.py", line 505, in run
    return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
  File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\chains\base.py", line 310, in __call__
    raise e
  File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\chains\base.py", line 304, in __call__
    self._call(inputs, run_manager=run_manager)
  File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain_experimental\sql\base.py", line 208, in _call
    raise exc
  File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain_experimental\sql\base.py", line 143, in _call
    result = self.database.run(sql_cmd)
  File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\utilities\sql_database.py", line 433, in run
    result = self._execute(command, fetch)
  File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\utilities\sql_database.py", line 411, in _execute
    cursor = connection.execute(text(command))
  File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\engine\base.py", line 1416, in execute
    return meth(
  File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\sql\elements.py", line 516, in _execute_on_connection
    return connection._execute_clauseelement(
  File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\engine\base.py", line 1639, in _execute_clauseelement
    ret = self._execute_context(
  File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\engine\base.py", line 1848, in _execute_context
    return self._exec_single_context(
  File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\engine\base.py", line 1988, in _exec_single_context
    self._handle_dbapi_exception(
  File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\engine\base.py", line 2343, in _handle_dbapi_exception
    raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
  File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\engine\base.py", line 1969, in _exec_single_context
    self.dialect.do_execute(
  File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\engine\default.py", line 922, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', "[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Incorrect syntax near 'Jyothi'. (102) (SQLExecDirectW); [42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Unclosed quotation mark after the character string 's email ID is jyothi@example.com.'. (105)")
[SQL: SELECT UserMail FROM EGV_emp_departments_ChatGPT WHERE EmployeeName = 'Jyothi' Answer: Jyothi's email ID is jyothi@example.com.]
(Background on this error at: https://sqlalche.me/e/20/f405)
127.0.0.1 - - [05/Feb/2024 11:01:45] "GET /get_answer?questions=what%20is%20her%20mail%20id HTTP/1.1" 500 -

what is employee name of ad22050853
Entering new SQLDatabaseChain chain...
what is employee name of ad22050853
SQLQuery:SELECT EmployeeName FROM EGV_emp_departments_ChatGPT WHERE EmployeeID = 'AD22050853'
SQLResult: [('Harin Vimal Bharathi',)]
Answer:The employee name of AD22050853 is Harin Vimal Bharathi.
Finished chain.
{'history': "Human: what is jyothi employee id\nAI: Jyothi's employee ID is AD23020933.\nHuman: what is employee name of ad22050853\nAI: The employee name of AD22050853 is Harin Vimal Bharathi."}
The employee name of AD22050853 is Harin Vimal Bharathi.
127.0.0.1 - - [05/Feb/2024 11:03:48] "GET /get_answer?questions=%20what%20is%20employee%20name%20of%20ad22050853 HTTP/1.1" 200 -
🤖
Based on the code snippet you've provided and the error messages you're encountering, it seems like there might be a couple of issues at play here.
Firstly, the "Unknown Error Occured" message with 'input' as the error message might be due to a conflict with the 'input' key in the memory keys and the input key. This could be causing a ValueError as per the validate_prompt_input_variables
method in the ConversationChain
class. To resolve this, you need to ensure that the 'input' key is not present in the memory keys of the SQLDatabaseChain
. If it is, you need to rename it to something else.
Secondly, the TypeError: Serializable.__init__() takes 1 positional argument but 2 were given might be due to the way you're trying to instantiate the SQLDatabaseChain. The from_llm() method in the LLMRouterChain class is a convenience constructor that takes a language model (llm), a prompt (prompt), and any additional keyword arguments (**kwargs). It creates an instance of LLMChain using the provided llm and prompt, and then returns an instance of LLMRouterChain using the created LLMChain and the additional keyword arguments.
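SQLDatabaseChain.from_llm follows the same pattern. Roughly (a sketch of the idea, not the library source), it does the equivalent of:

from langchain.chains import LLMChain
from langchain_experimental.sql import SQLDatabaseChain

# Sketch: what from_llm does for you, using keyword arguments throughout.
llm_chain = LLMChain(llm=llm, prompt=PROMPT)   # PROMPT: your SQL prompt template
db_chain = SQLDatabaseChain(llm_chain=llm_chain, database=db, verbose=True, memory=memory)

which is why constructing LLMChain with a positional llm argument fails.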
Here's an example of how you can modify your code to address these issues:
from langchain.chains import ConversationChain
from langchain.llms import OpenAI
from langchain.memory.sql import SQLDatabaseChain
# Ensure 'input' key is not in the memory keys of SQLDatabaseChain
sql_memory = SQLDatabaseChain()
sql_memory.memory_variables = [key for key in sql_memory.memory_variables if key != 'input']
conversation = ConversationChain(llm=OpenAI(), memory=sql_memory)
This code first creates an instance of SQLDatabaseChain and removes the 'input' key from its memory variables if it exists. Then, it creates an instance of ConversationChain with OpenAI as the language model and the modified SQLDatabaseChain as the memory.
Please try this solution and let me know if it resolves your issue or if you need further assistance.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
Checked other resources
Example Code
# Set the OPENAI_API_KEY environment variable
os.environ['OPENAI_API_KEY'] = openapi_key

# Define connection parameters using constants
from urllib.parse import quote_plus
server_name = constants.server_name
database_name = constants.database_name
username = constants.username
password = constants.password
encoded_password = quote_plus(password)
connection_uri = f"mssql+pyodbc://{username}:{encoded_password}@{server_name}/{database_name}?driver=ODBC+Driver+17+for+SQL+Server"

# Create an engine to connect to the SQL database
engine = create_engine(connection_uri)
model_name="gpt-3.5-turbo-16k"
db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_ChatGPT'])

llm = ChatOpenAI(temperature=0, verbose=False, model=model_name)

# Define a function named chat that takes a question and SQL format indicator as input
def chat1(question):
Here is my code, in which I'm trying to integrate the memory so that the model can remember the previous question and answer for the next question, but I'm not sure of the exact method.
Error Message and Stack Trace (if applicable)
In the above code, while running with db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, memory=memory), where I have given memory=memory, I'm getting the answer "Unknown Error Occured", which just goes to the exception, like this:

The question: what is employee name of AD####
SQLQuery:SELECT [EmployeeName] FROM [EGV_emp_departments_ChatGPT] WHERE [EmployeeID] = 'AD####'
SELECT [EmployeeName] FROM [EGV_emp_departments_ChatGPT] WHERE [EmployeeID] = 'AD####'
[('H########i',)]
SQLResult: [('H########',)]
Answer:The employee name of AD#### is H######.
While using it like this:

db_chain = SQLDatabaseChain(
    llm_chain=LLMChain(llm, memory=memory),
    database=db,
    verbose=True
)

it's not fetching the answer:
C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\__init__.py:34: UserWarning: Importing LLMChain from langchain root module is no longer supported. Please use langchain.chains.LLMChain instead.
  warnings.warn(
C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\__init__.py:34: UserWarning: Importing OpenAI from langchain root module is no longer supported. Please use langchain.llms.OpenAI instead.
  warnings.warn(
C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\__init__.py:34: UserWarning: Importing SQLDatabase from langchain root module is no longer supported. Please use langchain.utilities.SQLDatabase instead.
  warnings.warn(
Traceback (most recent call last):
  File "c:\Users\rndbcpsoft\OneDrive\Desktop\test\main6.py", line 217, in <module>
    result = chat1("what is employee name of AD22050853")
  File "c:\Users\rndbcpsoft\OneDrive\Desktop\test\main6.py", line 115, in chat1
    llm_chain=LLMChain(llm, memory=memory),
TypeError: Serializable.__init__() takes 1 positional argument but 2 were given
Description
System Info
python: 3.11
langchain: latest