langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com

ChatOllama does not support bind_tools as well as ChatGroq, even if the model used is the same #26335

Open raj-acrivon opened 2 months ago

raj-acrivon commented 2 months ago

Checked other resources

Example Code

If I run the same code with ChatGroq, I am able to retrieve the relevant table.

What am I doing wrong? My goal is to use this with LangGraph, so I structured fake_state as shown below.

```python
import multiprocessing

from langchain_community.agent_toolkits import SQLDatabaseToolkit
from langchain_community.utilities import SQLDatabase
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.tools import tool
from langchain_ollama import ChatOllama

llm = ChatOllama(
    model="llama3-groq-tool-use:8b",
    temperature=0,
    top_k=10,
    top_p=0.5,
    repeat_penalty=1.3,
    num_thread=multiprocessing.cpu_count() - 1,
)

db = SQLDatabase.from_uri("sqlite:////home/azureuser/ai_chatbot/sample.db", sample_rows_in_table_info=3)

toolkit = SQLDatabaseToolkit(db=db, llm=llm)
tools = toolkit.get_tools()

class ToolInput(BaseModel):
    tables: str = Field(description="Input is a comma-separated list of relevant tables (e.g., 'table1, table2, table3')")

@tool("get_schema_tool_use", args_schema=ToolInput)
def get_schema_tool_use(tables: str) -> str:
    """Input is a comma-separated list of relevant tables; output is the schema and
    sample rows for those tables. Be sure that the tables actually exist in the
    list_tables_tool output. Example input tables: table1, table2, table3
    """
    return next(tool for tool in tools if tool.name == "sql_db_schema").invoke(tables)
```

```python
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage
from langchain_core.prompt_values import ChatPromptValue

# local llm with tool access
llm_with_tools = llm.bind_tools(
    tools=[get_schema_tool_use],
    tool_choice={"type": "function", "function": {"name": "get_schema_tool_use"}},
)

# Example fake state with a list of messages
fake_state = ChatPromptValue(
    messages=[
        HumanMessage(
            content="What is the mean phosphorylation level of the X across samples in the Y from quant table?",
            id="a6bab255-2619-4b95-90b9-f3d8b321a99a",
        ),
        AIMessage(
            content="",
            id="75948a2f-0ba8-4249-bdb4-8a2e6b447bc",
            tool_calls=[{"name": "list_tables_tool", "args": {}, "id": "tool_abcd123", "type": "tool_call"}],
        ),
        ToolMessage(
            content="de, description, gsea, metadata,, quant",
            name="list_tables_tool",
            id="4a94a0f0-8481-404c-8957-b5e8acf3d2da",
            tool_call_id="tool_abcd123",
        ),
    ]
)

response = llm_with_tools.invoke(fake_state)
response  # response.tool_calls == []
```

The model replies with plain text and no tool call:

```
AIMessage(content="I'm sorry but I do not have enough information to complete that task. Can you provide more details or clarify your question?", response_metadata={'model': 'llama3-groq-tool-use:8b', 'created_at': '2024-09-11T17:05:40.432974842Z', 'message': {'role': 'assistant', 'content': "I'm sorry but I do not have enough information to complete that task. Can you provide more details or clarify your question?"}, 'done_reason': 'stop', 'done': True, 'total_duration': 26541844067, 'load_duration': 2809424482, 'prompt_eval_count': 299, 'prompt_eval_duration': 19489875000, 'eval_count': 26, 'eval_duration': 4114271000}, id='run-5a5f656d-9217-4668-97d2-d545487f0d82-0', usage_metadata={'input_tokens': 299, 'output_tokens': 26, 'total_tokens': 325})
```
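For comparison, here is a minimal sketch of the ChatGroq variant that reportedly does emit the tool call. Assumptions not in the original report: langchain-groq is installed, GROQ_API_KEY is set, and the Groq-hosted model name follows the one used in the comment further below.

```python
from langchain_groq import ChatGroq

# Hypothetical Groq-side equivalent of the ChatOllama setup above.
groq_llm = ChatGroq(model="llama3-groq-8b-8192-tool-use-preview", temperature=0)
groq_with_tools = groq_llm.bind_tools(
    tools=[get_schema_tool_use],
    tool_choice={"type": "function", "function": {"name": "get_schema_tool_use"}},
)
groq_response = groq_with_tools.invoke(fake_state)
groq_response.tool_calls  # reportedly non-empty here, unlike the ChatOllama run
```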


I am able to run the code below, but not the code above:

```python
%%time

class UserInput(BaseModel):
    location: str = Field(description="The city and state of the user input e.g.: Los Angeles")
    unit: str = Field(..., description="Unit of measurement for weather (e.g., 'Fahrenheit', 'CELSIUS')")

@tool("weather_monuments_call", args_schema=UserInput)
def weather_monuments_call(location: str, unit: str, city_traveling_spots: list[str]) -> str:
    """Get weather and best three places to visit at given location"""
    # note: city_traveling_spots is not in UserInput, so it is absent from generated tool calls
    return f"Current weather in {location} is 22 {unit}."

# local llm with tool access
llm_with_tools = llm.bind_tools(
    tools=[weather_monuments_call],
    tool_choice={"type": "function", "function": {"name": "weather_monuments_call"}},
)

response = llm_with_tools.invoke(
    "What is the weather in San Francisco in celsius and what are the best places to visit in San Francisco?"
)
response.tool_calls
```

I have exhausted a lot of the available online blogs but can't resolve my error. Can anyone help with this? :)

Error Message and Stack Trace (if applicable)

No response

Description

See the Example Code section above.

System Info

```
langchain==0.2.14
langchain-community==0.2.12
langchain-core==0.2.39
langchain-experimental==0.0.64
langchain-groq==0.1.9
langchain-huggingface==0.0.3
langchain-ollama==0.1.3
langchain-openai==0.1.23
langchain-text-splitters==0.2.4
```

Mhijazi16 commented 2 months ago

Damn, I have a similar issue. Mine is that when I try to use tools with ChatOllama with react_agent, it only executes one tool, but when I use the same model with ChatGroq it uses all the tools at once:

```python
llm_with_tools = ChatGroq(model="llama3-groq-8b-8192-tool-use-preview", temperature=0).bind_tools([add, multiply, divide])
llm_with_tools = ChatOllama(model="llama3-groq-tool-use", temperature=0).bind_tools([add, multiply, divide])
```

The prompt is "use the tools to add 5 to 3 after that multiply the result by 2 and finally divide it by 4". When I use it with the Ollama model, it only adds 5 to 3 and returns 8, but when I use the same model with Groq it uses all three tools and returns 4. A sketch of the setup follows.
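A minimal sketch of the setup described above, assuming the tools are plain @tool functions and the agent is LangGraph's prebuilt create_react_agent (neither is shown in the original comment):

```python
from langchain_core.tools import tool
from langchain_ollama import ChatOllama
from langgraph.prebuilt import create_react_agent

# Hypothetical tool definitions; the original comment only names add, multiply, and divide.
@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

@tool
def divide(a: int, b: int) -> float:
    """Divide a by b."""
    return a / b

llm = ChatOllama(model="llama3-groq-tool-use", temperature=0)
agent = create_react_agent(llm, [add, multiply, divide])
result = agent.invoke(
    {"messages": [("user", "use the tools to add 5 to 3 after that multiply the result by 2 and finally divide it by 4")]}
)
# Reported behavior: with Groq the agent chains all three tools and returns 4;
# with Ollama it stops after add(5, 3) and returns 8.
```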

This is what I see in LangSmith for Groq: [image]. And this is for the Ollama model: [image].

If you fixed it or can help, please answer.

Mhijazi16 commented 2 months ago

Is there a way to boost this issue?

raj-acrivon commented 2 months ago

Current Status: https://github.com/langchain-ai/langchain/issues/21479#issuecomment-2353808289