lmstudio-ai / lmstudio-bug-tracker

Bug tracking for the LM Studio desktop application

ToolCall issue in LM Studio - Model : Llama 3.1 #75 #126

Open Vikneshkumarmohan opened 2 months ago

Vikneshkumarmohan commented 2 months ago

Tool calls are not generated in the response from the Llama 3.1 model in LM Studio when using the LangChain framework and connecting through ChatOpenAI. The same tool call works fine with Ollama for the same Llama 3.1 model.

Per the LangChain team, the response from the model was malformed; the tool call works as expected with Ollama.

You can also refer to issue #26342 in langchain-ai.
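The failure also reproduces without LangGraph. Here is a minimal sketch, assuming a Llama 3.1 model is loaded in LM Studio on the default port; `get_weather` is just an illustrative tool, not from the original repro:

```python
# Minimal repro without LangGraph: bind a trivial tool and inspect
# the parsed tool calls on the response message.
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def get_weather(city: str) -> str:
    """Return the weather for a city."""
    return f"It is sunny in {city}."

llm = ChatOpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
llm_with_tools = llm.bind_tools([get_weather])

msg = llm_with_tools.invoke("What is the weather in Paris?")
# Per this report, tool_calls comes back empty when pointing at
# LM Studio but is populated when pointing at Ollama.
print(msg.tool_calls)
print(msg.content)
```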

Below is the code snippet:

```python
from typing import Annotated

import os

from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import BaseMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
from typing_extensions import TypedDict

# Get the Tavily API key from the environment
TAVILY_API_KEY = os.getenv("TAVILY_API_KEY")

# Point to the local LM Studio server
llm = ChatOpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)

tool = TavilySearchResults(max_results=2, include_answer=True)
tools = [tool]

llm_with_tools = llm.bind_tools(tools)

def chatbot(state: State):
    print(state)
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)

graph_builder.add_conditional_edges("chatbot", tools_condition)

# Any time a tool is called, return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
graph_builder.set_entry_point("chatbot")
graph = graph_builder.compile()

while True:
    user_input = input("User: ")
    if user_input.lower() in ["quit", "exit", "q"]:
        print("Goodbye!")
        break
    for event in graph.stream({"messages": [("user", user_input)]}):
        for value in event.values():
            if isinstance(value["messages"][-1], BaseMessage):
                print("Assistant:", value["messages"][-1].content)
```
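To confirm where the response breaks, it can help to bypass LangChain entirely and hit LM Studio's OpenAI-compatible endpoint with the raw `openai` client. A sketch follows; the model name and tool schema here are placeholders, not from the original repro:

```python
# Sketch: query LM Studio's OpenAI-compatible endpoint directly to see
# whether tool_calls is populated in the raw chat completion.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
resp = client.chat.completions.create(
    model="llama-3.1",  # placeholder; adjust to the model loaded in LM Studio
    messages=[{"role": "user", "content": "Search for LangGraph news"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "search_web",  # illustrative tool schema
            "description": "Search the web for a query",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }],
)
choice = resp.choices[0]
# If tool_calls is None and the call shows up as plain text in
# choice.message.content, the parsing failure is upstream of LangChain.
print(choice.message.tool_calls)
print(choice.message.content)
```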

yagil commented 2 weeks ago

We're about to start a beta for this. If you're interested, please fill out this Google Form: https://docs.google.com/forms/d/e/1FAIpQLSfqpRunKgv2ui4_CWyeQ88gFtOG6CEqFSbpRrHWgJjs2mIodw/viewform?usp=sf_link