lmstudio-ai / lmstudio.js

LM Studio TypeScript SDK (pre-release public alpha)
https://lmstudio.ai/docs/sdk
Apache License 2.0

ToolCall issue in LM Studio - Model : Llama 3.1 #75

Open · Vikneshkumarmohan opened this issue 1 month ago

Vikneshkumarmohan commented 1 month ago

Tool calls are not generated in the response from the llama 3.1 model served by LM Studio when using the LangChain framework connected through ChatOpenAI. The same tool call works fine with ollama for the same llama 3.1 model.

According to the LangChain team, the response from the model was not well-formed; the tool call works as expected with ollama.

You can also refer to issue #26342 in langchain-ai.
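A minimal repro, independent of LangGraph, shows the missing tool calls. This is a sketch assuming the same local server; the `get_weather` tool is hypothetical, purely for illustration (recent langchain-core exposes parsed tool calls on `AIMessage.tool_calls`):

```python
# Minimal repro: bind a tool and check whether the model response
# carries parsed tool calls. Assumes LM Studio is serving llama 3.1
# at http://localhost:1234/v1. The get_weather tool is hypothetical.
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"Sunny in {city}"

llm = ChatOpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
llm_with_tools = llm.bind_tools([get_weather])

response = llm_with_tools.invoke("What is the weather in Paris?")
# Against ollama this prints a non-empty list of tool calls;
# against LM Studio it comes back empty and the answer is plain text.
print("tool_calls:", response.tool_calls)
print("content:", response.content)
```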

Below is the full code snippet:

```python
from typing import Annotated
import os

from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import BaseMessage
from langchain_openai import ChatOpenAI
from typing_extensions import TypedDict

from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition

# Getting the env value
TAVILY_API_KEY = os.getenv("TAVILY_API_KEY")

# Point to the local server
llm = ChatOpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")


class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)

tool = TavilySearchResults(max_results=2, include_answer=True)
tools = [tool]

llm_with_tools = llm.bind_tools(tools)


def chatbot(state: State):
    print(state)
    return {"messages": [llm_with_tools.invoke(state["messages"])]}


graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)

graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)

# Any time a tool is called, we return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
graph_builder.set_entry_point("chatbot")
graph = graph_builder.compile()

while True:
    user_input = input("User: ")
    if user_input.lower() in ["quit", "exit", "q"]:
        print("Goodbye!")
        break
    for event in graph.stream({"messages": [("user", user_input)]}):
        for value in event.values():
            if isinstance(value["messages"][-1], BaseMessage):
                print("Assistant:", value["messages"][-1].content)
```
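For comparison, a minimal sketch of the ollama path that works, assuming the langchain-ollama package is installed and the llama3.1 model has been pulled locally; only the model construction changes and the rest of the graph stays the same:

```python
# Same graph, but with ollama as the backend instead of LM Studio.
# Assumes: `pip install langchain-ollama` and `ollama pull llama3.1`.
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.1")
llm_with_tools = llm.bind_tools(tools)
# With this backend, the chatbot node receives AIMessages whose
# tool_calls list is populated, so tools_condition routes to the
# ToolNode as expected.
```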

whogben commented 1 week ago

As far as I know, LM Studio does not yet support tool calls, but it will be fantastic when it does! There's a small chance I'm wrong and someone will correct me; I just couldn't find any reference to tool calling in the docs, and the tools parameter is not listed on the server's chat completions endpoint. Until then, you have to stick with ollama and other alternatives for anything that relies on tool calls.
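A quick way to verify is to hit the server with the raw OpenAI client and pass an OpenAI-style tools array. A sketch, where the get_weather schema is made up for illustration and the model name should match whatever is loaded in LM Studio:

```python
# Probe whether the local server honors the OpenAI `tools` parameter.
# Assumes LM Studio is serving at http://localhost:1234/v1; the
# get_weather schema below is hypothetical.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="llama-3.1",  # use the model identifier shown in LM Studio
    messages=[{"role": "user", "content": "What is the weather in Paris?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Return the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
)

message = response.choices[0].message
# If the endpoint supported tool calling, message.tool_calls would be
# populated; otherwise the answer arrives only as plain content.
print("tool_calls:", message.tool_calls)
print("content:", message.content)
```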

hansvdam commented 6 days ago

No, it does not seem to support it, unfortunately.