run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

Unable to use a local model #14872

Closed JulienSantiago closed 3 months ago

JulienSantiago commented 3 months ago

Feature Description

from llama_index.core.tools import FunctionTool

import yfinance as yf

# `llm` is assumed to be defined earlier in the notebook as the local
# model (per the traceback below, a Mistral 7B GGUF loaded locally).
def get_stock_price(stock_name: str) -> str:
    """Gives the current price of a stock."""
    try:
        stock = yf.Ticker(stock_name)
        price = stock.history(period="1d")['Close'].iloc[-1]
        return f"The current price of {stock_name} is {price:.2f} USD."
    except Exception as e:
        return f"Sorry, I could not retrieve the stock price for {stock_name}. Error: {e}"

get_price_tool = FunctionTool.from_defaults(fn=get_stock_price)

response = llm.predict_and_call(
    [get_price_tool], 
    "What is the current price of AAPL?", 
    verbose=True
)
print(str(response))

This code works well and returns:

Thought: The current language of the user is: english. I need to use a tool to help me answer the question.
Action: get_stock_price
Action Input: {'stock_name': 'AAPL'}
Observation: The current price of AAPL is 224.31 USD.
The current price of AAPL is 224.31 USD.

This was the correct price for the stock at the time I wrote the issue.

However, I am trying to use an agentic strategy to answer more complex queries, and the following code:

from llama_index.core.agent import FunctionCallingAgentWorker
from llama_index.core.agent import AgentRunner

# `get_cash_tool` is a second FunctionTool assumed to be defined
# elsewhere in the notebook.
agent_worker = FunctionCallingAgentWorker.from_tools(
    [get_price_tool, get_cash_tool],
    llm=llm,
    verbose=True,
)
agent = AgentRunner(agent_worker)

raises the following error:

ValueError                                Traceback (most recent call last)
Cell In[16], line 4
      1 from llama_index.core.agent import FunctionCallingAgentWorker
      2 from llama_index.core.agent import AgentRunner
----> 4 agent_worker = FunctionCallingAgentWorker.from_tools(
      5     [get_price_tool, get_cash_tool],
      6     llm=llm,
      7     verbose=True
      8 )
      9 agent = AgentRunner(agent_worker)

File ~/anaconda3/envs/intellisearch/lib/python3.12/site-packages/llama_index/core/agent/function_calling/step.py:155, in FunctionCallingAgentWorker.from_tools(cls, tools, tool_retriever, llm, verbose, max_function_calls, callback_manager, system_prompt, prefix_messages, **kwargs)
    151     prefix_messages = [ChatMessage(content=system_prompt, role="system")]
    153 prefix_messages = prefix_messages or []
--> 155 return cls(
    156     tools=tools,
    157     tool_retriever=tool_retriever,
    158     llm=llm,
    159     prefix_messages=prefix_messages,
    160     verbose=verbose,
    161     max_function_calls=max_function_calls,
    162     callback_manager=callback_manager,
    163     **kwargs,
    164 )
...
    101 )
    102 self._llm = llm
    103 self._verbose = verbose

ValueError: Model name /var/model/mistral-7b-instruct-v0.2.Q5_K_M.gguf does not support function calling API.

In the first example the model can call tools, so why can't the same strategy be used for the function calling agent worker?

Reason

No response

Value of Feature

No response

dosubot[bot] commented 3 months ago

Hey, @JulienSantiago! I'm here to assist you with any bugs, questions, or contributions you might have.

The error occurs because the model you are using does not support the function calling API. In the first example, the model supports tool calling because it uses a FunctionCallingLLM with the is_function_calling_model attribute set to True. However, in the agentic strategy, the model you are using does not have this attribute set correctly, leading to the ValueError.

To resolve this issue, ensure that the FunctionCallingLLM you are using has the is_function_calling_model attribute set to True. Here is the relevant code snippet that checks this:

if not llm.metadata.is_function_calling_model:
    raise ValueError(
        f"Model name {llm.model} does not support function calling API. "
    )

Make sure your FunctionCallingLLM is correctly configured to support function calling. The FunctionCallingLLM class in the LlamaIndex framework provides methods like chat_with_tools, achat_with_tools, get_tool_calls_from_response, predict_and_call, and apredict_and_call to handle tool calling. Ensure these methods are properly implemented and that the is_function_calling_model attribute is set to True in the metadata [1][2][3].
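
A quick way to see which branch of that check applies is to inspect the LLM's metadata before constructing the worker (a minimal sketch; `llm` is assumed to be the local GGUF model from the traceback):

# Check whether the configured LLM advertises native function calling.
# For a local GGUF model this is False, which is exactly what trips
# the ValueError above.
print(llm.metadata.is_function_calling_model)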

To continue talking to Dosu, mention @dosu.

JulienSantiago commented 3 months ago

Edit: the difference is that the first example uses a ReAct agent. The following code:

from llama_index.core.agent import ReActAgentWorker
from llama_index.core.agent import AgentRunner

agent_worker = ReActAgentWorker.from_tools(
    [get_price_tool, get_cash_tool], 
    llm=llm, 
    verbose=True
)
agent = AgentRunner(agent_worker)

runs without error and works well.

logan-markewich commented 3 months ago

Yea, the solution here is to use a react agent. The function calling worker is only for LLMs that implement function calling in their API (OpenAI, Anthropic, etc.).
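
For reference, a minimal usage sketch against the ReAct agent built in the previous comment (the query string is illustrative):

# Ask the ReAct agent the same question; chat() is the standard
# AgentRunner entry point.
response = agent.chat("What is the current price of AAPL?")
print(str(response))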