ggerganov / llama.cpp

LLM inference in C/C++

Feature Request: introduce Tool Call API in server mode #9031

Open tybalex opened 1 month ago

tybalex commented 1 month ago

Feature Description

In server mode, users should be able to use the OpenAI API for tool calling, just as they do when calling gpt-4. Today this is not supported yet: https://github.com/ggerganov/llama.cpp/blob/master/examples/server/utils.hpp#L394
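
For reference, a minimal sketch of the kind of OpenAI-style request body the server would have to accept, built here with nlohmann::json (which the server already uses). The model name and the `get_weather` tool are purely illustrative:

```cpp
// Sketch: an OpenAI-style chat completion request with a tool definition.
#include <nlohmann/json.hpp>
#include <iostream>

using json = nlohmann::json;

int main() {
    const json request = {
        {"model", "local-model"}, // placeholder; llama.cpp serves one model
        {"messages", json::array({
            {{"role", "user"}, {"content", "What is the weather in Paris?"}}
        })},
        {"tools", json::array({{
            {"type", "function"},
            {"function", {
                {"name", "get_weather"},
                {"description", "Get the current weather for a city"},
                {"parameters", {
                    {"type", "object"},
                    {"properties", {{"city", {{"type", "string"}}}}},
                    {"required", json::array({"city"})}
                }}
            }}
        }})}
    };
    std::cout << request.dump(2) << std::endl;
}
```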

Motivation

There are more and more open-source small models (7B, 30B, 70B) out there that support tool calling. Is it possible for llama.cpp to start supporting them? I also created my own tool-calling models: https://huggingface.co/rubra-ai/Meta-Llama-3-8B-Instruct-GGUF, but they require extra preprocessing and postprocessing steps to handle function-calling requests, so I had to create a fork, tools.cpp, and implement custom logic.

I wonder if it is possible to create something like a standard tool-calling template that tool-calling models can follow. Basically it needs to cover three things (a sketch of the first conversion follows the list):

  1. Convert tool-calling output from the model to the OAI JSON format.
  2. Convert OAI-format input function definitions into a proper system prompt for the local model.
  3. Convert OAI-format input chat messages carrying the `tool` role or `tool_calls` entries into a format that a tool-calling model can handle.
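
To make (1) concrete, here is a minimal sketch. It assumes a hypothetical model that emits its tool calls as a plain JSON array of `{"name", "arguments"}` objects; real models each use their own syntax, which is exactly why a standard template would help:

```cpp
// Sketch of conversion (1): wrap raw tool calls from a hypothetical model
// into an OpenAI-style assistant message.
#include <nlohmann/json.hpp>
#include <string>

using json = nlohmann::json;

static json raw_tool_calls_to_oai(const std::string & model_output) {
    const json calls = json::parse(model_output);
    json tool_calls  = json::array();
    int idx = 0;
    for (const auto & call : calls) {
        tool_calls.push_back({
            {"id",   "call_" + std::to_string(idx++)},
            {"type", "function"},
            {"function", {
                {"name",      call.at("name")},
                // the OpenAI API returns "arguments" as a JSON-encoded
                // string, not as a nested object
                {"arguments", call.at("arguments").dump()}
            }}
        });
    }
    return {
        {"role",       "assistant"},
        {"content",    nullptr},
        {"tool_calls", tool_calls}
    };
}
```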

Possible Implementation

No response

qnixsynapse commented 1 month ago

cc: @ngxson

ngxson commented 1 month ago

Tool calls are not just a chat template, so it's not trivial to implement. The problem is that it must be done per model. I think we can build on the same infrastructure provided by https://github.com/ggerganov/llama.cpp/pull/5695
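
To illustrate the per-model aspect, a rough sketch of what dispatching could look like. The detection heuristic and dispatcher are assumptions, and the `<tool_call>` tag format is the one used by some Hermes-style models; the principle mirrors how chat templates are matched per model family today:

```cpp
// Sketch: per-model tool-call parsing, dispatched on the chat template
// string, in the spirit of the template detection from #5695.
#include <nlohmann/json.hpp>
#include <string>

using json = nlohmann::json;

// Extract every <tool_call>{...}</tool_call> block and parse it as JSON
// (throws on malformed JSON; a real implementation would recover).
static json parse_hermes_tool_calls(const std::string & output) {
    json calls = json::array();
    const std::string open = "<tool_call>", close = "</tool_call>";
    size_t pos = 0;
    while ((pos = output.find(open, pos)) != std::string::npos) {
        const size_t start = pos + open.size();
        const size_t end   = output.find(close, start);
        if (end == std::string::npos) break;
        calls.push_back(json::parse(output.substr(start, end - start)));
        pos = end + close.size();
    }
    return calls;
}

// Hypothetical dispatcher: pick a parser based on the model's chat template,
// one branch per supported model family.
static json parse_tool_calls(const std::string & chat_template, const std::string & output) {
    if (chat_template.find("<tool_call>") != std::string::npos) {
        return parse_hermes_tool_calls(output);
    }
    // ... further branches for other model families ...
    return json::array();
}
```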

segmond commented 3 weeks ago

Tool calling is no longer optional or just a nice-to-have. An LLM without tool calling is far less useful. I look forward to this.

crashr commented 3 weeks ago

Exactly what @segmond wrote.