ggerganov / llama.cpp

LLM inference in C/C++

Feature Request: echo=true in llama-server #8942

Open ciaran-regan-ie opened 1 month ago

ciaran-regan-ie commented 1 month ago

Feature Description

llama-server already accepts API calls with logprobs=1, but it would be very useful to also support echo=true, as was available for older OpenAI completion models such as davinci-002.
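
To make the request concrete, here is a minimal sketch of what such a call could look like, mirroring the legacy OpenAI completions usage pattern. The echo field is the proposed addition and is not accepted by llama-server today; the endpoint and port are the server defaults.

```python
# Hypothetical request shape for the proposed feature, mirroring the
# legacy OpenAI completions API (echo is NOT currently supported).
import requests

resp = requests.post(
    "http://localhost:8080/v1/completions",
    json={
        "prompt": "The quick brown fox",
        "max_tokens": 0,   # generate nothing; we only want the prompt scored
        "logprobs": 1,     # per-token logprobs, already supported
        "echo": True,      # proposed: return the prompt tokens with their logprobs
    },
)
print(resp.json())
```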

Motivation

This would enable a number of interesting possibilities, such as inferring the likelihood of a prompt given a completion, as done in this project.
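
As a rough sketch of that computation, assuming the response followed the legacy OpenAI shape where choices[0].logprobs.token_logprobs holds one logprob per echoed prompt token (None for the first token, which has no preceding context):

```python
def prompt_log_likelihood(response: dict) -> float:
    """Sum the per-token logprobs over the echoed prompt.

    Assumes the legacy OpenAI completions response shape; the first
    token has no logprob (None) since nothing precedes it.
    """
    token_logprobs = response["choices"][0]["logprobs"]["token_logprobs"]
    return sum(lp for lp in token_logprobs if lp is not None)
```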

OpenAI deprecated the echo option because it's too useful :) It would be great to have it back in llama.cpp.

Possible Implementation

No response

sragrawal commented 1 month ago

This would be similar to supporting --all-logits from llama-perplexity, right? It would be very useful to have in the server, allowing us to use it for benchmarking as well.
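
For reference, perplexity is just the exponential of the negative mean per-token log-likelihood, so echoed prompt logprobs would be enough for benchmarking. A minimal sketch, assuming the same token_logprobs list as above:

```python
import math

def perplexity(token_logprobs: list[float | None]) -> float:
    # Perplexity = exp(-mean log-likelihood per token); the first
    # token carries no logprob (None) and is skipped.
    lps = [lp for lp in token_logprobs if lp is not None]
    return math.exp(-sum(lps) / len(lps))
```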

kaetemi commented 3 weeks ago

I have a use case for this as well.