jamesbraza opened this issue 1 year ago
What model are you using? If you use a model powered by TGI or a TGI endpoint directly, this is the error you should get:
# for "bigcode/starcoder"
huggingface_hub.inference._text_generation.ValidationError: Input validation error: `inputs` tokens + `max_new_tokens` must be <= 8192. Given: 37500 `inputs` tokens and 20 `max_new_tokens`
(In any case, I think this should be handled server-side as the client cannot know this information beforehand.)
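For illustration, a minimal sketch of how that surfaces through the client (the model id is taken from the traceback above; the prompt is an arbitrary placeholder):

from huggingface_hub import InferenceClient
from huggingface_hub.inference._text_generation import ValidationError  # import path as in the traceback above

client = InferenceClient()
too_long_prompt = "word " * 40_000  # placeholder prompt, deliberately over the context limit

try:
    client.text_generation(too_long_prompt, model="bigcode/starcoder", max_new_tokens=20)
except ValidationError as exc:
    # TGI reports both the input length and the limit in the message itself
    print(exc)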
The model I was using is https://huggingface.co/microsoft/biogpt with text_generation. It's supported per InferenceClient.list_deployed_models, but it's not powered by TGI, just transformers.
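For reference, a minimal sketch of that check (assuming list_deployed_models() returns a dict mapping task name to model ids, as in recent huggingface_hub versions):

from huggingface_hub import InferenceClient

client = InferenceClient()
deployed = client.list_deployed_models()  # assumed shape: {task_name: [model_id, ...]}
print("microsoft/biogpt" in deployed.get("text-generation", []))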
I think this request is to add more info to that failure message, so one can better understand next steps. I am not sure where that failure message is being made; if you could link me to a line in a Hugging Face repo, I just want to use some f-strings in its message.
I transferred the issue to our api-inference-community repo. I am not sure it will ever get fixed as the focus is more on TGI-served models at the moment. Still pinging @Narsil just in case: would it be possible to retrieve the maximum input length when sending a sequence to a transformers.pipeline-powered model?
I saw that it's implemented here (private repo) but don't know how this value could be made easily accessible.
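For what it's worth, a rough sketch of where that value usually lives when loading things locally (the exact attribute name varies by architecture, so this is an assumption rather than a general answer):

from transformers import AutoConfig, AutoTokenizer

config = AutoConfig.from_pretrained("microsoft/biogpt")
print(config.max_position_embeddings)  # typical upper bound on input length

tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
print(tokenizer.model_max_length)  # may be a huge sentinel value if the repo does not set it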
Thanks @Wauplin for putting this in the right place!
No, there's not; truncating is the only simple option.
The reason is that the API for non-TGI models uses the pipeline, which raises an exception that would need to be parsed (which would break every time the message gets updated, which, while it shouldn't happen that much, is still very much possible).
Using truncation should solve most issues for most users, therefore I don't think we should change anything here.
@jamesbraza Any reason truncation doesn't work for you?
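For TGI-backed models, the huggingface_hub client already exposes truncation as a parameter; a minimal sketch, assuming truncate is simply forwarded to the server and therefore only applies to TGI-served models:

from huggingface_hub import InferenceClient

client = InferenceClient()
output = client.text_generation(
    "a very long prompt ...",
    model="bigcode/starcoder",
    max_new_tokens=20,
    truncate=8000,  # keep only the last 8000 input tokens (left truncation)
)
print(output)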
Thanks for the response @Narsil ! Appreciate the details provided too.
which raises an exception that would need to be parsed (which would break every time the message gets updated, which, while it shouldn't happen that much, is still very much possible).
So there are a few techniques for parsing exceptions, one of which is parsing the error string, which I agree is indeed brittle to change. A more maintainable route involves custom Exception subclasses:
class InputTooLongError(ValueError):
    def __init__(self, msg: str, actual_length: int, llm_limit: int) -> None:
        super().__init__(msg)
        self.actual_length = actual_length
        self.llm_limit = llm_limit

# prompt_tokens and llm stand in for values the server already has on hand
try:
    raise InputTooLongError(
        "Input is too long for this model (removed rest for brevity)",
        actual_length=len(prompt_tokens),
        llm_limit=llm.limit,
    )
except Exception as exc:
    pass  # BadRequestError raised here can contain the length in metadata
This method is readable and is independent of message strings.
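The caller side then stays just as simple (call_inference is a hypothetical stand-in for whatever client call raises the error):

try:
    call_inference(prompt)  # hypothetical client call that raises InputTooLongError
except InputTooLongError as exc:
    overshoot = exc.actual_length - exc.llm_limit
    print(f"Prompt is {overshoot} tokens over the {exc.llm_limit}-token limit")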
Any reason truncation doesn't work for you?
Fwiw, this request is about just giving more information on the error being faced. I think it would be good if the response included the actual tokens used and the LLM's limit on tokens, and then one can decide on using truncation. I see truncation as a downstream choice after seeing a good error message, so whether or not to use truncation is tangential to this issue.
To answer your question, truncation seems like a bad idea to me, because it can lead to unexpected failures caused by truncating away important parts of the prompt. I use RAG, so prompts can become quite big. The actual prompt comes last, which could be truncated away.
The actual prompt comes last, which could be truncated away
It's always left-truncated for generative models, therefore only losing the initial part of a prompt. If you're using RAG with long prompts and really want to know what gets used, then just get a tokenizer to count your prompt lengths, no?
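For example, something along these lines (the model id is just the one from this thread; only the tokenizer files get downloaded, not the model weights):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
n_tokens = len(tokenizer("...your RAG prompt here...")["input_ids"])
print(n_tokens)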
Hello @Narsil, thanks for the response! Good to know that truncation is usually left truncation.
then just get a tokenizer to count your prompt lengths, no?
Yeah, this is a workaround, though it requires one to:
However, Hugging Face already knows the input size and the max allowable size because it states "Input is too long for this model". So I think it's much easier for Hugging Face 'server side' to just include this information in the thrown Exception, either within the message or as an attribute.
Knowing the length in tokens is not really useful if you don't know how to modify the original prompt in order to modify those tokens, right?
Pick a tokenizer model and instantiate it locally (which may not be possible, depending on memory)
What kind of hardware are we talking about here? Tokenizers are extremely tiny.
When using InferenceClient.text_generation with a really long prompt, you get an error. Can we have this message contain more info?