replicate / cog-triton

A cog implementation of Nvidia's Triton server
Apache License 2.0

Stop sequences fail for some sequences #48

Open joehoover opened 3 months ago

joehoover commented 3 months ago

Observed Behavior

Some stop sequence inputs (e.g. the two-character sequence `"}`) trigger an error:

Prediction failed.

E2102 TritonTokenizerError: Tokenizer error: in ensemble 'ensemble', Failed to process the request(s) for model instance 'preprocessing_0_126', message: ValueError: To standardize tokenizer behavior, we prepend '!' to the string representation of each stop sequence. We then strip the corresponding first token from the stop sequence IDs. However, the first token of the stop sequence IDs was not '{arbitrary_start_sequence_id}', which suggests there is a problem with the tokenizer that you are using.
At:
  /src/triton_model_repo/preprocessing/1/model.py(287): _to_word_list_format
  /src/triton_model_repo/preprocessing/1/model.py(182): execute
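For context, here is a minimal sketch of the sentinel-prefix trick the error message describes, using a toy greedy-longest-match tokenizer (the tokenizer, its vocabulary, and the function names are hypothetical; the real preprocessing model uses the deployed model's tokenizer). The failure mode appears to be that some tokenizers merge the prepended '!' with the first character of the stop sequence into a single token, so the sentinel token can no longer be stripped cleanly:

```python
def toy_encode(text):
    # Toy tokenizer with a merged "!." token, mimicking BPE-style
    # merges that real vocabularies can contain (assumption).
    vocab = {"!.": 0, "!": 1, ".": 2, "}": 3, '"': 4, "stop": 5}
    tokens, i = [], 0
    while i < len(text):
        # Greedy longest-match segmentation over the toy vocab.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(vocab[text[i:j]])
                i = j
                break
        else:
            raise KeyError(f"no token for {text[i]!r}")
    return tokens


def stop_sequence_ids(encode, stop_seq):
    # Prepend a sentinel so tokenization of the stop sequence does
    # not depend on it sitting at the start of a string, then strip
    # the sentinel's token back off -- the scheme described in the
    # ValueError above.
    sentinel = "!"
    sentinel_ids = encode(sentinel)
    ids = encode(sentinel + stop_seq)
    if ids[0] != sentinel_ids[0]:
        # The sentinel merged with the start of the stop sequence
        # into one token, so it cannot be stripped -- analogous to
        # the error this issue reports (e.g. stop_sequence = ".").
        raise ValueError("first token is not the sentinel token")
    return ids[1:]


print(stop_sequence_ids(toy_encode, "stop"))  # strips sentinel cleanly
try:
    stop_sequence_ids(toy_encode, ".")
except ValueError as exc:
    print("failed as in the issue:", exc)
```

With this toy vocabulary, `"stop"` round-trips fine, while `"."` reproduces the shape of the failure because `"!."` tokenizes as a single merged token.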

Expected Behavior

All stop sequence inputs should be handled and applied, such that generation stops when those sequences are encountered.

Reproduce

The following request against llama-3-70b reliably triggers the error:

https://replicate.com/p/1w0ht542kdrgj0cg7c2vpkr4a0

This request ran against:

joehoover commented 3 months ago

stop_sequence = "." also throws this error.