triton-inference-server / tensorrtllm_backend

The Triton TensorRT-LLM Backend
Apache License 2.0

Fixed Whitespace Error in Streaming mode #423

Open enochlev opened 5 months ago

enochlev commented 5 months ago

When a TensorRT-LLM model is deployed in streaming mode, the streamed tokens have no whitespace between chunks.

Link to Issue

This happens because tokenizer.decode strips whitespace from its output and this behavior cannot be disabled, so we must detect the missing spaces ourselves and add them back.
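For context, a minimal repro of the stripping behavior could look like the sketch below (not code from this PR; the tokenizer name is illustrative, and any SentencePiece/Llama-style tokenizer shows the same behavior):

```python
from transformers import AutoTokenizer

# Illustrative public test tokenizer; substitute the model's own tokenizer.
tok = AutoTokenizer.from_pretrained("hf-internal-testing/llama-tokenizer")

ids = tok("Hello world from streaming", add_special_tokens=False)["input_ids"]

# Decoding the whole sequence at once keeps the spaces.
print(tok.decode(ids, skip_special_tokens=True))   # Hello world from streaming

# Decoding one token per streaming chunk strips each chunk's leading space,
# so the concatenated stream reads "Helloworldfromstreaming".
print("".join(tok.decode([i], skip_special_tokens=True) for i in ids))
```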

The standard way to handle this is to hold tokens in a cache until a space is detected, at which point everything after the space is put back into the cache.
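A minimal sketch of that caching approach might look like the following (names are illustrative, edge cases such as punctuation are simplified, and this is not the exact code in the PR):

```python
# Hypothetical sketch of the "cache until a space appears" strategy.
def stream_with_word_cache(tokenizer, token_ids):
    cache = []  # token ids held back until a word boundary shows up
    for tid in token_ids:
        cache.append(tid)
        text = tokenizer.decode(cache, skip_special_tokens=True)
        boundary = text.rfind(" ")
        if boundary == -1:
            continue  # still mid-word, keep buffering
        # Emit the completed word(s), including the trailing space ...
        yield text[: boundary + 1]
        # ... and put the unfinished remainder back into the cache.
        cache = tokenizer.encode(text[boundary + 1 :], add_special_tokens=False)
    if cache:
        # Flush whatever is left after the last generated token.
        yield tokenizer.decode(cache, skip_special_tokens=True)
```

Joining the yielded chunks should reproduce the same text as decoding all token IDs at once, while each chunk stays individually streamable.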

The other suggested method decodes the token ID to its raw token text, instead of using the detokenized string, and looks for the "▁" word-boundary symbol.
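Assuming a SentencePiece-style tokenizer (which marks word boundaries with the "▁" prefix, U+2581), that alternative could be sketched roughly as follows (again illustrative, not the PR's actual implementation):

```python
# Hypothetical sketch of the token-level alternative: inspect the raw token
# string to decide whether tokenizer.decode swallowed a leading space.
def decode_single_token(tokenizer, token_id):
    piece = tokenizer.convert_ids_to_tokens(token_id)              # e.g. "\u2581world"
    text = tokenizer.decode([token_id], skip_special_tokens=True)  # e.g. "world"
    # "\u2581" (▁) is SentencePiece's word-boundary marker, so restore the
    # space that decode() stripped.
    if piece.startswith("\u2581") and not text.startswith(" "):
        text = " " + text
    return text
```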

This PR should fix these issues.

dbuades commented 3 weeks ago

I confirm that this change fixes the issue. Thank you!