Open Mary-Sam opened 6 months ago
This code is working for me:
https://github.com/triton-inference-server/tensorrtllm_backend/issues/332#issuecomment-2063243340
https://github.com/npuichigo/openai_trtllm/issues/30#issuecomment-2139994778
duplicate #30
@dongs0104 It really works, thank you very much! Do you happen to know why special tokens are displayed in the generated text? Also, the text is never generated to the end of the sentence.
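For the special-token question: those markers usually show up when detokenization does not skip them. As a minimal sketch (the token strings here are hypothetical placeholders, not necessarily Mixtral's actual special tokens), they can be filtered from the output text:

```python
# Hypothetical special-token set; the real set depends on the model's tokenizer.
SPECIAL_TOKENS = {"<s>", "</s>", "<unk>"}

def strip_special(text: str) -> str:
    """Remove known special-token markers from generated text."""
    for tok in SPECIAL_TOKENS:
        text = text.replace(tok, "")
    return text

print(strip_special("<s>Hello world</s>"))  # -> "Hello world"
```

With a Hugging Face tokenizer, the cleaner fix is typically to decode with `skip_special_tokens=True` rather than post-filtering strings.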
I have converted Mixtral to TensorRT and am trying to use your repository to integrate with the OpenAI API. I'm using the template history_template_llama3.liquid. When I run your example code for interacting with the model (openai_completion.py and openai_completion_stream.py), the generated text comes back without spaces between words.
If I query Triton directly over HTTP, I receive the following response to the same request:
"text_output":"to the moon and back.\n\nThe story begins with a young boy named Neil Armstrong who loved to explore and learn about the world around him. He was fascinated by the stars and the moon and dreamed of one day going to space"
How do I get all the spaces, as in the HTTP response?
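For what it's worth, missing spaces are a classic symptom of decoding tokens one at a time: SentencePiece-style tokenizers mark word boundaries with a leading `▁` metasymbol, which per-token decoding drops. A minimal sketch with a toy vocabulary (this is not the real Mixtral tokenizer, just an illustration of the failure mode and the fix):

```python
# Toy SentencePiece-style pieces; "▁" marks a word boundary (leading space).
PIECES = ["▁to", "▁the", "▁moon", "▁and", "▁back", "."]

def decode_piece_naive(piece: str) -> str:
    # Decoding each piece in isolation strips the boundary marker,
    # so the spaces between words are lost.
    return piece.replace("▁", "")

def decode_sequence(pieces) -> str:
    # Decoding the whole running sequence turns "▁" into real spaces;
    # a streaming server would emit only the newly added suffix each step.
    return "".join(p.replace("▁", " ") for p in pieces).lstrip(" ")

naive = "".join(decode_piece_naive(p) for p in PIECES)
proper = decode_sequence(PIECES)
print(naive)   # -> "tothemoonandback."
print(proper)  # -> "to the moon and back."
```

If that is the cause here, the fix belongs on the server side: detokenize the accumulated token sequence (or use the tokenizer's incremental decoding support) instead of concatenating per-token decodes.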