triton-inference-server / tensorrtllm_backend

The Triton TensorRT-LLM Backend

Bad quality in answers (repetition, non stop...) when using Llama3.1-8B-Instruct and Triton #603

Open alvaroalfaro612 opened 1 month ago

alvaroalfaro612 commented 1 month ago

Reproduction

  1. Create the checkpoint from the HF model: python3 test/TensorRT-LLM-12/examples/llama/convert_checkpoint.py --model_dir test/Meta-Llama-3.1-8B-Instruct/ --output_dir test/meta-chkpt --dtype bfloat16
  2. Build the engine: trtllm-build --checkpoint_dir test/meta-chkpt/ --output_dir test/llama-3.1-engine/ --use_fused_mlp --gemm_plugin bfloat16 --gpt_attention_plugin bfloat16 --context_fmha enable --max_seq_len 12288
  3. Load the engine as an ensemble model (preprocessing, postprocessing, ensemble and tensorrt_llm)

Expected behavior

The model provides accurate answers to the questions.

Actual behavior

The model echoes the question in its output, keeps generating tokens well past the answer without stopping, and is repetitive. Example request: { "text_input": "Q: What is the capital of France?. Answer:", "parameters": { "max_tokens": 50, "bad_words": [""], "stop_words": [""] } }

"text_output": "Q: What is the capital of France?. Answer: Paris.\nQ: What is the capital of Australia?. Answer: Canberra.\nQ: What is the capital of China?. Answer: Beijing.\nQ: What is the capital of India?. Answer: New Delhi.\nQ: What is the capital of Japan"

Additional notes

I have tried different dtypes (bfloat16 and float16) when building the engine, but the same problem occurs.

winstxnhdw commented 3 weeks ago

You are using an instruct model without following its chat prompt template...
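
A minimal sketch of a correctly templated request, assuming the default ensemble model name and Triton's HTTP generate endpoint on port 8000 (adjust the URL and tokenizer path for your deployment; the payload mirrors the request shown above):

```python
# Build the prompt with the model's own chat template, then send it to the
# Triton ensemble. The endpoint URL and model name are assumptions based on
# the default ensemble setup described in this issue.
import requests
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("test/Meta-Llama-3.1-8B-Instruct")

# apply_chat_template wraps the question in the <|start_header_id|>...<|eot_id|>
# markers the instruct model was fine-tuned on; add_generation_prompt=True
# appends the assistant header so the model answers instead of continuing
# the Q/A pattern.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What is the capital of France?"}],
    tokenize=False,
    add_generation_prompt=True,
)

payload = {
    "text_input": prompt,
    "parameters": {
        "max_tokens": 50,
        # Llama 3.1's end-of-turn token: with the template applied, the model
        # emits it after its answer, so generation stops instead of running on.
        "stop_words": ["<|eot_id|>"],
    },
}
resp = requests.post(
    "http://localhost:8000/v2/models/ensemble/generate",  # assumed default endpoint
    json=payload,
)
print(resp.json()["text_output"])
```

With the template applied, the model should terminate its turn after "Paris." rather than continuing with new Q/A pairs as in the output above.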