NVIDIA / TensorRT-LLM

TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
https://nvidia.github.io/TensorRT-LLM
Apache License 2.0

The inference results of TRT-LLM are different from the HF model results (chatglm3-6b tools call mode) #1561

Open FanZhang91 opened 4 months ago

FanZhang91 commented 4 months ago

System Info

NVIDIA GeForce RTX 4090 GPU

Who can help?

No response

Information

Tasks

Reproduction

    Using the HF API, e.g. model.chat(tokenizer, query, history=history), to run inference on our model gives correct results. But running inference on the same model with tensorrt_llm gives abnormal results.
    The command used in the tensorrt_llm framework is:

    python3 ../run.py --input_text "What's new between ChatGLM3-6B and ChatGLM2-6B?" \
                      --max_output_len 50 \
                      --tokenizer_dir chatglm3_6b \
                      --engine_dir trt_engines/chatglm3_6b/fp16/1-gpu

    There seems to be a big difference between the ModelRunner.generate() method in tensorrt_llm and the .chat() method in the HF framework. Can you give me some suggestions to solve this problem?
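For reference, the two paths being compared look roughly like this. This is a minimal sketch, not the exact scripts: the ModelRunner call only approximates what run.py does, and the end_id/pad_id handling and the [batch, beam, tokens] output layout are assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModel
from tensorrt_llm.runtime import ModelRunner

query = "What's new between ChatGLM3-6B and ChatGLM2-6B?"
tokenizer = AutoTokenizer.from_pretrained("chatglm3_6b", trust_remote_code=True)

# HF path: chatglm3's chat() builds the chat prompt and samples with the model's defaults.
hf_model = AutoModel.from_pretrained("chatglm3_6b", trust_remote_code=True).half().cuda()
hf_response, _ = hf_model.chat(tokenizer, query, history=[])

# TRT-LLM path: roughly what `python3 ../run.py ...` above does with --input_text.
runner = ModelRunner.from_dir(engine_dir="trt_engines/chatglm3_6b/fp16/1-gpu")
input_ids = torch.tensor(tokenizer.encode(query), dtype=torch.int32)
end_id = tokenizer.eos_token_id
pad_id = tokenizer.pad_token_id if tokenizer.pad_token_id is not None else end_id  # assumption
output_ids = runner.generate(
    batch_input_ids=[input_ids],
    max_new_tokens=50,
    end_id=end_id,
    pad_id=pad_id,
)
# Output layout assumed to be [batch, beam, tokens] with the prompt included, as in run.py.
trt_response = tokenizer.decode(output_ids[0, 0, input_ids.shape[0]:].tolist())

print("HF :", hf_response)
print("TRT:", trt_response)
```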

Expected behavior

null

actual behavior

null

additional notes

null

byshiue commented 4 months ago

Could you check that you have aligned the inputs (that the input ids entering the transformer layers are the same) and the sampling/beam-search arguments such as top_k, top_p, temperature, etc.?
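For example, the first point can be checked by dumping the token ids each path actually sees. A sketch, with two assumptions: build_chat_input is part of chatglm3-6b's remote tokenizer code rather than a stable transformers API, and run.py is assumed to encode --input_text with a plain tokenizer.encode.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("chatglm3_6b", trust_remote_code=True)
query = "What's new between ChatGLM3-6B and ChatGLM2-6B?"

# Ids the HF chat() path sees: the query wrapped in chatglm3's chat template.
hf_ids = tokenizer.build_chat_input(query, history=[], role="user")["input_ids"][0].tolist()

# Ids the TRT-LLM run.py path sees if --input_text is encoded as plain text (assumption).
trt_ids = tokenizer.encode(query)

print("HF  ids:", hf_ids)
print("TRT ids:", trt_ids)
print("identical:", hf_ids == trt_ids)
```

The sampling side can be checked the same way by forcing both runs to be deterministic, e.g. do_sample=False on the HF side and top_k=1 in ModelRunner.generate, so sampling noise cannot explain the difference.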

FanZhang91 commented 4 months ago

> Could you check that you have aligned the inputs (that the input ids entering the transformer layers are the same) and the sampling/beam-search arguments such as top_k, top_p, temperature, etc.?

Yes, I keep the input_ids (tools prior definition + query) and the sampling/beam-search arguments the same, and the output results are still different.
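For context, the "tools prior definition + query" prompt is built like this on the HF side. This is a sketch based on chatglm3-6b's remote-code convention of putting the tool definitions in a system message; the get_weather tool is made up purely for illustration. The resulting ids are what get passed to ModelRunner.generate as batch_input_ids.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("chatglm3_6b", trust_remote_code=True)

# Hypothetical tool definition, for illustration only.
tools = [{
    "name": "get_weather",
    "description": "Query the current weather of a city",
    "parameters": {"type": "object",
                   "properties": {"city": {"type": "string"}},
                   "required": ["city"]},
}]

# Assumption: in chatglm3 tools-call mode the tool definitions go into a system message,
# and the query follows as the user turn (per the model's remote-code chat demo).
history = [{"role": "system",
            "content": "Answer the following questions as best as you can. "
                       "You have access to the following tools:",
            "tools": tools}]
query = "What's the weather in Beijing?"

inputs = tokenizer.build_chat_input(query, history=history, role="user")
input_ids = inputs["input_ids"][0]  # these exact ids are fed to ModelRunner.generate
print(input_ids.tolist())
```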