System Info
model: bigcode/starcoderbase-3b
Python: 3.10.12
CUDA Version: 12.2
tensorrt_llm version: 0.8.0.dev2024011601
base image: nvcr.io/nvidia/tritonserver:23.12-trtllm-python-py3
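For reference, the interpreter, CUDA, and tensorrt_llm versions above were collected with a quick check along these lines (a sketch; the CUDA version reported by torch is the toolkit it was built against and may differ slightly from the driver version shown by nvidia-smi):

```python
import sys

import torch
import tensorrt_llm

print("Python:", sys.version.split()[0])
print("CUDA (torch build):", torch.version.cuda)
print("tensorrt_llm:", tensorrt_llm.__version__)
```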
Who can help?
@juney-nvidia @kaiyux @Shixiaowei02 @Eddie-Wang1120
Information
Tasks
An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
Reproduction
Follow the guide to convert the weights of StarCoder.
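For context, the activation name that the conversion later trips over comes straight from the Hugging Face config; a minimal check (assuming Hub access and a recent transformers):

```python
from transformers import AutoConfig

# GPTBigCode-based checkpoints such as StarCoder declare their activation here
cfg = AutoConfig.from_pretrained("bigcode/starcoderbase-3b")
print(cfg.activation_function)  # expected to print "gelu_pytorch_tanh"
```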
Expected behavior
The weights of StarCoder are converted successfully.
Actual behavior
When I followed the instructions described here, an error occurred, shown below.
The command is:
I found that StarCoder uses the "gelu_pytorch_tanh" activation function instead of the classic GELU, as described here. I checked the code on the latest main branch, but I cannot find gelu_pytorch_tanh defined anywhere. It seems that TensorRT-LLM has not been adapted to the StarCoder model yet, although there is an instruction README here. Have you tested StarCoder before?
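For what it's worth, "gelu_pytorch_tanh" is simply the tanh approximation of GELU, so this looks like a missing name mapping rather than a missing kernel. A minimal sketch to confirm the equivalence numerically (assuming PyTorch >= 1.12, where F.gelu accepts the approximate argument):

```python
import math

import torch
import torch.nn.functional as F

x = torch.randn(4096)

# Hand-written tanh approximation of GELU:
# 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
tanh_gelu = 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x**3)))

# This is what Hugging Face dispatches "gelu_pytorch_tanh" to
assert torch.allclose(tanh_gelu, F.gelu(x, approximate="tanh"), atol=1e-6)

# It also stays very close to the exact erf-based GELU
print("max |tanh approx - exact gelu|:", (tanh_gelu - F.gelu(x)).abs().max().item())
```

If TensorRT-LLM's existing GELU path already uses (or is close enough to) this approximation, then mapping the "gelu_pytorch_tanh" name onto it in the conversion script might be all that is needed, but I have not verified that and would appreciate confirmation from the maintainers.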
Additional notes
None.