marella / ctransformers

Python bindings for the Transformer models implemented in C/C++ using GGML library.
MIT License

Does ctransformers boost the inference speed in llm inference? #202

Open pradeepdev-1995 opened 4 months ago

pradeepdev-1995 commented 4 months ago

I have converted my fine-tuned Hugging Face model to .gguf format and run inference with ctransformers. I am using a CUDA GPU machine, but I did not observe any inference speed improvement with ctransformers: I see the same latency with transformers-based and ctransformers-based inference.
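One common cause of this symptom is that ctransformers keeps the whole model on the CPU unless GPU offloading is requested explicitly: `gpu_layers` defaults to 0, so a CUDA machine alone does not change the execution path. A minimal sketch of loading a GGUF model with layers offloaded to the GPU (the model path, `model_type`, and layer count below are assumptions, not taken from this issue):

```python
from ctransformers import AutoModelForCausalLM

# Hypothetical path to the converted GGUF file -- replace with your own.
MODEL_PATH = "./finetuned-model.gguf"

# gpu_layers defaults to 0, which runs the entire model on the CPU.
# Set it high enough to cover all layers of your model; any excess
# beyond the model's actual layer count is ignored.
llm = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    model_type="llama",  # assumption: a llama-family fine-tune
    gpu_layers=50,
)

print(llm("Hello"))
```

Note that GPU support also requires installing ctransformers with CUDA enabled (e.g. `pip install ctransformers[cuda]`); a CPU-only build will silently ignore the offload request. This sketch needs the actual model file to run, so treat it as a starting point rather than a verified benchmark.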