I converted my fine-tuned Hugging Face model to the .gguf format and ran inference with ctransformers.
I am using a CUDA GPU machine.
However, I did not observe any inference speed improvement with ctransformers: the latency is the same for transformers-based inference and ctransformers-based inference.
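For reference, the inference is set up roughly like this (the model path and `model_type` below are placeholders for my actual setup). One detail worth noting: in ctransformers, `gpu_layers` defaults to 0, meaning all layers run on the CPU unless offloading is requested explicitly.

```python
from ctransformers import AutoModelForCausalLM

# Load the converted .gguf model.
# gpu_layers controls how many transformer layers are offloaded to the
# CUDA GPU; it defaults to 0 (CPU-only), so it must be set explicitly.
llm = AutoModelForCausalLM.from_pretrained(
    "path/to/finetuned-model.gguf",  # placeholder: path to the converted model
    model_type="llama",              # placeholder: depends on the base architecture
    gpu_layers=50,                   # offload up to 50 layers to the GPU
)

print(llm("Write a one-line summary of GGUF:"))
```

Is there anything else required (build flags, package variant, etc.) for ctransformers to actually use the GPU, or a way to verify how many layers were offloaded?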