tigerinus opened 8 months ago
ExLlama is a standalone implementation that doesn't interface with Transformers, but AutoGPTQ ported its kernels over so Transformers could get some of the performance benefits anyway. You're probably better off asking over there what Transformers needs in order to load a model in a way that's compatible with their ExLlama kernel integration.
Sorry, I'm unable to find any relevant documentation on the Internet about how to load all modules on the GPU.
I got this error message from my code:
A snippet from my code (to make it work, I had to uncomment the `config` part, but then it won't be using ExLlama). Any help is greatly appreciated!
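For what it's worth, here is a minimal sketch of how one might load a GPTQ model through Transformers with the ExLlama kernels enabled while keeping every module on the GPU. The model ID is a placeholder, and the exact flag name varies by Transformers version (`use_exllama` in newer releases, `disable_exllama=False` in older ones), so treat this as an assumption-laden starting point rather than a confirmed fix:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

# Placeholder model ID; substitute the GPTQ checkpoint you are actually using.
model_id = "TheBloke/Llama-2-7B-GPTQ"

# The ExLlama backend requires every quantized module to live on the GPU,
# so pin the whole model to a single device instead of letting
# device_map="auto" spill layers onto CPU or disk.
quantization_config = GPTQConfig(
    bits=4,
    use_exllama=True,  # older Transformers versions use disable_exllama=False instead
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map={"": 0},  # force all modules onto GPU 0
    quantization_config=quantization_config,
    torch_dtype=torch.float16,
)
```

Pinning the model to one device with `device_map={"": 0}` avoids the CPU/disk offload path, which the ExLlama backend does not support.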