juanps90 opened 9 months ago
I can confirm various issues with GPTQ and LoRA. I have tested all available driver and CUDA combinations, and I have also used the ExLlama Docker image.
I either have the issue reported above or this:
`RuntimeError: probability tensor contains either inf, nan or element < 0`
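For context, this RuntimeError comes out of PyTorch's sampler: `torch.multinomial` rejects a probability tensor containing NaN/inf, which is what you get when the model emits NaN logits. A minimal sketch of that failure mode (my own illustration, not taken from this issue's traceback):

```python
import torch

# If the model produces a NaN anywhere in its logits (e.g. a numerical
# blow-up at long context), softmax propagates it into the probabilities...
logits = torch.tensor([[1.0, float("nan"), 2.0]])
probs = torch.softmax(logits, dim=-1)

# ...and sampling then fails with exactly this error:
# RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
next_token = torch.multinomial(probs, num_samples=1)
```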
Interestingly, the same error also occurs with Transformers when I run inference on a long context and forget to set the RoPE base.
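For reference, CodeLlama models use a RoPE base (`rope_theta`) of 1,000,000 rather than LLaMA's 10,000, so it has to be set explicitly when the loader doesn't pick it up from the config. A minimal sketch of what I mean for exllama (the `rotary_embedding_base` attribute is my understanding of ExLlamaConfig, and the paths are hypothetical):

```python
from model import ExLlamaConfig  # exllama's config class

config = ExLlamaConfig("/models/CodeLlama-34B-GPTQ/config.json")  # hypothetical path
config.model_path = "/models/CodeLlama-34B-GPTQ/model.safetensors"

# CodeLlama's rope_theta; leaving this at the LLaMA default of 10000
# is what produces garbage (and eventually NaN) logits at long context.
config.rotary_embedding_base = 1_000_000
config.max_seq_len = 16384
```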
This appears to be specific to CodeLLaMA 34B, as the 13B variant works with LoRA at about 13K context (I haven't tried more).
I can confirm this issue doesn't reproduce in the EXL2 LoRA implementation, so I don't think it's worth troubleshooting here.
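(For anyone wanting to cross-check, here is a minimal sketch of the equivalent EXL2 test, based on exllamav2's LoRA example; the paths are hypothetical and the exact API may differ by version:)

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler
from exllamav2.lora import ExLlamaV2Lora

config = ExLlamaV2Config()
config.model_dir = "/models/CodeLlama-34B-exl2"  # hypothetical path
config.prepare()

model = ExLlamaV2(config)
model.load()
cache = ExLlamaV2Cache(model)
tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

# Load the same adapter and pass it to generation; no NaN errors here.
lora = ExLlamaV2Lora.from_directory(model, "/loras/my-lora")  # hypothetical path
settings = ExLlamaV2Sampler.Settings()
output = generator.generate_simple("def fib(n):", settings, 128, loras=lora)
```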
I am getting this error when trying to run inference with CodeLLaMA 34B from The-Bloke plus a LoRA trained on the same model using alpaca_lora_4bit.
Commenting out the `generator.lora` line works.
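For reproduction, a minimal sketch of the setup in question, following exllama's `example_lora.py` (paths are hypothetical):

```python
from model import ExLlama, ExLlamaCache, ExLlamaConfig
from tokenizer import ExLlamaTokenizer
from generator import ExLlamaGenerator
from lora import ExLlamaLora

model_dir = "/models/CodeLlama-34B-GPTQ"  # hypothetical path
config = ExLlamaConfig(f"{model_dir}/config.json")
config.model_path = f"{model_dir}/model.safetensors"

model = ExLlama(config)
tokenizer = ExLlamaTokenizer(f"{model_dir}/tokenizer.model")
cache = ExLlamaCache(model)
generator = ExLlamaGenerator(model, tokenizer, cache)

# Attach the alpaca_lora_4bit-trained adapter (hypothetical paths):
lora = ExLlamaLora(model, "/loras/my-lora/adapter_config.json",
                   "/loras/my-lora/adapter_model.bin")
generator.lora = lora  # commenting out this line makes generation work

print(generator.generate_simple("def fib(n):", max_new_tokens=64))
```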
Hardware is dual RTX 3090, but I'm keeping the context length down to a few tokens so that I can test with a single card. Here's the output when running on a single card with very low context length:
Also