Instruct mode logs:

1 GPU:
Input: hello
Output: Hello! I'm here to help you with any questions or tasks you may have. What can I assist you with today?

Autosplit (2 GPUs):
Input: hello
Output: The string to the text-to-text model 1001
Describe the bug
I'm using this model: https://huggingface.co/LoneStriker/airoboros-70b-3.3-2.4bpw-h6-exl2
If I load it on a single GPU, it works perfectly with 2k context + 8-bit. But if I use autosplit across 2 GPUs with 8k context, it responds with nonsense.
Is there some option I should tick to make it work? I just updated to the latest version of text-generation-webui, and the Llama 2 version of airoboros works fine with autosplit on.
Is there an existing issue for this?
Reproduction
1. Update to the latest version of text-generation-webui.
2. Load this model: https://huggingface.co/LoneStriker/airoboros-70b-3.3-2.4bpw-h6-exl2 with autosplit and 8k context.
3. Send a prompt in Instruct mode and observe the garbled output.
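For reference, a minimal sketch of launching the same configuration from the command line. The flag names below are assumptions based on recent text-generation-webui versions and may differ in yours; check `python server.py --help` for the exact spelling.

```shell
# Hypothetical CLI invocation reproducing the failing setup
# (assumed flags: --loader, --autosplit, --max_seq_len).
python server.py \
  --model airoboros-70b-3.3-2.4bpw-h6-exl2 \
  --loader exllamav2 \
  --autosplit \
  --max_seq_len 8192
```

Dropping `--autosplit` (or pinning the model to one device) while keeping everything else identical should show whether the corruption is specific to the autosplit path.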
Screenshot
No response
Logs
See the Instruct input/output logs in the description above.
System Info