Open · gotzmann opened 6 days ago
🐛 Describe the bug
I've tried to use DoRA with LLaMA-Factory, but got this error:

self and mat2 must have the same dtype, but got BFloat16 and Float

Plain LoRA trains without any problems :(

Reproduce
No response

Versions
Environment Report:
Operating System: Linux-5.15.0-100-generic-x86_64-with-glibc2.35
Python version: 3.10.12
PyTorch version: 2.3.1+cu121
CUDA version: 12.1
Triton version: 2.3.1
Transformers version: 4.44.2
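For context, the quoted message is an ordinary PyTorch dtype check: a matrix multiply between a BFloat16 tensor and a Float32 tensor raises exactly this RuntimeError. A minimal sketch, not taken from the report:

```python
# Minimal sketch of the underlying PyTorch check (not from the report):
# a matmul between a BFloat16 tensor and a Float32 tensor raises the
# same RuntimeError the issue quotes.
import torch

a = torch.randn(4, 8, dtype=torch.bfloat16)
b = torch.randn(8, 4, dtype=torch.float32)

try:
    torch.mm(a, b)
except RuntimeError as err:
    print(err)  # e.g. "self and mat2 must have the same dtype, but got BFloat16 and Float"

# Casting one operand removes the mismatch:
out = torch.mm(a, b.to(a.dtype))
```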
@gotzmann You can use compute_type = fp32 for now (in LLaMA-Factory). There is a similar problem with full fine-tuning.
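A minimal sketch of what that workaround amounts to, assuming a standard PyTorch setup where part of the DoRA path runs in fp32 while bf16 activations feed into it; the helper name `force_single_dtype` is illustrative and not part of LLaMA-Factory or PEFT:

```python
# Hedged sketch of the suggested workaround: force every floating-point
# parameter and buffer of the (DoRA-wrapped) model into one dtype before
# training, so fp32 adapter math never meets bf16 tensors. The helper
# name is illustrative; `model` is any torch.nn.Module.
import torch

def force_single_dtype(model: torch.nn.Module,
                       dtype: torch.dtype = torch.float32) -> torch.nn.Module:
    for p in model.parameters():
        if p.is_floating_point():
            p.data = p.data.to(dtype)
    for buf in model.buffers():
        if buf.is_floating_point():
            buf.data = buf.data.to(dtype)
    return model
```

Setting the trainer's compute type to fp32 (i.e. training without bf16/fp16 mixed precision) has the same effect at the config level, at the cost of extra memory and slower steps.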