Open PeiqinSun opened 1 year ago
@PeiqinSun Did you train LLaMA 7B on one 2080 Ti, and did your fine-tuning script work well? I have 8 GB of VRAM; will I be able to train LLaMA at home on the Alpaca dataset?
@PeiqinSun I would also like to know the CUDA and cuDNN versions on your machines. I'm trying to run finetune.py, but I get an error at "Loading cuda kernel...". I suspect my CUDA version is too new, so I'd like to know your configuration.
Thanks for your attention. Please raise an issue in our repo so we can track your problem better. My configuration is:

- Python 3.8
- CUDA 11.7
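If it helps with debugging the kernel-loading error, here is a quick way to check which CUDA and cuDNN versions your PyTorch build actually targets, so you can compare them against the configuration above (an illustrative snippet, not from the repo):

```python
import torch

# Print the CUDA and cuDNN versions PyTorch was compiled against,
# plus the visible GPU, to compare with the known-working setup.
print("torch:", torch.__version__)
print("cuda (build):", torch.version.cuda)
print("cudnn:", torch.backends.cudnn.version())
print("device:", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "no GPU")
```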
> @PeiqinSun Did you train LLaMA 7B on one 2080 Ti, and did your fine-tuning script work well? I have 8 GB of VRAM; will I be able to train LLaMA at home on the Alpaca dataset?
Yes, both of these setups were tested before release.
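For reference, the general pattern alpaca-lora's finetune.py follows is int8 weight loading plus LoRA adapters, which is what makes 7B fit on a single consumer GPU. A minimal sketch, assuming an older transformers/peft stack where `load_in_8bit` and `prepare_model_for_int8_training` are still available (the model id is the one from the repo's README; substitute your own weights):

```python
from transformers import LlamaForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training

# Load the base model with int8 weights; frozen 8-bit weights plus small
# LoRA adapters are what keep the memory footprint low enough for one GPU.
model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=True,
    device_map="auto",
)
model = prepare_model_for_int8_training(model)

# Attach LoRA adapters to the attention projections; only these train.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # shows how small the trainable fraction is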
@PeiqinSun OK, I'll try the script with a different CUDA version. Thank you for the answer :)
Feel free to give me feedback at any time, and I will reply promptly.
We have implemented 4-bit QLoRA; thanks to an optimized kernel implementation of back-propagation, fine-tuning speed is currently on par with 8-bit LoRA. You're welcome to try it and open issues: https://github.com/megvii-research/Sparsebit/tree/main/large_language_models/alpaca-qlora
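The Sparsebit repo ships its own optimized kernels, so the snippet below is not its actual API; it is only an illustration of the generic 4-bit LoRA pattern using the Hugging Face stack (`BitsAndBytesConfig` with NF4 quantization plus `prepare_model_for_kbit_training`), with a placeholder model id:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Quantize the frozen base weights to 4-bit NF4; compute runs in fp16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",  # placeholder id; point this at your own LLaMA weights
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Same LoRA configuration as in the 8-bit case; only the adapters train,
# while gradients back-propagate through the quantized base weights.
model = get_peft_model(model, LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
))
```

The design point is the same as 8-bit LoRA: the base weights stay frozen and quantized, so halving their precision again roughly halves the weight memory, while a custom back-propagation kernel keeps the extra dequantization cost from slowing training down.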