liucongg / ChatGLM-Finetuning

Fine-tuning ChatGLM-6B, ChatGLM2-6B, and ChatGLM3-6B on downstream tasks, covering Freeze, LoRA, P-tuning, full-parameter fine-tuning, and more.

chatglm3 single-GPU training throws an error #131

Open eanfs opened 9 months ago

eanfs commented 9 months ago

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 954.00 MiB (GPU 0; 14.56 GiB total capacity; 12.31 GiB already allocated; 486.50 MiB free; 13.17 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
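Before changing hardware, it is worth trying the workaround the error message itself points at: capping the caching allocator's split size to reduce fragmentation. This is a hedged sketch; `128` MiB is just one commonly tried value, and it only helps when reserved memory is much larger than allocated memory, as in the traceback above.

```shell
# Limit PyTorch's caching-allocator block splits to 128 MiB to reduce
# fragmentation; must be set before the training script starts.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
# ...then launch the training script from this same shell.
```

This does not lower total memory demand, so it cannot rescue a run that is genuinely over capacity; it only recovers memory lost to fragmentation.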

eanfs commented 9 months ago

The GPU is a T4.
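Some back-of-the-envelope arithmetic (my own estimate, not from the thread) shows why a ~16 GiB T4 is tight here. The parameter count is assumed to be roughly 6.2B for ChatGLM3-6B:

```python
# Rough GPU memory estimates for a ~6.2e9-parameter model.
# Real usage adds activations, gradients, and CUDA context overhead on top.
PARAMS = 6.2e9  # assumed parameter count for ChatGLM3-6B

def gib(n_bytes: float) -> float:
    """Convert a byte count to GiB."""
    return n_bytes / 2**30

fp16_weights = gib(PARAMS * 2)    # 2 bytes per parameter in fp16
int4_weights = gib(PARAMS * 0.5)  # 4 bits per parameter (QLoRA base weights)
adam_states  = gib(PARAMS * 8)    # fp32 m and v, full fine-tuning only

print(f"fp16 weights:          {fp16_weights:5.1f} GiB")  # ~11.5 GiB
print(f"4-bit weights:         {int4_weights:5.1f} GiB")  # ~2.9 GiB
print(f"Adam states (full FT): {adam_states:5.1f} GiB")   # ~46 GiB
```

The fp16 weights alone already consume most of the T4's 14.56 GiB, leaving too little for activations and optimizer state, which matches the OOM above; quantizing the frozen base model to 4 bits (QLoRA) frees roughly 8-9 GiB.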

liucongg commented 9 months ago

Not enough GPU memory. I suggest switching to a larger card, or fine-tuning with QLoRA instead.

eanfs commented 9 months ago

> Not enough GPU memory. I suggest switching to a larger card, or fine-tuning with QLoRA instead.

This is the only card I have. Could you explain how to fine-tune with QLoRA?

sevenandseven commented 8 months ago

> Not enough GPU memory. I suggest switching to a larger card, or fine-tuning with QLoRA instead.
>
> This is the only card I have. Could you explain how to fine-tune with QLoRA?

Hello, I'd also like to ask how to fine-tune chatglm3-6b with QLoRA. Is there any code you can share?
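(Not the repo author.) As far as this thread shows, the repository does not ship a QLoRA script, so below is only a minimal sketch of the usual QLoRA recipe using Hugging Face `transformers`, `peft`, and `bitsandbytes`. The model name, `target_modules`, and all hyperparameters are assumptions for illustration, not values taken from this repo, and the usual data loading and training loop are omitted.

```python
# Minimal QLoRA setup sketch (assumes transformers, peft, and bitsandbytes
# are installed). Hyperparameters are illustrative only.
import torch
from transformers import AutoModel, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "THUDM/chatglm3-6b"

# 4-bit NF4 quantization: the frozen base weights drop from ~11.5 GiB (fp16)
# to roughly 3 GiB, which is what lets a 16 GiB T4 hold the model.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,  # T4 has no bf16 support
)

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_name, quantization_config=bnb_config, trust_remote_code=True
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters on the attention projection. "query_key_value" is the fused
# QKV module name in ChatGLM-style models -- verify against your checkpoint
# with model.named_modules() before relying on it.
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query_key_value"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter weights train
```

From here, training proceeds like ordinary LoRA (e.g. with the `Trainer` API or a manual loop): the 4-bit base weights stay frozen and only the fp16 adapter weights receive gradients, so the optimizer state is tiny.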