ymcui / Chinese-LLaMA-Alpaca

Chinese LLaMA & Alpaca large language models + local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki
Apache License 2.0

How much GPU memory is needed to pre-train the 33B model without quantization, and how much with quantization? #810

Closed yaochao1 closed 1 year ago

yaochao1 commented 1 year ago

Items that must be checked before submitting

Issue type

Model training and fine-tuning

Base model

LLaMA-33B

Operating system

Linux

Detailed description of the problem

# Please paste the run code here (delete this code block if not applicable)

Dependencies (must be provided for code-related issues)

# Please paste dependency information here


Run logs or screenshots

# Please paste run logs here

xuexidi commented 1 year ago

With DeepSpeed ZeRO Stage 2 and offload, roughly 70 GB of GPU memory.
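For anyone landing here, below is a minimal sketch of what "DeepSpeed Stage 2 with offload" can look like as a config. The file name, the `"auto"` values, and the exact settings are assumptions for illustration, not taken from this repo's training scripts; check the project's wiki and scripts for the configuration actually used.

```python
# Sketch of a DeepSpeed ZeRO Stage 2 config with optimizer offload to CPU.
# Values marked "auto" are filled in by the HuggingFace Trainer integration.
import json

ds_config = {
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,                      # ZeRO Stage 2: shard optimizer states and gradients
        "offload_optimizer": {           # move optimizer states to CPU to cut GPU memory use
            "device": "cpu",
            "pin_memory": True,
        },
        "overlap_comm": True,
        "contiguous_gradients": True,
    },
    "gradient_accumulation_steps": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_clipping": "auto",
}

# Write the config to a JSON file; it can then be passed to a HuggingFace
# Trainer-based training script via `--deepspeed ds_zero2_offload.json`
# (file name is hypothetical).
with open("ds_zero2_offload.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```

Offloading the optimizer states to CPU trades GPU memory for slower steps and higher host-RAM usage, which is why the ~70 GB figure above still assumes a large-memory GPU (or multiple GPUs) for a 33B model.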

github-actions[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.

github-actions[bot] commented 1 year ago

Closing the issue, since no updates observed. Feel free to re-open if you need any further assistance.