Closed by raindrop313 1 year ago
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.
Closing the issue since no further updates were observed. Feel free to reopen if you need any further assistance.
Before submitting, please confirm the checklist items
Issue type
Model inference
Base model
Alpaca-33B
Operating system
Linux
Detailed description of the problem
Hello, I currently have eight V100s. A single card cannot hold the 33B model: calling LlamaForCausalLM.from_pretrained() directly runs out of GPU memory. Is there a way to shard the model across the different GPUs at load time?
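Since the issue was closed without a reply, here is one common approach (a sketch, not the maintainers' answer): pass `device_map="auto"` to `from_pretrained()`, which uses Hugging Face Accelerate to shard the weights across all visible GPUs as the checkpoint is loaded, so no single card ever holds the full 33B model. The model path, GPU count, and the 30 GiB per-card cap below are illustrative assumptions (a 32 GiB V100 needs headroom for activations).

```python
def make_max_memory(num_gpus: int, per_gpu_gib: int) -> dict:
    # Per-device memory caps for Accelerate's sharding planner,
    # e.g. {0: "30GiB", 1: "30GiB", ...}.
    return {i: f"{per_gpu_gib}GiB" for i in range(num_gpus)}

def load_sharded(model_path: str, num_gpus: int = 8, per_gpu_gib: int = 30):
    # Heavy imports kept inside the function; requires `pip install
    # transformers accelerate` and a machine with the GPUs available.
    import torch
    from transformers import LlamaForCausalLM

    return LlamaForCausalLM.from_pretrained(
        model_path,
        torch_dtype=torch.float16,   # V100 has no bfloat16; use fp16
        device_map="auto",           # let Accelerate place layers across GPUs
        low_cpu_mem_usage=True,      # avoid materializing a full copy in RAM
        max_memory=make_max_memory(num_gpus, per_gpu_gib),
    )

# model = load_sharded("path/to/alpaca-33b")  # hypothetical local path
```

Note that this gives pipeline-style model parallelism (layers spread over cards, executed one card at a time), which is enough for inference but is not tensor parallelism.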