ymcui / Chinese-LLaMA-Alpaca

Chinese LLaMA & Alpaca large language models + local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki
Apache License 2.0

Stuck at Loading checkpoint shards: 100%, no error reported #818

Closed · H-Justus closed this 1 year ago

H-Justus commented 1 year ago

Required checks before submitting

Issue type

Model inference

Base model

LLaMA-7B

Operating system

Linux

Detailed description of the problem

# Excerpt from the inference script; `args` and `load_type` are defined earlier in the script.
from transformers import LlamaForCausalLM

base_model = LlamaForCausalLM.from_pretrained(
    args.base_model,
    load_in_8bit=args.load_in_8bit,
    torch_dtype=load_type,
    low_cpu_mem_usage=True,
    device_map='auto',
)

Loading checkpoint shards reaches 100% without any error, but execution never continues; it stays inside the base_model call (from_pretrained), and the GPU shows memory usage.
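If the process really is stuck inside from_pretrained, one way to narrow the cause is to reload the same checkpoint with 8-bit quantization and automatic device placement disabled. The sketch below is an illustration only, not part of the original report; the model path and the float16 choice are assumptions.

    # Minimal isolation sketch (hypothetical): load without load_in_8bit and
    # without device_map='auto' to check whether the hang comes from
    # quantization or automatic device placement rather than the checkpoint.
    import torch
    from transformers import LlamaForCausalLM

    model = LlamaForCausalLM.from_pretrained(
        "path/to/chinese-llama-7b",   # hypothetical local path to the merged model
        torch_dtype=torch.float16,
        low_cpu_mem_usage=True,
    )
    model = model.to("cuda")          # move explicitly instead of relying on device_map
    print("model loaded")             # if this prints, from_pretrained itself is not hanging

If this variant loads and returns promptly, the hang is more likely related to load_in_8bit (bitsandbytes) or device_map='auto' than to the checkpoint files themselves.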

Dependencies (required for code-related issues)

# Paste dependency information here

Run logs or screenshots

A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
USE_MEM_EFF_ATTENTION:  True
STORE_KV_BEFORE_ROPE: False
Apply NTK scaling with ALPHA=1.0

Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]
Loading checkpoint shards:  50%|█████     | 1/2 [00:03<00:03,  3.66s/it]
Loading checkpoint shards: 100%|██████████| 2/2 [00:05<00:00,  2.52s/it]
Loading checkpoint shards: 100%|██████████| 2/2 [00:05<00:00,  2.69s/it]
github-actions[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.

github-actions[bot] commented 1 year ago

Closing the issue, since no updates observed. Feel free to re-open if you need any further assistance.