fishfree closed this issue 3 months ago
Replace THUDM/cogvlm2-llama3-chat-19B with the absolute path to your local copy of the weights. Also check the fine-tuning README; there are a number of parameters you need to set yourself.
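To make that suggestion concrete, here is a minimal sketch of resolving the model path before handing it to `load_checkpoint_and_dispatch`. The helper name and the fallback to `snapshot_download` are my own additions, not code from the CogVLM2 repo; the point is simply that accelerate needs a real filesystem path, not a Hub id.

```python
import os


def resolve_checkpoint(path_or_id: str) -> str:
    """Return a local path that accelerate's load_checkpoint_and_dispatch
    accepts. If the argument already exists on disk, use it; otherwise
    treat it as a Hugging Face Hub id and download a snapshot
    (this branch requires the huggingface_hub package)."""
    if os.path.exists(path_or_id):
        return os.path.abspath(path_or_id)
    from huggingface_hub import snapshot_download  # lazy import
    return snapshot_download(repo_id=path_or_id)
```

Usage would then be `MODEL_PATH = resolve_checkpoint("/abs/path/to/cogvlm2-llama3-chat-19B")` before the `load_checkpoint_and_dispatch(...)` call in the demo script.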
So... what is the right local path exactly?
I'm getting the same error when running
$ python cli_demo_multi_gpus.py
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████| 8/8 [00:00<00:00, 10.22it/s]
Traceback (most recent call last):
File "<REDACTED>/code/CogVLM2/basic_demo/cli_demo_multi_gpus.py", line 57, in <module>
model = load_checkpoint_and_dispatch(
File "<REDACTED>/miniconda3/envs/cogvlm/lib/python3.10/site-packages/accelerate/big_modeling.py", line 613, in load_checkpoint_and_dispatch
load_checkpoint_in_model(
File "<REDACTED>/miniconda3/envs/cogvlm/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 1732, in load_checkpoint_in_model
raise ValueError(
ValueError: `checkpoint` should be the path to a file containing a whole state dict, or the index of a sharded checkpoint, or a folder containing a sharded checkpoint or the whole state dict, but got THUDM/cogvlm2-llama3-chat-19B.
Any help would be appreciated :)
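For context, the ValueError comes from a plain filesystem check inside accelerate: `load_checkpoint_in_model` requires `checkpoint` to be an existing file or directory, so a bare Hub id like `THUDM/cogvlm2-llama3-chat-19B` trips it. A loose sketch of that condition (simplified for illustration, not accelerate's actual code):

```python
import os


def validate_checkpoint_arg(checkpoint: str) -> None:
    # Simplified version of the check that produces the error above:
    # a Hub id such as "THUDM/cogvlm2-llama3-chat-19B" is not a path on disk.
    if not (os.path.isfile(checkpoint) or os.path.isdir(checkpoint)):
        raise ValueError(
            "`checkpoint` should be the path to a file containing a whole "
            f"state dict or a folder containing a sharded checkpoint, but got {checkpoint}."
        )
```

So any value that exists on disk (absolute or relative) passes, and anything else raises, regardless of whether it is a valid Hub repo.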
System Info / 系統信息
NVIDIA-SMI 525.147.05 Driver Version: 525.147.05 CUDA Version: 12.0
/mnt/data/whang/miniconda3/envs/cogvlm2/bin/python
Python 3.11.9
Pulled the latest version of the repo with git.
Who can help? / 谁可以帮助到您?
Running python cli_demo_multi_gpus.py reports the error shown above.
Information / 问题信息
Reproduction / 复现过程
git clone https://github.com/THUDM/CogVLM2
cd CogVLM2
conda create -n cogvlm2 python=3.11 -y
conda activate cogvlm2
cd basic_demo
python -m pip install -r requirements.txt
python -m pip install accelerate
python cli_demo_multi_gpus.py
Expected behavior / 期待表现
No errors, of course; the demo should just run normally.