ymcui / Chinese-LLaMA-Alpaca

Chinese LLaMA & Alpaca large language models + local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki
Apache License 2.0

How can I convert an HF-format model to .pth format? #830

Closed: yyl199655 closed this 10 months ago

yyl199655 commented 11 months ago

Items that must be checked before submitting

Issue type

Model conversion and merging

Base model

LLaMA-7B

Operating system

Linux

Detailed description of the problem

# Paste the code you ran here (delete this block if not applicable)

I want to deploy the model for inference with llama.cpp, and it looks like quantization only supports .pth models. I did full-parameter training of LLaMA. How can I convert the HF model into the .pth format required for quantization? [screenshot]

Dependencies (required for code-related issues)

# Paste your dependency information here

Run logs or screenshots


# Paste your run log here
(none)

ymcui commented 11 months ago

No conversion is needed anymore; the latest llama.cpp supports converting HF-format models directly.

You can refer to our v2 wiki, which applies here as well: https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/llamacpp_zh
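
For reference, the conversion that wiki describes comes down to llama.cpp's own scripts. A minimal sketch, assuming a llama.cpp checkout from around this time (the script name and output extension, .bin vs. .gguf, vary by llama.cpp version):

```
# Convert the HF-format model directly; no .pth files are needed.
python convert.py zh-models/7B/

# Quantize the converted model to 4-bit.
./quantize zh-models/7B/ggml-model-f16.bin zh-models/7B/ggml-model-q4_0.bin q4_0
```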

yyl199655 commented 11 months ago

[screenshot] So the zh-models/7B/ folder still holds just the .bin and .json files, and tokenizer.model goes under the zh-models folder?

ymcui commented 11 months ago

That's no longer necessary; just put everything in the same folder, e.g. zh-models/7B. For the files the directory should contain, you can refer to our v2 model: https://huggingface.co/ziqingyang/chinese-alpaca-2-13b/tree/main
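
That is, a layout along these lines (a sketch based on the HF repo linked above; shard count and exact file names depend on the model):

```
zh-models/
└── 7B/
    ├── config.json
    ├── generation_config.json
    ├── pytorch_model-00001-of-00002.bin
    ├── pytorch_model-00002-of-00002.bin
    ├── pytorch_model.bin.index.json
    ├── special_tokens_map.json
    ├── tokenizer_config.json
    └── tokenizer.model
```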

yyl199655 commented 11 months ago

I used -ngl 1 as in the tutorial. [screenshot]

Why doesn't it show the GPU being used? [screenshot]

ymcui commented 11 months ago
  1. Did you compile llama.cpp together with cuBLAS, and did you run make clean before compiling? -ngl 1 is only an example (and only appropriate for Apple silicon); if you want to offload more layers, specify a larger value, e.g. -ngl 99 loads everything onto the GPU (see the sketch after this list).
  2. The script in your screenshot was designed for our v2 models; for the run command, you should refer to the v1 wiki. For example:

    ./main -m zh-models/7B/ggml-model-q4_0.bin --color -f prompts/alpaca.txt -ins -c 2048 --temp 0.2 -n 256 --repeat_penalty 1.1

    I don't know which model you are running; if it is a base LLaMA model, you don't need -p to load an instruction template, but please also note that LLaMA is not meant for chat.
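
A hedged illustration combining the two points above: the same v1 command with GPU offloading enabled, assuming a cuBLAS-enabled build (-ngl 99 means "offload up to 99 layers", i.e. effectively all layers of a 7B model):

```
./main -m zh-models/7B/ggml-model-q4_0.bin --color -f prompts/alpaca.txt -ins -c 2048 --temp 0.2 -n 256 --repeat_penalty 1.1 -ngl 99
```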

yyl199655 commented 11 months ago

I followed the tutorial and just ran make. I'm running the llama-7B model. I'm not using it for chat; I just want to give it a single instruction and get the output.

ymcui commented 11 months ago

Running make by itself does not enable the GPU. You need to compile together with cuBLAS:

```
make clean
make LLAMA_CUBLAS=1
```
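
For what it's worth, the cuBLAS section of the llama.cpp README linked later in this thread also documented a CMake route; a sketch (note the LLAMA_CUBLAS flag was renamed in later llama.cpp versions):

```
mkdir build
cd build
cmake .. -DLLAMA_CUBLAS=ON
cmake --build . --config Release
```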
yyl199655 commented 11 months ago

I get the following error:

```
p]# make LLAMA_CUBLAS=1
I llama.cpp build info:
I UNAME_S:  Linux
I UNAME_P:  x86_64
I UNAME_M:  x86_64
I CFLAGS:   -I. -O3 -std=c11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include
I CXXFLAGS: -I. -I./common -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include
I LDFLAGS:  -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib
I CC:       cc (GCC) 8.3.1 20190311 (Red Hat 8.3.1-3)
I CXX:      g++ (GCC) 8.3.1 20190311 (Red Hat 8.3.1-3)

cc -I. -O3 -std=c11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -c ggml.c -o ggml.o
g++ -I. -I./common -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -c llama.cpp -o llama.o
g++ -I. -I./common -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -c common/common.cpp -o common.o
g++ -I. -I./common -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -c common/console.cpp -o console.o
g++ -I. -I./common -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -c common/grammar-parser.cpp -o grammar-parser.o
cc -I. -O3 -std=c11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -c -o k_quants.o k_quants.c
k_quants.c:182:14: warning: ‘make_qkx1_quants’ defined but not used [-Wunused-function]
 static float make_qkx1_quants(int n, int nmax, const float * restrict x, uint8_t * restrict L, float * restrict the_min,
              ^~~~
nvcc --forward-unknown-to-host-compiler -use_fast_math -arch=native -DGGML_CUDA_DMMV_X=32 -DGGML_CUDA_MMV_Y=1 -DK_QUANTS_PER_ITERATION=2 -I. -I./common -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -Wno-pedantic -c ggml-cuda.cu -o ggml-cuda.o
nvcc fatal : Value 'native' is not defined for option 'gpu-architecture'
make: *** [ggml-cuda.o] Error 1
```

ymcui commented 11 months ago

Please take a look at https://github.com/ggerganov/llama.cpp#cublas and troubleshoot on your own, or search the llama.cpp repo for similar issues.
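
A note on the failure above: nvcc fatal : Value 'native' is not defined for option 'gpu-architecture' typically means the installed CUDA toolkit is too old to support -arch=native, so the GPU architecture has to be given explicitly. A hedged sketch of one workaround; CUDA_DOCKER_ARCH was the Makefile hook for this in llama.cpp around that time, but verify it exists in your checkout, and sm_75 is only a placeholder:

```
make clean
# sm_75 (Turing) is an assumed example; substitute your GPU's actual
# compute capability, e.g. sm_70 for V100 or sm_86 for RTX 30-series.
make LLAMA_CUBLAS=1 CUDA_DOCKER_ARCH=sm_75
```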

github-actions[bot] commented 11 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.

github-actions[bot] commented 10 months ago

Closing the issue, since no updates observed. Feel free to re-open if you need any further assistance.