Closed · yyl199655 closed this issue 10 months ago
No conversion is needed anymore; the latest llama.cpp supports converting HF-format models directly.
You can refer to the wiki for our second-generation models, which applies here as well: https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/llamacpp_zh
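As a rough sketch of that flow (script names have changed across llama.cpp versions; `convert.py` and the `quantize` binary are what a checkout from around that time provided, so treat this as illustrative rather than definitive):

```
# Convert the HF-format checkpoint directly, then quantize to 4-bit
python convert.py zh-models/7B/
./quantize zh-models/7B/ggml-model-f16.bin zh-models/7B/ggml-model-q4_0.bin q4_0
```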
So should the zh-models/7B/ folder still contain just the .bin and .json files, with tokenizer.model placed under zh-models?
That is no longer necessary; put everything into the same folder, e.g. zh-models/7B.
For the files the directory should contain, you can refer to our second-generation model: https://huggingface.co/ziqingyang/chinese-alpaca-2-13b/tree/main
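For illustration only, a hypothetical layout (file names modeled on a typical sharded HF checkpoint; your shard count and exact file names may differ):

```
zh-models/7B/
├── config.json
├── generation_config.json
├── pytorch_model-00001-of-00002.bin
├── pytorch_model-00002-of-00002.bin
├── pytorch_model.bin.index.json
├── special_tokens_map.json
├── tokenizer_config.json
└── tokenizer.model
```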
I used -ngl 1 as shown in the tutorial; why doesn't it show any GPU usage?
-ngl 1 is only an example (and it only applies to Apple silicon). If you want to offload more layers, specify a larger value; for example, -ngl 99 means all layers are loaded onto the GPU.
./main -m zh-models/7B/ggml-model-q4_0.bin --color -f prompts/alpaca.txt -ins -c 2048 --temp 0.2 -n 256 --repeat_penalty 1.1
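Putting those two points together, a full-offload invocation might look like the following (flags copied from the command above; the -ngl value is illustrative and can be any number at least as large as the model's layer count):

```
./main -m zh-models/7B/ggml-model-q4_0.bin --color -f prompts/alpaca.txt \
       -ins -c 2048 --temp 0.2 -n 256 --repeat_penalty 1.1 -ngl 99
```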
I don't know which model you are running. If it is a LLaMA base model, you don't need -p to load an instruction template; but also note that LLaMA is not intended for chat.
I just followed the tutorial and ran make directly. I'm running the llama-7B model, and not for chat; I just want to give it an instruction and get a result.
Running make directly does not enable the GPU. You need to build with cuBLAS:
make clean
make LLAMA_CUBLAS=1
I get the following error:
p]# make LLAMA_CUBLAS=1
I llama.cpp build info:
I UNAME_S: Linux
I UNAME_P: x86_64
I UNAME_M: x86_64
I CFLAGS: -I. -O3 -std=c11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include
I CXXFLAGS: -I. -I./common -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include
I LDFLAGS: -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib
I CC: cc (GCC) 8.3.1 20190311 (Red Hat 8.3.1-3)
I CXX: g++ (GCC) 8.3.1 20190311 (Red Hat 8.3.1-3)
cc -I. -O3 -std=c11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -c ggml.c -o ggml.o
g++ -I. -I./common -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -c llama.cpp -o llama.o
g++ -I. -I./common -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -c common/common.cpp -o common.o
g++ -I. -I./common -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -c common/console.cpp -o console.o
g++ -I. -I./common -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -c common/grammar-parser.cpp -o grammar-parser.o
cc -I. -O3 -std=c11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -c -o k_quants.o k_quants.c
k_quants.c:182:14: warning: ‘make_qkx1_quants’ defined but not used [-Wunused-function]
static float make_qkx1_quants(int n, int nmax, const float * restrict x, uint8_t * restrict L, float * restrict the_min,
^~~~
nvcc --forward-unknown-to-host-compiler -use_fast_math -arch=native -DGGML_CUDA_DMMV_X=32 -DGGML_CUDA_MMV_Y=1 -DK_QUANTS_PER_ITERATION=2 -I. -I./common -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -Wno-pedantic -c ggml-cuda.cu -o ggml-cuda.o
nvcc fatal : Value 'native' is not defined for option 'gpu-architecture'
make: *** [ggml-cuda.o] Error 1
Please troubleshoot this yourself starting from https://github.com/ggerganov/llama.cpp#cublas, or search the llama.cpp issues for similar reports.
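For context on the error itself: `nvcc fatal : Value 'native' is not defined for option 'gpu-architecture'` typically means the installed CUDA toolkit is too old to support `-arch=native`. One possible workaround (a sketch, not the project's official fix: it edits the llama.cpp Makefile in place, and `sm_70` is an illustrative compute capability that you must replace with the one matching your GPU):

```
# Replace -arch=native with an explicit GPU architecture,
# then rebuild from scratch with cuBLAS enabled
sed -i 's/-arch=native/-arch=sm_70/' Makefile
make clean
make LLAMA_CUBLAS=1
```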
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.
Closing the issue, since no updates observed. Feel free to re-open if you need any further assistance.
Required checks before submitting
Issue type
Model conversion and merging
Base model
LLaMA-7B
Operating system
Linux
Detailed description of the issue
I want to deploy the model for inference with llama.cpp. It looks like quantization only supports .pth models, and I did full-parameter training of LLaMA. How do I convert the HF-format model into the .pth format required for quantization? ![image](https://github.com/ymcui/Chinese-LLaMA-Alpaca/assets/34408370/d65d5986-a160-4013-91bc-c379e1cc1126)
Dependencies (required for code-related issues)
None
Logs or screenshots