A pure C++ cross-platform LLM acceleration library, callable from Python; ChatGLM-6B-class models can reach 10000+ tokens/s on a single GPU; supports GLM, LLaMA, and MOSS base models; runs smoothly on mobile devices
baichuan: llm.model load fails with undefined symbol: cudaGraphInstantiateWithFlags, version libcudart.so.11.0 #293
Open
zhaoanbei opened 1 year ago
After saving the baichuan model, loading it with llm.model fails with `undefined symbol: cudaGraphInstantiateWithFlags, version libcudart.so.11.0`, while llm.from_hf runs fine. Steps to reproduce:
```python
tokenizer = AutoTokenizer.from_pretrained(model_location, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_location, trust_remote_code=True)

from fastllm_pytools import llm
new_model = llm.from_hf(model, tokenizer, dtype="int8")  # dtype supports "float16", "int8", "int4"
```

This works as expected.
====================================================================
```python
new_model = llm.model(flm_path)
# undefined symbol: cudaGraphInstantiateWithFlags, version libcudart.so.11.0
```
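As far as I know, `cudaGraphInstantiateWithFlags` was only introduced around CUDA 11.4, so an error like this usually means the `libcudart.so.11.0` that the dynamic linker actually resolves at load time is older than the CUDA runtime the prebuilt fastllm CUDA code expects. A minimal sketch to check which runtime version `ctypes` picks up on your machine (the helper name `cuda_runtime_version` is mine, not part of fastllm):

```python
import ctypes
import ctypes.util

def cuda_runtime_version():
    """Return the loaded CUDA runtime version as an int
    (e.g. 11040 for CUDA 11.4), or None if libcudart cannot be loaded."""
    name = ctypes.util.find_library("cudart") or "libcudart.so.11.0"
    try:
        lib = ctypes.CDLL(name)
    except OSError:
        return None  # no CUDA runtime visible to the linker
    ver = ctypes.c_int()
    # cudaRuntimeGetVersion returns cudaSuccess (0) on success
    if lib.cudaRuntimeGetVersion(ctypes.byref(ver)) != 0:
        return None
    return ver.value

print(cuda_runtime_version())
```

If this prints a value below 11040 (or the library that loads is an old 11.0 runtime), upgrading the CUDA toolkit, or rebuilding fastllm against the installed runtime, would be the thing to try.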