airockchip / rknn-llm


Exception while converting model #18

Open dic1911 opened 7 months ago

dic1911 commented 7 months ago

And there's nothing helpful in the error output :(

Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
Optimizing model: 100%| 24/24 [04:37<00:00, 11.55s/it]
Converting model: 100%| 291/291 [00:00<00:00, 1415942.53it/s]
Catch exception when converting model!
Export model failed!

(note. model is from https://huggingface.co/Qwen/Qwen1.5-1.8B)
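For reference, my conversion script follows the pattern from the toolkit's example scripts, roughly like the sketch below (the model path, output name, and quantization settings here are placeholders, not necessarily my exact values):

```python
from rkllm.api import RKLLM  # from the rkllm-toolkit wheel

MODEL_PATH = './Qwen1.5-1.8B'         # local clone of the HF repo (placeholder)
EXPORT_PATH = './qwen1.5-1.8b.rkllm'  # output file name (placeholder)

llm = RKLLM()

# Load the Hugging Face model from disk
ret = llm.load_huggingface(model=MODEL_PATH)
if ret != 0:
    raise SystemExit('Load model failed!')

# Quantize/optimize for the target SoC; these settings mirror the
# repo's example script and may not match every setup
ret = llm.build(do_quantization=True, optimization_level=1,
                quantized_dtype='w8a8', target_platform='rk3588')
if ret != 0:
    raise SystemExit('Build model failed!')

# This is the step whose failure prints "Export model failed!"
ret = llm.export_rkllm(EXPORT_PATH)
if ret != 0:
    raise SystemExit('Export model failed!')
```

As far as I can tell from the log above, the exception is raised right after the "Converting model" progress bar completes, i.e. during that final export step.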

fydeos-alex commented 7 months ago

At which step does it fail? And did you give the exported model a name?

dic1911 commented 7 months ago

@fydeos-alex wdym? I have successfully converted other models (Qwen1.5 0.5B, Karasu 1.1B, and TinyLlama 1.1B), and I simply did the same for the 1.8B model mentioned above.

Note that of all my attempts, only the Qwen 0.5B model actually worked at runtime; the others always failed to load instantly, with no further info (e.g. llama_init_from_gpt_params: error: failed to load model '../../karasu_jp.rkllm').

fydeos-alex commented 7 months ago

Well, I have successfully converted Qwen and Qwen1.5 on Ubuntu and run them on an rk3588. You should give more information about your conversion platform and runtime platform. 🤗
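For instance, running something like this in the conversion environment and pasting the output would help (a quick sketch; the 'rkllm-toolkit' distribution name is my guess at how the wheel registers itself):

```python
import platform
from importlib.metadata import version, PackageNotFoundError

print('arch   :', platform.machine())        # the published toolkit wheel targets linux_x86_64
print('python :', platform.python_version())
print('os     :', platform.platform())
try:
    # 'rkllm-toolkit' is an assumed distribution name for the installed wheel
    print('toolkit:', version('rkllm-toolkit'))
except PackageNotFoundError:
    print('toolkit: not installed in this environment')
```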

dic1911 commented 7 months ago

Yeah, I just tried converting the Qwen model on another machine and it worked without issue, but the other models I mentioned still don't work (even though they should, according to the README). Anyway, I guess I should open another issue for the broken support.