intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, MiniCPM, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, GraphRAG, DeepSpeed, vLLM, FastChat, Axolotl, etc.
Apache License 2.0

Qwen1.5-4b and Qwen1.5-7b models cannot be loaded correctly in ipex-llm version 20240522 #11109

Open grandxin opened 4 months ago

grandxin commented 4 months ago

I saved qwen1.5-4b and qwen1.5-7b int4 models on my computer. When loading these models, I get the following errors:

Some weights of the model checkpoint at ./models/qwen1.5-4b were not used when initializing Qwen2ForCausalLM: ['model.layers.0.self_attn.k_proj.bias', 'model.layers.0.self_attn.k_proj.weight', 'model.layers.0.self_attn.q_proj.bias', 'model.layers.0.self_attn.q_proj.weight', 'model.layers.0.self_attn.v_proj.bias', 'model.layers.0.self_attn.v_proj.weight', 'model.layers.1.self_attn.k_proj.bias', 'model.layers.1.self_attn.k_proj.weight', 'model.layers.1.self_attn.q_proj.bias', 'model.layers.1.self_attn.q_proj.weight', 'model.layers.1.self_attn.v_proj.bias', 'model.layers.1.self_attn.v_proj.weight', 'model.layers.10.self_attn.k_proj.bias', 'model.layers.10.self_attn.k_proj.weight', 'model.layers.10.self_attn.q_proj.bias', 'model.layers.10.self_attn.q_proj.weight', 'model.layers.10.self_attn.v_proj.bias', 'model.layers.10.self_attn.v_proj.weight', 'model.layers.11.self_attn.k_proj.bias', 'model.layers.11.self_attn.k_proj.weight', 'model.layers.11.self_attn.q_proj.bias', 'model.layers.11.self_attn.q_proj.weight', 'model.layers.11.self_attn.v_proj.bias', 'model.layers.11.self_attn.v_proj.weight', 'model.layers.12.self_attn.k_proj.bias', 'model.layers.12.self_attn.k_proj.weight', 'model.layers.12.self_attn.q_proj.bias', 'model.layers.12.self_attn.q_proj.weight', 'model.layers.12.self_attn.v_proj.bias', 'model.layers.12.self_attn.v_proj.weight', 'model.layers.13.self_attn.k_proj.bias', 'model.layers.13.self_attn.k_proj.weight', 'model.layers.13.self_attn.q_proj.bias', 'model.layers.13.self_attn.q_proj.weight', 'model.layers.13.self_attn.v_proj.bias', 'model.layers.13.self_attn.v_proj.weight', 'model.layers.14.self_attn.k_proj.bias', 'model.layers.14.self_attn.k_proj.weight', 'model.layers.14.self_attn.q_proj.bias', 'model.layers.14.self_attn.q_proj.weight', 'model.layers.14.self_attn.v_proj.bias', 'model.layers.14.self_attn.v_proj.weight', 'model.layers.15.self_attn.k_proj.bias', 'model.layers.15.self_attn.k_proj.weight', 'model.layers.15.self_attn.q_proj.bias', 'model.layers.15.self_attn.q_proj.weight', 'model.layers.15.self_attn.v_proj.bias', 'model.layers.15.self_attn.v_proj.weight', 'model.layers.16.self_attn.k_proj.bias', 'model.layers.16.self_attn.k_proj.weight', 'model.layers.16.self_attn.q_proj.bias', 'model.layers.16.self_attn.q_proj.weight', 'model.layers.16.self_attn.v_proj.bias', 'model.layers.16.self_attn.v_proj.weight', 'model.layers.17.self_attn.k_proj.bias', 'model.layers.17.self_attn.k_proj.weight', 'model.layers.17.self_attn.q_proj.bias', 'model.layers.17.self_attn.q_proj.weight', 'model.layers.17.self_attn.v_proj.bias', 'model.layers.17.self_attn.v_proj.weight', 'model.layers.18.self_attn.k_proj.bias', 'model.layers.18.self_attn.k_proj.weight', 'model.layers.18.self_attn.q_proj.bias', 'model.layers.18.self_attn.q_proj.weight', 'model.layers.18.self_attn.v_proj.bias', 'model.layers.18.self_attn.v_proj.weight', 'model.layers.19.self_attn.k_proj.bias', 'model.layers.19.self_attn.k_proj.weight', 'model.layers.19.self_attn.q_proj.bias', 'model.layers.19.self_attn.q_proj.weight', 'model.layers.19.self_attn.v_proj.bias', 'model.layers.19.self_attn.v_proj.weight', 'model.layers.2.self_attn.k_proj.bias', 'model.layers.2.self_attn.k_proj.weight', 'model.layers.2.self_attn.q_proj.bias', 'model.layers.2.self_attn.q_proj.weight', 'model.layers.2.self_attn.v_proj.bias', 'model.layers.2.self_attn.v_proj.weight', 'model.layers.20.self_attn.k_proj.bias', 'model.layers.20.self_attn.k_proj.weight', 'model.layers.20.self_attn.q_proj.bias', 'model.layers.20.self_attn.q_proj.weight', 
'model.layers.20.self_attn.v_proj.bias', 'model.layers.20.self_attn.v_proj.weight', 'model.layers.21.self_attn.k_proj.bias', 'model.layers.21.self_attn.k_proj.weight', 'model.layers.21.self_attn.q_proj.bias', 'model.layers.21.self_attn.q_proj.weight', 'model.layers.21.self_attn.v_proj.bias', 'model.layers.21.self_attn.v_proj.weight', 'model.layers.22.self_attn.k_proj.bias', 'model.layers.22.self_attn.k_proj.weight', 'model.layers.22.self_attn.q_proj.bias', 'model.layers.22.self_attn.q_proj.weight', 'model.layers.22.self_attn.v_proj.bias', 'model.layers.22.self_attn.v_proj.weight', 'model.layers.23.self_attn.k_proj.bias', 'model.layers.23.self_attn.k_proj.weight', 'model.layers.23.self_attn.q_proj.bias', 'model.layers.23.self_attn.q_proj.weight', 'model.layers.23.self_attn.v_proj.bias', 'model.layers.23.self_attn.v_proj.weight', 'model.layers.24.self_attn.k_proj.bias', 'model.layers.24.self_attn.k_proj.weight', 'model.layers.24.self_attn.q_proj.bias', 'model.layers.24.self_attn.q_proj.weight', 'model.layers.24.self_attn.v_proj.bias', 'model.layers.24.self_attn.v_proj.weight', 'model.layers.25.self_attn.k_proj.bias', 'model.layers.25.self_attn.k_proj.weight', 'model.layers.25.self_attn.q_proj.bias', 'model.layers.25.self_attn.q_proj.weight', 'model.layers.25.self_attn.v_proj.bias', 'model.layers.25.self_attn.v_proj.weight', 'model.layers.26.self_attn.k_proj.bias', 'model.layers.26.self_attn.k_proj.weight', 'model.layers.26.self_attn.q_proj.bias', 'model.layers.26.self_attn.q_proj.weight', 'model.layers.26.self_attn.v_proj.bias', 'model.layers.26.self_attn.v_proj.weight', 'model.layers.27.self_attn.k_proj.bias', 'model.layers.27.self_attn.k_proj.weight', 'model.layers.27.self_attn.q_proj.bias', 'model.layers.27.self_attn.q_proj.weight', 'model.layers.27.self_attn.v_proj.bias', 'model.layers.27.self_attn.v_proj.weight', 'model.layers.28.self_attn.k_proj.bias', 'model.layers.28.self_attn.k_proj.weight', 'model.layers.28.self_attn.q_proj.bias', 'model.layers.28.self_attn.q_proj.weight', 'model.layers.28.self_attn.v_proj.bias', 'model.layers.28.self_attn.v_proj.weight', 'model.layers.29.self_attn.k_proj.bias', 'model.layers.29.self_attn.k_proj.weight', 'model.layers.29.self_attn.q_proj.bias', 'model.layers.29.self_attn.q_proj.weight', 'model.layers.29.self_attn.v_proj.bias', 'model.layers.29.self_attn.v_proj.weight', 'model.layers.3.self_attn.k_proj.bias', 'model.layers.3.self_attn.k_proj.weight', 'model.layers.3.self_attn.q_proj.bias', 'model.layers.3.self_attn.q_proj.weight', 'model.layers.3.self_attn.v_proj.bias', 'model.layers.3.self_attn.v_proj.weight', 'model.layers.30.self_attn.k_proj.bias', 'model.layers.30.self_attn.k_proj.weight', 'model.layers.30.self_attn.q_proj.bias', 'model.layers.30.self_attn.q_proj.weight', 'model.layers.30.self_attn.v_proj.bias', 'model.layers.30.self_attn.v_proj.weight', 'model.layers.31.self_attn.k_proj.bias', 'model.layers.31.self_attn.k_proj.weight', 'model.layers.31.self_attn.q_proj.bias', 'model.layers.31.self_attn.q_proj.weight', 'model.layers.31.self_attn.v_proj.bias', 'model.layers.31.self_attn.v_proj.weight', 'model.layers.32.self_attn.k_proj.bias', 'model.layers.32.self_attn.k_proj.weight', 'model.layers.32.self_attn.q_proj.bias', 'model.layers.32.self_attn.q_proj.weight', 'model.layers.32.self_attn.v_proj.bias', 'model.layers.32.self_attn.v_proj.weight', 'model.layers.33.self_attn.k_proj.bias', 'model.layers.33.self_attn.k_proj.weight', 'model.layers.33.self_attn.q_proj.bias', 'model.layers.33.self_attn.q_proj.weight', 
'model.layers.33.self_attn.v_proj.bias', 'model.layers.33.self_attn.v_proj.weight', 'model.layers.34.self_attn.k_proj.bias', 'model.layers.34.self_attn.k_proj.weight', 'model.layers.34.self_attn.q_proj.bias', 'model.layers.34.self_attn.q_proj.weight', 'model.layers.34.self_attn.v_proj.bias', 'model.layers.34.self_attn.v_proj.weight', 'model.layers.35.self_attn.k_proj.bias', 'model.layers.35.self_attn.k_proj.weight', 'model.layers.35.self_attn.q_proj.bias', 'model.layers.35.self_attn.q_proj.weight', 'model.layers.35.self_attn.v_proj.bias', 'model.layers.35.self_attn.v_proj.weight', 'model.layers.36.self_attn.k_proj.bias', 'model.layers.36.self_attn.k_proj.weight', 'model.layers.36.self_attn.q_proj.bias', 'model.layers.36.self_attn.q_proj.weight', 'model.layers.36.self_attn.v_proj.bias', 'model.layers.36.self_attn.v_proj.weight', 'model.layers.37.self_attn.k_proj.bias', 'model.layers.37.self_attn.k_proj.weight', 'model.layers.37.self_attn.q_proj.bias', 'model.layers.37.self_attn.q_proj.weight', 'model.layers.37.self_attn.v_proj.bias', 'model.layers.37.self_attn.v_proj.weight', 'model.layers.38.self_attn.k_proj.bias', 'model.layers.38.self_attn.k_proj.weight', 'model.layers.38.self_attn.q_proj.bias', 'model.layers.38.self_attn.q_proj.weight', 'model.layers.38.self_attn.v_proj.bias', 'model.layers.38.self_attn.v_proj.weight', 'model.layers.39.self_attn.k_proj.bias', 'model.layers.39.self_attn.k_proj.weight', 'model.layers.39.self_attn.q_proj.bias', 'model.layers.39.self_attn.q_proj.weight', 'model.layers.39.self_attn.v_proj.bias', 'model.layers.39.self_attn.v_proj.weight', 'model.layers.4.self_attn.k_proj.bias', 'model.layers.4.self_attn.k_proj.weight', 'model.layers.4.self_attn.q_proj.bias', 'model.layers.4.self_attn.q_proj.weight', 'model.layers.4.self_attn.v_proj.bias', 'model.layers.4.self_attn.v_proj.weight', 'model.layers.5.self_attn.k_proj.bias', 'model.layers.5.self_attn.k_proj.weight', 'model.layers.5.self_attn.q_proj.bias', 'model.layers.5.self_attn.q_proj.weight', 'model.layers.5.self_attn.v_proj.bias', 'model.layers.5.self_attn.v_proj.weight', 'model.layers.6.self_attn.k_proj.bias', 'model.layers.6.self_attn.k_proj.weight', 'model.layers.6.self_attn.q_proj.bias', 'model.layers.6.self_attn.q_proj.weight', 'model.layers.6.self_attn.v_proj.bias', 'model.layers.6.self_attn.v_proj.weight', 'model.layers.7.self_attn.k_proj.bias', 'model.layers.7.self_attn.k_proj.weight', 'model.layers.7.self_attn.q_proj.bias', 'model.layers.7.self_attn.q_proj.weight', 'model.layers.7.self_attn.v_proj.bias', 'model.layers.7.self_attn.v_proj.weight', 'model.layers.8.self_attn.k_proj.bias', 'model.layers.8.self_attn.k_proj.weight', 'model.layers.8.self_attn.q_proj.bias', 'model.layers.8.self_attn.q_proj.weight', 'model.layers.8.self_attn.v_proj.bias', 'model.layers.8.self_attn.v_proj.weight', 'model.layers.9.self_attn.k_proj.bias', 'model.layers.9.self_attn.k_proj.weight', 'model.layers.9.self_attn.q_proj.bias', 'model.layers.9.self_attn.q_proj.weight', 'model.layers.9.self_attn.v_proj.bias', 'model.layers.9.self_attn.v_proj.weight']

However, with ipex-llm version 20240520 or an earlier version, everything works fine.
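
For reference, a minimal sketch of how such a checkpoint is loaded (assuming ipex-llm's transformers-style `load_low_bit` API; the path mirrors the one in the warning above):

```python
from ipex_llm.transformers import AutoModelForCausalLM

# Load a previously saved int4 (low-bit) checkpoint from local disk.
# With ipex-llm 20240522 this is the step that emits the
# "weights were not used" warning shown above.
model = AutoModelForCausalLM.load_low_bit(
    "./models/qwen1.5-4b",
    trust_remote_code=True,
)
```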

MeouSker77 commented 4 months ago

We made a breaking change to Qwen1.5's int4 checkpoint format in the 20240521 version. Old int4 checkpoints (generated by ipex-llm 20240520 or earlier) cannot be loaded with the new ipex-llm (20240521 or later). Please regenerate the int4 checkpoint with ipex-llm 20240521 or later.
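
For example, regenerating the checkpoint could look roughly like this (a minimal sketch; the model id and paths are illustrative):

```python
# After upgrading ipex-llm to 20240521 or later
from ipex_llm.transformers import AutoModelForCausalLM

# Re-quantize from the original (non-quantized) model with the new ipex-llm,
# then save a fresh int4 checkpoint in the new format
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-7B-Chat",   # original HF model, not the old int4 checkpoint
    load_in_4bit=True,        # quantize weights to int4 on load
    trust_remote_code=True,
)
model.save_low_bit("./models/qwen1.5-7b")
```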

grandxin commented 4 months ago

> We made a breaking change to Qwen1.5's int4 checkpoint format in the 20240521 version. Old int4 checkpoints (generated by ipex-llm 20240520 or earlier) cannot be loaded with the new ipex-llm (20240521 or later). Please regenerate the int4 checkpoint with ipex-llm 20240521 or later.

OK, got it. Does the new version bring any improvements, such as quantization accuracy or RAM usage?

MeouSker77 commented 4 months ago

> OK, got it. Does the new version bring any improvements, such as quantization accuracy or RAM usage?

Yes, there should be some improvements in speed and RAM usage, but not much.

grandxin commented 4 months ago

> OK, got it. Does the new version bring any improvements, such as quantization accuracy or RAM usage?

> Yes, there should be some improvements in speed and RAM usage, but not much.

I regenerated the qwen-7b int4 model and ran it on my laptop (Ultra 7 155H), but the "warm up" stage takes a very long time (more than 5 minutes). Do you have any advice?
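
For context, the run looks roughly like this (a minimal sketch; the model path and prompt are illustrative):

```python
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("./models/qwen1.5-7b", trust_remote_code=True)
model = AutoModelForCausalLM.load_low_bit("./models/qwen1.5-7b", trust_remote_code=True)
model = model.half().to("xpu")   # run on the Ultra 7 155H iGPU

# The first generate() call triggers the long warm-up
input_ids = tokenizer("Hello", return_tensors="pt").input_ids.to("xpu")
with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=32)
    torch.xpu.synchronize()
print(tokenizer.decode(output[0], skip_special_tokens=True))
```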

MeouSker77 commented 4 months ago

> I regenerated the qwen-7b int4 model and ran it on my laptop (Ultra 7 155H), but the "warm up" stage takes a very long time (more than 5 minutes). Do you have any advice?

Did you set SYCL_CACHE_PERSISTENT=1? See https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html#runtime-configuration

grandxin commented 4 months ago

> I regenerated the qwen-7b int4 model and ran it on my laptop (Ultra 7 155H), but the "warm up" stage takes a very long time (more than 5 minutes). Do you have any advice?

> Did you set SYCL_CACHE_PERSISTENT=1? See https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html#runtime-configuration

Yes, I have set it. I found that warm-up is much faster in CPU mode (about 10-20 s) but slower in XPU mode.

MeouSker77 commented 4 months ago

> I found that warm-up is much faster in CPU mode (about 10-20 s) but slower in XPU mode.

The CPU doesn't need JIT compilation, while the GPU does.

On CPU: load model -> quantization -> inference

On GPU: load model -> quantization -> JIT compilation -> inference. This JIT compilation is what we call warm-up, and it can take about ten minutes.

Setting SYCL_CACHE_PERSISTENT=1 stores the GPU JIT code on disk so it doesn't need to be compiled again the next time you run.

If you are using PowerShell, please use CMD instead.

Could you check whether C:\Users\<user name>\AppData\Roaming\libsycl_cache exists? If it exists, please delete it. Then set SYCL_CACHE_PERSISTENT=1 and run inference (this run will take a long time, about 10 minutes, because it needs to regenerate the JIT code cache). After it finishes, you should see a regenerated C:\Users\<user name>\AppData\Roaming\libsycl_cache. With the cache in place, subsequent inference should have no warm-up. (Setting SYCL_CACHE_PERSISTENT=1 is still required.)
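
If it helps, the whole procedure can be scripted roughly like this (a minimal sketch; the cache path is the one above, the model path and prompt are illustrative):

```python
import os
import shutil

# 1. Delete any stale SYCL JIT cache (path as described above)
cache_dir = os.path.join(os.environ["APPDATA"], "libsycl_cache")
if os.path.isdir(cache_dir):
    shutil.rmtree(cache_dir)

# 2. Enable the persistent cache BEFORE the SYCL runtime initializes,
#    i.e. before importing torch / ipex_llm in this process
os.environ["SYCL_CACHE_PERSISTENT"] = "1"

import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("./models/qwen1.5-7b", trust_remote_code=True)
model = AutoModelForCausalLM.load_low_bit("./models/qwen1.5-7b", trust_remote_code=True)
model = model.half().to("xpu")

# 3. First run: slow (~10 minutes) because the JIT cache is regenerated on disk.
#    Later runs with SYCL_CACHE_PERSISTENT=1 should skip the warm-up.
input_ids = tokenizer("Hello", return_tensors="pt").input_ids.to("xpu")
with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=32)
```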

grandxin commented 4 months ago

> I found that warm-up is much faster in CPU mode (about 10-20 s) but slower in XPU mode.
>
> The CPU doesn't need JIT compilation, while the GPU does.
>
> On CPU: load model -> quantization -> inference
>
> On GPU: load model -> quantization -> JIT compilation -> inference. This JIT compilation is what we call warm-up, and it can take about ten minutes.
>
> Setting SYCL_CACHE_PERSISTENT=1 stores the GPU JIT code on disk so it doesn't need to be compiled again the next time you run.
>
> If you are using PowerShell, please use CMD instead.
>
> Could you check whether C:\Users\<user name>\AppData\Roaming\libsycl_cache exists? If it exists, please delete it. Then set SYCL_CACHE_PERSISTENT=1 and run inference (this run will take a long time, about 10 minutes, because it needs to regenerate the JIT code cache). After it finishes, you should see a regenerated C:\Users\<user name>\AppData\Roaming\libsycl_cache. With the cache in place, subsequent inference should have no warm-up. (Setting SYCL_CACHE_PERSISTENT=1 is still required.)

OK, I will try. Thank you very much. If libsycl_cache exists, even after I finish the inference process, restart, and reload the model, is there no need for a warm-up?

MeouSker77 commented 4 months ago

> If libsycl_cache exists, even after I finish the inference process, restart, and reload the model, is there no need for a warm-up?

Yes.