intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, Axolotl, etc.
Apache License 2.0

try to test multi xpu with example #11091

Open K-Alex13 opened 1 month ago

K-Alex13 commented 1 month ago

(screenshot) Due to the Hugging Face download problem, I downloaded the model from the following link: https://huggingface.co/Qwen/Qwen1.5-14B-Chat/tree/main. I replaced the model with the model's URL, and then this issue came up. I am not sure what is going wrong; please help me.

plusbang commented 1 month ago

Hi, @K-Alex13 , if you have downloaded the model from https://huggingface.co/Qwen/Qwen1.5-14B-Chat/tree/main, please just replace 'Qwen/Qwen1.5-14B-Chat' with your local model folder path here (https://github.com/intel-analytics/ipex-llm/blob/main/python/llm/example/GPU/Deepspeed-AutoTP/run_qwen_14b_arc_2_card.sh#L38).
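If it helps to make that substitution concrete, here is a minimal sketch; the variable name and the local path below are hypothetical placeholders, not taken from the actual script:

```shell
# Hypothetical illustration of the substitution described above; the real
# script hard-codes the repo id near line 38 of run_qwen_14b_arc_2_card.sh.
# Before: the Hugging Face repo id.
MODEL="Qwen/Qwen1.5-14B-Chat"
# After: the fully downloaded local folder (placeholder path; use your own).
MODEL="/home/user/models/Qwen1.5-14B-Chat"
echo "loading model from: $MODEL"
```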

K-Alex13 commented 1 month ago

Yes, I already used this method; the error comes up after the step you mentioned.

K-Alex13 commented 1 month ago

Also, the missing file mentioned in the error is not among the Qwen/Qwen1.5-14B-Chat files.

plusbang commented 1 month ago

Also, the missing file mentioned in the error is not among the Qwen/Qwen1.5-14B-Chat files.

If model.safetensors.index.json is not in your local folder, this error message will still occur. Please check whether all model files are present and complete in your local model folder.
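One way to sanity-check the folder is sketched below; the expected file list is an assumption based on a typical safetensors checkpoint layout, not an exhaustive list (model.safetensors.index.json additionally names each weight shard, which should also all be present):

```shell
# Check that a local model folder contains the files sharded loading expects.
# The file list is an assumption (typical safetensors checkpoint layout).
check_model_dir() {
  dir="$1"
  status=0
  for f in config.json tokenizer_config.json model.safetensors.index.json; do
    if [ ! -f "$dir/$f" ]; then
      echo "missing: $f"
      status=1
    fi
  done
  return "$status"
}
# Usage (hypothetical path):
#   check_model_dir /path/to/Qwen1.5-14B-Chat && echo "folder looks complete"
```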

K-Alex13 commented 1 month ago

(screenshot) What is the function of low-bit here? I think with 4-bit initialization the GPU memory needed will be less than 16 GB, so I do not know whether two GPUs are used here. Also, can you please tell me how to check GPU usage during inference?

K-Alex13 commented 1 month ago

(screenshot) Why did GPU 0 not produce inference results while GPU 1 did?

plusbang commented 1 month ago

What is the function of low-bit here? I think with 4-bit initialization the GPU memory needed will be less than 16 GB, so I do not know whether two GPUs are used here. Also, can you please tell me how to check GPU usage during inference?

Why did GPU 0 not produce inference results while GPU 1 did?
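On the question of checking GPU usage during inference: the xpu-smi tool reports per-device utilization and memory. A guarded sketch is below; the flag spellings follow xpu-smi's documented CLI, and the device ids are assumptions to adjust for your system:

```shell
# Monitor both Arc cards while inference is running; guarded so the
# snippet degrades gracefully on machines without xpu-smi installed.
if command -v xpu-smi >/dev/null 2>&1; then
  xpu-smi stats -d 0   # utilization and memory for device 0
  xpu-smi stats -d 1   # utilization and memory for device 1
else
  echo "xpu-smi not installed"
fi
```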

K-Alex13 commented 1 month ago

(screenshot) Still not working.

plusbang commented 1 month ago

Still not working.

According to your screenshot, maybe you could try sudo apt install libmetee and sudo apt install libmetee-dev.

K-Alex13 commented 1 month ago

how to use them

plusbang commented 1 month ago

how to use them

Sorry, but I am not sure what 'them' refers to. The ME TEE library (libmetee/libmetee-dev) is a C library for accessing Intel CSE/CSME/GSC firmware, which the xpu-smi tool appears to depend on. Can you use xpu-smi now?

K-Alex13 commented 1 month ago

I installed the packages you mentioned above and tried to use xpu-smi; the same error comes up.

K-Alex13 commented 1 month ago

By the way, I want to know whether this is not a method that uses the two GPUs as one bigger GPU for inference. Does it just put the model on two different GPUs separately and run inference separately?

plusbang commented 1 month ago

I installed the packages you mentioned above and tried to use xpu-smi; the same error comes up.

Maybe you could try these steps?

sudo apt-get autoremove libmetee-dev
sudo apt-get autoremove libmetee
sudo apt-get install libmetee
sudo apt-get install libmetee-dev
sudo apt-get install xpu-smi

By the way, I want to know whether this is not a method that uses the two GPUs as one bigger GPU for inference. Does it just put the model on two different GPUs separately and run inference separately?

The model is partitioned and placed on the two GPUs, so each GPU needs less memory for inference. In this way, you can treat the two GPUs as one bigger GPU.
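To make the memory point concrete, here is a rough back-of-envelope calculation; the numbers are illustrative only, and real usage also includes the KV cache, activations, and framework overhead:

```shell
# Approximate weight memory for a ~14B model at 4-bit, split across 2 GPUs.
params=14000000000        # ~14B parameters (Qwen1.5-14B-Chat)
bits=4                    # 4-bit low-bit weights
total_gb=$(( params * bits / 8 / 1000000000 ))
per_gpu_gb=$(awk "BEGIN{print $total_gb / 2}")
echo "approx weights: ${total_gb} GB total, ${per_gpu_gb} GB per GPU"
```

So each card holds only about half of the (already quantized) weights, which is why the per-GPU memory requirement drops well below what a single card would need.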