Open yaoyonghuo opened 3 months ago
I saw in the official IPEX-LLM documentation that CentOS 7 is supported, and I would like to know where the problem lies. I then tried using Docker and encountered a timeout when pulling the image. Is there any other way to pull the image?
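Pull timeouts against Docker Hub can often be worked around by configuring a registry mirror reachable from your network. A minimal sketch, assuming a systemd-based host with root access; the mirror URL and image tag below are placeholders, not values from this thread:

```shell
# Hypothetical workaround for "docker pull" timeouts: point the daemon at
# a registry mirror. The mirror URL here is a placeholder -- substitute
# one that is reachable from your network.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://your-mirror.example.com"]
}
EOF

# Restart the daemon so the mirror configuration takes effect,
# then retry the pull (example image name; use the one from the docs).
sudo systemctl restart docker
docker pull intelanalytics/ipex-llm-serving-xpu:latest
```

If no mirror is available, another common option is to pull the image on a machine with good connectivity, `docker save` it to a tarball, and `docker load` it on the target host.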
Thank you for your answer. I have successfully pulled the image, and I found that vllm is unable to quality the model. I wonder if there are any other methods to implement it?
I'm glad to hear you successfully pulled the image. I noticed you mentioned that vllm is unable to "quality the model." Could you please clarify what you mean by "quality the model"? Additionally, could you provide the steps you took when encountering this issue? This information will help me understand the problem better and offer more accurate assistance.
The model I am using is chatglm4, and my device can only run quantized large language models, such as INT4.
Could you provide the command to reproduce the issue?
Can ipex-llm-0.43.1 run on CentOS 7.9? I encountered a warning, "Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.", followed by the error "Failed to load the llama dynamic library."
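A failure to load a prebuilt native library on CentOS 7 is often a toolchain-age problem: CentOS 7 ships glibc 2.17 and an old libstdc++, while prebuilt binaries may require newer symbols. A first diagnostic step (illustrative only, not an official ipex-llm fix; the library path in the comment is hypothetical) is to confirm what the system provides:

```shell
# Print the system glibc version; CentOS 7 ships 2.17, which is older
# than what many prebuilt Python-wheel native libraries require.
ldd --version | head -n1

# If the error names a specific .so, running ldd on it shows which
# shared-library dependencies or symbol versions are unresolved, e.g.:
#   ldd /path/to/libllama.so
```

If the glibc or GLIBCXX version is too old, the usual options are running inside the Docker image (which bundles a newer userland) or moving to a newer distribution.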