I'm using a model fine-tuned from Qwen2 (Qwen1.5).
When I use BigDL to load the model and call the generate method, Python raises an error.
I'm running on an Intel(R) Data Center GPU Flex 170.
The model is loaded via:

```python
AutoModelForCausalLM.from_pretrained(model_name_or_path, load_in_4bit=True, optimize_model=True, trust_remote_code=True, use_cache=True)
```
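For context, here is a minimal end-to-end sketch of the reproduction, assuming BigDL-LLM's transformers-style API, a hypothetical local checkpoint path, and that the model and inputs are moved to the XPU device:

```python
# Minimal reproduction sketch (assumptions: bigdl-llm transformers API,
# a local fine-tuned Qwen1.5/Qwen2 checkpoint, and an XPU device such as Flex 170)
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401  # registers the 'xpu' device
from transformers import AutoTokenizer
from bigdl.llm.transformers import AutoModelForCausalLM

model_name_or_path = "/path/to/qwen1.5-finetuned"  # hypothetical local path

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    load_in_4bit=True,
    optimize_model=True,
    trust_remote_code=True,
    use_cache=True,
)
model = model.to("xpu")  # run on the Intel GPU

prompt = "Hello, who are you?"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("xpu")
with torch.inference_mode():
    # the error is raised inside generate()
    output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```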
The following is the error message: