intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, GraphRAG, DeepSpeed, Axolotl, etc.
Apache License 2.0

ValueError: If `eos_token_id` is defined, make sure that `pad_token_id` is defined #12371

Closed by fanlessfan 1 week ago

fanlessfan commented 2 weeks ago

Hello,

I followed the instructions from the link below and got an error at the last step, when running demo.py. I have an Intel Core i7-13700K and am using its iGPU.

Could anyone help with this error?

Thanks.

https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/install_linux_gpu.md
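For context, this ValueError is raised by Hugging Face transformers' generation utilities when the model config defines `eos_token_id` but leaves `pad_token_id` unset. A common workaround is to pass `pad_token_id` explicitly to `generate()` (or set `tokenizer.pad_token = tokenizer.eos_token` before generating). Below is a minimal sketch assuming demo.py resembles the quickstart's generate example; the model path and prompt are placeholders, not taken from the original report.

```python
# Minimal sketch of the common transformers workaround, assuming a demo.py
# similar to the ipex-llm quickstart. Model path and prompt are hypothetical.
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder model path
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Load the model with ipex-llm's 4-bit optimization and move it to the iGPU.
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model = model.to("xpu")

input_ids = tokenizer.encode("What is AI?", return_tensors="pt").to("xpu")
with torch.inference_mode():
    # Passing pad_token_id explicitly avoids the ValueError when the model
    # config defines eos_token_id but no pad_token_id.
    output = model.generate(input_ids,
                            max_new_tokens=32,
                            pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```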

fanlessfan commented 1 week ago

Not resolved, but Ollama works.