intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, or discrete GPU such as Arc, Flex, and Max); seamlessly integrates with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, GraphRAG, DeepSpeed, Axolotl, etc.
Apache License 2.0

MiniCPM-V-2 output error #11611

Open · aitss2017 opened this issue 3 months ago

aitss2017 commented 3 months ago

[Screenshot of the MiniCPM-V-2 output error]

ipex-llm: 2.1.0b20240714
transformers: 4.41.2
Driver: 32.0.101.5762
OS: Win11 23H2
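For reproducibility, the package versions above can be confirmed programmatically (a minimal sketch; `ipex-llm` and `transformers` are the distribution names as published on PyPI, while the driver and OS versions come from Windows settings):

```python
# Print the installed versions of the packages relevant to this report.
import importlib.metadata as md

for pkg in ("ipex-llm", "transformers"):
    print(f"{pkg}: {md.version(pkg)}")
```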

jenniew commented 2 months ago

Please check this example and try again: https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/HuggingFace/Multimodal/MiniCPM-V-2
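Condensed, that example does roughly the following (a sketch rather than a verbatim copy; `model_path` and `image_path` are placeholders, the exact low-bit format may differ from the repo version, and `model.chat()` is defined by MiniCPM-V-2's remote code loaded via `trust_remote_code=True`):

```python
# Condensed sketch of the linked ipex-llm MiniCPM-V-2 GPU example.
import torch
from PIL import Image
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModel

model_path = "openbmb/MiniCPM-V-2"  # placeholder: HF id or local checkpoint dir
image_path = "demo.jpg"             # placeholder: image to describe

# Load with ipex-llm low-bit optimization, keeping the vision tower ("vpm")
# and resampler modules unconverted, as the example does.
model = AutoModel.from_pretrained(model_path,
                                  trust_remote_code=True,
                                  load_in_low_bit="asym_int4",
                                  optimize_model=True,
                                  use_cache=True,
                                  modules_to_not_convert=["vpm", "resampler"])
model = model.half().to("xpu")  # run in fp16 on the Intel GPU
model.eval()

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

image = Image.open(image_path).convert("RGB")
msgs = [{"role": "user", "content": "What is in this image?"}]

with torch.inference_mode():
    # chat() comes from the model's remote code (trust_remote_code=True).
    res, context, _ = model.chat(image=image,
                                 msgs=msgs,
                                 context=None,
                                 tokenizer=tokenizer,
                                 sampling=False,
                                 temperature=0.7)
print(res)
```

If the output is still garbled with this exact setup, please share the full script and console output so we can compare against the example.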