Meituan-AutoML / MobileVLM

Strong and Open Vision Language Assistant for Mobile Devices
Apache License 2.0

Llama.cpp usage does not work with the latest v2-1.7B model #26

Closed: l3utterfly closed this issue 8 months ago

l3utterfly commented 9 months ago

Same error as in this issue: https://github.com/Meituan-AutoML/MobileVLM/issues/14

I followed the exact steps from the llama.cpp MobileVLM guide: https://github.com/ggerganov/llama.cpp/blob/master/examples/llava/MobileVLM-README.md

Those exact steps work for MobileVLM_1.7B, but not for the v2-1.7B model.
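
For reference, the conversion recipe from that README is roughly the following sketch (paths are placeholders for my local checkouts, and the exact script names and flags may differ between llama.cpp revisions):

```sh
# Split the multimodal projector out of the MobileVLM checkpoint
python ./examples/llava/llava-surgery.py -m path/to/MobileVLM-1.7B

# Convert the CLIP image encoder + projector to GGUF
# (MobileVLM v1 uses the LDP projector type)
python ./examples/llava/convert-image-encoder-to-gguf.py \
    -m path/to/clip-vit-large-patch14-336 \
    --llava-projector path/to/MobileVLM-1.7B/llava.projector \
    --output-dir path/to/MobileVLM-1.7B \
    --projector-type ldp

# Convert the language model itself to GGUF
python ./convert.py path/to/MobileVLM-1.7B
```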

l3utterfly commented 9 months ago

I also tested with MobileVLM_1.4B-Chat; the conversion process above fails with the same error.

YangYang-DLUT commented 8 months ago

The latest version of llama.cpp will be released soon. I will inform you once it is ready. 😸

YangYang-DLUT commented 8 months ago

The latest version of the customized llama.cpp is here: https://github.com/XiaotaoChen/llama.cpp/tree/MobileVLM-PEG. When converting the projector for MobileVLM v2, set --projector-type to peg. If you run into any problems while using it, please let us know.
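
For example, a minimal sketch of the v2 projector conversion on that branch (assuming the conversion scripts are laid out as in upstream llama.cpp; all paths are placeholders):

```sh
# Use the customized branch with PEG projector support
git clone -b MobileVLM-PEG https://github.com/XiaotaoChen/llama.cpp.git
cd llama.cpp

# Convert the image encoder + projector with the PEG projector type for MobileVLM v2
python ./examples/llava/convert-image-encoder-to-gguf.py \
    -m path/to/clip-vit-large-patch14-336 \
    --llava-projector path/to/MobileVLM_V2-1.7B/llava.projector \
    --output-dir path/to/MobileVLM_V2-1.7B \
    --projector-type peg
```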