-
### Prerequisites
- [X] I am running the latest code. Mention the version if possible as well.
- [X] I carefully followed the [README.md](https://github.com/ggerganov/llama.cpp/blob/master/README.md)…
-
# Feature Description
Phi is already supported; it would be great to have this Mistral-level 2B model convertible to GGUF as well.
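Once the architecture is supported, the conversion would presumably follow llama.cpp's usual flow. A minimal sketch, assuming a locally downloaded checkpoint; the converter script name and quantize binary vary across llama.cpp versions, and the `./MiniCPM-2B-sft` path is a placeholder:

```shell
# Hypothetical sketch of the standard llama.cpp conversion flow.
# Script names and flags differ between llama.cpp versions; this only
# works once the model architecture is supported by the converter.
python convert-hf-to-gguf.py ./MiniCPM-2B-sft --outfile minicpm-2b-f16.gguf
./quantize minicpm-2b-f16.gguf minicpm-2b-q4_k_m.gguf q4_k_m
```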
# Motivation
A SOTA 2B model and a piece of art; read how they made it:
https://she…
-
When using code like this:
```python
from FlagEmbedding import LayerWiseFlagLLMReranker
reranker = LayerWiseFlagLLMReranker('/path/bge-reranker-v2-minicpm-layerwise', use_fp16=True)
score = rerank…
```
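The rerank flow itself can be sketched without downloading the model, using a stand-in scorer so the control flow is testable. In FlagEmbedding the analogous call is `reranker.compute_score([[query, passage], ...])`; the toy `overlap_score` below is purely illustrative, not the model's scoring:

```python
def overlap_score(query: str, passage: str) -> float:
    """Toy relevance score: fraction of query tokens found in the passage.
    Stand-in for the real reranker's compute_score."""
    q = set(query.lower().split())
    p = set(passage.lower().split())
    return len(q & p) / len(q) if q else 0.0

def rerank(query: str, passages: list[str]) -> list[tuple[float, str]]:
    """Score every passage against the query and sort best-first."""
    scored = [(overlap_score(query, p), p) for p in passages]
    return sorted(scored, key=lambda t: t[0], reverse=True)

if __name__ == "__main__":
    hits = rerank("what is panda",
                  ["The giant panda is a bear species endemic to China.",
                   "Paris is the capital of France."])
    print(hits[0][1])  # the panda passage ranks first
```

With the real layerwise reranker, only the scoring call changes; the sort-by-score step stays the same.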
-
I am a member of the OpenBMB lab, and I would like to add ChatRTX support for openbmb/minicpm, which is one of the best mini models in China. How can I do this?
-
Dear author, hello. I am a staff member at OpenBMB, and I would like to expand LLM support for our open-source community models. However, I am unsure how to proceed; could you please advise?
-
```
CUDA_VISIBLE_DEVICES=2 python run.py --data MMMU_TEST --model MiniCPM-Llama3-V-2_5 --verbose
Did not detect the .env file at /data/mm/VLMEvalKit/.env, failed to load.
Did not detect the .env fi…
```
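The `.env` warning is about VLMEvalKit looking for API keys at the repo root; it is harmless unless API-based judging is needed. What a dotenv loader does can be sketched in a few lines; the `OPENAI_API_KEY` name below is an assumption about what the eval kit expects, so check its docs for the real key list:

```python
import os
import tempfile

def load_env(path: str) -> dict[str, str]:
    """Minimal dotenv-style loader: read KEY=VALUE lines, skip comments,
    and export each pair without overwriting existing variables."""
    loaded = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            loaded[key.strip()] = value.strip()
            os.environ.setdefault(key.strip(), value.strip())
    return loaded

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as fh:
        fh.write("# comment\nOPENAI_API_KEY=sk-test\n")
        path = fh.name
    print(load_env(path))  # {'OPENAI_API_KEY': 'sk-test'}
```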
-
## ⚙️ Request New Models
- Link to an existing implementation (e.g. Hugging Face/Github):
- Is this model architecture supported by MLC-LLM? (the list of [supported models](https://llm.mlc.ai/do…
-
I see that single-image inference with the model is supported. Is batch inference supported as well?
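Absent native batching, the single-image API can at least be looped over; a minimal sketch, where `model.chat(image, question)` is an assumed signature modeled on MiniCPM-V-style demos, with a stub model so the wrapper is testable:

```python
class StubModel:
    """Stand-in for the real model so the batching wrapper is testable."""
    def chat(self, image, question):
        return f"answer for {image}"

def batch_chat(model, images, question):
    """Run the single-image API once per image and collect the answers."""
    return [model.chat(img, question) for img in images]

if __name__ == "__main__":
    results = batch_chat(StubModel(), ["a.jpg", "b.jpg"], "describe this")
    print(results)  # ['answer for a.jpg', 'answer for b.jpg']
```

Note this is sequential looping, not true batched inference; padding multiple images into one forward pass needs support on the model side.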
-
When I run the internlm-xcomposer2-4khd-7b model with `python examples/gradio_demo_chat.py --code_path=/mnt/tenant-home_speed/model/internlm-xcomposer2-4khd-7b/ --port 7804`, I get the following error: TypeError: Accordion.__init__() …
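A `TypeError` from `Accordion.__init__()` usually means the installed gradio version no longer accepts a keyword the demo passes; pinning a gradio version known to work with the demo is often the simplest fix. A generic mitigation can also be sketched: filter kwargs against the installed signature before calling. The `Widget` class below is a stand-in for gradio's `Accordion`, not its real API:

```python
import inspect

class Widget:
    """Stand-in for a library class whose newer versions dropped a kwarg."""
    def __init__(self, label=None):
        self.label = label

def filtered_kwargs(func, **kwargs):
    """Keep only the kwargs that appear in func's signature."""
    params = inspect.signature(func).parameters
    return {k: v for k, v in kwargs.items() if k in params}

if __name__ == "__main__":
    # 'open' is rejected by this Widget version, so it gets filtered out.
    w = Widget(**filtered_kwargs(Widget.__init__, label="Advanced", open=True))
    print(w.label)  # Advanced
```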
-
### What is the issue?
I created an Ollama model (for the fp16 GGUF) based on this: https://github.com/OpenBMB/ollama/tree/minicpm-v2.5/examples/minicpm-v2.5
When testing one of my sample forms …