-
Does MiniCPM-V 2.6 currently support int8/fp8 quantization?
thanks~
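Not an official answer, but for int8 a common workaround is weight-only 8-bit loading through bitsandbytes. The sketch below assumes `transformers` and `bitsandbytes` are installed and that `openbmb/MiniCPM-V-2_6` is the intended checkpoint; fp8 would need a different path (e.g. a serving engine with fp8 support) and is not shown.

```python
# Hedged sketch: load MiniCPM-V 2.6 with 8-bit (int8) weights via bitsandbytes.
# The model id and whether every submodule quantizes cleanly are assumptions.
import torch
from transformers import AutoModel, AutoTokenizer, BitsAndBytesConfig

model_id = "openbmb/MiniCPM-V-2_6"  # assumed checkpoint name

bnb_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    trust_remote_code=True,
    quantization_config=bnb_config,
    device_map="auto",
)
model.eval()
```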
-
```shell
CUDA_VISIBLE_DEVICES=6,7 torchrun --nproc_per_node 2 \
-m FlagEmbedding.llm_reranker.finetune_for_layerwise.run \
--output_dir ./results/reranker/bge-reranker-v2-minicpm-layerwise \
--model_name_or…
```
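The command above is truncated, but once such a run finishes, a minimal sketch of scoring with the resulting layerwise reranker via FlagEmbedding is shown below. The checkpoint path and the `cutoff_layers` value are assumptions taken from the command and common usage, not verified defaults.

```python
# Hedged sketch: score query/passage pairs with a layerwise LLM reranker
# produced by the fine-tuning command above. Path and cutoff_layers are
# assumptions; adjust them to your actual run.
from FlagEmbedding import LayerWiseFlagLLMReranker

reranker = LayerWiseFlagLLMReranker(
    "./results/reranker/bge-reranker-v2-minicpm-layerwise",  # assumed output dir
    use_fp16=True,
)

pairs = [
    ["what is a panda?", "The giant panda is a bear species endemic to China."],
    ["what is a panda?", "Paris is the capital of France."],
]

# Layerwise rerankers can emit scores from intermediate layers; 28 is a
# commonly used cutoff for the minicpm-layerwise model (assumption).
scores = reranker.compute_score(pairs, cutoff_layers=[28])
print(scores)
```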
-
## Title: MiniCPM-V: A GPT-4V-Level Multimodal Large Language Model That Runs on Your Phone
## Link: https://arxiv.org/abs/2408.01800
## Abstract:
The recent rapid progress of multimodal large language models (MLLMs) has fundamentally reshaped the landscape of AI research and industry, paving the way toward breakthroughs in next-generation AI. However, MLLM…
-
### Is there an existing issue / discussion for this?
- [X] I have searched the existing issues / discussions
### Is this question already answered in the FAQ? …
-
When training MiniCPM-Llama3-V 2.5, if I fine-tune only the vision model with the settings below, without full fine-tuning or LoRA fine-tuning of the LLM, how should the resulting weights be used?
--tune_vision true
--tune_llm false
--use_lora false
Loading the model directly raises the following error:
`AttributeError: 'MiniCPMVTokenizerFast' object has …
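Not an official answer, but below is a minimal sketch of how such a checkpoint is typically loaded with `trust_remote_code`. The paths are placeholders, and falling back to the base repo's tokenizer is only a guess at working around the truncated `AttributeError` above, not a confirmed fix.

```python
# Hedged sketch: load a MiniCPM-Llama3-V 2.5 checkpoint produced with
# --tune_vision true / --tune_llm false / --use_lora false.
# Paths are placeholders; the tokenizer fallback is an unverified guess.
import torch
from transformers import AutoModel, AutoTokenizer

ckpt_dir = "./output/minicpmv25_vision_only"  # hypothetical fine-tune output dir
base_id = "openbmb/MiniCPM-Llama3-V-2_5"      # original base checkpoint

model = AutoModel.from_pretrained(
    ckpt_dir,
    trust_remote_code=True,
    torch_dtype=torch.float16,
).eval().cuda()

# If the fine-tuned dir's tokenizer files trigger errors, try the base tokenizer.
try:
    tokenizer = AutoTokenizer.from_pretrained(ckpt_dir, trust_remote_code=True)
except Exception:
    tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
```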
-
### Is there an existing issue / discussion for this?
- [X] I have searched the existing issues / discussions
### Is this question already answered in the FAQ? …
-
Whereas /v1/chat/completions succeeds, /v1/embeddings returns a 404 for a similar request body.
I was hoping to get the embedding output vector for an image that uses the openbmb/MiniCPM-V-2…
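For what it's worth, a 404 on /v1/embeddings often means the server was launched for the generate task rather than an embedding task. Below is a minimal sketch of starting vLLM with an embedding task and querying the endpoint; the exact flag name (`--task embed` vs. `--task embedding`) depends on your vLLM version, and whether MiniCPM-V image embeddings are exposed through this endpoint at all is an assumption to check against your release's supported-models list.

```python
# Hedged sketch: query vLLM's /v1/embeddings endpoint.
# Assumes the server was launched with an embedding task, e.g. (version-dependent):
#   vllm serve openbmb/MiniCPM-V-2_6 --task embed --trust-remote-code
# Image inputs may need extra, version-specific support; this only sends text.
import requests

resp = requests.post(
    "http://localhost:8000/v1/embeddings",
    json={
        "model": "openbmb/MiniCPM-V-2_6",
        "input": "a short text to embed",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["data"][0]["embedding"][:8])
```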
-
### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how …
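If the goal is plain offline inference, a minimal sketch with vLLM's Python API is below. The model id is a placeholder; multimodal models may additionally require `trust_remote_code` and image inputs passed through the chat interface, which this sketch does not cover.

```python
# Hedged sketch: basic offline text generation with vLLM's Python API.
# The model id is a placeholder; tune max_tokens / parallelism to your hardware.
from vllm import LLM, SamplingParams

llm = LLM(model="openbmb/MiniCPM-V-2_6", trust_remote_code=True)
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Describe what a multimodal LLM is."], params)
for out in outputs:
    print(out.outputs[0].text)
```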
-
Hi, given that RKNN-LLM now supports embeddings, I tried several different models, but none of them works for embeddings.
Can you point me to a model that supports embeddings and is compatible with RKLLM?
…
-
VisRAG-Gen (MiniCPM-V-2_6) load success!
Enter your query: How should I fasten a seatbelt while pregnant?
Enter the number of documents to retrieve: 500
Special tokens have been added in the vocabulary, make sure the associated word e…