-
oneAPI 2024.0, Ubuntu 22.04, A770
```python
# Model download
from modelscope import snapshot_download
model_dir = snapshot_download('OpenBMB/MiniCPM-Llama3-V-2_5')

import torch
from PIL import Image…
```
-
### Model description
MiniCPM-V is a series of vision-language models from OpenBMB.
We would like to add support for MiniCPM-V-2 and later models.
### Open source status
- [x] The model implementation is av…
-
### Describe the bug
Due to network restrictions, I cannot use Xinference to pull models online. I downloaded the model weights of cogvlm2-llama3-chinese-chat-19B to my local machine, and then used …
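Not an answer from the thread, but for offline setups like this it often helps to sanity-check that the locally downloaded checkpoint directory is complete before pointing any loader at it. A minimal sketch below; the helper name and the required-file check are assumptions for illustration, not part of Xinference's API:

```python
import json
import tempfile
from pathlib import Path

def check_local_checkpoint(path: str) -> Path:
    """Hypothetical helper (not part of Xinference): verify that a
    locally downloaded HF-style checkpoint directory at least contains
    a config.json before handing it to a loader."""
    model_dir = Path(path)
    if not model_dir.is_dir():
        raise FileNotFoundError(f"not a directory: {model_dir}")
    if not (model_dir / "config.json").is_file():
        raise FileNotFoundError(f"config.json missing in {model_dir}")
    return model_dir

# Demo with a throwaway directory standing in for the real weights.
with tempfile.TemporaryDirectory() as tmp:
    (Path(tmp) / "config.json").write_text(json.dumps({"model_type": "cogvlm2"}))
    print(check_local_checkpoint(tmp))
```

A check like this distinguishes "the download is incomplete" from "the tool cannot read a valid local path", which are easy to conflate when working behind a restricted network.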
-
Hi,
I'm trying to constrain the generation of my VLMs using this repo; however, I can't figure out how to customize the pipeline to handle the inputs (query + image). Whereas it is documented as …
-
Hello, I am trying to find the training code, but it seems there is only inference code.
Could you please point me to the training code?
-
### Is there an existing issue / discussion for this?
- [X] I have searched the existing issues / discussions
### Is there an existing ans…
-
### Is there an existing issue / discussion for this?
- [X] I have searched the existing issues / discussions
### Is there an existing ans…
-
Can you use the Grok API instead of OpenAI?
kaly9 updated 3 months ago
-
Thank you for your efforts on this project! I'm excited to get it running properly.
The LLM text generation seems to work fine, but when I try to use the vision tab, I get the error below. Once this…
CCpt5 updated 2 weeks ago
-
When I trained llava-llama3 using your code, the log printed the tokenization mismatch below.
How can I fix it?
Thanks!
WARNING: tokenization mismatch: 55 vs. 54. (ignored)
WARNING: tokenization m…
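This warning generally means the full conversation, tokenized as one string, yields a different token count than the sum of the per-turn tokenizations used to build the label mask, so the labels end up off by one. A toy illustration of the mechanism with a hypothetical tokenizer (not LLaVA's real preprocessing): a leading space on an isolated piece becomes its own token but vanishes inside the longer string, analogous to SentencePiece's leading-space handling.

```python
def toy_tokenize(text: str) -> list[str]:
    """Hypothetical tokenizer: a leading space on an isolated piece
    becomes its own token, but disappears when the piece sits inside
    a longer string."""
    tokens = []
    if text.startswith(" "):
        tokens.append("<sp>")
        text = text.lstrip()
    tokens.extend(text.split())
    return tokens

prompt = "USER: hi"
answer = " ASSISTANT: hello"

full_len = len(toy_tokenize(prompt + answer))                       # 4 tokens
piece_len = len(toy_tokenize(prompt)) + len(toy_tokenize(answer))   # 5 tokens

if piece_len != full_len:
    print(f"WARNING: tokenization mismatch: {piece_len} vs. {full_len}")
```

A common remedy in conversation-template preprocessing is to tokenize every turn with the same boundary conventions (leading spaces, BOS/EOS) as the concatenated string, or to derive the per-turn offsets from the single full tokenization instead of re-tokenizing each piece.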