-
Thank you for open-sourcing the data and code for UReader. I used `scripts/train_it_v100.sh` to train UReader, but I was unable to reproduce the benchmark results.
Pretrained checkpoint: [MAGA…
-
Hello, the paper says "Our dataset, code, model, and evaluation set are available at https://github.com/X-PLUG/Youku-mPLUG", but I have searched for a long time and cannot find them. Have they not been open-sourced yet?
-
Can I run finetune_lora on 4×A100-40G?
Which parameters should I set?
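A back-of-envelope memory estimate can help answer whether LoRA finetuning of a 7B model fits on 4×A100-40G. The sketch below is not taken from the mPLUG-Owl codebase; the fractions and byte counts are rough assumptions (fp16 frozen base weights, Adam on the LoRA parameters only, plain data parallelism with a full replica per GPU):

```python
def lora_memory_estimate_gb(n_params=7e9, lora_frac=0.01,
                            weight_bytes=2, optim_bytes=16,
                            activation_gb=8.0):
    """Rough per-GPU memory (GB) for LoRA finetuning with data parallelism.

    All inputs are assumptions, not measured values:
    n_params      -- base model parameters (7B assumed)
    lora_frac     -- trainable LoRA params as a fraction of the base model
    weight_bytes  -- 2 bytes/param for frozen fp16/bf16 base weights
    optim_bytes   -- ~16 bytes per trainable param (fp32 master copy,
                     Adam m and v states, plus the gradient)
    activation_gb -- activations/workspace; varies with batch size and
                     sequence length
    """
    frozen = n_params * weight_bytes / 1e9                 # frozen base weights
    trainable = n_params * lora_frac * optim_bytes / 1e9   # LoRA + optimizer
    return {"frozen_gb": frozen,
            "trainable_gb": trainable,
            "total_gb": frozen + trainable + activation_gb}

est = lora_memory_estimate_gb()
print(est)  # frozen base ≈ 14 GB; total comfortably under 40 GB
```

Under these assumptions the frozen fp16 weights dominate (~14 GB), and the LoRA parameters plus optimizer states add only about 1 GB, so a 7B LoRA run should fit on 40 GB cards unless the batch size or sequence length drives activations much higher.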
-
I am getting this error:
ValueError: Attempted to load model 'llava_hf', but no model for this name found! Supported model names: llava, qwen_vl, fuyu, batch_gpt4, gpt4v, instructblip, minicpm_v, c…
-
-
### Question
I hit this error when finetuning the model; my environment was set up following the official configuration.
```text
Loading checkpoint shards: 100%|██████████| 33/33 [02:49
-
When will the mPLUG-2 model be released?
-
What are the detailed differences between Instruction tuning (LoRA) and Instruction tuning (FT)?
If I want to finetune from your checkpoint with LoRA, which one should I use? [mplug-owl-llama-7b…
-
## Title: mPLUG-DocOwl2: High-resolution Compressing for OCR-free Multi-page Document Understanding
## Link: https://arxiv.org/abs/2409.03420
## Abstract:
Multimodal Large Language Models (MLLMs) have achieved promising performance in OCR-free document understanding by increasing the supported resolution of document images. However, for a single document…
-
To save GPU memory, I want to load the multilingual model in 4-bit mode; the code is as follows.
```python
import torch
from transformers import AutoTokenizer
from mplug_owl.modeling_mplug_owl impo…