# InternVL Family: Closing the Gap to Commercial Multimodal Models with Open-Source Suites - A Pioneering Open-Source Alternative to GPT-4o
[\[Blog\]](https://internvl.github.io/blog/) [\[FAQs\]](https://internvl.readthedocs.io/en/latest/tutorials/faqs.html) [\[InternVL2 Blog\]](https://internvl.github.io/blog/2024-07-02-InternVL-2.0/) [\[Chat Demo\]](https://internvl.opengvlab.com/) [\[HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[Document\]](https://internvl.readthedocs.io/en/latest/) [\[API\]](https://internvl.readthedocs.io/en/latest/get_started/internvl_chat_api.html) [\[Quick Start\]](#quick-start-with-huggingface)
[\[Mini-InternVL Report\]](https://arxiv.org/abs/2410.16261) [\[InternVL 1.5 Report\]](https://arxiv.org/abs/2404.16821) [\[InternVL 1.0 Paper\]](https://arxiv.org/abs/2312.14238)
[\[2.0 Chinese Interpretation\]](https://zhuanlan.zhihu.com/p/706547971) [\[1.5 Chinese Interpretation\]](https://zhuanlan.zhihu.com/p/699439759) [\[1.0 Chinese Interpretation\]](https://zhuanlan.zhihu.com/p/702946079)
[Switch to the Chinese version](/README_zh.md)
![opencompass](https://github.com/user-attachments/assets/7ce93c05-84ae-4997-a480-53897d1d3a1c)
## News
- `2024/11/14`: We introduce MMPR, a high-quality, large-scale multimodal reasoning preference dataset, and MPO, an effective preference optimization algorithm. The resulting model, InternVL2-8B-MPO, achieves an accuracy of 67.0 on MathVista. Please refer to our paper, project page, and documentation for more details.
- `2024/10/21`: We release the Mini-InternVL series. These models deliver strong performance at minimal size: the 4B model achieves 90% of the performance with just 5% of the model size. For more details, please check our project page and documentation.
- `2024/08/01`: The ChartMimic team evaluated the InternVL2 series on their benchmark. InternVL2-26B and InternVL2-76B achieved the top two results among open-source models, with InternVL2-76B surpassing GeminiProVision and showing results comparable to Claude-3-Opus.
- `2024/08/01`: InternVL2-Pro achieved SOTA performance among open-source models on the CharXiv dataset, surpassing several closed-source models such as GPT-4V, Gemini 1.5 Flash, and Claude 3 Sonnet.
- `2024/07/24`: The MLVU team evaluated InternVL-1.5 on their benchmark: it scored 50.4% on the multiple-choice task and 4.02 on the generative tasks, ranking #1 on the multiple-choice task among all open-source MLLMs.
- `2024/07/18`: InternVL2-40B achieved SOTA performance among open-source models on the Video-MME dataset, scoring 61.2 with 16 input frames and 64.4 with 32 input frames. It significantly outperforms other open-source models and is the closest open-source model to GPT-4o mini.
- `2024/07/18`: InternVL2-Pro achieved SOTA performance on the DocVQA and InfoVQA benchmarks.
- `2024/07/04`: We release the InternVL2 series. InternVL2-Pro achieved 62.0% accuracy on the MMMU benchmark, matching the performance of leading closed-source commercial models like GPT-4o. Free API access to this model can be obtained by submitting the application form. The other models are available at the HF link.
- `2024/06/19`: We propose Needle In A Multimodal Haystack (MM-NIAH), the first benchmark designed to systematically evaluate the capability of existing MLLMs to comprehend long multimodal documents.
- `2024/05/30`: We release ShareGPT-4o, a large-scale dataset that we plan to open-source, containing 200K images, 10K videos, and 10K audio clips with detailed descriptions.
- `2024/05/28`: Thanks to the lmdeploy team for providing AWQ quantization support. The 4-bit model is available at OpenGVLab/InternVL-Chat-V1-5-AWQ; a loading sketch follows this list.
- `2024/05/13`: InternVL 1.0 can now be used as the text encoder for diffusion models to natively support multilingual generation in over 110 languages. See MuLan for more details.
- `2024/04/18`: InternVL-Chat-V1-5 has been released at the HF link, approaching the performance of GPT-4V and Gemini Pro on various benchmarks such as MMMU, DocVQA, ChartQA, and MathVista.
- `2024/02/27`: InternVL is accepted by CVPR 2024 (Oral)!
- `2024/02/21`: InternVL-Chat-V1-2-Plus achieved SOTA performance on MathVista (59.9), MMBench (83.8), and MMVP (58.7). See our blog for more details.
- `2024/02/12`: InternVL-Chat-V1-2 has been released. It achieves 51.6 on MMMU val and 82.3 on MMBench test. For more details, please refer to our blog and SFT data. The model is now available on HuggingFace, and both the training/evaluation data and scripts are open-sourced.
- `2024/01/24`: InternVL-Chat-V1-1 is released. It supports Chinese and has stronger OCR capability; see here.
- `2024/01/16`: We release our customized mmcv/mmsegmentation/mmdetection code, integrated with DeepSpeed, which can be used for training large-scale detection and segmentation models.
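For the AWQ release noted in the `2024/05/28` item, here is a minimal loading sketch using lmdeploy's vision-language `pipeline` with the TurboMind backend. The `model_format='awq'` option and the `load_image` helper follow lmdeploy's documented API, but verify them against the version you have installed.

```python
# Minimal sketch: running the 4-bit InternVL-Chat-V1-5-AWQ model with lmdeploy.
# Assumes a recent lmdeploy build with vision-language pipeline support.
from lmdeploy import pipeline, TurbomindEngineConfig
from lmdeploy.vl import load_image

# model_format='awq' tells the TurboMind engine to load the AWQ 4-bit weights
pipe = pipeline('OpenGVLab/InternVL-Chat-V1-5-AWQ',
                backend_config=TurbomindEngineConfig(model_format='awq'))

image = load_image('./examples/image1.jpg')
response = pipe(('Describe this image.', image))
print(response.text)
```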
## TODO List
- [x] Support liger kernels to save GPU memory
- [x] Release the code, model, and data of MPO
- [x] Support multimodal packed dataset
- [ ] Support vLLM and Ollama
- [ ] Support video and PDF input in online demo
- [ ] Release InternVL2 with VisionLLMv2 integration
- [x] Rebuild documents using readthedocs
- [x] Support fine-tuning different LLMs with LoRA
- [x] Release `requirements.txt` for InternVL2
- [x] Release training / evaluation code for InternVL2 series
- [x] Release Streamlit web UI for InternVL1.5 and InternVL2
## Documents

- Get Started
- InternVL Family

## Compared with SOTA VLLMs

## Model Zoo

### Multimodal Large Language Model (InternVL 2.0)

#### InternVL2-Pro API

We welcome everyone to use our API for research. For better management, please submit the application form to obtain free API access.
### Multimodal Large Language Model (InternVL 1.0-1.5)

| Model | Date | HF Link | MS Link | Note |
| :---- | :--- | :-----: | :-----: | :--- |
| Mini-InternVL-Chat-4B-V1-5 | 2024.05.28 | link | link | 16% of the model size, 90% of the performance |
| Mini-InternVL-Chat-2B-V1-5 | 2024.05.19 | link | link | 8% of the model size, 80% of the performance |
| InternVL-Chat-V1-5 | 2024.04.18 | link | link | supports 4K images; very strong OCR; approaches the performance of GPT-4V and Gemini Pro on benchmarks like MMMU, DocVQA, ChartQA, and MathVista |
| InternVL-Chat-V1-2-Plus | 2024.02.21 | link | link | more SFT data and stronger performance |
| InternVL-Chat-V1-2 | 2024.02.11 | link | link | scaling up the LLM to 34B |
| InternVL-Chat-19B | 2023.12.25 | link | link | English multimodal dialogue |
| InternVL-Chat-13B | 2023.12.25 | link | link | English multimodal dialogue |
### Vision Foundation Model (InternVL 1.0-1.5)

| Model | Date | HF Link | MS Link | Note |
| :---- | :--- | :-----: | :-----: | :--- |
| InternViT-300M-448px | 2024.05.25 | link | link | distilled small vision foundation model with 300M parameters (new) |
| InternViT-6B-448px-V1-5 | 2024.04.20 | link | link | supports dynamic resolution; very strong OCR feature extraction, obtained by incremental pre-training (new) |
| InternViT-6B-448px-V1-2 | 2024.02.11 | link | link | supports 448 resolution via incremental pre-training |
| InternViT-6B-448px-V1-0 | 2024.01.30 | link | link | supports 448 resolution via incremental pre-training |
| InternViT-6B-224px | 2023.12.22 | link | link | the first version of InternViT-6B, extracted from InternVL-14B-224px |
### Vision-Language Foundation Model (InternVL 1.0)

| Model | Date | HF Link | MS Link | Note |
| :---- | :--- | :-----: | :-----: | :--- |
| InternVL-14B-224px | 2023.12.22 | link | link | vision-language foundation model, InternViT-6B + QLLaMA; can be used for image-text retrieval like CLIP |
## What can InternVL do?

### Visual Perception
- Linear-Probe Image Classification [\[see details\]](./classification#-evaluation)
\*ViT-22B uses the private JFT-3B dataset.
| method | #param | IN-1K | IN-ReaL | IN-V2 | IN-A | IN-R | IN-Sketch |
| ------------------- | :----: | :---: | :-----: | :---: | :--: | :--: | :-------: |
| OpenCLIP-G | 1.8B | 86.2 | 89.4 | 77.2 | 63.8 | 87.8 | 66.4 |
| DINOv2-g | 1.1B | 86.5 | 89.6 | 78.4 | 75.9 | 78.8 | 62.5 |
| EVA-01-CLIP-g | 1.1B | 86.5 | 89.3 | 77.4 | 70.5 | 87.7 | 63.1 |
| MAWS-ViT-6.5B | 6.5B | 87.8 | - | - | - | - | - |
| ViT-22B\* | 21.7B | 89.5 | 90.9 | 83.2 | 83.8 | 87.4 | - |
| InternViT-6B (ours) | 5.9B | 88.2 | 90.4 | 79.9 | 77.5 | 89.8 | 69.1 |
- Semantic Segmentation [\[see details\]](./segmentation#-evaluation)
| method | decoder | #param (train/total) | crop size | mIoU |
| --------------------- | :-----: | :------------------: | :-------: | ------------ |
| OpenCLIP-G (frozen) | Linear | 0.3M / 1.8B | 512 | 39.3 |
| ViT-22B (frozen) | Linear | 0.9M / 21.7B | 504 | 34.6 |
| InternViT-6B (frozen) | Linear | 0.5M / 5.9B | 504 | 47.2 (+12.6) |
| ViT-22B (frozen) | UperNet | 0.8B / 22.5B | 504 | 52.7 |
| InternViT-6B (frozen) | UperNet | 0.4B / 6.3B | 504 | 54.9 (+2.2) |
| ViT-22B | UperNet | 22.5B / 22.5B | 504 | 55.3 |
| InternViT-6B | UperNet | 6.3B / 6.3B | 504 | 58.9 (+3.6) |
- Zero-Shot Image Classification [\[see details\]](./clip_benchmark#imagenet-variants-and-objectnet)
| method | IN-1K | IN-A | IN-R | IN-V2 | IN-Sketch | ObjectNet |
| ----------------- | :---: | :--: | :--: | :---: | :-------: | :-------: |
| OpenCLIP-G | 80.1 | 69.3 | 92.1 | 73.6 | 68.9 | 73.0 |
| EVA-02-CLIP-E+ | 82.0 | 82.1 | 94.5 | 75.7 | 71.6 | 79.6 |
| ViT-22B\* | 85.9 | 90.1 | 96.0 | 80.9 | - | 87.6 |
| InternVL-C (ours) | 83.2 | 83.8 | 95.5 | 77.3 | 73.9 | 80.6 |
- Multilingual Zero-Shot Image Classification [\[see details\]](./clip_benchmark#multilingual-imagenet-1k)
EN: English, ZH: Chinese, JP: Japanese, AR: Arabic, IT: Italian
| method | IN-1K (EN) | IN-1K (ZH) | IN-1K (JP) | IN-1K (AR) | IN-1K (IT) |
| ----------------- | :--------: | :--------: | :--------: | :--------: | :--------: |
| Taiyi-CLIP-ViT-H | - | 54.4 | - | - | - |
| WuKong-ViT-L-G | - | 57.5 | - | - | - |
| CN-CLIP-ViT-H | - | 59.6 | - | - | - |
| AltCLIP-ViT-L | 74.5 | 59.6 | - | - | - |
| EVA-02-CLIP-E+ | 82.0 | - | - | - | 41.2 |
| OpenCLIP-XLM-R-H | 77.0 | 55.7 | 53.1 | 37.0 | 56.8 |
| InternVL-C (ours) | 83.2 | 64.5 | 61.5 | 44.9 | 65.7 |
- Zero-Shot Video Classification
| method | #frame | K400 | K600 | K700 |
| ----------------- | :----: | :--: | :--: | :--: |
| OpenCLIP-G | 1 | 65.9 | 66.1 | 59.2 |
| EVA-02-CLIP-E+ | 1 | 69.8 | 69.3 | 63.4 |
| InternVL-C (ours) | 1 | 71.0 | 71.3 | 65.7 |
| ViCLIP | 8 | 75.7 | 73.5 | 66.4 |
| InternVL-C (ours) | 8 | 79.4 | 78.8 | 71.5 |
### Cross-Modal Retrieval
- English Zero-Shot Image-Text Retrieval [\[see details\]](./clip_benchmark#flickr30k--coco)
| model | Flickr30K image-to-text (R@1 / R@5 / R@10) | Flickr30K text-to-image (R@1 / R@5 / R@10) | COCO image-to-text (R@1 / R@5 / R@10) | COCO text-to-image (R@1 / R@5 / R@10) | avg |
| ----------------- | :--: | :--: | :--: | :--: | :--: |
| OpenCLIP-G | 92.9 / 99.3 / 99.8 | 79.5 / 95.0 / 97.1 | 67.3 / 86.9 / 92.6 | 51.4 / 74.9 / 83.0 | 85.0 |
| EVA-02-CLIP-E+ | 93.9 / 99.4 / 99.8 | 78.8 / 94.2 / 96.8 | 68.8 / 87.8 / 92.8 | 51.1 / 75.0 / 82.7 | 85.1 |
| EVA-CLIP-8B | 95.6 / 99.6 / 99.9 | 80.8 / 95.5 / 97.6 | 70.3 / 89.3 / 93.9 | 53.0 / 76.0 / 83.4 | 86.2 |
| InternVL-C (ours) | 94.7 / 99.6 / 99.9 | 81.7 / 96.0 / 98.2 | 70.6 / 89.0 / 93.5 | 54.1 / 77.3 / 84.6 | 86.6 |
| InternVL-G (ours) | 95.7 / 99.7 / 99.9 | 85.0 / 97.0 / 98.6 | 74.9 / 91.3 / 95.2 | 58.6 / 81.3 / 88.0 | 88.8 |
- Chinese Zero-Shot Image-Text Retrieval [\[see details\]](./clip_benchmark#flickr30k-cn--coco-cn)
| model | Flickr30K-CN image-to-text (R@1 / R@5 / R@10) | Flickr30K-CN text-to-image (R@1 / R@5 / R@10) | COCO-CN image-to-text (R@1 / R@5 / R@10) | COCO-CN text-to-image (R@1 / R@5 / R@10) | avg |
| ----------------- | :--: | :--: | :--: | :--: | :--: |
| CN-CLIP-ViT-H | 81.6 / 97.5 / 98.8 | 71.2 / 91.4 / 95.5 | 63.0 / 86.6 / 92.9 | 69.2 / 89.9 / 96.1 | 86.1 |
| OpenCLIP-XLM-R-H | 86.1 / 97.5 / 99.2 | 71.0 / 90.5 / 94.9 | 70.0 / 91.5 / 97.0 | 66.1 / 90.8 / 96.0 | 87.6 |
| InternVL-C (ours) | 90.3 / 98.8 / 99.7 | 75.1 / 92.9 / 96.4 | 68.8 / 92.0 / 96.7 | 68.9 / 91.9 / 96.5 | 89.0 |
| InternVL-G (ours) | 92.9 / 99.4 / 99.8 | 77.7 / 94.8 / 97.3 | 71.4 / 93.9 / 97.7 | 73.8 / 94.4 / 98.1 | 90.9 |
- Multilingual Zero-Shot Image-Text Retrieval on XTD [\[see details\]](./clip_benchmark#xtd)
| method | EN | ES | FR | ZH | IT | KO | RU | JP | average |
| ----------------- | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :-----: |
| AltCLIP | 95.4 | 94.1 | 92.9 | 95.1 | 94.2 | 94.4 | 91.8 | 91.7 | 93.7 |
| OpenCLIP-XLM-R-H | 97.3 | 96.1 | 94.5 | 94.7 | 96.0 | 90.2 | 93.9 | 94.0 | 94.6 |
| InternVL-C (ours) | 97.3 | 95.7 | 95.1 | 95.6 | 96.0 | 92.2 | 93.3 | 95.5 | 95.1 |
| InternVL-G (ours) | 98.6 | 97.7 | 96.5 | 96.7 | 96.9 | 95.1 | 94.8 | 96.1 | 96.6 |
### Multimodal Dialogue
See ["Compared with SOTA VLLMs"](#compared-with-sota-vllms) section.
## Quick Start with HuggingFace
**Using InternViT-6B for visual feature extraction**
```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor
model = AutoModel.from_pretrained(
'OpenGVLab/InternViT-6B-448px-V1-5',
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
trust_remote_code=True).cuda().eval()
image = Image.open('./examples/image1.jpg').convert('RGB')
image_processor = CLIPImageProcessor.from_pretrained('OpenGVLab/InternViT-6B-448px-V1-5')
pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()
outputs = model(pixel_values)
```
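For reference, `outputs` above is a standard `transformers` output object. A minimal inspection sketch, assuming the remote InternViT code returns a `BaseModelOutputWithPooling`-style result (verify the attribute names against the model card):

```python
# Assumption: the remote code returns last_hidden_state (CLS + patch tokens)
# and pooler_output (a single global image feature); confirm before relying on this.
features = outputs.last_hidden_state  # e.g. shape (1, num_tokens, hidden_size)
pooled = outputs.pooler_output        # e.g. shape (1, hidden_size)
print(features.shape, pooled.shape)
```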
**Using InternVL-C(ontrastive) and InternVL-G(enerative) for cross-modal retrieval**
```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor
from transformers import AutoTokenizer
model = AutoModel.from_pretrained(
'OpenGVLab/InternVL-14B-224px',
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
trust_remote_code=True).cuda().eval()
image_processor = CLIPImageProcessor.from_pretrained('OpenGVLab/InternVL-14B-224px')
tokenizer = AutoTokenizer.from_pretrained(
'OpenGVLab/InternVL-14B-224px', use_fast=False, add_eos_token=True)
tokenizer.pad_token_id = 0 # set pad_token_id to 0
images = [
Image.open('./examples/image1.jpg').convert('RGB'),
Image.open('./examples/image2.jpg').convert('RGB'),
Image.open('./examples/image3.jpg').convert('RGB')
]
prefix = 'summarize:'
texts = [
    prefix + 'a photo of a red panda',  # English
    prefix + '一张熊猫的照片',  # Chinese: "a photo of a panda"
    prefix + '二匹の猫の写真'  # Japanese: "a photo of two cats"
]
pixel_values = image_processor(images=images, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()
input_ids = tokenizer(texts, return_tensors='pt', max_length=80,
truncation=True, padding='max_length').input_ids.cuda()
# InternVL-C
logits_per_image, logits_per_text = model(
image=pixel_values, text=input_ids, mode='InternVL-C')
probs = logits_per_image.softmax(dim=-1)
# tensor([[9.9609e-01, 5.2185e-03, 6.0070e-08],
#         [2.2949e-02, 9.7656e-01, 5.9903e-06],
#         [3.2932e-06, 7.4863e-05, 1.0000e+00]], device='cuda:0',
#        dtype=torch.bfloat16, grad_fn=<SoftmaxBackward0>)
# InternVL-G
logits_per_image, logits_per_text = model(
image=pixel_values, text=input_ids, mode='InternVL-G')
probs = logits_per_image.softmax(dim=-1)
# tensor([[9.9609e-01, 3.1738e-03, 3.6322e-08],
#         [8.6060e-03, 9.9219e-01, 2.8759e-06],
#         [1.7583e-06, 3.1233e-05, 1.0000e+00]], device='cuda:0',
#        dtype=torch.bfloat16, grad_fn=<SoftmaxBackward0>)
# please set add_eos_token to False for generation
tokenizer.add_eos_token = False
image = Image.open('./examples/image1.jpg').convert('RGB')
pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()
tokenized = tokenizer("English caption:", return_tensors='pt')
pred = model.generate(
pixel_values=pixel_values,
input_ids=tokenized.input_ids.cuda(),
attention_mask=tokenized.attention_mask.cuda(),
num_beams=5,
min_new_tokens=8,
)
caption = tokenizer.decode(pred[0].cpu(), skip_special_tokens=True).strip()
# English caption: a red panda sitting on top of a wooden platform
```
**Using InternVL-Chat for multimodal chat**
Here, we take the smaller `OpenGVLab/InternVL2-8B` as an example:
```python
import numpy as np
import torch
import torchvision.transforms as T
from decord import VideoReader, cpu
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
from transformers import AutoModel, AutoTokenizer
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)
def build_transform(input_size):
MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
transform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
T.ToTensor(),
T.Normalize(mean=MEAN, std=STD)
])
return transform
def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
best_ratio_diff = float('inf')
best_ratio = (1, 1)
area = width * height
for ratio in target_ratios:
target_aspect_ratio = ratio[0] / ratio[1]
ratio_diff = abs(aspect_ratio - target_aspect_ratio)
if ratio_diff < best_ratio_diff:
best_ratio_diff = ratio_diff
best_ratio = ratio
elif ratio_diff == best_ratio_diff:
if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
best_ratio = ratio
return best_ratio
def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False):
orig_width, orig_height = image.size
aspect_ratio = orig_width / orig_height
# calculate the existing image aspect ratio
target_ratios = set(
(i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
i * j <= max_num and i * j >= min_num)
target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
# find the closest aspect ratio to the target
target_aspect_ratio = find_closest_aspect_ratio(
aspect_ratio, target_ratios, orig_width, orig_height, image_size)
# calculate the target width and height
target_width = image_size * target_aspect_ratio[0]
target_height = image_size * target_aspect_ratio[1]
blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
# resize the image
resized_img = image.resize((target_width, target_height))
processed_images = []
for i in range(blocks):
box = (
(i % (target_width // image_size)) * image_size,
(i // (target_width // image_size)) * image_size,
((i % (target_width // image_size)) + 1) * image_size,
((i // (target_width // image_size)) + 1) * image_size
)
# split the image
split_img = resized_img.crop(box)
processed_images.append(split_img)
assert len(processed_images) == blocks
if use_thumbnail and len(processed_images) != 1:
thumbnail_img = image.resize((image_size, image_size))
processed_images.append(thumbnail_img)
return processed_images
def load_image(image_file, input_size=448, max_num=12):
image = Image.open(image_file).convert('RGB')
transform = build_transform(input_size=input_size)
images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
pixel_values = [transform(image) for image in images]
pixel_values = torch.stack(pixel_values)
return pixel_values
# If you have an 80G A100 GPU, you can put the entire model on a single GPU.
# Otherwise, you need to load the model across multiple GPUs; please refer to the `Multiple GPUs` section.
path = 'OpenGVLab/InternVL2-8B'
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
trust_remote_code=True).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
# set the max number of tiles in `max_num`
pixel_values = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
generation_config = dict(max_new_tokens=1024, do_sample=False)
# pure-text conversation
question = 'Hello, who are you?'
response, history = model.chat(tokenizer, None, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'Can you tell me a story?'
response, history = model.chat(tokenizer, None, question, generation_config, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# single-image single-round conversation
question = '<image>\nPlease describe the image briefly.'
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(f'User: {question}\nAssistant: {response}')
# single-image multi-round conversation
question = '<image>\nPlease describe the image in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'Please write a poem according to the image.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# multi-image multi-round conversation, combined images
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
question = '<image>\nDescribe the two images in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'What are the similarities and differences between these two images?'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# multi-image multi-round conversation, separate images
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
num_patches_list = [pixel_values1.size(0), pixel_values2.size(0)]
question = 'Image-1: <image>\nImage-2: <image>\nDescribe the two images in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list,
history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'What are the similarities and differences between these two images?'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list,
history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# batch inference, single image per sample
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
num_patches_list = [pixel_values1.size(0), pixel_values2.size(0)]
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
questions = ['<image>\nDescribe the image in detail.'] * len(num_patches_list)
responses = model.batch_chat(tokenizer, pixel_values,
num_patches_list=num_patches_list,
questions=questions,
generation_config=generation_config)
for question, response in zip(questions, responses):
print(f'User: {question}\nAssistant: {response}')
# video multi-round conversation
def get_index(bound, fps, max_frame, first_idx=0, num_segments=32):
if bound:
start, end = bound[0], bound[1]
else:
start, end = -100000, 100000
start_idx = max(first_idx, round(start * fps))
end_idx = min(round(end * fps), max_frame)
seg_size = float(end_idx - start_idx) / num_segments
frame_indices = np.array([
int(start_idx + (seg_size / 2) + np.round(seg_size * idx))
for idx in range(num_segments)
])
return frame_indices
def load_video(video_path, bound=None, input_size=448, max_num=1, num_segments=32):
vr = VideoReader(video_path, ctx=cpu(0), num_threads=1)
max_frame = len(vr) - 1
fps = float(vr.get_avg_fps())
pixel_values_list, num_patches_list = [], []
transform = build_transform(input_size=input_size)
frame_indices = get_index(bound, fps, max_frame, first_idx=0, num_segments=num_segments)
for frame_index in frame_indices:
img = Image.fromarray(vr[frame_index].asnumpy()).convert('RGB')
img = dynamic_preprocess(img, image_size=input_size, use_thumbnail=True, max_num=max_num)
pixel_values = [transform(tile) for tile in img]
pixel_values = torch.stack(pixel_values)
num_patches_list.append(pixel_values.shape[0])
pixel_values_list.append(pixel_values)
pixel_values = torch.cat(pixel_values_list)
return pixel_values, num_patches_list
video_path = './examples/red-panda.mp4'
pixel_values, num_patches_list = load_video(video_path, num_segments=8, max_num=1)
pixel_values = pixel_values.to(torch.bfloat16).cuda()
video_prefix = ''.join([f'Frame{i+1}: <image>\n' for i in range(len(num_patches_list))])
question = video_prefix + 'What is the red panda doing?'
# Frame1: <image>\nFrame2: <image>\n...\nFrame8: <image>\n{question}
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'Describe this video in detail. Don\'t repeat.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
```
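For token-by-token output, `model.chat` can also be run in a background thread with a `TextIteratorStreamer`. A minimal sketch, assuming the remote InternVL2 code forwards the `streamer` entry of `generation_config` to `generate` (recent releases do; confirm for your checkpoint):

```python
from threading import Thread

from transformers import TextIteratorStreamer

# Stream tokens as they are generated instead of waiting for the full response.
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True, timeout=10)
generation_config = dict(max_new_tokens=1024, do_sample=False, streamer=streamer)

# Run chat in a separate thread so the main thread can consume the stream.
thread = Thread(target=model.chat, kwargs=dict(
    tokenizer=tokenizer, pixel_values=pixel_values, question=question,
    history=None, return_history=False, generation_config=generation_config,
))
thread.start()

generated_text = ''
for new_text in streamer:
    if new_text == model.conv_template.sep:  # stop at the conversation separator
        break
    generated_text += new_text
    print(new_text, end='', flush=True)
```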
## License
This project is released under the MIT license. Parts of this project contain code and models from other sources, which are subject to their respective licenses.
## Citation

If you find this project useful in your research, please consider citing:
```BibTeX
@article{chen2023internvl,
  title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
  author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
  journal={arXiv preprint arXiv:2312.14238},
  year={2023}
}

@article{chen2024far,
  title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
  author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
  journal={arXiv preprint arXiv:2404.16821},
  year={2024}
}

@article{gao2024mini,
  title={Mini-InternVL: A Flexible-Transfer Pocket Multimodal Model with 5\% Parameters and 90\% Performance},
  author={Gao, Zhangwei and Chen, Zhe and Cui, Erfei and Ren, Yiming and Wang, Weiyun and Zhu, Jinguo and Tian, Hao and Ye, Shenglong and He, Junjun and Zhu, Xizhou and others},
  journal={arXiv preprint arXiv:2410.16261},
  year={2024}
}

@article{wang2024mpo,
  title={Enhancing the Reasoning Ability of Multimodal Large Language Models via Mixed Preference Optimization},
  author={Wang, Weiyun and Chen, Zhe and Wang, Wenhai and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Zhu, Jinguo and Zhu, Xizhou and Lu, Lewei and Qiao, Yu and Dai, Jifeng},
  journal={arXiv preprint arXiv:2411.10442},
  year={2024}
}
```
## Acknowledgement
InternVL is built with reference to the code of the following projects: OpenAI CLIP, Open CLIP, CLIP Benchmark, EVA, InternImage, ViT-Adapter, MMSegmentation, Transformers, DINOv2, BLIP-2, Qwen-VL, and LLaVA-1.5. Thanks for their awesome work!
If you want to join our WeChat group, please scan the following QR code to add our assistant as a WeChat friend: